Sunday 17 February 2019

[Tutorial] Positioning Technique 2: Pedestrian Dead Reckoning, PDR

1. Introduction

An inertial navigation system (INS) is a navigation device that uses a computer, motion sensors (accelerometers) and rotation sensors (gyroscopes) to continuously calculate by dead reckoning the position, the orientation, and the velocity (direction and speed of movement) of a moving object without the need for external references. All inertial navigation systems suffer from integration drift: small errors in the measurement of acceleration and angular velocity are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position. Since the new position is calculated from the previously calculated position and the measured acceleration and angular velocity, these errors accumulate roughly proportionally to the time since the initial position was input. Therefore, the position must be periodically corrected by input from some other type of navigation system or with respect to an external reference location.

Pedestrian Dead Reckoning (PDR) is a relative navigation technique. Starting from a known position, successive position displacements are added up. The displacement estimates can be in the form of changes in Cartesian coordinates (i.e. x and y coordinates) or, more typically, in heading and speed or distance. Unlike an INS, which must integrate the raw measurements to obtain heading and distance, PDR directly uses a person's step length as the displacement of each step while walking. So the main work of this method is to detect and count steps, and to estimate heading. PDR tolerates data from low-accuracy sensors, e.g. smartphone sensors, so it is widely used for smartphone-based location determination. For that reason, this lab focuses on PDR instead of INS.

Because it is a relative navigation technique, PDR avoids the limits of GNSS (GPS, BeiDou) and of other indoor positioning techniques, such as line-of-sight requirements (for GPS, camera-based, lidar and so on) or the need to install extensive infrastructure (for WiFi, Bluetooth Low Energy, etc.). As a result, PDR has been employed in many end applications.

The main sources of data for PDR are the accelerometer, gyroscope and magnetic field sensor, as shown below. The figures below show some examples of data from these three sensors. Note that they measure motion in three directions: the x-axis, y-axis and z-axis. Also note that these axes are defined relative to the orientation of the phone, so if the phone is held horizontally in someone's hand versus stored vertically in someone's pocket, the same motion will generate a signal along different axes.
Fig 1.1 Accelerometer
Fig 1.2 Gyroscope



Fig 1.3 Magnetic Field
Generic smartphones are equipped with all three of these sensors, and some smart bands and smartwatches embed them as well. This convenience has made PDR more and more popular in research. In this post, we will walk into the domain of PDR: in the next sections, I will introduce the methodology of PDR and offer you some examples to practice.


2. Methodology


2.1. overview

Fig 2.10 overview of PDR technology
In the figure above, the step length St is the length of the step completed at time t, and θ(t) is the azimuth or heading when that step is completed. The upper-left corner shows the measurement coordinate system (MCS), in which the North direction is the X-axis and Y is perpendicular to X.

So the process of PDR is: a person starts from the black point at the bottom. On advancing one step to P1, two things must be done: 1) a step-detection algorithm is used to detect that this step has happened; 2) a heading estimation algorithm is used to compute the heading/azimuth θ(1). Assume our step-detection algorithm detects the step accurately and the step length is S1 (setting the step length is covered in section 2.2); here we treat S1 as a known variable. We can then use the equations below to compute the coordinates of P1:
x(t) = x(t-1) + St * cos(θ(t))
y(t) = y(t-1) + St * sin(θ(t))
t = 1,2,3....
Then, walking to P2, P3, ..., we can compute their coordinates successively using the equations above.
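These update equations can be sketched in Python (the function and variable names here are my own, not from the lab code):

```python
from math import cos, sin, radians

def pdr_update(x, y, step_len, heading_rad):
    """Advance one step of length step_len along heading_rad,
    measured in radians from the MCS X-axis (North)."""
    return x + step_len * cos(heading_rad), y + step_len * sin(heading_rad)

# walk three 0.7 m steps straight along the North (X) axis
x, y = 0.0, 0.0
for _ in range(3):
    x, y = pdr_update(x, y, 0.7, radians(0.0))
# x is now ~2.1 m, y stays 0.0
```

Each call only needs the previous position, the step length and the heading, which is exactly why step detection and heading estimation are the two core components of PDR.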

This is the overview of PDR. The PDR technique is naturally decomposed into the step detection and estimation part and the heading estimation part. Each of these is discussed in turn.

2.2. set the step or stride length 

Note that the distance traveled during a step depends upon several variables such as leg length, stride or step size, physical fitness, the terrain, and how cluttered the terrain is with objects and other people. Hence, if we calculate distance as the number of steps times the step size, we will see some variability, both because steps vary while one person walks and because people walk differently due to different leg lengths, etc. A key configuration value for the algorithm is therefore the step or stride length.


Often the average stride length can be estimated from height: women multiply their height in centimeters by 0.413, men multiply theirs by 0.415, and the answer rounded to the nearest whole number is the stride length in centimeters. An average man's stride length is 78 centimeters, while a woman's average stride length is 70 centimeters.
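This rule of thumb is one line of code (the function name is my own):

```python
def stride_length_cm(height_cm, sex):
    """Rule-of-thumb stride estimate from height:
    height x 0.413 for women, height x 0.415 for men,
    rounded to the nearest whole centimeter."""
    factor = 0.413 if sex == 'female' else 0.415
    return round(height_cm * factor)

print(stride_length_cm(175, 'male'))    # 73
print(stride_length_cm(170, 'female'))  # 70
```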

Actually, some research has focused on estimating the step length; if you are interested, you can refer to this.

2.3. step detection and estimation part

Step detection is the process of using raw data to detect the occurrence and number of steps. Taking the accelerometer as an example, the figure below shows accelerometer data recorded while someone is walking. In fig 2.1 a regular fluctuation exists, especially in the z-axis and in the magnitude. When someone walks, the acceleration is minimal while a leg swings and maximal when the foot makes contact with the ground, because the ground exerts a force that generates acceleration; more technical details are shown here.
Fig 2.1 pattern of steps in accelerometer 
Besides, fig 2.2 displays data from the gyroscope while a person is walking. Though the effect is not as clear as for the accelerometer, a rough pattern can be found.
Fig 2.2 pattern of steps in gyroscope

The gyroscope captures parts of the body turning or rotating to maintain balance as someone walks. Although steps can be detected using the accelerometer alone, some research papers have shown that combining the accelerometer and gyroscope improves the accuracy of step detection.


These are the two main data sources (accelerometer and gyroscope) for step detection. As for step detection algorithms, researchers have put forward many methods, such as zero-crossing, peak-valley detection, the fast Fourier transform (FFT) and so on. They all have their own characteristics, and we need to test their performance for our own scenarios. After successfully detecting each step, summing the step lengths gives the distance moved.

In this lab, we will use the peak-valley detection method which uses the magnitude of an accelerometer. 
The keys to the peak-valley detection method are:
1) the number of consecutive rising samples must be more than 2;
2) the difference between peak and valley must exceed a threshold;
3) this threshold should be updated continuously.
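A simplified sketch of rules 1) and 2) on the accelerometer magnitude follows; the downloadable version also updates the threshold adaptively (rule 3), which is omitted here, and the threshold values below are illustrative:

```python
def detect_steps(mag, threshold=1.0, min_rise=2):
    """Return the indices of detected steps in an accelerometer-magnitude
    series: a step is a peak reached after >= min_rise consecutive rises
    whose height above the preceding valley exceeds threshold."""
    steps = []
    rises = 0
    valley = mag[0]
    for i in range(1, len(mag)):
        if mag[i] > mag[i - 1]:
            rises += 1
        else:
            # a peak just ended at index i-1
            if rises >= min_rise and mag[i - 1] - valley > threshold:
                steps.append(i - 1)
                valley = mag[i]      # start tracking the next valley
            rises = 0
            valley = min(valley, mag[i])
    return steps

# two synthetic "steps": rise to ~12 m/s^2, fall back, rise again
mag = [9.8, 10.5, 11.4, 12.0, 10.0, 9.6, 10.4, 11.2, 12.1, 9.9]
print(detect_steps(mag))   # [3, 8]
```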

Here is a display about the step data:

The overview of step data
The dots mark steps detected by this peak-valley algorithm; the lines are data from the accelerometer.
The source code and data are here:>>

Practice #1:
Aims:
1) Download this peak-valley algorithm and run it on the provided data.
2) In this algorithm, five parameters affect the accuracy:
self.value_num = value_num              # the size of the threshold list, generally 4
self.time_interval = time_interval      # the minimum time interval between two peaks
self.threshold_value = threshold_value  # the threshold for the difference between valley and peak
self.initialValue = initialValue        # the pre-set value for the valley-peak difference, used to decide whether to update threshold_value
self.peak_threshold = peak_threshold    # handles a sudden increase of the acceleration even if it doesn't rise twice; also used when computing average_value
Change the values of these parameters and test the performance.

2.4. heading estimation part

Generally, a compass is used for heading, and the main sensor behind a compass is the magnetometer. However, because magnetic field measurements are unstable and inaccurate, we use the gyroscope and accelerometer to calibrate the magnetic readings when computing heading. To help you fully understand heading estimation, we provide data from the accelerometer, gyroscope and magnetic field sensor for the heading computation.
  • Coordinate Systems
Fig 2.3 device coordinate system
FYI: (for fig 2.3) When a device is held in its default orientation, the X axis is horizontal and points to the right, the Y axis is vertical and points up, and the Z axis points toward the outside of the screen face. In this system, coordinates behind the screen have negative Z values.
Fig 2.4 global coordinate system
FYI: (for fig 2.4) X is defined as the vector product Y × Z (it is tangential to the ground at the device's current location and roughly points East). Y is tangential to the ground at the device's current location and points towards the magnetic North Pole. Z points towards the sky and is perpendicular to the ground.

Fig 2.3 shows the coordinate system for our smartphone (not the screen coordinate system). Fig 2.4 shows the global coordinate system. In PDR, the heading we use is the azimuth referring to the North. It is an angle from the North to the target direction clockwise, exactly the angle between North and Y-axis in the global coordinate system. In order to get the right heading of the phone,  we need to transform the angles in the smartphone system to the global coordinate system. 
  • Pitch, Yaw and Roll and Euler Angles (Φ, θ, ψ)
Different systems define the rotation angles slightly differently because of the different directions of X, Y and Z, but the meanings of pitch, yaw and roll are the same, as shown below.
Fig 2.5 Yaw
Fig 2.6 Pitch
Fig 2.7 Roll
For smartphone coordinate system, the yaw is the rotation around Z; the pitch is the rotation around X; the roll is the rotation around Y.

According to Euler's rotation theorem, any rotation may be described using three angles, Φ, θ, ψ. The three angles giving the three rotation matrices are called Euler angles. The rotations are written in terms of the rotation matrices D, C, and B shown below.
Fig 2.8 rotation matrices B, C, D
1. the first rotation is by an angle phi Φ about the z-axis using D;
2. the second rotation is by an angle theta θ in [0,pi] about the former x-axis (now x') using C;
3. the third rotation is by an angle psi ψ about the former z-axis (now z') using B.
  • Coordinate Systems Transform
From fig 2.8, we see that to go from the device coordinate system to the global coordinate system we need to rotate about three axes. The product of the three rotation matrices B, C, D is A, A = BCD, defined as:
Fig 2.9 A =BCD
A is the so-called rotation matrix.
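The three elementary rotations and their product can be written out with numpy. The sketch below uses the common textbook form of the z-x-z matrices (the figure's exact entries are not reproduced here, so treat this as a generic illustration of A = BCD):

```python
import numpy as np

def rot_z(ang):
    """Elementary rotation about the z-axis (frame-rotation convention)."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(ang):
    """Elementary rotation about the x-axis (frame-rotation convention)."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def euler_zxz(phi, theta, psi):
    """A = BCD: rotate by phi about z (D), theta about x' (C), psi about z' (B)."""
    D = rot_z(phi)
    C = rot_x(theta)
    B = rot_z(psi)
    return B @ C @ D

A = euler_zxz(0.3, 0.5, 0.7)
```

As a sanity check, any such A must be a proper rotation: orthogonal (A Aᵀ = I) with determinant 1.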

Actually, there are two ways to represent the rotation: the rotation matrix A (= BCD) and the quaternion (w, x, y, z). Rotation matrices, quaternions and Euler angles can be converted into one another:
from math import atan, atan2, cos, sin, sqrt
import numpy as np

def quatern2euler(q=[1.0,0.0,0.0,0.0]):
    R=[0.0,0.0,0.0,0.0,0.0]
    R[0]=2.0*q[0]*q[0]-1+2.0*q[1]*q[1]
    R[1]=2.0*(q[1]*q[2]-q[0]*q[3])
    R[2]=2.0*(q[1]*q[3]+q[0]*q[2])
    R[3]=2.0*(q[2]*q[3]-q[0]*q[1])
    R[4]=2.0*q[0]*q[0]-1+2.0*q[3]*q[3]
    phi = atan2(R[3],R[4])
    theta = -atan(R[2]/sqrt(1-R[2]*R[2]))
    psi = atan2(R[1],R[0])
    euler=[phi,theta,psi]
    return euler

def quatern2rotMat(q=[1.0,0.0,0.0,0.0]):
    R=np.array([0.0]*9).reshape(3,3)
    R[0,0] = 2.0*q[0]*q[0]-1+2.0*q[1]*q[1]
    R[0,1] = 2.0*(q[1]*q[2]+q[0]*q[3])
    R[0,2] = 2.0*(q[1]*q[3]-q[0]*q[2])
    R[1,0] = 2.0*(q[1]*q[2]-q[0]*q[3])
    R[1,1] = 2.0*q[0]*q[0]-1+2.0*q[2]*q[2]
    R[1,2] = 2.0*(q[2]*q[3]+q[0]*q[1])
    R[2,0] = 2.0*(q[1]*q[3]+q[0]*q[2])
    R[2,1] = 2.0*(q[2]*q[3]-q[0]*q[1])
    R[2,2] = 2.0*q[0]*q[0]-1+2.0*q[3]*q[3]    
    return R


def euler2rotMat(euler=[1.0,1.0,1.0]):
    phi=euler[0]
    theta=euler[1]
    psi=euler[2]    
    R=np.array([0.0]*9).reshape(3,3)
    R[0,0] = cos(psi)*cos(theta)
    R[0,1] = -sin(psi)*cos(phi) + cos(psi)*sin(theta)*sin(phi)
    R[0,2] = sin(psi)*sin(phi) + cos(psi)*sin(theta)*cos(phi)
    R[1,0] = sin(psi)*cos(theta)
    R[1,1] = cos(psi)*cos(phi) + sin(psi)*sin(theta)*sin(phi)
    R[1,2] = -cos(psi)*sin(phi) + sin(psi)*sin(theta)*cos(phi)
    R[2,0] = -sin(theta)
    R[2,1] = cos(theta)*sin(phi)
    R[2,2] = cos(theta)*cos(phi)    
    return R

def rotMat2euler(R=np.eye(3,3)):
    phi = atan2(R[2,1],R[2,2])
    theta = -atan(R[2,0]/sqrt(1-R[2,0]*R[2,0]))
    psi = atan2(R[1,0],R[0,0])
    euler=[phi,theta,psi]
    return euler

def rotMat2quatern(R=np.eye(3,3)):
    K = np.zeros(16).reshape(4,4)
    
    K[0,0]=(1.0/3.0) *(R[0,0] - R[1,1] - R[2,2])
    K[0,1]=(1.0/3.0) *(R[1,0] + R[0,1])
    K[0,2]=(1.0/3.0) *(R[2,0] + R[0,2])
    K[0,3]=(1.0/3.0) *(R[1,2] - R[2,1])
    K[1,0]=(1.0/3.0) *(R[1,0] + R[0,1])
    K[1,1]=(1.0/3.0) *(R[1,1] - R[0,0] - R[2,2])
    K[1,2]=(1.0/3.0) *(R[2,1] + R[1,2])
    K[1,3]=(1.0/3.0) *(R[2,0] - R[0,2])
    K[2,0]=(1.0/3.0) *(R[2,0] + R[0,2])
    K[2,1]=(1.0/3.0) *(R[2,1] + R[1,2])
    K[2,2]=(1.0/3.0) *(R[2,2] - R[0,0] - R[1,1])
    K[2,3]=(1.0/3.0) *(R[0,1] - R[1,0])
    K[3,0]=(1.0/3.0) *(R[1,2] - R[2,1])
    K[3,1]=(1.0/3.0) *(R[2,0] - R[0,2])
    K[3,2]=(1.0/3.0) *(R[0,1] - R[1,0])
    K[3,3]=(1.0/3.0) *(R[0,0] + R[1,1] + R[2,2])
    
    V,D=np.linalg.eig(K)
    # the quaternion is the eigenvector of the largest eigenvalue
    # (np.linalg.eig does not sort, so pick it explicitly)
    idx=np.argmax(V)
    q=[D[3,idx],D[0,idx],D[1,idx],D[2,idx]]
    return q

  • Heading computation
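The filter below relies on a quaternProd helper that is not shown in this post; a minimal sketch of it, assuming the Hamilton product with quaternions stored as [w, x, y, z] lists, would be:

```python
def quaternProd(a, b):
    """Hamilton product of two quaternions given as [w, x, y, z] lists."""
    return [a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]]
```

Multiplying by the identity quaternion [1, 0, 0, 0] leaves a quaternion unchanged, and i·j = k, which are quick checks that the sign pattern is right.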
def MahonyAHRS_IMU(g=[0.0,0.0,0.0],a=[0.0,0.0,0.0],m=[0.0,0.0,0.0],q=[1.0,0.0,0.0,0.0],mgAxis='x',twoKp=1.0,twoKi=0.0,sampleFreq=256.0):
    '''
    arguments description
    g: gyroscope vector, dataType: list, Unit: rad/s
    a: accelerometer vector, dataType: list, Unit: m/s^2
    m: magnetic vector, dataType: list, Unit: μT
    q: quaternion, dataType: list
    mgAxis: the main direction of the magnetic field, 'x' - x axis [bx,0,bz], 'y' - y axis [0,by,bz]
    twoKp: proportional gain, similar to the K in a Kalman filter; the more you trust the correction derived from the accelerometer and magnetometer, the larger you set it
    twoKi: integral gain mitigating gyroscope bias drift, generally zero
    sampleFreq: the sampling frequency of the gyroscope in Hz (here 256 Hz)
    '''
    
    '''normalization scalar'''
    recipNorm=0.0
    '''abbrevation'''
    q0q0=q[0]*q[0]
    q0q1=q[0]*q[1]
    q0q2=q[0]*q[2]
    q0q3=q[0]*q[3]
    q1q1=q[1]*q[1]
    q1q2=q[1]*q[2]
    q1q3=q[1]*q[3]
    q2q2=q[2]*q[2]
    q2q3=q[2]*q[3]
    q3q3=q[3]*q[3]
    
    '''integral error'''  
    integralFBx = 0.0
    integralFBy = 0.0
    integralFBz = 0.0
    
    if m[0] == 0.0 and m[1] == 0.0 and m[2] == 0.0:
        #IMU model
        if not (a[0] == 0.0 and a[1] == 0.0 and a[2] == 0.0):
            '''accelerometer data normalization--this process can remove unit or scale difference'''
            recipNorm = 1.0/sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2])   
            a[0] *= recipNorm
            a[1] *= recipNorm
            a[2] *= recipNorm
            
            Dvx=(q1q3-q0q2)*2
            Dvy=(q0q1+q2q3)*2
            Dvz=(q0q0-0.5+q3q3)*2
            
            Dex=(a[1]*Dvz-a[2]*Dvy)
            Dey=(a[2]*Dvx-a[0]*Dvz)
            Dez=(a[0]*Dvy-a[1]*Dvx)
            
            if twoKi > 0.0:
                integralFBx += twoKi * Dex * (1.0 / sampleFreq)
                integralFBy += twoKi * Dey * (1.0 / sampleFreq)
                integralFBz += twoKi * Dez * (1.0 / sampleFreq)
                g[0] += integralFBx
                g[1] += integralFBy
                g[2] += integralFBz
            else:
                integralFBx = 0.0
                integralFBy = 0.0
                integralFBz = 0.0

            g[0] += twoKp * Dex

            g[1] += twoKp * Dey
            g[2] += twoKp * Dez
            
        qDot=np.array(quaternProd(q,[0.0,g[0],g[1],g[2]]))*0.5    #rate of change of the quaternion driven by the (corrected) gyroscope rates
        q=(np.array(q)+qDot*(1.0 / sampleFreq)).tolist()        
        # Normalise quaternion
        recipNorm = 1.0/sqrt(q[0] * q[0] + q[1] * q[1] + q[2] * q[2] + q[3] * q[3])
        q[0] *= recipNorm
        q[1] *= recipNorm
        q[2] *= recipNorm
        q[3] *= recipNorm
    #AHRS model    
    else:            
        if not (a[0] == 0.0 and a[1] == 0.0 and a[2] == 0.0):
            '''accleration normalization'''
            recipNorm = 1.0/sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2])   
            a[0] *= recipNorm
            a[1] *= recipNorm
            a[2] *= recipNorm
#            print(a)
            '''magnetic data normalization'''
            recipNorm = 1.0/sqrt(m[0] * m[0] + m[1] * m[1] + m[2] * m[2])   
            m[0] *= recipNorm
            m[1] *= recipNorm
            m[2] *= recipNorm
#            print(m)
            
            hx=2.0*(m[0]*(0.5-q2q2-q3q3)+m[1]*(q1q2-q0q3)+m[2]*(q1q3+q0q2))
            hy=2.0*(m[0]*(q1q2+q0q3)+m[1]*(0.5-q1q1-q3q3)+m[2]*(q2q3-q0q1))
#            print([hx,hy])
            
            if mgAxis=='x':#magnetic vector==[bx,0,bz]
                bx = sqrt(hx * hx + hy * hy)       
                bz=2.0*(m[0]*(q1q3-q0q2)+m[1]*(q2q3+q0q1)+m[2]*(0.5-q1q1-q2q2)) 
#            print([bx,bz])
                Dwx=(bx*(0.5-q2q2-q3q3)+bz*(q1q3-q0q2))*2
                Dwy=(bx*(q1q2-q0q3)+bz*(q0q1+q2q3))*2
                Dwz=(bx*(q0q2+q1q3)+bz*(0.5-q1q1-q2q2))*2
#            print([Dwx,Dwy,Dwz])
            if mgAxis=='y': #magnetic vector==[0,by,bz]            
                by = sqrt(hx * hx + hy * hy)       
                bz=2.0*(m[0]*(q1q3-q0q2)+m[1]*(q2q3+q0q1)+m[2]*(0.5-q1q1-q2q2)) 
#                print([by,bz])    
                Dwx=(by*(q1q2+q0q3)+bz*(q1q3-q0q2))*2
                Dwy=(by*(0.5-q1q1-q3q3)+bz*(q0q1-q2q3))*2
                Dwz=(by*(q2q3-q0q1)+bz*(0.5-q1q1-q2q2))*2
#                print([Dwx,Dwy,Dwz])
                
            Dvx=(q1q3-q0q2)*2
            Dvy=(q0q1+q2q3)*2
            Dvz=(q0q0-0.5+q3q3)*2
#            print([Dvx,Dvy,Dvz])
                        
            Dex=(a[1]*Dvz-a[2]*Dvy)+(m[1]*Dwz-m[2]*Dwy)
            Dey=(a[2]*Dvx-a[0]*Dvz)+(m[2]*Dwx-m[0]*Dwz)
            Dez=(a[0]*Dvy-a[1]*Dvx)+(m[0]*Dwy-m[1]*Dwx)
#            print([Dex,Dey,Dez])
            #calibration
            if twoKi > 0.0:
                integralFBx += twoKi * Dex * (1.0 / sampleFreq)
                integralFBy += twoKi * Dey * (1.0 / sampleFreq)
                integralFBz += twoKi * Dez * (1.0 / sampleFreq)
                g[0] += integralFBx
                g[1] += integralFBy
                g[2] += integralFBz
            else:
                integralFBx = 0.0
                integralFBy = 0.0
                integralFBz = 0.0

            g[0] += twoKp * Dex

            g[1] += twoKp * Dey
            g[2] += twoKp * Dez
        
        qDot=np.array(quaternProd(q,[0.0,g[0],g[1],g[2]]))*0.5
        q=(np.array(q)+qDot*(1.0 / sampleFreq)).tolist()        
        # Normalise quaternion
        recipNorm = 1.0/sqrt(q[0] * q[0] + q[1] * q[1] + q[2] * q[2] + q[3] * q[3])
        q[0] *= recipNorm
        q[1] *= recipNorm
        q[2] *= recipNorm
        q[3] *= recipNorm
    return q

EulerAngles = quatern2euler(q)    # after filtering, convert the quaternion to Euler angles; psi gives the heading
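Once the filtered quaternion is available, the azimuth can also be read out directly. One common convention (yaw about z, wrapped to [0°, 360°)) is sketched below; how this maps onto the compass azimuth depends on your axis conventions, so treat it as illustrative:

```python
from math import atan2, cos, degrees, radians, sin

def heading_deg(q):
    """Yaw angle of quaternion q = [w, x, y, z], wrapped to [0, 360) degrees."""
    yaw = atan2(2.0 * (q[1]*q[2] + q[0]*q[3]),
                1.0 - 2.0 * (q[2]*q[2] + q[3]*q[3]))
    return degrees(yaw) % 360.0

print(heading_deg([1.0, 0.0, 0.0, 0.0]))   # 0.0 (no rotation)

# a 90-degree rotation about the z-axis gives a heading of ~90 degrees
q90 = [cos(radians(45.0)), 0.0, 0.0, sin(radians(45.0))]
```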


Practice #2
The example we provide here consists of X, Y, Z readings from the accelerometer, gyroscope and magnetometer. The dataset has 6959 entries. Source code and data can be downloaded here.
Fig 3.1 Examples Overview
Aims:
1) After downloading the source code and data, compute the heading using the data from the accelerometer, gyroscope and magnetic field sensor.
2) After computing the heading, simulate PDR. "Simulate" here means that, because the step-detection data and the heading data do not come from the same dataset, we skip the step-detection part and use user-defined settings instead. The details are:
a. set the step length: stepLen = 0.6;
b. set the coordinates of the start point: startCoor = [1.0, 3.0]  #user-defined
c. set the step interval: for i in range(0, len(heading), 100):  — 100 means advancing one step every 100 samples.
3) Display all the coordinates computed by the simulated PDR, as shown below.
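The simulation steps a–c can be sketched as follows, using a synthetic all-zero heading array as a stand-in for the headings you compute in step 1:

```python
from math import cos, sin

stepLen = 0.6                  # a. fixed step length (metres)
startCoor = [1.0, 3.0]         # b. user-defined start point
heading = [0.0] * 1000         # stand-in for the computed headings (radians)

coords = [tuple(startCoor)]
x, y = startCoor
for i in range(0, len(heading), 100):   # c. one step every 100 samples
    x += stepLen * cos(heading[i])
    y += stepLen * sin(heading[i])
    coords.append((x, y))

print(coords[-1])   # with heading 0 throughout: 10 steps straight along x
```

With real headings, coords traces out the walked path and can be plotted directly.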

The PDR result overview:
Fig 3.2 The result overview

4. Conclusion

To keep this tutorial simple, we did not take a deep look at step detection and step counting, and we treated the step length as constant for one person. However, we discussed heading estimation in depth. Since it involves two different coordinate systems, estimating the heading from raw data requires a rotation matrix to perform the coordinate transform. For heading estimation, the key is therefore to understand the relationship between these two coordinate systems.

For reasons of scope, we did not discuss the drawbacks of PDR at length. Without sufficiently frequent absolute position updates, dead reckoning's linearly growing position error becomes a serious issue, so PDR is not suitable for use alone. Fusing PDR with other techniques is a popular remedy, e.g. WiFi+PDR, BLE+PDR and ultrasonic+PDR. The main idea of this kind of fusion is to use a Kalman filter or particle filter to achieve better positioning accuracy.
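As an illustration of that idea, here is a minimal 1-D Kalman sketch (all numbers are made up): the PDR displacement drives the prediction, and each WiFi fix is treated as a noisy position measurement.

```python
def kf_fuse(x, P, pdr_dx, Q, z_wifi, R):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: position estimate and its variance
    pdr_dx, Q: PDR step displacement and its process noise
    z_wifi, R: WiFi position fix and its measurement noise"""
    # predict: advance the position by the PDR displacement
    x_pred = x + pdr_dx
    P_pred = P + Q
    # update: correct towards the WiFi fix, weighted by the gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z_wifi - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
x, P = kf_fuse(x, P, pdr_dx=0.6, Q=0.05, z_wifi=0.9, R=2.0)
```

The fused estimate lands between the PDR prediction (0.6) and the WiFi fix (0.9), and its variance shrinks; a real system would extend this to 2-D and tune Q and R from the sensors.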


Extension: think about how to fuse WiFi and PDR using a Kalman filter.
