EKF SLAM: The Most Primitive SLAM

By WangXinyu, Thu 17 August 2017, in category Robot, Math

Introduction

In SLAM, the robot is given measurements $z_{1:t}$ and controls $u_{1:t}$, and acquires a map of its environment while simultaneously localizing itself relative to this map. From a probabilistic perspective, the online SLAM problem involves estimating the posterior over the current pose along with the map:

$p(x_t, m \mid z_{1:t}, u_{1:t})$

Assumptions

State Vector

$y_t = (x, y, \theta, m_{1,x}, m_{1,y}, \dots, m_{N,x}, m_{N,y})^T$

The state vector includes the robot pose and the landmark positions. Its dimension is $2N+3$, where $N$ denotes the number of landmarks in the map. In the robot localization problem the state vector contains only the robot pose, but in EKF SLAM we must update the landmark positions as well. The posterior can therefore be denoted as:

$p(y_t \mid z_{1:t}, u_{1:t})$
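To make the $2N+3$ layout concrete, here is a minimal sketch of how the robot pose and landmark $j$ map onto slices of the state vector. The helper names are my own, not from the post:

```python
import numpy as np

N = 5                          # number of landmarks (example value)
dim = 2 * N + 3                # state dimension: pose (x, y, theta) + 2 per landmark
mu = np.zeros(dim)             # state mean
sigma = np.zeros((dim, dim))   # state covariance

def pose_slice():
    """Robot pose (x, y, theta) occupies the first three entries."""
    return slice(0, 3)

def landmark_slice(j):
    """Landmark j (1-based, as in the post) occupies entries 3+2(j-1) .. 3+2j."""
    start = 3 + 2 * (j - 1)
    return slice(start, start + 2)

print(mu[pose_slice()].shape)       # (3,)
print(mu[landmark_slice(1)].shape)  # (2,)
```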

Algorithm

——————————————

  1. Algorithm EKF_SLAM_known_correspondences($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$)

  2. $\bar\mu_t = g(u_t, \mu_{t-1})$

  3. $\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + R_t$

  4. $K_t = \bar\Sigma_t H_t^T (H_t \bar\Sigma_t H_t^T + Q_t)^{-1}$

  5. $\mu_t = \bar\mu_t + K_t (z_t - \hat z_t)$

  6. $\Sigma_t = (I - K_t H_t) \bar\Sigma_t$

  7. return $\mu_t$, $\Sigma_t$

——————————————

Basically, we are using an EKF to estimate the belief over the robot pose and landmark positions. The filter cycle consists of the following steps:

State Prediction

As the robot moves, the state changes according to the velocity motion model. Because the motion affects only the robot pose, and all landmarks remain where they are, only the first three elements of the update are non-zero.

$F_x = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \end{pmatrix}_{3 \times (2N+3)}$

$y_t = y_{t-1} + F_x^T \underbrace{\begin{pmatrix} -\frac{v_t}{\omega_t} \sin \mu_{t-1,\theta} + \frac{v_t}{\omega_t} \sin(\mu_{t-1,\theta} + \omega_t \Delta t) \\ \frac{v_t}{\omega_t} \cos \mu_{t-1,\theta} - \frac{v_t}{\omega_t} \cos(\mu_{t-1,\theta} + \omega_t \Delta t) \\ \omega_t \Delta t \end{pmatrix}}_{g} + \mathcal{N}(0, F_x^T R_t F_x)$

The motion function $g$ is approximated using a first-order Taylor expansion.

$g(y_{t-1}, u_t) \approx g(\mu_{t-1}, u_t) + G_t (y_{t-1} - \mu_{t-1})$

The Jacobian $G_t$ is the derivative of $g$ with respect to $y_{t-1}$, evaluated at $u_t$ and $\mu_{t-1}$.

$G_t = \frac{\partial g}{\partial (x, y, \theta, m_{1,x}, m_{1,y}, \dots, m_{N,x}, m_{N,y})} = \begin{pmatrix} 1 & 0 & -\frac{v_t}{\omega_t} \cos \mu_{t-1,\theta} + \frac{v_t}{\omega_t} \cos(\mu_{t-1,\theta} + \omega_t \Delta t) & \\ 0 & 1 & -\frac{v_t}{\omega_t} \sin \mu_{t-1,\theta} + \frac{v_t}{\omega_t} \sin(\mu_{t-1,\theta} + \omega_t \Delta t) & 0 \\ 0 & 0 & 1 & \\ & 0 & & I \end{pmatrix}_{(2N+3) \times (2N+3)}$

So the predicted mean $\bar\mu_t$ and covariance $\bar\Sigma_t$ are the mean and covariance of the Gaussian distribution of $y_t$.

$\bar\mu_t = \mu_{t-1} + F_x^T \begin{pmatrix} -\frac{v_t}{\omega_t} \sin \mu_{t-1,\theta} + \frac{v_t}{\omega_t} \sin(\mu_{t-1,\theta} + \omega_t \Delta t) \\ \frac{v_t}{\omega_t} \cos \mu_{t-1,\theta} - \frac{v_t}{\omega_t} \cos(\mu_{t-1,\theta} + \omega_t \Delta t) \\ \omega_t \Delta t \end{pmatrix}$

$\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + F_x^T R_t F_x$
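The prediction step above can be sketched in NumPy. This is a minimal sketch, assuming the velocity motion model with $\omega_t \neq 0$; the function and variable names are mine:

```python
import numpy as np

def predict(mu, sigma, v, omega, dt, R3):
    """EKF SLAM prediction: only the robot pose moves; landmarks stay put.
    mu: (2N+3,) mean, sigma: (2N+3, 2N+3) covariance,
    v, omega: velocity command (omega != 0), R3: 3x3 motion noise in pose space."""
    dim = mu.shape[0]
    theta = mu[2]
    # F_x maps the 3-D pose subspace into the full state space
    Fx = np.zeros((3, dim))
    Fx[:3, :3] = np.eye(3)
    # velocity motion model increment for the pose
    dx = np.array([
        -v / omega * np.sin(theta) + v / omega * np.sin(theta + omega * dt),
         v / omega * np.cos(theta) - v / omega * np.cos(theta + omega * dt),
         omega * dt,
    ])
    mu_bar = mu + Fx.T @ dx
    # Jacobian G_t = I + F_x^T g_x F_x: non-zero only in the pose block
    g = np.zeros((3, 3))
    g[0, 2] = -v / omega * np.cos(theta) + v / omega * np.cos(theta + omega * dt)
    g[1, 2] = -v / omega * np.sin(theta) + v / omega * np.sin(theta + omega * dt)
    G = np.eye(dim) + Fx.T @ g @ Fx
    sigma_bar = G @ sigma @ G.T + Fx.T @ R3 @ Fx
    return mu_bar, sigma_bar
```

Note that because $G_t$ is the identity outside the pose block, production implementations exploit this sparsity instead of forming the dense product; the dense form above just mirrors the equations.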

Measurement Prediction

We can predict the measurement using the following measurement model.

$z_t^i = \underbrace{\begin{pmatrix} \sqrt{(m_{j,x} - x)^2 + (m_{j,y} - y)^2} \\ \operatorname{atan2}(m_{j,y} - y, m_{j,x} - x) - \theta \end{pmatrix}}_{h} + \mathcal{N}(0, Q_t)$

$Q_t = \begin{pmatrix} \sigma_r & 0 \\ 0 & \sigma_\phi \end{pmatrix}$

Here $i$ is the index of a landmark observation in $z_t$, and $j = c_t^i$ is the index of the corresponding landmark at time $t$.

The measurement function $h$ is also approximated using a first-order Taylor expansion.

$h(y_t, j) \approx h(\bar\mu_t, j) + H_t^i (y_t - \bar\mu_t)$

The Jacobian $H_t^i$ is the derivative of $h$ with respect to the full state vector $y_t$, evaluated at $\bar\mu_t$ and $j$. We use $\delta$ and $q$ to keep the formulas compact.

$\delta = \begin{pmatrix} \delta_x \\ \delta_y \end{pmatrix} = \begin{pmatrix} \bar\mu_{j,x} - \bar\mu_{t,x} \\ \bar\mu_{j,y} - \bar\mu_{t,y} \end{pmatrix}$

$q = \delta_x^2 + \delta_y^2$

$H_t^i = \frac{\partial h}{\partial (x, y, \theta, m_{1,x}, m_{1,y}, \dots, m_{N,x}, m_{N,y})} = \begin{pmatrix} -\frac{\delta_x}{\sqrt{q}} & -\frac{\delta_y}{\sqrt{q}} & 0 & 0_{1 \times (2j-2)} & \frac{\delta_x}{\sqrt{q}} & \frac{\delta_y}{\sqrt{q}} & 0_{1 \times (2N-2j)} \\ \frac{\delta_y}{q} & -\frac{\delta_x}{q} & -1 & 0_{1 \times (2j-2)} & -\frac{\delta_y}{q} & \frac{\delta_x}{q} & 0_{1 \times (2N-2j)} \end{pmatrix}_{2 \times (2N+3)}$

The position of landmark $j$ is $\bar\mu_j$, which is initialized and cached the first time this landmark is observed:

$\begin{pmatrix} \bar\mu_{j,x} \\ \bar\mu_{j,y} \end{pmatrix} = \begin{pmatrix} \bar\mu_{t,x} \\ \bar\mu_{t,y} \end{pmatrix} + \begin{pmatrix} r_t^i \cos(\phi_t^i + \bar\mu_{t,\theta}) \\ r_t^i \sin(\phi_t^i + \bar\mu_{t,\theta}) \end{pmatrix}$
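This landmark initialization can be sketched as follows; the function name and 1-based index convention are my own assumptions:

```python
import numpy as np

def init_landmark(mu_bar, j, r, phi):
    """First sighting of landmark j: place it at the measured range r and
    bearing phi relative to the predicted robot pose (mutates mu_bar)."""
    x, y, theta = mu_bar[0], mu_bar[1], mu_bar[2]
    idx = 3 + 2 * (j - 1)          # landmark j is 1-based, as in the post
    mu_bar[idx]     = x + r * np.cos(phi + theta)
    mu_bar[idx + 1] = y + r * np.sin(phi + theta)
```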

So now we can calculate $\hat z_t^i$ and $K_t^i$, and from them $\mu_t$ and $\Sigma_t$.

$\hat z_t^i = \begin{pmatrix} \sqrt{q} \\ \operatorname{atan2}(\delta_y, \delta_x) - \bar\mu_{t,\theta} \end{pmatrix}$

$K_t^i = \bar\Sigma_t (H_t^i)^T (H_t^i \bar\Sigma_t (H_t^i)^T + Q_t)^{-1}$

$\bar\mu_t = \bar\mu_t + K_t^i (z_t^i - \hat z_t^i)$

$\bar\Sigma_t = (I - K_t^i H_t^i) \bar\Sigma_t$

When processing the $i$th observation in the loop, we update the predicted mean and covariance using the values obtained from the $(i-1)$th observation; that is, we update $\bar\mu_t$ and $\bar\Sigma_t$ iteratively. After the loop, the final $\bar\mu_t$ and $\bar\Sigma_t$ are the resulting $\mu_t$ and $\Sigma_t$, respectively.

$\mu_t = \bar\mu_t$

$\Sigma_t = \bar\Sigma_t$
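The per-observation correction loop can be sketched as follows. This is a minimal sketch with my own names, assuming known correspondences $j = c_t^i$ and already-initialized landmarks:

```python
import numpy as np

def correct(mu_bar, sigma_bar, observations, Q):
    """Sequentially fold each (j, r, phi) range-bearing observation into the
    belief. mu_bar/sigma_bar are the predicted mean/covariance; Q is the
    2x2 measurement noise. Returns the corrected (mu_t, Sigma_t)."""
    dim = mu_bar.shape[0]
    for j, r, phi in observations:
        idx = 3 + 2 * (j - 1)
        delta = mu_bar[idx:idx + 2] - mu_bar[:2]   # landmark minus robot position
        q = delta @ delta
        sq = np.sqrt(q)
        z_hat = np.array([sq, np.arctan2(delta[1], delta[0]) - mu_bar[2]])
        # Jacobian H_t^i: non-zero only in the pose and landmark-j columns
        H = np.zeros((2, dim))
        H[:, :3] = np.array([
            [-delta[0] / sq, -delta[1] / sq,  0.0],
            [ delta[1] / q,  -delta[0] / q,  -1.0],
        ])
        H[:, idx:idx + 2] = np.array([
            [ delta[0] / sq,  delta[1] / sq],
            [-delta[1] / q,   delta[0] / q],
        ])
        K = sigma_bar @ H.T @ np.linalg.inv(H @ sigma_bar @ H.T + Q)
        innov = np.array([r, phi]) - z_hat
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # normalize bearing
        mu_bar = mu_bar + K @ innov
        sigma_bar = (np.eye(dim) - K @ H) @ sigma_bar
    return mu_bar, sigma_bar  # these become mu_t, Sigma_t
```

The bearing-innovation normalization is not spelled out in the post's equations, but without it a wrap-around near $\pm\pi$ produces a huge spurious correction.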

References

Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics. MIT Press, 2005.

Cyrill Stachniss. EKF SLAM. Lecture recording, YouTube.