python - Estimating confidence intervals around Kalman filter
I have been working to implement a Kalman filter to search for anomalies in a two-dimensional data set, similar to the excellent post found here. As a next step, I'd like to predict confidence intervals (for example, 95% confidence floor and ceiling values) that the next values will fall in. In addition to the line below, I'd like to be able to generate two additional lines that represent a 95% confidence that the next value will be above the floor or below the ceiling.

I assume that I'll want to use the uncertainty covariance matrix (P) that is returned with each prediction generated by the Kalman filter, but I'm not sure if that's right. Any guidance or reference on how to do this would be much appreciated!
Kalman 2d filter in python

The code in the post above generates a set of measurements over time and uses a Kalman filter to smooth the results.
```python
import numpy as np
import matplotlib.pyplot as plt

def kalman_xy(x, P, measurement, R,
              motion=np.matrix('0. 0. 0. 0.').T,
              Q=np.matrix(np.eye(4))):
    """
    Parameters:
    x: initial state 4-tuple of location and velocity: (x0, x1, x0_dot, x1_dot)
    P: initial uncertainty covariance matrix
    measurement: observed position
    R: measurement noise
    motion: external motion added to state vector x
    Q: motion noise (same shape as P)
    """
    return kalman(x, P, measurement, R, motion, Q,
                  F=np.matrix('''
                      1. 0. 1. 0.;
                      0. 1. 0. 1.;
                      0. 0. 1. 0.;
                      0. 0. 0. 1.
                      '''),
                  H=np.matrix('''
                      1. 0. 0. 0.;
                      0. 1. 0. 0.'''))

def kalman(x, P, measurement, R, motion, Q, F, H):
    '''
    Parameters:
    x: initial state
    P: initial uncertainty covariance matrix
    measurement: observed position (same shape as H*x)
    R: measurement noise (same shape as H)
    motion: external motion added to state vector x
    Q: motion noise (same shape as P)
    F: next state function: x_prime = F*x
    H: measurement function: position = H*x

    Return: the updated and predicted new values for (x, P)

    See also http://en.wikipedia.org/wiki/Kalman_filter

    This version of kalman can be applied to many different situations
    by appropriately defining F and H.
    '''
    # UPDATE x, P based on measurement m
    # distance between measured and current position-belief
    y = np.matrix(measurement).T - H * x
    S = H * P * H.T + R  # residual covariance
    K = P * H.T * S.I    # Kalman gain
    x = x + K * y
    I = np.matrix(np.eye(F.shape[0]))  # identity matrix
    P = (I - K * H) * P

    # PREDICT x, P based on motion
    x = F * x + motion
    P = F * P * F.T + Q

    return x, P

def demo_kalman_xy():
    x = np.matrix('0. 0. 0. 0.').T
    P = np.matrix(np.eye(4)) * 1000  # initial uncertainty

    N = 20
    true_x = np.linspace(0.0, 10.0, N)
    true_y = true_x**2
    observed_x = true_x + 0.05 * np.random.random(N) * true_x
    observed_y = true_y + 0.05 * np.random.random(N) * true_y
    plt.plot(observed_x, observed_y, 'ro')
    result = []
    R = 0.01**2
    for meas in zip(observed_x, observed_y):
        x, P = kalman_xy(x, P, meas, R)
        result.append((x[:2]).tolist())
    kalman_x, kalman_y = zip(*result)
    plt.plot(kalman_x, kalman_y, 'g-')
    plt.show()

demo_kalman_xy()
```
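One simple way to get per-axis confidence bounds from the filter's output is to treat each position coordinate as Gaussian, take its variance from the diagonal of P, and scale the standard deviation by the normal quantile (1.96 for 95%). This is a minimal sketch, not part of the original post; `confidence_interval_xy` is a hypothetical helper and the state/covariance values below are made up:

```python
import numpy as np

def confidence_interval_xy(x, P, z=1.96):
    """Return (low, high) bounds for the predicted x and y positions,
    assuming each marginal is Gaussian with variance taken from the
    diagonal of the state covariance P. z=1.96 gives ~95% coverage
    per axis. (Hypothetical helper, not from the original post.)"""
    mean = np.asarray(x).flatten()[:2]           # predicted (x, y) position
    std = np.sqrt(np.diag(np.asarray(P))[:2])    # marginal standard deviations
    return mean - z * std, mean + z * std

# toy example: a predicted state vector and covariance matrix
x = np.matrix('5. 25. 1. 2.').T
P = np.matrix(np.diag([0.04, 0.09, 1., 1.]))
low, high = confidence_interval_xy(x, P)
# low  -> [5 - 1.96*0.2, 25 - 1.96*0.3]
# high -> [5 + 1.96*0.2, 25 + 1.96*0.3]
```

Appending `low` and `high` for each step then gives the floor and ceiling lines around the smoothed curve. Note these are per-axis marginal intervals; they ignore the correlation between x and y captured by the off-diagonal terms of P, which is what the ellipse in the answer below accounts for.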
The 2D generalization of the 1-sigma interval is the confidence ellipse, which is characterized by the equation `(x - mx).T * P^{-1} * (x - mx) == 1`, with `x` being the parameter 2D vector, `mx` the 2D mean or ellipse center, and `P^{-1}` the inverse covariance matrix. See this answer on how to draw one. Like the sigma intervals, the ellipse's area corresponds to a fixed probability that the true value lies within it. By scaling with a factor `n` (scaling the interval length or the ellipse radii), a higher confidence can be reached. Note that the factors `n` have different probabilities in one and two dimensions:
| `n` | 1D interval | 2D ellipse |
|-----|-------------|------------|
| 1   | 68.27%      | 39.35%     |
| 2   | 95.45%      | 86.47%     |
| 3   | 99.73%      | 98.89%     |
Calculating these values in 2D is a bit involved, and unfortunately I don't have a public reference for it.
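The table values can in fact be computed directly: in 2D, the squared Mahalanobis radius `(x - mx).T * P^{-1} * (x - mx)` of a Gaussian follows a chi-squared distribution with 2 degrees of freedom. A minimal sketch assuming SciPy is available (the 2x2 covariance `P` at the end is a made-up example):

```python
import numpy as np
from scipy.stats import chi2, norm

# Reproduce the table: probability mass inside an n-sigma region.
for n in (1, 2, 3):
    p1d = norm.cdf(n) - norm.cdf(-n)  # 1D interval
    p2d = chi2.cdf(n**2, df=2)        # 2D ellipse: Mahalanobis r^2 ~ chi2(df=2)
    print(f"n={n}: 1D {p1d:.4f}, 2D {p2d:.4f}")

# Inverse direction: radius scale factor for a desired 2D confidence level.
scale95 = np.sqrt(chi2.ppf(0.95, df=2))  # ~2.4477, not 1.96 as in 1D

# Semi-axes of the 95% ellipse for a given 2x2 position covariance:
P = np.array([[0.04, 0.01],
              [0.01, 0.09]])
eigvals, eigvecs = np.linalg.eigh(P)     # eigenvalues in ascending order
half_axes = scale95 * np.sqrt(eigvals)   # semi-axis lengths of the ellipse
angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # major-axis angle
```

So to draw a 95% confidence ellipse around each Kalman prediction, take the 2x2 position block of P, scale the square roots of its eigenvalues by `scale95`, and orient the ellipse along the eigenvectors.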
python numpy prediction kalman-filter