Particle Filter

This project implements a Particle Filter (Sequential Monte Carlo) for 6-DOF robot pose estimation, developed as part of the RBE595 Robot Navigation course at Worcester Polytechnic Institute. It fuses IMU inertial data with AprilTag visual observations and is benchmarked against an Extended Kalman Filter (EKF) and VICON motion capture ground truth.


Why Particle Filter?

The Particle Filter makes no assumptions about the shape of the belief distribution — unlike the EKF, which requires Gaussian noise and linearized dynamics. By representing the posterior as a set of weighted samples (particles), it can handle multimodal distributions and highly nonlinear motion models, at the cost of higher computational demand.


Algorithm

Initialization — 2,000 particles are sampled from a prior distribution over robot poses (position + orientation).
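A minimal NumPy sketch of this step. The prior bounds and the particle layout (`[x, y, z, roll, pitch, yaw]`) are illustrative assumptions, not the project's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000  # number of particles

# Each particle is a 6-DOF pose: [x, y, z, roll, pitch, yaw].
# Bounds below are illustrative, not the project's actual prior.
pos_lo, pos_hi = -1.0, 1.0       # metres
ang_lo, ang_hi = -np.pi, np.pi   # radians

particles = np.empty((N, 6))
particles[:, :3] = rng.uniform(pos_lo, pos_hi, size=(N, 3))
particles[:, 3:] = rng.uniform(ang_lo, ang_hi, size=(N, 3))
weights = np.full(N, 1.0 / N)    # uniform initial weights
```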

Prediction — Each particle is propagated forward independently using IMU measurements (accelerometer + gyroscope integration), adding process noise to maintain diversity.
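A simplified sketch of the prediction step (the function name and noise magnitudes are hypothetical). A full implementation would also track velocity and rotate the body-frame acceleration into the world frame before integrating:

```python
import numpy as np

def predict(particles, accel, gyro, dt, rng, pos_noise=0.01, ang_noise=0.005):
    """Propagate each 6-DOF particle [x, y, z, roll, pitch, yaw] with a
    simplified IMU model, then add process noise to keep the set diverse."""
    out = particles.copy()
    out[:, 3:] += gyro * dt            # integrate angular rates into angles
    out[:, :3] += 0.5 * accel * dt**2  # naive double integration of accel
    noise_scale = np.array([pos_noise] * 3 + [ang_noise] * 3)
    out += rng.normal(size=out.shape) * noise_scale
    return out
```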

Update (Weighting) — When an AprilTag is detected, the PnP solver produces a pose observation. Each particle is weighted by the likelihood of observing that measurement given its state.
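A sketch of the weighting step under a diagonal-Gaussian measurement model (an assumption; the project's likelihood may differ). Angle residuals are wrapped to (-pi, pi], and the max log-likelihood is subtracted before exponentiating for numerical stability:

```python
import numpy as np

def update_weights(particles, weights, z, meas_std):
    """Reweight particles by the Gaussian likelihood of the PnP pose
    observation z (6-vector) given each particle's state; meas_std is a
    per-dimension measurement noise standard deviation (6-vector)."""
    err = particles - z
    err[:, 3:] = (err[:, 3:] + np.pi) % (2 * np.pi) - np.pi  # wrap angles
    err /= meas_std
    log_lik = -0.5 * np.sum(err**2, axis=1)
    w = weights * np.exp(log_lik - log_lik.max())  # stable exponentiation
    return w / w.sum()
```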

Resampling — Low-variance resampling is applied to focus computational resources on high-probability regions of the state space, avoiding particle degeneracy.
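The low-variance (systematic) scheme can be sketched as follows: a single random offset generates N evenly spaced pointers into the cumulative weight distribution, so resampling noise is much lower than drawing N independent samples:

```python
import numpy as np

def low_variance_resample(particles, weights, rng):
    """Systematic (low-variance) resampling: one random offset, N evenly
    spaced pointers into the cumulative weight distribution."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point rounding
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(N, 1.0 / N)
```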

Estimation — The final pose estimate is computed as the weighted mean of all particles.
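A sketch of the estimation step. Positions average directly; for the orientation components this sketch uses a circular (sin/cos) mean, a common refinement over a naive weighted mean of angles, since angles near the ±pi wrap would otherwise average incorrectly:

```python
import numpy as np

def estimate(particles, weights):
    """Weighted mean pose over particles shaped (N, 6):
    [x, y, z, roll, pitch, yaw]."""
    pos = weights @ particles[:, :3]
    s = weights @ np.sin(particles[:, 3:])   # circular mean for angles
    c = weights @ np.cos(particles[:, 3:])
    return np.concatenate([pos, np.arctan2(s, c)])
```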


Comparison: Particle Filter vs EKF

Property                 | Particle Filter          | Extended Kalman Filter
-------------------------|--------------------------|-----------------------
Distribution assumption  | Non-parametric           | Gaussian
Handles multimodality    | Yes                      | No
Computational cost       | Higher (2,000 particles) | Lower
Nonlinear handling       | Exact (sample-based)     | Approximate (Jacobian)
Accuracy (this project)  | Comparable to EKF        | Baseline

Both methods were evaluated on 8 datasets (studentdata0.mat through studentdata7.mat) collected with a real robot equipped with an IMU and camera. RMSE metrics were computed against VICON ground truth for quantitative comparison.
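The RMSE comparison can be sketched as below (assuming estimated and ground-truth trajectories resampled to common timestamps; the project's actual alignment procedure is not shown here):

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Per-dimension root-mean-square error between the estimated
    trajectory and the VICON ground truth, both shaped (T, D)."""
    err = np.asarray(estimates) - np.asarray(ground_truth)
    return np.sqrt(np.mean(err**2, axis=0))
```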


View Code on GitHub