This project develops an end-to-end differentiable formulation of volumetric particle image velocimetry (PIV) that jointly estimates refinements to the camera mappings, a continuous 3D particle-density field, and an Eulerian velocity field. The particle density is represented as a sum of Gaussian kernels, each of which can be rendered analytically into every camera view via Gaussian splatting.
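As a rough illustration of the rendering step, a single 3D Gaussian kernel can be splatted to a camera image by pushing its mean and covariance through a local affine (linearized) approximation of the camera projection and evaluating the resulting 2D Gaussian footprint on the pixel grid. The sketch below is a minimal NumPy version under that assumption; all names (`splat_gaussian`, the orthographic-style projection matrix `P`) are illustrative and not part of the project's actual code.

```python
import numpy as np

def splat_gaussian(mu, Sigma, P, grid_x, grid_y):
    """Splat one 3D Gaussian blob into a camera image (illustrative sketch).

    mu: (3,) world-space mean; Sigma: (3,3) world-space covariance;
    P: (2,3) Jacobian of a linearized camera projection;
    grid_x, grid_y: pixel-coordinate grids of the target image.
    """
    mu2 = P @ mu              # 2D mean of the splat
    Sig2 = P @ Sigma @ P.T    # 2D covariance via the local affine approximation
    inv = np.linalg.inv(Sig2)
    dx = grid_x - mu2[0]
    dy = grid_y - mu2[1]
    # Evaluate the unnormalized 2D Gaussian footprint analytically.
    q = inv[0, 0] * dx * dx + 2 * inv[0, 1] * dx * dy + inv[1, 1] * dy * dy
    return np.exp(-0.5 * q)

# A rendered view is then the sum of such splats over all particles
# (per-particle amplitudes omitted here for brevity).
ys, xs = np.mgrid[0:32, 0:32].astype(float)
img = splat_gaussian(np.array([16.0, 16.0, 1.0]),
                     np.eye(3) * 2.0,
                     np.array([[1.0, 0.0, 0.0],
                               [0.0, 1.0, 0.0]]),
                     xs, ys)
```

Because every step is a smooth closed-form expression, the rendered image is differentiable with respect to the particle means, covariances, and the camera-mapping parameters, which is what makes joint optimization possible.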
The objective combines per-view image-reprojection losses with a short-time advection-consistency term implemented via photometric warping in 3D. The formulation accounts for optical blur through learned screen-space covariance terms and can incorporate calibration images without requiring explicit feature correspondences.
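The advection-consistency idea can be sketched in one dimension: warp the density at time t+dt back along the velocity field (a semi-Lagrangian step) and penalize the photometric mismatch with the density at time t. The snippet below is a minimal sketch under assumed names (`advection_loss`, a per-cell velocity array); the actual project operates on 3D fields with trilinear sampling.

```python
import numpy as np

def advection_loss(rho0, rho1, u, dt=1.0):
    """Short-time advection-consistency loss on a 1D grid (illustrative).

    rho0, rho1: density samples at times t and t + dt; u: velocity per cell.
    Samples rho1 at the advected positions x + u*dt (linear interpolation)
    and penalizes the photometric mismatch with rho0.
    """
    n = rho0.shape[0]
    x = np.arange(n, dtype=float)
    pos = np.clip(x + u * dt, 0, n - 1)  # advected sample positions
    warped = np.interp(pos, x, rho1)     # linear interpolation of rho1
    return float(np.mean((warped - rho0) ** 2))

# A density bump translated by a constant velocity yields a near-zero loss,
# while a wrong (zero) velocity leaves a large photometric residual.
x = np.arange(64, dtype=float)
rho0 = np.exp(-0.5 * ((x - 30.0) / 3.0) ** 2)
rho1 = np.exp(-0.5 * ((x - 33.0) / 3.0) ** 2)
loss_true = advection_loss(rho0, rho1, np.full(64, 3.0))
loss_zero = advection_loss(rho0, rho1, np.zeros(64))
```

Since the warp is built from differentiable interpolation, gradients of this term flow back into the velocity field, coupling the motion estimate to the reconstructed density.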