When doing ancestral sampling of the posterior, each curve's conditional posterior depends on (a) its relevant (noisy) observation and (b) the sampled (noise-free) values of its parent. We can incorporate both into the posterior by treating the noise-free values as observations with zero variance in the likelihood.
In practice, there's a minor issue with implementing this. Recall that because 3D observations are degenerate in one direction (namely the backprojection direction), we prefer to work with the precision matrix, \(\Lambda\), rather than a covariance matrix. Under this representation, noise-free values have infinite precision, so operations on \(\Lambda\) are invalid. Instead, we take a hybrid approach, using both precisions and covariances.
Recall the standard formulation for the posterior mean:
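In standard notation (assumed here): with prior covariance \(K\), observations \(y\), and observation-noise covariance \(\Sigma\), the posterior mean is

\[
\mu = K\,(K + \Sigma)^{-1}\, y.
\]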
Without loss of generality, assume zero-noise observations appear after noisy observations. We can rewrite the posterior in terms of the precision matrix \(\Lambda\) of the noisy values:
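With \(K\) the prior covariance and \(y\) the observations (notation assumed), substituting \(\Lambda = \Sigma^{-1}\) and using the identity \((K + \Sigma)^{-1} = (I + \Lambda K)^{-1}\Lambda\), a plausible form of this rewrite is

\[
\mu = K\,(I + \Lambda K)^{-1}\,\Lambda\, y.
\]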
We can implement this by slightly modifying our code for the precision-matrix formulation of the posterior. First, give all noise-free values a precision of 1.0 in \(\Lambda\); then modify the \(I\) inside the parentheses by zeroing out the diagonal elements corresponding to noise-free values. Using primes to denote the modified matrices, the result is
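(in assumed notation, with \(K\) the prior covariance and \(y\) the observations)

\[
\mu = K\,(I' + \Lambda' K)^{-1}\,\Lambda'\, y.
\]

The dummy precision of 1.0 is arbitrary: for a noise-free index \(i\), row \(i\) of \(I' + \Lambda' K\) reduces to the corresponding row of \(\Lambda' K\), so the dummy value cancels against \(\Lambda' y\), and the equation enforces exact interpolation of the noise-free value.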
The expression for the posterior covariance is derived in the same way and has a similar form.
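A minimal NumPy sketch of the trick, under the assumptions above (the toy matrices, variable names, and the particular posterior-mean form \(\mu = K(I' + \Lambda' K)^{-1}\Lambda' y\) are my reconstruction, not taken from working code). It cross-checks the hybrid formulation against the plain covariance formulation with zero noise variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy prior covariance K over 4 variables (illustrative only).
A = rng.normal(size=(4, 4))
K = A @ A.T + 4 * np.eye(4)

y = rng.normal(size=4)                        # observed values
noisy = np.array([True, True, False, False])  # last two are noise-free

# Lambda': real precision (here 0.5) for noisy entries,
# dummy precision 1.0 for noise-free entries.
Lam = np.diag(np.where(noisy, 0.5, 1.0))

# I': identity with diagonal zeroed at noise-free entries.
I_mod = np.diag(np.where(noisy, 1.0, 0.0))

M = np.linalg.inv(I_mod + Lam @ K)
mu = K @ M @ Lam @ y            # hybrid posterior mean
cov = K - K @ M @ Lam @ K       # hybrid posterior covariance

# Cross-check: covariance formulation with zero variance on
# the noise-free entries (variance 1/0.5 = 2.0 on noisy ones).
Sigma = np.diag(np.where(noisy, 2.0, 0.0))
mu_ref = K @ np.linalg.solve(K + Sigma, y)

assert np.allclose(mu, mu_ref)
# Noise-free values are reproduced exactly, with zero posterior variance:
assert np.allclose(mu[~noisy], y[~noisy])
assert np.allclose(cov[~noisy][:, ~noisy], 0.0)
```

Note that the dummy precision never affects the answer: \((I' + \Lambda' K)^{-1}\Lambda'\) equals \((\Sigma' + K)^{-1}\) with \(\Sigma'\) the noise covariance whose noise-free diagonal entries are zero, so any positive dummy value cancels.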
Posted by Kyle Simek