Implemented an image-pyramid-based likelihood (rough sketch below) and the results are much better! Also relaxed the prior, which appears to be overfit.
After the first pass, the global maximum seemed to have been found. The second pass quickly refined it.
This wasn't happening with the prior I trained, which suggests overfitting.
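A minimal sketch of the kind of coarse-to-fine pyramid likelihood meant above, assuming a caller-supplied per-level negative log-likelihood `nll(level_img, params)` and OpenCV's `pyrDown` for downsampling; the function names and the uniform level weights are illustrative, not the actual implementation.

```python
import cv2

def build_pyramid(img, n_levels=4):
    """Gaussian image pyramid, finest level first."""
    levels = [img]
    for _ in range(n_levels - 1):
        levels.append(cv2.pyrDown(levels[-1]))
    return levels

def pyramid_nll(img, params, nll, n_levels=4, weights=None):
    """Sum per-level negative log-likelihoods; coarse levels can be re-weighted."""
    levels = build_pyramid(img, n_levels)
    if weights is None:
        weights = [1.0] * n_levels
    return sum(w * nll(level, params) for w, level in zip(weights, levels))
```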
Try using fewer eigenvectors.
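One possible way to cut the basis down, assuming the prior is a PCA-style eigendecomposition; `eigvals`/`eigvecs` and the explained-variance cutoff are placeholders, not the trained prior's actual parameterization.

```python
import numpy as np

def truncate_basis(eigvals, eigvecs, var_fraction=0.95):
    """Keep the top-k eigenvectors explaining var_fraction of the total variance."""
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(explained, var_fraction)) + 1
    return eigvals[:k], eigvecs[:, :k]
```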
Use a likelihood mixture rather than a product. It should be smoother and more forgiving of non-optimal terms (sketch below).
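A sketch of the difference in log space, assuming per-term log-likelihoods are already available; the uniform mixture weights are an assumption.

```python
import numpy as np
from scipy.special import logsumexp

def product_loglik(log_liks):
    """Product of likelihoods = sum of logs; one very bad term dominates."""
    return float(np.sum(log_liks))

def mixture_loglik(log_liks, weights=None):
    """Mixture of likelihoods: log sum_i w_i p_i; smoother, tolerates poor terms."""
    log_liks = np.asarray(log_liks, dtype=float)
    if weights is None:
        weights = np.full(log_liks.shape, 1.0 / log_liks.size)
    return float(logsumexp(log_liks, b=weights))
```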
Consider scaling by the inverse of each eigenvector's dynamic range, so that moving by 1 guarantees at least one pixel changes (sketch below).
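A sketch of what that scaling could look like, assuming the eigenvectors live in pixel units with shape (n_pixels, n_components); the function name and layout are illustrative.

```python
import numpy as np

def pixel_unit_steps(eigvecs, floor=1e-12):
    """Per-component step size = 1 / dynamic range of the eigenvector,
    so a unit move in the scaled coefficient changes at least one pixel
    by roughly one intensity level."""
    dyn_range = eigvecs.max(axis=0) - eigvecs.min(axis=0)
    return 1.0 / np.maximum(dyn_range, floor)
```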
Consider using hyperpriors during training.
Train cubic spline covariance model.
Extend the track. Pass 1: reproject to initialize the sampler, then run. Pass 2: use the reprojected model as the prior (sketch below).
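A structural sketch of the two passes, assuming caller-supplied `reproject`, `run_sampler`, and `make_prior` callables; these names are placeholders, not the actual API.

```python
def extend_track(model, frame, reproject, run_sampler, make_prior):
    """Two-pass extension: reproject to seed the sampler, then rerun with the
    reprojected model as the prior."""
    # Pass 1: reproject the current model into the new frame and sample from that start
    init_params = reproject(model, frame)
    pass1_params = run_sampler(frame, init=init_params, prior=None)

    # Pass 2: reuse the reprojected/refined model as the prior for a second run
    prior = make_prior(pass1_params)
    return run_sampler(frame, init=pass1_params, prior=prior)
```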
Get full dataset results: run all pairwise comparisons on all datasets.
Run on detected data