Still trying to troubleshoot the difference between the old and new likelihood implementations.
Reviewing what is known.
Try using the mean curve from the old algorithm in the new algorithm.
10:13:35 AM
SOLVED.
Yesterday I noticed that setting mix to 1.0 gives decent results, but 0.0 was terrible. Now I know why: 0.0 is a special case in which a simplified calculation is used. There was a bug in that special case: the normalization constant didn't take into account the redundant dimensions that we eliminated.
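For the record, a minimal sketch of the shape of the fix, assuming an isotropic Gaussian special case; the function and variable names here (gaussian_loglik_reduced, d_full, n_redundant, etc.) are hypothetical stand-ins, not the actual code:

function ll = gaussian_loglik_reduced(r, sigma2, d_full, n_redundant)
% Log-likelihood of a reduced residual under an isotropic Gaussian.
% r: residual vector in the reduced space; sigma2: noise variance;
% d_full: original dimensionality; n_redundant: eliminated dimensions.
d_eff = d_full - n_redundant;  % dimensions actually integrated over
% Buggy version used d_full in the normalization constant:
%   ll = -0.5 * d_full * log(2*pi*sigma2) - 0.5 * (r' * r) / sigma2;
% Fixed: normalize over the effective dimension only.
ll = -0.5 * d_eff * log(2*pi*sigma2) - 0.5 * (r' * r) / sigma2;
end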
11:02:29 AM
ml_test_end_to_end_2.m is now running.
Running on subset of correspondence, results:
>> test_ml_end_to_end_2([], [], [], false)
Computing marginal likelihood (old way)...
Done. (96.9 ms)
Old ML: 62.176782
Computing marginal likelihood (new way, legacy correspondence)...
Done. (112.3 ms)
New ML (Legacy corr): 63.859940
Computing marginal likelihood (new way)...
Done. (83.5 ms)
New ML (New corr): 71.964026
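(For context, the output above looks like it comes from a tic/toc-style harness along these lines; this is a sketch only, with computeMarginalLikelihoodOld and data as stand-in names, not the actual test code:

fprintf('Computing marginal likelihood (old way)...\n');
t = tic;
mlOld = computeMarginalLikelihoodOld(data);  % stand-in for the old code path
fprintf('Done. (%.1f ms)\n', toc(t) * 1000);
fprintf('Old ML: %f\n', mlOld);
)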
Surprised by the magnitude of the difference between "Old ML" and "New ML (Legacy corr)". I guess the posterior variance is larger in this case, which means the posterior means can differ more.
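A toy check of that intuition (assumed Gaussian form, hypothetical numbers): for a fixed log-likelihood budget, the tolerated shift in the mean scales with the posterior standard deviation.

sigma = [0.1, 1.0];              % small vs. large posterior std. dev.
dLL = 1.0;                       % fixed change in log-likelihood
delta = sqrt(2 * dLL) .* sigma;  % shift with delta^2 / (2*sigma^2) == dLL
fprintf('tolerated mean shift: %.3f vs. %.3f\n', delta(1), delta(2));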
Running on full correspondence, results:
>> test_ml_end_to_end_2([], [], [], false)
Computing marginal likelihood (old way)...
Done. (6400.1 ms)
Old ML: 207.041742
Computing marginal likelihood (new way, legacy correspondence)...
Done. (8328.4 ms)
New ML (Legacy corr): 207.700616
Computing marginal likelihood (new way)...
Done. (6349.2 ms)
New ML (New corr): 793.585038
Interesting that "New ML (New corr)" is dramatically higher than the old and legacy MLs. I suppose I should have expected this, since this entire approach was motivated by the terrible correspondences arising from the legacy correspondence.
It's nice to see the ML for good correspondences getting even better. Hopefully we'll see even more improvement as we try new curve models.
Taking a break for lunch...
Next steps: