[Work Log] Optimizing posterior-sampling for pixel likelihood

October 30, 2013
Project Tulips
Subproject Data Association v3
Working path projects/tulips/trunk/src/matlab/data_association_3
Unless otherwise noted, all filesystem paths are relative to the "Working path" named above.

Troubleshooting issues in the "optimized" code in curve_tree_ml_4.

The biggest change was in how K, K_star, and K_star_star were computed. Comparing against the reference implementation, curve_tree_ml_3, shows the "parent" indices aren't correct.


Solved. When incorporating the noise-free sampled values, the code was using the indices and covariance of the noisy observed values.
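
For the record, a minimal sketch of the corrected conditioning (toy kernel, hypothetical variable names; the real identifiers live in curve_tree_ml_4). The point is that the noise term belongs only on the observed block of K, and the indices used to build each block must match the values they describe:

    % Sketch only: condition a GP on noisy observations plus noise-free
    % previously-sampled values. All names here are illustrative.
    sqdist      = @(A, B) bsxfun(@minus, A, B').^2;
    eval_kernel = @(A, B) exp(-0.5 * sqdist(A, B));   % toy squared-exp kernel

    x_obs  = (0:0.5:3)';                      % inputs of noisy observations
    y_obs  = sin(x_obs) + 0.1 * randn(size(x_obs));
    x_samp = (4:0.5:5)';                      % inputs of noise-free sampled values
    y_samp = sin(x_samp);                     % previously drawn, exact
    x_star = (0:0.1:6)';                      % query inputs
    sigma2 = 0.01;                            % observation noise variance

    x_cond = [x_obs; x_samp];
    y_cond = [y_obs; y_samp];

    K = eval_kernel(x_cond, x_cond);
    % The bug: noise was applied using the observed values' indices over the
    % wrong entries. Only the observed block is noisy; sampled values are exact.
    n_obs = numel(y_obs);
    K(1:n_obs, 1:n_obs) = K(1:n_obs, 1:n_obs) + sigma2 * eye(n_obs);

    K_star      = eval_kernel(x_star, x_cond);
    K_star_star = eval_kernel(x_star, x_star);

    mu     = K_star * (K \ y_cond);
    Sigma  = K_star_star - K_star * (K \ K_star');
    L      = chol(Sigma + 1e-6 * eye(numel(x_star)), 'lower');  % jitter for PSD
    sample = mu + L * randn(numel(x_star), 1);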

Timing

Old runtime:  75.7 s
New runtime:  20.7 s
Speedup:      3.6x

Still rendering black. Why?

Apparently this is an issue with OS X's AMD driver -- geometry shaders fail after the machine returns from sleep. Restarting the program solves it.


Profiling v4

Only 19 calls to construct_attachment_covariance_3, down from ~500.

78% of the runtime is spent in matrix multiplication / inversion.

The main problem is the 2500x2500 matrix from the main stem. Other curves are in the 200-500 dimension range and are fairly fast.

Nyström (2x) on curves with dimension greater than 1000 brings the bottleneck from 18 s to 11 s, but as before, results are highly sensitive to the Nyström factor, and using it without careful tuning / heuristics is risky.
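
For context, a hedged sketch of what "Nyström (2x)" could mean here, assuming the factor refers to taking every other point as a landmark (the real construct_attachment_covariance_3 interface may differ). Approximating K by K_nm * inv(K_mm) * K_nm' and solving through the Woodbury identity turns the O(n^3) solve into roughly O(n*m^2):

    % Sketch only: Nystrom low-rank approximation with 2x subsampling,
    % solving (K + sigma2*I) x = b via Woodbury. Names are illustrative.
    sqdist      = @(A, B) bsxfun(@minus, A, B').^2;
    eval_kernel = @(A, B) exp(-0.5 * sqdist(A, B));

    n = 2500;                              % e.g. the main stem
    x = linspace(0, 10, n)';
    b = sin(x);
    sigma2 = 0.01;

    m_idx = 1:2:n;                         % "2x": every other point is a landmark
    K_nm  = eval_kernel(x, x(m_idx));      % n x m cross-covariance
    K_mm  = K_nm(m_idx, :);                % m x m landmark covariance

    % Woodbury: inv(sigma2*I + K_nm * inv(K_mm) * K_nm') * b
    inner = K_mm + (K_nm' * K_nm) / sigma2 ...
            + 1e-8 * eye(numel(m_idx));    % jitter: inner is ill-conditioned
    xhat  = b / sigma2 - (K_nm * (inner \ (K_nm' * b))) / sigma2^2;

The sensitivity noted above shows up here: if the landmarks undersample part of the curve, inner becomes ill-conditioned and the approximation degrades quietly.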

Probably exploiting the Markov nature of the curve GP is the answer: linear runtime vs. cubic.

Markov within-curve sampling

Break the output indices into blocks. For each block, get the observation Markov blanket. Combine the observed data and ancestor data into a covariance matrix and data vector, then sample.

Needed three covariance matrices: data vs. data (K), model vs. model (K_star_star), and model vs. data (K_star).
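
A minimal sketch of that scheme, with all names and the blanket rule invented for illustration (the log leaves the blanket criterion open; see the theoretical note below). Each block is sampled from a small local system built from the three covariances just named, so the total cost is linear in the number of blocks rather than cubic in the curve dimension:

    % Sketch only: blocked within-curve sampling with a crude distance-based
    % Markov blanket. Identifiers and constants are illustrative.
    sqdist      = @(A, B) bsxfun(@minus, A, B').^2;
    eval_kernel = @(A, B) exp(-0.5 * sqdist(A, B));

    x_obs  = sort(10 * rand(40, 1));            % observation inputs
    y_obs  = sin(x_obs) + 0.05 * randn(40, 1);  % noisy observed values
    x_out  = linspace(0, 10, 400)';             % output indices to sample
    sigma2 = 0.0025;
    blk    = 50;                                % block size
    radius = 1.5;                               % crude blanket radius

    y_out = zeros(size(x_out));
    for i = 1:blk:numel(x_out)
        idx = i:min(i + blk - 1, numel(x_out));
        xb  = x_out(idx);

        % Observation Markov blanket: observations near this block...
        near = abs(x_obs - mean(xb)) < radius + (max(xb) - min(xb)) / 2;
        % ...plus the most recently sampled ("ancestor") outputs.
        anc  = max(1, i - blk):(i - 1);

        x_c = [x_obs(near); x_out(anc)];
        y_c = [y_obs(near); y_out(anc)];

        K = eval_kernel(x_c, x_c);                  % data vs. data
        n_noisy = nnz(near);                        % only observations are noisy
        K(1:n_noisy, 1:n_noisy) = K(1:n_noisy, 1:n_noisy) + sigma2 * eye(n_noisy);
        K = K + 1e-9 * eye(size(K, 1));             % numerical jitter only

        K_star      = eval_kernel(xb, x_c);         % model vs. data
        K_star_star = eval_kernel(xb, xb);          % model vs. model

        mu    = K_star * (K \ y_c);
        Sigma = K_star_star - K_star * (K \ K_star');
        L     = chol(Sigma + 1e-6 * eye(numel(xb)), 'lower');
        y_out(idx) = mu + L * randn(numel(xb), 1);
    end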

Opportunity for optimizing construct_attachment_covariance_3 - evaluate all sampling-pair covariances at once. Currently 289 combinations (i.e. calls to eval_kernel) per dataset; this could be reduced to one.
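
If the 289 comes from pairwise combinations of 17 curves (17^2 = 289 -- an assumption on my part), the batching could look like this: concatenate every curve's points, call eval_kernel once, and slice each pair's block out by index bookkeeping. Names other than eval_kernel are invented:

    % Sketch only: one kernel evaluation over all curves at once instead of
    % one call per curve pair. Identifiers are illustrative.
    sqdist      = @(A, B) bsxfun(@minus, A, B').^2;
    eval_kernel = @(A, B) exp(-0.5 * sqdist(A, B));

    curves = arrayfun(@(k) rand(randi([5 10]), 1), 1:17, 'UniformOutput', false);
    x_all  = vertcat(curves{:});
    K_all  = eval_kernel(x_all, x_all);        % one call instead of 289

    % Recover any pair's covariance block from the big matrix.
    len    = cellfun(@numel, curves);
    ends   = cumsum(len);
    starts = ends - len + 1;
    K_pair = @(i, j) K_all(starts(i):ends(i), starts(j):ends(j));

    K_5_9 = K_pair(5, 9);                      % e.g. curves 5 and 9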

Seems like we can save computation on the model covariance matrices by exploiting (a) the fact that the covariance is always between points in the same view, and (b) the fact that only the view changes between calls; the spatial indices stay the same.
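
One way (a) and (b) could cash out, assuming the kernel factors into a view part and a spatial part -- a separability assumption that construct_attachment_covariance_3 may or may not satisfy. If it does, the spatial factor over the fixed spatial indices is computed once, and every per-view call reduces to a cheap rescale:

    % Sketch only, under an assumed separable kernel
    % k((v,s), (v,s')) = k_view(v) * k_space(s, s') within a single view.
    sqdist  = @(A, B) bsxfun(@minus, A, B').^2;
    k_space = @(S) exp(-0.5 * sqdist(S, S));
    k_view  = @(v) 1 + 0.1 * v;               % toy per-view scale factor

    s       = linspace(0, 1, 300)';           % spatial indices: same every call
    K_space = k_space(s);                     % computed once, cached

    K_model_for_view = @(v) k_view(v) * K_space;   % per-call: scalar multiply

    K5 = K_model_for_view(5);                 % model covariance for view 5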


Implementing...


Theoretical issues with the within-curve Markov assumption: if the Markov blanket is chosen crudely, your samples could drift aimlessly until they reach observations in an unexpected place. In other words, extrapolating, or sampling from weak data, is a mistake. Need a good criterion for when to stop taking on more evidence.
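
One candidate criterion, offered as a guess rather than a resolution: grow the blanket outward and stop as soon as the next-nearest observation's prior correlation with the block falls below a threshold, so weakly-coupled evidence never drives what would effectively be extrapolation. Everything below is illustrative (the unit-variance toy kernel makes kernel values equal correlations):

    % Sketch only: distance-ordered blanket growth with a correlation cutoff.
    sqdist      = @(A, B) bsxfun(@minus, A, B').^2;
    eval_kernel = @(A, B) exp(-0.5 * sqdist(A, B));  % unit variance => correlations

    x_blk  = (4.0:0.1:4.5)';                % block being sampled
    x_cand = sort(10 * rand(30, 1));        % candidate observation inputs
    tau    = 0.05;                          % minimum useful correlation

    d = min(abs(bsxfun(@minus, x_cand, x_blk')), [], 2);  % distance to block
    [~, order] = sort(d);

    blanket = [];
    for j = order'
        c = eval_kernel(x_cand(j), x_blk);  % prior correlation with block points
        if max(c) < tau
            % Valid stopping point because candidates are distance-ordered
            % and this kernel decays monotonically with distance.
            break;
        end
        blanket(end + 1, 1) = j;            %#ok<AGROW>
    end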

Posted by Kyle Simek