Since the skeletons are very different between views, the old graph-matching approach probably won't work. I believe the graph deformation idea is still a good one, but the likelihood involving correspondences won't work because it doesn't handle false-positive or false-negative parts of the graph (spurious branches in one view, or branches missing from the other).
A better idea is to go back to our old approach: use a pixel-based likelihood to evaluate deformed graphs. Use MCMC to explore the space, and use the GP prior covariance to shape the proposals or search directions. The Cholesky decomposition of the covariance is a one-time O(n^3) cost; after that, each proposal is just a matrix-vector product, so sampling should proceed relatively quickly even for large point sets.
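A minimal sketch of such a sampler, assuming the deformation is a flat vector x with GP prior covariance K and that some log_posterior function wraps the pixel likelihood (both names are placeholders, not settled interfaces):

```python
import numpy as np
from scipy.linalg import cholesky

def gp_mh_sampler(x0, K, log_posterior, n_iter=10000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings with proposals shaped by the GP
    prior covariance K."""
    rng = np.random.default_rng() if rng is None else rng
    # One-time O(n^3) cost; the jitter keeps the factorization stable.
    L = cholesky(K + 1e-8 * np.eye(len(K)), lower=True)
    x, lp = x0.copy(), log_posterior(x0)
    samples = []
    for _ in range(n_iter):
        # x' = x + step * L z, z ~ N(0, I): correlated like the prior,
        # so each proposal costs only one matrix-vector product.
        x_prop = x + step * (L @ rng.standard_normal(len(x)))
        lp_prop = log_posterior(x_prop)
        # Symmetric proposal, so the Hastings ratio is the posterior ratio.
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```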
The model is the medial axis transform of the original image, plus a GP-distributed deformation that incorporates an epipolar constraint. To render into the second image, we apply the inverse medial axis transform to the deformed skeleton.
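As a concrete sketch of the round trip, here are forward and inverse medial axis transforms built on scikit-image's medial_axis; the library choice and the brute-force disk rendering are my assumptions, not settled decisions:

```python
import numpy as np
from skimage.morphology import medial_axis

def mat(binary_image):
    """Forward MAT: skeleton points plus the maximal-disk radius at each."""
    skel, dist = medial_axis(binary_image, return_distance=True)
    ys, xs = np.nonzero(skel)
    return np.column_stack([ys, xs]), dist[skel]

def inverse_mat(points, radii, shape):
    """Inverse MAT: the union of disks centered on the (deformed) skeleton
    points reconstructs a silhouette to score against the observed image.
    Naive O(N*H*W) rendering; fine for a sketch."""
    out = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[: shape[0], : shape[1]]
    for (y, x), r in zip(points, radii):
        out |= (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2
    return out
```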
Because of the epipolar constraint, the number of effective dimensions is nearly halved: each 2D skeleton point is confined to its epipolar line in the second view, leaving a single degree of freedom per point. The correlation between nearby points should reduce the effective dimension by an even larger factor.
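To make the halving concrete, assuming a fundamental matrix F between the two views (F is not given in these notes), each deformed point can be parametrized by a single scalar along its epipolar line:

```python
import numpy as np

def epipolar_line(F, x1):
    """Epipolar line l = F x1 in the second image, normalized so the
    line normal (l[0], l[1]) has unit length."""
    l = F @ np.append(x1, 1.0)
    return l / np.hypot(l[0], l[1])

def point_on_line(l, t):
    """One scalar t recovers the 2D point: one degree of freedom, not two."""
    a, b, c = l
    p0 = np.array([-a * c, -b * c])  # closest point on the line to the origin
    d = np.array([-b, a])            # unit direction along the line
    return p0 + t * d
```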
Training:
- Train the GP on ground-truth data (try adding runners between almost-bridged points).
- Train a Bernoulli distribution over foreground/background pixels (sketched below).
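A sketch of the second item; the training setup, where p_fg and p_bg are estimated from ground-truth mask / observed-foreground pairs, is my assumption:

```python
import numpy as np

def train_bernoulli(gt_masks, observations):
    """Estimate P(pixel observed fg | model fg) and P(pixel observed fg |
    model bg) from ground-truth mask / observation pairs (boolean arrays)."""
    gt = np.concatenate([m.ravel() for m in gt_masks])
    obs = np.concatenate([o.ravel() for o in observations])
    return obs[gt].mean(), obs[~gt].mean()

def log_likelihood(rendered, observed, p_fg, p_bg):
    """Pixel-based Bernoulli log-likelihood of the observed foreground
    given the rendering of a deformed graph (both boolean arrays)."""
    p = np.where(rendered, p_fg, p_bg)
    return np.sum(np.where(observed, np.log(p), np.log1p(-p)))
```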
Algorithms:
- binary image to graph
- all-pairs shortest path (both sketched below)
- polyline w/ interpolated intensity
- [x] medial axis transform
- [x] inverse medial axis transform
- specialized MCMC
- adaptive Metropolis-Hastings
- hit-and-run sampling?
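A sketch of the first two items, assuming 8-connectivity between skeleton pixels and Euclidean step weights; SciPy's sparse shortest_path then gives all-pairs geodesic distances:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

def skeleton_to_graph(skel):
    """Binary image to graph: nodes are foreground pixels, edges join
    8-neighbors, weights are Euclidean step lengths (1 or sqrt(2))."""
    ys, xs = np.nonzero(skel)
    index = {(y, x): i for i, (y, x) in enumerate(zip(ys, xs))}
    W = lil_matrix((len(index), len(index)))
    for (y, x), i in index.items():
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                j = index.get((y + dy, x + dx))
                if j is not None and j != i:
                    W[i, j] = np.hypot(dy, dx)
    return W.tocsr(), np.column_stack([ys, xs])

# All-pairs geodesic distances along the skeleton, e.g. for the GP kernel:
# W, points = skeleton_to_graph(skel)
# D = shortest_path(W, method="D", directed=False)
```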
Geodesic distance notes (both sketched below):
- use a weaker distance function (Huber function?)
- make geodesic distance greater at junctions
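One possible reading of these notes: pass a junction-penalized geodesic distance through a Huber-style function before a squared-exponential kernel. Whether the result stays positive definite would need checking, and delta, length, and the junction cost are all placeholders:

```python
import numpy as np

def huber(d, delta):
    """Quadratic near zero, linear in the tail: large geodesic distances
    are down-weighted, so far-apart points decorrelate more gently."""
    return np.where(d <= delta, 0.5 * d ** 2, delta * (d - 0.5 * delta))

def kernel(D_geo, junctions_crossed, sigma2=1.0, length=20.0, junction_cost=10.0):
    """Squared-exponential-style kernel on a junction-penalized,
    Huber-weakened geodesic distance. D_geo: all-pairs geodesic distances;
    junctions_crossed: junction count along each shortest path."""
    d = D_geo + junction_cost * junctions_crossed  # paths through junctions count as longer
    return sigma2 * np.exp(-huber(d, delta=length) / length ** 2)
```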
Posted by Kyle Simek