[Work Log] Branching curve clique-tree

August 19, 2013
Project Tulips
Subproject Data Association v2
Working path projects/tulips/trunk/src/matlab/data_association_2
Unless otherwise noted, all filesystem paths are relative to the "Working path" named above.

Thinking about attachments.

  1. Organize into connected components.

Reviewing the ML code, with an eye toward generalizing it for attachments.

Can we exploit the structure in the branching-prior covariance matrix to speed up inversion in the marginal likelihood? No obvious way.

Why did we abandon the clique-tree method? Because it wasn't clear that the perturb-models would be compatible with it. Now it seems like they might be, as long as we set the Markov order high enough. More importantly, it might be necessary: after adding attachments, the covariance matrix is growing too large to allow direct evaluation.

Construct a clique tree per track, conditioned on the parent point. In the case of base curves, the parent point is the prior.


New fields

Inferring branch distance

The reconstructed lateral branch curves usually have gaps of several millimeters from their parent branch. Assigning an index of zero to the first point will almost certainly result in a worse marginal likelihood, because these gaps are not well modelled.

Instead, we should attempt to infer the branch distance (or, better yet, try to marginalize over it).
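If we do marginalize, one option (a sketch only; the prior over \(d\) and the grid are undecided) is quadrature over a grid of candidate distances: \[ p(y) = \int p(y \mid d)\, p(d)\, \mathrm{d}d \;\approx\; \sum_i w_i\, p(y \mid d_i), \qquad w_i \propto p(d_i) \] A sketch of this grid evaluation appears after the next two paragraphs.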

Spent some time trying to derive a "fast update" for the covariance matrices when inferring branch distance. In this scheme, the matrices are computed assuming the branch distance is zero, and an easily computed delta matrix is then added to account for a non-zero branch distance.

After some work, it looks like the smoothing variance kernel is too complicated to permit such a method. It's easier to just recompute the entire covariance matrix.
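To make the recompute-per-candidate idea concrete, a minimal MATLAB sketch of the grid marginalization above. The kernel, spacing, and noise model here are toy placeholders, not the actual smoothing kernel from the codebase:

    % Sketch: marginalize over branch distance by brute-force recomputation.
    % smoothing_cov, seg_len, noise_var, and the grid are toy placeholders.
    smoothing_cov = @(s, t) exp(-0.5 * bsxfun(@minus, s(:), t(:)').^2);
    y         = randn(10, 1);              % toy child-curve data (one coordinate)
    seg_len   = ones(9, 1);                % toy arc-length spacing between points
    noise_var = 0.1;

    d_grid = linspace(0, 5, 21);           % candidate gaps (mm)
    log_ml = zeros(size(d_grid));
    for i = 1:numel(d_grid)
        t = d_grid(i) + cumsum([0; seg_len]);    % indices shifted by the gap
        C = smoothing_cov(t, t) + noise_var * eye(numel(y));
        L = chol(C, 'lower');                    % full recompute; no fast update
        a = L \ y;
        log_ml(i) = -0.5 * (a' * a) - sum(log(diag(L))) ...
                    - 0.5 * numel(y) * log(2 * pi);
    end
    % log-sum-exp over the grid (uniform prior on d)
    m = max(log_ml);
    log_evidence = m + log(mean(exp(log_ml - m)));

For \(m\) grid points this is \(m\) full Cholesky factorizations; that's the cost of giving up on the fast update.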

Constructing branching clique-tree

  1. Construct individual clique trees, assuming zero position variance (but including branch distance). Call this the "raw prior."
  2. Given a branch-point index, compute the Markov blanket of the data points on the parent.
  3. Given the raw prior and Markov blanket, construct the conditional clique node from the raw clique node.
  4. Given the clique tree, multiply and marginalize from tips to root (see the sketch below).
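A rough driver for steps 1-4, as it might look in MATLAB. Every function and field name below is hypothetical; none of this exists in data_association_2 yet:

    % Hypothetical driver for steps 1-4; all function names are placeholders.
    raw_prior   = cell(numel(tracks), 1);
    clique_tree = cell(numel(tracks), 1);
    for k = 1:numel(tracks)
        % Step 1: raw prior with zero position variance but non-zero branch distance.
        raw_prior{k} = build_raw_clique_tree(tracks(k));
    end
    for k = 1:numel(tracks)
        if has_parent(tracks(k))
            % Step 2: Markov blanket of the parent's data points at the branch index.
            mb = compute_markov_blanket(tracks(k).parent, tracks(k).branch_index);
            % Step 3: conditional clique node from the raw clique node.
            clique_tree{k} = condition_clique_tree(raw_prior{k}, mb);
        else
            % Base curve: the parent point is the prior itself.
            clique_tree{k} = raw_prior{k};
        end
    end
    % Step 4: multiply and marginalize messages from tips to root.
    log_ml = sum_product_to_root(clique_tree, tracks);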

All cliques will now be stored in the track structure (currently named Corrs).

How to construct conditional prior?

The branch point distribution is given by the GP predictive distribution: \[ p(x_b \mid y_\mathcal{MB}) = \mathcal{N}(\mu_b, \Sigma_b) \] where \[ \begin{aligned} \mu_b &= K_*^\top \left(K_{\mathcal{MB}} + S^{-1}\right)^{-1} y_\mathcal{MB} \\ \Sigma_b &= K_b - K_*^\top \left(K_{\mathcal{MB}} + S^{-1}\right)^{-1} K_* \end{aligned} \] Here, \(K_*\) is the kernel of the branch index vs. the Markov-blanket indices, \(k(t_\mathcal{MB}, t_b)\).
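A quick MATLAB sanity check of the predictive equations; k_fun, t_mb, t_b, y_mb, and Sinv below are toy stand-ins, not quantities from the codebase:

    % Sanity check of the branch-point predictive distribution (eqs. above).
    k_fun = @(s, t) exp(-0.5 * bsxfun(@minus, s(:), t(:)').^2);  % placeholder kernel
    t_mb  = [1; 2; 3];                 % Markov-blanket indices on the parent
    t_b   = 2.4;                       % branch-point index
    y_mb  = randn(3, 1);               % parent data points (one coordinate)
    Sinv  = 0.1 * eye(3);              % the S^{-1} term from the equations

    K_mb   = k_fun(t_mb, t_mb);        % kernel among blanket indices
    K_star = k_fun(t_mb, t_b);         % k(t_MB, t_b), a column vector here
    K_b    = k_fun(t_b, t_b);

    W       = (K_mb + Sinv) \ [y_mb, K_star];   % one solve, two right-hand sides
    mu_b    = K_star' * W(:, 1);                % predictive mean
    Sigma_b = K_b - K_star' * W(:, 2);          % predictive covariance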

The conditional child-curve distribution is

\[ p(x_C \mid x_b) = \mathcal{N}\!\left( \left( \begin{array}{c} I \\ I \\ \vdots \\ I \end{array} \right) x_b,\; \Sigma_c \right) \] Here, \(\Sigma_c\) is the covariance of the zero-mean raw curve \(x_c\), whatever it may be.
The marginal over the child curve arises from the linear combination of the random variables \(x_c\) and \(x_b\): \[ x_C = x_c + \left( \begin{array}{c} I \\ I \\ \vdots \\ I \end{array} \right) x_b \] The corresponding distribution is: \[ p(x_C \mid y_\mathcal{MB}) = \mathcal{N}(\mu_C, \Sigma_C) \] where \[ \begin{aligned} \mu_C &= \left( \begin{array}{c} I \\ I \\ \vdots \\ I \end{array} \right) \mu_b \\ \Sigma_C &= \Sigma_c + \left( \begin{array}{c} I \\ I \\ \vdots \\ I \end{array} \right) \Sigma_b \left( I \; I \; \cdots \; I \right) \end{aligned} \]
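And the lift to the full child curve in MATLAB, building the stacked-identity matrix with repmat; the dimensions and covariances here are toy values:

    % Sketch: lift the branch-point uncertainty to the whole child curve.
    % n, d, Sigma_c, mu_b, and Sigma_b below are toy values.
    n = 5;  d = 3;                         % child points, point dimension
    Sigma_c = eye(n * d);                  % placeholder raw curve covariance
    mu_b    = randn(d, 1);                 % branch-point predictive mean
    Sigma_b = 0.2 * eye(d);                % branch-point predictive covariance

    A = repmat(eye(d), n, 1);              % the stacked (I; I; ...; I) matrix
    mu_C    = A * mu_b;                    % every child point shifted by mu_b
    Sigma_C = Sigma_c + A * Sigma_b * A';  % shared branch-point uncertainty

Every child point shares the same \(\Sigma_b\), so the branch-point uncertainty enters \(\Sigma_C\) as a fully correlated block added to \(\Sigma_c\).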
Posted by Kyle Simek