Visualized results after capping likelihood variance. As expected, the degree of spreading stops growing as perturb_rate_variance continues to grow.
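The cap amounts to a simple clamp on the variance parameter; a minimal sketch of the pattern (the cap value and variable name here are illustrative, not taken from the actual code):

```matlab
% Clamp the likelihood variance so the spread of the likelihood
% saturates once perturb_rate_variance grows past the cap.
MAX_LIKELIHOOD_VARIANCE = 1e-2;   % hypothetical tuning constant
likelihood_variance = min(likelihood_variance, MAX_LIKELIHOOD_VARIANCE);
```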
...
Tasks
Long-term goals
...
Visualizing curve direction revealed a bizarre artifact: most curves start somewhere in the middle of the reconstructed curve!
...
Found the issue: points weren't being sorted by index when reconstructing.
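The fix is just to order the reconstructed points by their curve indices before plotting; a hypothetical sketch of the pattern (variable names assumed):

```matlab
% Sort reconstructed points by their original curve indices so the
% curve is traversed start-to-end rather than in storage order.
[~, order] = sort(curve_indices);
curve_points = curve_points(:, order);
```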
Modified tr_curves_ml to exclude the background curve ML from the computation. Recall that the normal ML computation code doesn't actually return the ML, but the ratio of the foreground curve ML to the background curve ML. This indicates how much the model improves over the "null model".
Since the background curve ML is constant during training, this shouldn't affect results. However, if you want to confirm the correctness of tr_curves_ml against the reference implementation in curve_ml5.m, you'll need to manually divide by a constant. See the documentation for tr_curves_ml for more details.
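Concretely, the comparison looks something like the sketch below. This is a hedged illustration of the relationship, not the actual code; the call signatures and the background_ml variable are assumptions:

```matlab
% tr_curves_ml returns the foreground curve ML divided by the background
% ("null model") ML. To compare against the reference implementation,
% divide the reference's raw ML by that same constant background ML.
ml_ratio = tr_curves_ml(curves, model);              % ratio returned by training code
ml_ref   = curve_ml5(curves, model) / background_ml; % background_ml: constant during training
% ml_ratio and ml_ref should now agree
```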
Investigating the effect of reversing curves.
Visually determined which curves were reversed. See the modified version of test/tmp_visualize_test.
Hacked train/tr_construct_matrices.m with a hard-coded list of curves to flip. Re-ran training for the IND-Perturb model.
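The hack is just reversing the point order for the flagged curves; the pattern looks roughly like this (the flip list, cell-array layout, and variable names are placeholders, not the actual contents of tr_construct_matrices.m):

```matlab
% Hard-coded indices of curves judged (visually) to be reversed.
CURVES_TO_FLIP = [3 7 12];   % placeholder list
for i = CURVES_TO_FLIP
    % Reverse point order so the curve runs in the canonical direction.
    curves{i} = curves{i}(:, end:-1:1);
end
```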
Hypothesis: we should see larger values for perturb_rate_variance and/or perturb_smoothing_variance, and smaller values for perturb_position_variance.
Results:
Model:
smoothing_variance: 0.0020
noise_variance: 0.0720
position_variance: 1.3414e+04
rate_variance: 0.2378
perturb_smoothing_variance: 3.3860e-41
perturb_rate_variance: 1.5332e-06
perturb_position_variance: 0.4662
Final ML: -95.736042
Compare against old results:
Model:
smoothing_variance: 0.0019
noise_variance: 0.0718
position_variance: 1.6706e+04
rate_variance: 0.2135
perturb_smoothing_variance: 3.3860e-41
perturb_rate_variance: 1.4942e-06
perturb_position_variance: 0.4886
Final ML: -97.463243
Summary of changes:
smoothing_variance: +2.09%
noise_variance: +0.17%
position_variance: -19.7%
rate_variance: +11.3%
perturb_smoothing_variance: 0
perturb_rate_variance: +2.61%
perturb_position_variance: -4.59%
As expected, global position variance dropped; perturb rate grew while perturb position variance decreased.
Unexpected increase in rate_variance; expected it to stay constant. Possibly due to random fluctuations; both old and new values (0.214 and 0.238, respectively) are near the theoretical optimum (0.23, see next section).
Also unexpected small increase in global smoothing variance (expected to be constant); also possibly due to random fluctuations.
Literally no change to perturb smoothing variance (3.3860e-41 in both runs). I'm starting to suspect something weird is going on with this value...
Was curious what the rate variance should be, assuming the rate vectors are drawn from a uniform distribution over the unit sphere.
Determined empirically that rate variance should be somewhere between 0.220 and 0.235. Code below.
% generate 10,000 3-vectors with random directions
dir = rand(3,10000);
% normalize to lie on unit sphere
dir = bsxfun(@times, dir, 1./sum(dir.^2));
% Get the empirical covariance of the vectors
Sigma = cov(dir')
Sigma =
0.2105 0.0556 0.0580
0.0556 0.2297 0.0680
0.0580 0.0680 0.3735
% take the average of the diagonals
mean(diag(Sigma))
ans =
0.2290
This strongly suggests that the global rate variances we've seen in training are consistent with the theoretical value. Great!
Attempting to visualize perturbations between views.
First attempt: tweak test/test_visualize_test to only display points from a particular view. Doesn't work great, because only part of the plant is visible in each view, and those parts differ between views.
Next attempt: tweak curve_max_posterior.m to define a canonical index set for the curve, and then reconstruct it for each view.
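The idea is to fix one index set spanning the whole curve and reconstruct at those indices under each view, so the reconstructions correspond point-for-point and the perturbations can be compared directly. A rough sketch with assumed function signatures and variable names:

```matlab
% Define a canonical index set spanning the full curve once...
canonical_idx = linspace(0, 1, n_points);
% ...then reconstruct the curve at those indices for each view, so the
% i-th reconstructed point corresponds across views.
for v = 1:num_views
    recon{v} = curve_max_posterior(curves_by_view{v}, model, canonical_idx);
end
```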
More tomorrow...
Posted by Kyle Simek