Doing background research regarding the forwarded email from Kobus, "Fwd: What we really need from your students".
Paper 1: Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform, by Clark et al.
Introduces a semi-automated method for semantic reconstruction of root structures from turntable images, using our lab's definition of semantic reconstruction, i.e. 3D curves with topology and part labels. It looks like they're using standard voxel carving to identify foreground and background voxels (i.e. root and non-root voxels). This approach uses calibrated cameras to backproject silhouette images and take their intersection; in other words, a visual hull, but with voxels instead of polygonal meshes. The skeleton of the foreground voxels is then extracted using a median-filter method, yielding 3D curves. Skeleton branches are then manually labeled by domain experts as one of several root types. There also seems to be some functionality to manually correct errors during the backprojection and skeleton-extraction phases.
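The paper doesn't spell out implementation details, but the carving step they describe is standard enough to sketch. The snippet below is my own illustration in numpy, not their code; the grid resolution, input format (binary silhouette masks plus 3x4 projection matrices from the calibrated cameras), and all names are assumptions.

```python
import numpy as np

def carve_voxels(silhouettes, projections, grid_min, grid_max, resolution=64):
    """Voxel-carving sketch: keep voxels whose centers project inside every silhouette.

    silhouettes: list of HxW boolean masks (True = root / foreground)
    projections: list of 3x4 camera projection matrices (assumed calibrated)
    grid_min, grid_max: 3-vectors bounding the working volume
    """
    # Regular grid of voxel centers inside the bounding box (homogeneous coords).
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    centers = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(centers.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        h, w = mask.shape
        proj = centers @ P.T                      # project voxel centers into this view
        uv = proj[:, :2] / proj[:, 2:3]           # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros_like(occupied)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                           # intersection across all views

    return occupied.reshape(resolution, resolution, resolution)
```

The intersection across views is what makes this a visual hull: a voxel survives only if every silhouette claims it.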
The key contribution seems to be a list of 27 features derived from the resulting 3D data. Bushiness, centroid, and volume distribution seem to be discriminative for classifying a specimen between the two species under study. They also measure the amount of helical curvature and how much gravity affects growth, which apparently had not been studied in rice plants prior to this(?).
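For a sense of what the simpler features involve computationally, here is a toy example of two quantities one could derive directly from the carved voxel grid. The definitions below are my guesses at plausible formulations, not the paper's actual formulas; most of the 27 features would instead need the full labeled skeleton.

```python
import numpy as np

def simple_voxel_features(occupied, voxel_size=1.0):
    """Illustrative only: rough volume and centroid-depth features from a
    boolean (X, Y, Z) occupancy grid, with Z increasing downward (depth).
    These are assumed definitions, not the paper's.
    """
    coords = np.argwhere(occupied)                             # indices of root voxels
    volume = coords.shape[0] * voxel_size ** 3                 # total root volume estimate
    centroid_depth = coords[:, 2].mean() / occupied.shape[2]   # mean depth, normalized to [0, 1]
    return {"volume": volume, "centroid_depth": centroid_depth}
```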
Extracting most of the interesting features requires a full semantic reconstruction, which is very difficult to obtain using known fully-automatic methods. Further, this approach requires calibrated cameras, which likely precludes us from using it for post-hoc analysis of datasets that may already exist in Bisque, unless calibration data is available.
The "Clark Rice root" image provided on the wiki is a high resolution 2D image, which appears to be different from those used in the Clark paper, so it's unclear what its relevance is in this context.
Other notes
It's unclear how silhouettes are extracted, but it's likely just intensity thresholding with a manually chosen threshold value. They're using a lightbox background, so this approach seems sensible. There are probably some tunable parameters for the skeleton-extraction phase, but these aren't discussed.
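Assuming that's the case, the silhouette step could be as simple as the following (my own guess, not from the paper; the threshold value and the cleanup step are arbitrary):

```python
import numpy as np
from scipy import ndimage

def extract_silhouette(image_gray, threshold=200):
    """Hypothetical silhouette extraction: against a bright lightbox background
    the root pixels are darker, so threshold on intensity. The threshold value
    would be chosen manually per imaging setup."""
    mask = image_gray < threshold
    # Small morphological opening to knock out isolated noise pixels.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))
    return mask
```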
Posted by Kyle Simek