Changes between Version 18 and Version 19 of u/erica/Amira


Timestamp: 11/11/15 16:01:30 (9 years ago)
Author: Erica Kaminski

= 11/7/15 - Disperse algorithm =

After reading through the Disperse documentation, it is clear that Disperse is better suited than Amira for identifying filaments. This is because Disperse considers each cell in the grid in relation to the others, whereas Amira simply applies a black/white mask to the data.

Here is a brief outline of the Disperse algorithm:

Disperse goes through the data cell by cell and identifies critical points (maxima, minima, and saddle points). It assigns a value, called the 'critical index', to each type of critical point, as given in the table below (a toy classification sketch follows the table):

|| '''Critical point''' || '''Critical index''' ||
|| Max || 2 ||
|| Saddle point || 1 ||
|| Min || 0 ||
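
To make the classification concrete, here is a toy sketch that tags each interior cell of a 2D array by comparing it with its 8 neighbors. The neighbor-ring test is an illustration only; Disperse identifies its critical points through a discrete Morse-theory construction, not a direct comparison like this.

{{{#!python
import numpy as np

def critical_index(density):
    """Tag each interior cell of a 2D array with a critical index.

    Returns -1 for a regular point, 0 for a min, 1 for a saddle, 2 for a max.
    This is only a comparison against the 8 neighbors, not the discrete
    Morse-theory construction Disperse actually uses.
    """
    ny, nx = density.shape
    index = np.full((ny, nx), -1)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            c = density[j, i]
            # the 8 neighbors, walked in order around the cell
            ring = [density[j - 1, i], density[j - 1, i + 1],
                    density[j, i + 1], density[j + 1, i + 1],
                    density[j + 1, i], density[j + 1, i - 1],
                    density[j, i - 1], density[j - 1, i - 1]]
            if all(c > v for v in ring):
                index[j, i] = 2                      # local max
            elif all(c < v for v in ring):
                index[j, i] = 0                      # local min
            else:
                # 4 or more above/below alternations around the ring
                # indicate a saddle (a regular point has exactly 2)
                signs = [v > c for v in ring]
                if sum(signs[k] != signs[k - 1] for k in range(8)) >= 4:
                    index[j, i] = 1                  # saddle
    return index
}}}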

After it does this, it considers pairs of critical points whose critical indices differ by exactly 1 (i.e. saddle point + max, or saddle point + min). These pairs are termed 'persistence pairs'.

The persistence of a pair is the difference in value between its 2 critical points. So if the points measure density, say, the persistence threshold asks: is the difference in density at the 2 critical points of the pair greater or less than the threshold? If it is less, that pair is thrown out; if it is greater, it is kept.
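
As a sketch of this filtering step, assuming the pairs are stored as simple (saddle value, extremum value) tuples (an illustrative layout, not Disperse's internal format), with the mean density as the threshold:

{{{#!python
import numpy as np

def filter_pairs(pairs, density):
    """Keep persistence pairs whose value contrast exceeds the threshold.

    `pairs` is assumed to be a list of (saddle_value, extremum_value)
    tuples; the layout is illustrative, not Disperse's internal format.
    Taking the mean density as the threshold discards every pair whose
    density contrast falls below the mean.
    """
    threshold = density.mean()
    # persistence of a pair = |difference of the values at its 2 points|
    return [(s, e) for (s, e) in pairs if abs(e - s) > threshold]
}}}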

The idea is that pairs with lower 'persistence' (i.e. below the persistence threshold) are topologically weak structures, meaning they would not persist once some noise is added to the data set: even weak noise would disrupt the extrema in the pair so that they may no longer be extrema, and the pair would be destroyed. Filtering the data on persistence is a way of keeping only the topologically relevant structures in the data set. Using the 'mean density' as the persistence threshold essentially gets rid of all pairs whose contrast is below the mean density (density is strictly positive).

Between all the persistence pairs that survive, arcs are drawn connecting each saddle point to the 2 extrema attached to it (each saddle point is connected to exactly 2 extrema). Arcs are curves everywhere tangent to the gradient field of the data set. ''Filaments are, by default, arcs that connect a saddle point with 2 maxima''.
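
For intuition, here is a rough sketch of how such an arc could be traced by hand: start just off a saddle point and repeatedly step along the local gradient until it vanishes at an extremum. The step size, stopping rule, and nearest-cell gradient lookup are all assumptions for illustration; Disperse integrates its arcs on its own cell complex, not like this.

{{{#!python
import numpy as np

def trace_arc(density, start, step=0.5, max_steps=10000):
    """Walk uphill from `start` (a point just off a saddle) along the gradient.

    The path is everywhere tangent to the (discretized) gradient field and
    stops where the gradient vanishes, i.e. near a local maximum. Gradients
    are sampled at the nearest grid cell, which is crude but keeps it short.
    """
    gy, gx = np.gradient(density)
    path = [np.asarray(start, dtype=float)]
    for _ in range(max_steps):
        j = int(np.clip(round(path[-1][0]), 0, density.shape[0] - 1))
        i = int(np.clip(round(path[-1][1]), 0, density.shape[1] - 1))
        g = np.array([gy[j, i], gx[j, i]])
        norm = np.linalg.norm(g)
        if norm < 1e-12:              # gradient ~ 0: an extremum was reached
            break
        path.append(path[-1] + step * g / norm)
    return path
}}}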

= 11/6/15 - Using Amira with the mean density to create filaments =

= 10/6/15 - Segmentation masks in Amira =

Amira segments the data in order to do skeletonization. This means it throws out pixels below a certain user-defined threshold. From the remaining pixels it 1) finds a centerline through the data, 2) positioned through the middle of each structure, and then 3) thins this line to be 1 pixel across. This is all done to retain the "homotopy" of the filamentary network. This is not ideal for column density data: as the threshold is lowered to account for lower density pixels, the structures get blown out and the skeletonization algorithm breaks down. Here are some images that gradually increase the threshold to illustrate this (smaller to larger thresholds):
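
This mask-then-thin pipeline can be imitated outside of Amira; here is a minimal sketch using scikit-image, assuming the column density is already in a 2D numpy array (this mirrors the steps described above, not Amira's actual code):

{{{#!python
from skimage.morphology import skeletonize

def amira_style_skeleton(column_density, threshold):
    """Mimic the segmentation + thinning steps described above.

    1) mask: keep only pixels above the user-defined threshold;
    2-3) thin the mask down to a 1-pixel-wide centerline.
    skimage's skeletonize preserves the topology (homotopy) of the mask,
    which is the same property Amira is trying to retain.
    """
    mask = column_density > threshold        # step 1: segmentation
    return skeletonize(mask)                 # steps 2-3: centerline + thinning
}}}

Lowering `threshold` in this sketch reproduces the breakdown described above: the mask merges into large blobs and the resulting skeleton stops tracing individual filaments.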
     
= 10/5/15 =

The "auto" skeletonization method appears to take a threshold as an input. This threshold seems to 'mask' (or 'segment') the data so as to only look at voxels with values above the threshold. It then does a 'centerline' analysis, a 'distance map' analysis, and finally a 'thinning' of the skeleton so that it is only 1 voxel across. The paper on this algorithm is attached to this page. It is also possible to do these 3 steps on your own in Amira, and thus gain more control over the skeletonization process.
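
The 'distance map' step in particular can be reproduced by hand; here is a minimal sketch with scipy, assuming the data sits in a numpy array (illustrative only, not the module Amira exposes):

{{{#!python
from scipy.ndimage import distance_transform_edt

def distance_map(data, threshold):
    """Distance-map step of the skeletonization pipeline, done by hand.

    Voxels above the threshold form the mask; each masked voxel gets the
    Euclidean distance to the nearest voxel outside the mask. Centerlines
    run along the ridges (local maxima) of this map, and the thinning step
    then reduces those ridges to 1 voxel across.
    """
    mask = data > threshold
    return distance_transform_edt(mask)
}}}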

Here are some results from testing different thresholds in auto skeletonization mode and comparing them to Federrath's figure (the paper that contains this figure is attached to this page).

[[Image(compare5.png, 50%)]]

[[Image(compare2.png, 50%)]]

[[Image(compare1.png, 50%)]]

[[Image(comparept01.png, 50%)]]

There is a lot of clear filamentary structure that Amira is missing.