LJSCTC can be provided as a prebuilt executable for specific tasks, such as running a CTC dataset; this is generally how we proceed for CTC submissions. The only requirement is a MATLAB runtime matching the version the executable was built with. We also use a CUDA-based image processing toolkit, the Hydra Image Processor: https://git-bioimage.coe.drexel.edu/opensource/hydra-image-processor.
...
...
BF-C2DL-HSC_training_01 -- dataset is BF-C2DL-HSC training movie 01
.h5 -- image data for the movie
/cacheDenoise -- folder with non-local means denoised images (cached here for performance)
Note that LEVERJS is mostly unsupervised; for most use cases we can validate on either training or testing data. The exception is the new support for training our two segmentation parameters. For now, we optimize these against the DET measure computed from the ground truth. This works reasonably well, but is generally outperformed by manually choosing sensible parameters for each type of movie.
The two parameters are ```minimumRadius_um``` and ```sensitivity```. See src/getSegParams.m for details.
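The optimization described above can be sketched as a simple grid search that scores each candidate parameter pair with the DET measure. This is a minimal illustration in Python, not the actual implementation (which is MATLAB, in src/getSegParams.m); `evaluate_det` is a hypothetical stand-in for segmenting the movie and running the CTC DET evaluation against ground truth.

```python
from itertools import product

def evaluate_det(minimum_radius_um, sensitivity):
    # Placeholder scoring function. The real pipeline would segment
    # the movie with these parameters and compute the CTC DET measure
    # against the ground-truth annotations.
    return 1.0 - 0.05 * abs(minimum_radius_um - 4.0) - 0.2 * abs(sensitivity - 0.5)

def get_seg_params(radii_um, sensitivities):
    """Return the (minimumRadius_um, sensitivity) pair maximizing DET."""
    return max(product(radii_um, sensitivities),
               key=lambda pair: evaluate_det(*pair))

best = get_seg_params([2.0, 4.0, 6.0], [0.25, 0.5, 0.75])
```

Because DET is evaluated per movie, a coarse grid like this is usually enough; as noted above, a hand-picked pair per movie type often does better.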
Because we work with both training and testing movies from the CTC, our internal path layout is a bit complicated: the CTC keeps the two separate in its directory tree, while we allow our internal .LEVER files to co-exist all in one folder, so getting all the paths mapped is tricky. See src/get_ljsctc.m for details. Reach out if you run into trouble...
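A minimal sketch of the flat-folder naming, assuming the `<dataset>_<split>_<movie>` pattern seen in the BF-C2DL-HSC_training_01 example above. The real mapping lives in src/get_ljsctc.m; `lever_path` is a hypothetical helper shown only to illustrate how separate CTC trees can collapse into one .LEVER folder.

```python
import os

def lever_path(lever_root, dataset, split, movie):
    # Collapse the CTC's separate training/testing trees into one
    # flat folder by encoding dataset, split, and movie number
    # (zero-padded, as in BF-C2DL-HSC_training_01) in the filename.
    name = f"{dataset}_{split}_{movie:02d}.LEVER"
    return os.path.join(lever_root, name)

p = lever_path("/data/lever", "BF-C2DL-HSC", "training", 1)
# p -> "/data/lever/BF-C2DL-HSC_training_01.LEVER"
```

With this convention, every movie has a unique filename regardless of which CTC subtree it came from, so training and testing results can live side by side.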