    LEVERSC: Cross-Platform Scriptable Multichannel 3-D Visualization for Fluorescence Microscopy Images (API Documentation)

    Architecture

    The LEVERSC visualization tool is a Node.js application for visualizing multichannel 3-D volumetric data using WebGL. LEVERSC binds a local HTTP server port to communicate with image processing tools. LEVERSC currently provides plugins for ImageJ, Python, and MATLAB; additional plugins for KNIME and Julia are planned.

    This architecture is flexible and supports fast, cross-platform communication with any image processing environment that can issue HTTP POST/GET requests. A detailed API breakdown follows, along with example usage from Python and MATLAB.

    Application Programming Interface (API)

    Ports

    LEVERSC figure windows are represented by port bindings, beginning at port 3001 for figure 1, port 3002 for figure 2, and so on.
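
    As a minimal sketch, the figure-to-port mapping can be computed on the client side (the helper name below is illustrative, not part of any plugin):

    def leversc_port(figure_number):
        """Port bound by the LEVERSC window for the given (1-based) figure number."""
        return 3000 + figure_number

    leversc_port(1)  # 3001
    leversc_port(2)  # 3002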

    /info (GET)

    This request returns a JSON response, which is currently simply {"leversc":"electron"}. It provides a quick check that LEVERSC is bound and running on the expected port. In the future this response may include the LEVERSC build version for plugin verification.
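
    For example, a quick client-side check (sketched in Python, assuming the requests package; the function name is illustrative) might look like:

    import requests

    def is_leversc_running(figure_number=1, timeout=0.5):
        """Return True if a LEVERSC figure window responds correctly on its port."""
        port = 3000 + figure_number
        try:
            resp = requests.get("http://localhost:%d/info" % port, timeout=timeout)
            return resp.ok and resp.json().get("leversc") == "electron"
        except requests.RequestException:
            return False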

    /loadfig (POST)

    This request posts a complete volume to the LEVERSC figure window.

    The /loadfig request is a multipart HTTP POST (Content-Type: multipart/form-data) consisting of a header part and one or more lbins parts.

    Header Part

    The header part is a javascript object notation (JSON) encoded object representation of the metadata associated with the volume data (Content-Type: application/json). At a minimum the header must contain the following fields (values shown below are taken from the sample data):

    {
        "Dimensions": [512,512,50], // [x,y,z] dimensions of the volume
        "NumberOfChannels": 3, // Number of channels in the image data
        "NumberOfFrames": 1, // Must always be 1 for leversc data
        "PixelPhysicalSize": [0.37 0.37 0.6] // Size [x,y,z] of each voxel in a physical unit (e.g. microns). NOTE: This field is OPTIONAL (default: [1,1,1]), but required for proper visualization of anisotropic data.
    }

    The following optional fields may also be included to simplify the visualization setup, but can also be modified later through the user interface:

    {
        "ChannelNames":["Histone 2B"], // Name of each channel in the data
        "ChannelColors": [[1,0,0]] // Color for each channel
    }

    While the header can appear anywhere in the request, it is best practice to place the header part first.
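
    As an illustrative sketch (assuming numpy and a volume arranged as (x, y, z, channels); the shipped plugins may use a different axis order), the header part could be built like this:

    import json
    import numpy as np

    def build_header(volume, voxel_size=(1.0, 1.0, 1.0)):
        """Build the /loadfig header JSON for a volume shaped (x, y, z, channels)."""
        dims_x, dims_y, dims_z, num_channels = volume.shape
        header = {
            "Dimensions": [dims_x, dims_y, dims_z],
            "NumberOfChannels": num_channels,
            "NumberOfFrames": 1,                    # must always be 1 for leversc data
            "PixelPhysicalSize": list(voxel_size),  # optional; needed for anisotropic data
        }
        return json.dumps(header)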

    LBins Parts

    The LEVERJS binary (LBin) file type is a very simple format built to be easily converted to a 3-D texture for fast sampling on the graphics processing unit (GPU). The LBin structure format is described later in this section.

    One or more LBin structures must be sent as individual binary parts (Content-Type: application/octet-stream), each with the part name lbins. By convention each LBin part is given a consecutive filename beginning with lbin0, lbin1, etc. Order is important when processing LBins, and the first LBin structure should appear first in the request. As detailed below, a single image volume may require more than one LBin structure; this splitting is handled on the client side (e.g. MATLAB, Python, ImageJ), and LEVERSC expects to receive all necessary LBin structures along with the header in a single multipart HTTP request. See e.g. LBinProvider.m and leversc.py for implementation examples of client-side conversion from image volume data to LBin structures.

    The LBin format is an 8-bit texture-packed format, meaning that each LBin can support no more than 4 image channels (red, green, blue, alpha), and that each channel must be converted to a bit-depth of 8-bits per voxel. Images with more than 4 channels must be split into multiple LBin structures. This significantly increases sampling efficiency when raycasting on the GPU, at the cost of rearranging image data into LBin structures when sending to LEVERSC.

    The table below outlines the LBin structure format, which consists of a simple binary header (4 unsigned short fields), followed by unsigned 8-bit image data.

    LBin Binary Structure Format

    | Field | Offset [bytes] | Size [bytes] | Type | Description |
    | --- | --- | --- | --- | --- |
    | num_channels | 0 | 2 | uint16 | The number of channels (1 to 4) contained in this LBin structure. NOTE: this is not necessarily the same as the number of channels in the full image volume. |
    | dims_x | 2 | 2 | uint16 | The size of the x-dimension in the image; must be the same as header.Dimensions[0]. |
    | dims_y | 4 | 2 | uint16 | The size of the y-dimension in the image; must be the same as header.Dimensions[1]. |
    | dims_z | 6 | 2 | uint16 | The size of the z-dimension in the image; must be the same as header.Dimensions[2]. |
    | im8_data | 8 | num_channels * dims_x * dims_y * dims_z | uint8 | The unsigned 8-bit image data, linearly arranged in column-major ordering (c,x,y,z). This is similar to bitmap ordering and is the expected format for 3-D texture data. |
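
    Putting the header and LBin parts together, the following Python sketch (assuming numpy and requests, an already 8-bit volume shaped (x, y, z, channels), and little-endian packing of the uint16 header fields, which the table does not specify) illustrates the client-side conversion; LBinProvider.m and leversc.py are the reference implementations:

    import struct
    import numpy as np
    import requests

    def pack_lbin(vol8):
        """Pack an 8-bit sub-volume shaped (x, y, z, c) with c <= 4 into one LBin blob."""
        dims_x, dims_y, dims_z, num_channels = vol8.shape
        header = struct.pack("<4H", num_channels, dims_x, dims_y, dims_z)
        # Reorder to (c, x, y, z) and flatten column-major so the channel index varies fastest.
        data = vol8.transpose(3, 0, 1, 2).tobytes(order="F")
        return header + data

    def post_loadfig(volume, header_json, figure_number=1):
        """Split channels into groups of at most 4 and POST header + LBin parts to /loadfig."""
        num_channels = volume.shape[3]
        parts = [("header", ("header", header_json, "application/json"))]
        for i, start in enumerate(range(0, num_channels, 4)):
            blob = pack_lbin(volume[..., start:start + 4])
            parts.append(("lbins", ("lbin%d" % i, blob, "application/octet-stream")))
        url = "http://localhost:%d/loadfig" % (3000 + figure_number)
        requests.post(url, files=parts).raise_for_status()

    # Example usage with the build_header sketch above:
    # post_loadfig(volume8, build_header(volume8, voxel_size=(0.37, 0.37, 0.6)))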

    /renderParams (GET/POST)

    This request sends or receives the volume parameters (channel colors and transfer functions) which control the volumetric visualization.

    The request/response payload is a JSON array representing colors and transfer functions per-channel (one object for each channel):

    [
        // Per-channel entry (default values are shown below)
        {
            "bVisible": true, // Show channel in rendering
            "alpha": 1, // Channel opacity for blending with other channels
            "dark": 0, // Minimum cutoff for dark values anything below "dark" will map to 0
            "medium": 0.5, // Curve midpoint intensity (0.5 is linear)
            "bright": 1, // Saturation for bright values anything above "bright" will map to 1
            "color": [1,0,0], // Channel color (red is default for channel 1)
            "name": "Channel 1", // Channel display name
        },
        // ... entries for channels 2 to num_channels
    ]
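
    For instance, a script can fetch the current parameters, adjust a channel, and post the array back (a sketch assuming the requests package; field names as documented above):

    import requests

    url = "http://localhost:3001/renderParams"   # figure 1
    params = requests.get(url).json()

    params[0]["color"] = [0, 1, 0]        # recolor channel 1 green
    params[0]["dark"] = 0.1               # suppress dim background voxels
    if len(params) > 1:
        params[1]["bVisible"] = False     # hide channel 2

    requests.post(url, json=params)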

    /screenCap (GET)

    This request retrieves a screen capture image from the LEVERSC figure window. It can be used to render high-quality scripted movies from Python or MATLAB.

    The response payload is a PNG image of the captured window.
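
    A minimal sketch (assuming the requests package) that saves a capture to disk:

    import requests

    png_bytes = requests.get("http://localhost:3001/screenCap").content
    with open("capture.png", "wb") as f:
        f.write(png_bytes)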

    /strDB/:strDB

    This request can be used to visualize an on-disk LEVER file rather than an image file; the :strDB argument must be a URL-encoded, fully qualified path to the .LEVER file.
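
    As a sketch (assuming the requests package and standard percent-encoding of the path; the file path below is hypothetical):

    import requests
    from urllib.parse import quote

    lever_file = "D:/data/sample.LEVER"   # hypothetical path to an on-disk LEVER file
    requests.get("http://localhost:3001/strDB/" + quote(lever_file, safe=""))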

    /uiParams (GET/POST)

    This request sends or receives the display status of interface view elements (UI elements) such as the view sidebar, toolbar buttons, and the scale bar. This is generally used for clearing interface elements during movie rendering.

    The request/response payload is a JSON object containing UI element selections:

    // Default values for fields are given below
    {
        "sidebar": "block", // HTML display style of the UI sidebar ("none" to disable)
        "webToolbar": "block", // HTML display style of the UI toolbar buttons ("none" to disable)
        "logoDiv": "block", // HTML display style of the LEVER logo image ("none" to disable)
        "clockButton": "block", // HTML display style of the UI clock button ("none" to disable)
        "time": 1, // Frame number displayed in the UI clock element
    }
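
    For movie rendering, a script typically hides all of these elements before capturing frames (a sketch assuming the requests package):

    import requests

    hide_ui = {
        "sidebar": "none",
        "webToolbar": "none",
        "logoDiv": "none",
        "clockButton": "none",
    }
    requests.post("http://localhost:3001/uiParams", json=hide_ui)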

    /viewParams (GET/POST)

    This request sends or receives the camera and sampling plane parameters that control the view of the volume.

    The request/response payload is a JSON object containing view parameters:

    // Default values for fields are given below
    {
        "zoom": 0, // Amount of camera zoom between 0 (zoomed out) and 1 (fully zoomed in), affects the camera field of view.
        "pos": [0,0,-5], // Camera position in world coordinates (only used for pan, see worldRot for rotating the volume)
        "worldRot": [1,0,0,0,  // 4x4 world rotation matrix. See the movie
                     0,1,0,0,  // making example for an example of programmatic
                     0,0,1,0,  // control of the rotation.
                     0,0,0,1],
        "clipMode": 0, // Sampling plane mode (0,1, or 2)
                    //   0 - No clipping
                    //   1 - Clip front (display data behind the clipping plane)
                    //   2 - Slice clipping (Sample only the plane intersection slice)
        "planeCenter": [xDim/2,yDim/2,zDim/2], // Sample plane location in image coordinates. NOTE: this is a point on the plane, the plane orientation is determined by the view direction (worldRot).
        "bgColor": [0.4,0.4,0.4,1], // Canvas background color (including alpha)
        "volColor": [0,0,0], // Volume background color
    }
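
    Combining /viewParams with /screenCap allows scripted turntable movies. The sketch below assumes the requests package, a rotation about the y-axis, and row-major flattening of the 4x4 matrix (both assumptions); see the movie-making example in the repository for the authoritative approach:

    import math
    import requests

    base = "http://localhost:3001"   # figure 1

    def y_rotation(theta):
        """Flattened 4x4 rotation about the y-axis (row-major flattening assumed)."""
        c, s = math.cos(theta), math.sin(theta)
        return [ c, 0, s, 0,
                 0, 1, 0, 0,
                -s, 0, c, 0,
                 0, 0, 0, 1]

    view = requests.get(base + "/viewParams").json()
    for frame in range(36):                              # 10-degree steps, one full revolution
        view["worldRot"] = y_rotation(math.radians(10 * frame))
        requests.post(base + "/viewParams", json=view)
        png = requests.get(base + "/screenCap").content
        with open("frame_%03d.png" % frame, "wb") as f:
            f.write(png)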