A detailed API breakdown follows, as well as example usage from Python and MATLAB.
## Ports
LEVERSC *Figure* windows are represented by port bindings, beginning at port 3001 for figure 1 and port 3002 for figure 2, etc.
## ```/info (GET)```
This request returns a JSON response, which is currently simply ```{"leversc":"electron"}```. This is used to provide a quick check that LEVERSC is bound and running on the expected port. In the future, this response may include the LEVERSC build version for plugin verification.
## ```/loadfig (POST)```
This request posts a complete volume to the LEVERSC figure window.
# LEVERSC: Cross-Platform Scriptable Multichannel 3-D Visualization for Fluorescence Microscopy Images
# Developing a LEVERSC Plugin
This section provides a concrete example of implementing a very basic Python module to send multichannel 3-D NumPy array data to the LEVERSC tool via the HTTP request API. Though this is a simplified introductory example, similar code is implemented in the [LEVERSC Python interface](../src/Python/leversc.py); refer to the ```show``` method for more general and robust plugin code.
## Minimal API Implementation
The LEVERSC API provides many HTTP requests for controlling visualizations, but only the [```/loadfig```](api.md#loadfig-post) request must be implemented in order to send data to the LEVERSC application. It is also recommended that the [```/info```](api.md#info-get) request be implemented to check that the LEVERSC app is bound to the expected port. The Python, MATLAB, and ImageJ plugins all use [```/info```](api.md#info-get) to quickly identify whether the LEVERSC application has been launched as expected.
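As an illustration, a pre-flight check against [```/info```](api.md#info-get) might look like the following sketch; the ```leversc_is_running``` helper name, default port, and timeout are illustrative choices for this example, not part of the official plugins.
```Python
import requests

def leversc_is_running(port=3001):
    """Return True if a LEVERSC figure window responds on the given port."""
    try:
        resp = requests.get("http://127.0.0.1:%d/info" % port, timeout=1)
        # Expect the JSON response {"leversc":"electron"} described in the API section
        return resp.ok and resp.json().get("leversc") == "electron"
    except requests.exceptions.RequestException:
        return False
```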
The [```/loadfig```](api.md#loadfig-post) request is a multipart HTTP request that requires the data to be in the correct texture-packed binary format, preceded by a minimal JSON header with image metadata.
## Setup Image Metadata
Image metadata can be easily represented as a Python dictionary, which is also simple to serialize to JSON. There are several optional fields in the image metadata, but the required fields are: ```Dimensions```, ```PixelPhysicalSize```, and ```NumberOfChannels```. The number of channels and dimensions can be inferred directly from the image data. The physical size of voxels must be taken from the imaging characteristics of the microscope (here we assume equal-sized voxels ```[1,1,1]```).
```Python
# Setup some default image information
# This use of dims[] assumes that im is at least 4-D and f-contiguous
dims = im.shape
imD = {"Dimensions": dims[:3],
       "NumberOfChannels": dims[3],
       "PixelPhysicalSize": [1,1,1],
       "PixelFormat": "uint8"}
header_json = json.dumps(imD)
```
## Convert Images to ```uint8```
Input images must be sent as a single ```uint8``` value per channel per pixel. In Python this is a fairly simple conversion using NumPy:
```Python
# Convert im to uint8
# Compute maximum of each channel (assuming f-contiguous numpy layout)
chmax = np.amax(np.amax(np.amax(im, axis=0, keepdims=True), axis=1, keepdims=True), axis=2, keepdims=True)
# Divide each channel by its maximum to normalize, then multiply by 255 and quantize to uint8
im8 = ((255.0 * im) / chmax).astype("uint8")
```
**NOTE: This code assumes the image data ```im``` is already in column-major order (an f-contiguous NumPy view).**
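If the source data starts out in a more typical NumPy ordering, it can be transposed and copied into the expected layout first. The line below is a sketch assuming a hypothetical ```im_zcyx``` array in (z,c,y,x) order, mirroring the rearrangement applied to the cells3d dataset in the full example at the end of this section.
```Python
# im_zcyx is a hypothetical (z,c,y,x) array; transpose to (x,y,z,c) and copy as f-contiguous
im = np.asfortranarray(np.transpose(im_zcyx, (3, 2, 0, 1)))
```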
## Arrange Image Data in Texture-Packed Form
In order to sample image data as quickly as possible on the GPU, LEVERSC packs image data into as few RGBA textures as possible. This significantly improves interactivity, but it does require arranging the image data in the proper format when sending it to the LEVERSC application. Specifically, at most 4 channels can fit in a single RGBA texture, and the channel dimension must be laid out as 4 (or fewer) contiguous RGBA bytes per pixel. This is accomplished by the ```_im_to_lbin``` method in [leversc.py](../src/Python/leversc.py); a somewhat simplified version is presented here.
```Python
def im_to_lbin(im8, pidx, npack):
    """
    Given the full 8-bit image (x,y,z,c), the pack index (pidx), and the maximum
    pack size (npack=4), return a correctly arranged RGBA/RGB/RG/R texture built
    from the appropriate sub-image.
    """
    dims = im8.shape
    # Compute channel to start pack from
    choffset = pidx * npack
    # Compute size of pack: npack (4) or fewer
    chsize = min(dims[3] - choffset, npack)
    # Get subimage
    imsub = im8[:, :, :, choffset:(choffset + chsize)]
    # Compute simple binary size of lbin (including 4 16-bit header fields)
    lbin_size = 4*2 + np.prod(dims[:3])*chsize
    # Create the output byte array
    outbytes = bytearray(lbin_size)
    # Pack header (pack_channel_count, x_size, y_size, z_size)
    struct.pack_into("!HHHH", outbytes, 0, chsize, dims[0], dims[1], dims[2])
    # Get a writable byte view just past the header
    imout = np.frombuffer(memoryview(outbytes)[(4*2):], "uint8")
    # Pack subimage with re-arranged (channel-interleaved, column-major) data into the output array
    imout[:] = np.reshape(np.transpose(imsub, (3, 0, 1, 2)), -1, order='F')
    return outbytes
```
## Implementing ```/loadfig```
With the data conversion and arrangement functions above, the ```/loadfig``` call can be easily sent using the Python [Requests](https://docs.python-requests.org/) library.
```Python
# Compute required number of textures to store data
count_packs = math.ceil(dims[3] / 4)
# Multipart-post request formed as list
multipart = [("header", (None, header_json, "application/json"))]
for i in range(count_packs):
    multipart.append(("lbins", ("lbin%d" % i, im_to_lbin(im8, i, 4), "application/octet-stream")))
# Send request (127.0.0.1:3001)
resp = requests.post(url="http://127.0.0.1:3001/loadfig", files=multipart)
```
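If the post succeeds, the volume is rendered in the LEVERSC figure window bound to port 3001; the returned ```resp``` object can be inspected (for example via ```resp.ok``` or ```resp.status_code```) to confirm the request was accepted.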
## Full source code
**NOTE: For this source code to execute properly, the Requests library will need to be installed and the LEVERSC application will need to already be running locally; see the [readme](../readme.md) for details on installing and running LEVERSC. The scikit-image library will also be required if the cells3d example dataset is to be used.**
```Python
import math
import json
import struct
import requests
import numpy as np
from skimage.data import cells3d
def im_to_lbin(im8, pidx, npack):
    """
    Given the full 8-bit image (x,y,z,c), the pack index (pidx), and the maximum
    pack size (npack=4), return a correctly arranged RGBA/RGB/RG/R texture built
    from the appropriate sub-image.
    """
    dims = im8.shape
    # Compute channel to start pack from
    choffset = pidx * npack
    # Compute size of pack: npack (4) or fewer
    chsize = min(dims[3] - choffset, npack)
    # Get subimage
    imsub = im8[:, :, :, choffset:(choffset + chsize)]
    # Compute simple binary size of lbin (including 4 16-bit header fields)
    lbin_size = 4*2 + np.prod(dims[:3])*chsize
    # Create the output byte array
    outbytes = bytearray(lbin_size)
    # Pack header (pack_channel_count, x_size, y_size, z_size)
    struct.pack_into("!HHHH", outbytes, 0, chsize, dims[0], dims[1], dims[2])
    # Get a writable byte view just past the header
    imout = np.frombuffer(memoryview(outbytes)[(4*2):], "uint8")
    # Pack subimage with re-arranged (channel-interleaved, column-major) data into the output array
    imout[:] = np.reshape(np.transpose(imsub, (3, 0, 1, 2)), -1, order='F')
    return outbytes
# Load the scikit-image cells3d example dataset
cells = cells3d()
# Rearrange the dimensions into the expected (x,y,z,c) order as a column-major numpy array
im = np.copy(np.transpose(cells, [3, 2, 0, 1]), order='F')
# Alternatively, a simple random two-channel (Red/Green) image can be used as example data:
# im = np.asfortranarray(np.random.rand(128, 128, 30, 2))
# Setup some default image information
# This use of dims[] assumes that im is at least 4-D and f-contiguous
dims = im.shape
imD = {"Dimensions": dims[:3],
       "NumberOfChannels": dims[3],
       "PixelPhysicalSize": [1,1,1],
       "PixelFormat": "uint8"}
header_json = json.dumps(imD)
# Convert im to uint8
# Compute maximum of each channel (assuming f-contiguous numpy layout)
chmax = np.amax(np.amax(np.amax(im, axis=0, keepdims=True), axis=1, keepdims=True), axis=2, keepdims=True)
# Divide each channel by its maximum to normalize, then multiply by 255 and quantize to uint8
im8 = ((255.0 * im) / chmax).astype("uint8")
# Compute required number of textures to store data
count_packs = math.ceil(dims[3] / 4)
# Multipart-post request formed as list
multipart = [("header", (None, header_json, "application/json"))]
for i in range(count_packs):
    multipart.append(("lbins", ("lbin%d" % i, im_to_lbin(im8, i, 4), "application/octet-stream")))
# Send request (127.0.0.1:3001)
resp = requests.post(url="http://127.0.0.1:3001/loadfig", files=multipart)
```