
Neuroglancer on Colab

Neuroglancer is a WebGL-based viewer for volumetric data, in popular use within the neuroscience community and the state of the art for such tools. The codebase started inside Google and was open sourced under a liberal license (Apache License 2.0).

This notebook explores getting Neuroglancer running on Colab. Part of that involves UI work. Part of it will probably involve transforming the brightfield's raw data into Neuroglancer's precomputed format.

Neuroglancer is a "WebGL-based viewer for volumetric data" in popular use within the neuroscience community. It is the state of the art. The codebase started inside Google and was open sourced (Apache License 2.0).

Getting Neuroglancer running on Colab might happen in multiple stages:

  • An iframe pointing at code and data hosted off Colab
  • Code on Colab: install neuroglancer on the Colab VM and serve from there (see the sketch after this list)
    • There are Colab-specific ways of getting data from the server to the client
  • Generate data on the Colab VM and view it in the Colab UI
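
As a sketch of the middle stage: the Python neuroglancer package can be installed and served from the Colab VM. This is untested here, so treat it as an assumption; serve_kernel_port_as_iframe is one of the Colab-specific mechanisms alluded to above.

!pip install neuroglancer

import urllib.parse

import neuroglancer
from google.colab import output

# Bind to localhost; Colab proxies kernel ports through to the browser.
neuroglancer.set_server_bind_address('127.0.0.1')

viewer = neuroglancer.Viewer()
with viewer.txn() as s:
    # Placeholder layer: public Kasthuri demo data, not this project's data.
    s.layers['image'] = neuroglancer.ImageLayer(
        source='precomputed://gs://neuroglancer-public-data/kasthuri2011/image')

# Proxy the viewer's port into an iframe in the notebook output.
port = urllib.parse.urlparse(viewer.get_viewer_url()).port
output.serve_kernel_port_as_iframe(port, height=700)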

For the case of generating data, the target format would be [Neuroglancer precomputed](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed). The generated files could be saved off to... somewhere, say, GitHub (max 100 GB per repo). Optionally, they could just be served up and explored, then thrown away some hours later.
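
To make the generation step concrete, here is a minimal sketch using the third-party cloud-volume library, a common tool for writing precomputed volumes. The library choice, shapes, and resolution here are all assumptions, not project decisions.

!pip install cloud-volume

import numpy as np
from cloudvolume import CloudVolume

# Placeholder volume standing in for the brightfield raw data.
data = np.random.randint(0, 255, size=(256, 256, 64), dtype=np.uint8)

info = CloudVolume.create_new_info(
    num_channels=1,
    layer_type='image',
    data_type='uint8',
    encoding='raw',
    resolution=[6, 6, 30],      # nm per voxel; placeholder values
    voxel_offset=[0, 0, 0],
    chunk_size=[64, 64, 64],
    volume_size=data.shape,
)
vol = CloudVolume('file:///tmp/precomputed/image', info=info)
vol.commit_info()               # writes the top-level precomputed info JSON
vol[:, :, :] = data             # writes the chunked image data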

Status

As of 2020-02-02, work on this hasn't gone beyond confirming that, in some fashion, Neuroglancer can be embedded into Jupyter notebooks running on Colab.

Skeleton survey

Neuroglancer is a visualizer, not a segmentation/skeletonization algorithm; in this project it is being used purely for visualization.

Point-cloud rendering in Neuroglancer has been happening since at least 2018-11. Skeletons seem to have been in the mix since 2016: "it supports a wide variety of data sources and is capable of displaying arbitrary (non axis-aligned) cross-sectional views of volumetric data, as well as 3-D meshes and line-segment based models (skeletons)."

So, because of its technical features and its license, Neuroglancer is being used as the visualization client for this project. Also note that Allen Brain (Collman) is contributing code to Neuroglancer.

The skeleton code lives in neuroglancer/src/neuroglancer/skeleton/ on GitHub (https://github.com/google/neuroglancer/tree/master/src/neuroglancer/skeleton) and covers both the write and read paths. An excerpt from the SWC-decoding test (SWC columns are id, type, x, y, z, radius, parent):

const swcStr = '# Generated by NeuTu (https://github.com/janelia-flyem/NeuTu)\n\
1 0 4145 3191 3575 2 -1\n\
2 0 4149 3195 3579 3.65685 1\n\
3 0 4157 3195 3583 6.94427 2\n\
4 0 4161 3195 3591 3.65685 3\n\
5 0 4165 3199 3595 2 4\n';

describe('skeleton/decode_swc_skeleton', () => {
  it('decodes simple line skeleton', () => {
    // ...

Support forum, 2017-02: someone asking the NG group for a skeleton demo.
Hi Daniel,

Unfortunately I can't point you to an example for skeleton visualization. However, you could use the code related to SkeletonSource in these two files:

https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/brainmaps/frontend.ts
https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/brainmaps/backend.ts

as an example. The existing skeleton infrastructure assumes that you have a collection of skeletons with associated uint64 ids. In the "backend" code (which runs in the WebWorker), you would simply need to decode the SWC file into a Float32Array of vertex positions and a Uint32Array of pairs of edge indices into the vertex positions array, which should be assigned to chunk.vertexPositions and chunk.indices respectively.

Currently the skeleton rendering is quite simple and does not support radius information, although that would be a valuable addition.

As an update, Alex Weston at Janelia previously wrote this code for parsing SWC files: https://github.com/janelia-flyem/neuroglancer/commit/27b7e0fa6133ecf22d293eb650f7d0c46b10f624 and integrated support for it into the DVID datasource: https://github.com/janelia-flyem/neuroglancer/commit/05aa3bcb33a6657c6fd5d1a945822304a8d075c6

She hasn't yet had a chance to make it into a pull request, but she says you can feel free to do so if you'd like to use it for something that you want to contribute.
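
The decoding described in the first reply is simple enough to sketch outside Neuroglancer. Here is a hedged Python version of the same idea, for the NeuTu-style SWC above (illustrative only; the real backend code is TypeScript, and these names are not Neuroglancer's API):

import numpy as np

def decode_swc(swc_text):
    """Decode SWC text into float32 vertex positions and uint32 edge pairs."""
    positions, edges, index_of_id = [], [], {}
    for line in swc_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        # SWC columns: id, type, x, y, z, radius, parent.
        node_id, _, x, y, z, _radius, parent_id = line.split()[:7]
        index_of_id[node_id] = len(positions)
        positions.append((float(x), float(y), float(z)))
        if parent_id != '-1':
            # Assumes parents precede children, as in the NeuTu example.
            edges.append((index_of_id[parent_id], index_of_id[node_id]))
    # Analogous to chunk.vertexPositions and chunk.indices in the reply above.
    return (np.asarray(positions, dtype=np.float32),
            np.asarray(edges, dtype=np.uint32))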

Simple iframe in %%html magic

The first little piggie's house is just an iframe in a %%html magic cell.

Note that this has issues with focus and scrolling, which can be reduced simply by zooming the font size down.

%%html
<iframe
  width="100%" height="700px"
  src="https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22y%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22z%22:%5B3.0000000000000004e-8%2C%22m%22%5D%7D%2C%22position%22:%5B5523.99072265625%2C8538.9384765625%2C1198.0423583984375%5D%2C%22crossSectionScale%22:3.7621853549999242%2C%22projectionOrientation%22:%5B-0.0040475670248270035%2C-0.9566215872764587%2C-0.22688281536102295%2C-0.18271005153656006%5D%2C%22projectionScale%22:4699.372698097029%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image%22%2C%22name%22:%22original-image%22%2C%22visible%22:false%7D%2C%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected%22%2C%22name%22:%22corrected-image%22%7D%2C%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth%22%2C%22selectedAlpha%22:0.63%2C%22notSelectedAlpha%22:0.14%2C%22segments%22:%5B%2213%22%2C%2215%22%2C%222282%22%2C%223189%22%2C%223207%22%2C%223208%22%2C%223224%22%2C%223228%22%2C%223710%22%2C%223758%22%2C%224027%22%2C%22444%22%2C%224651%22%2C%224901%22%2C%224965%22%5D%2C%22name%22:%22ground_truth%22%7D%5D%2C%22layout%22:%224panel%22%7D"
  ></iframe>
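
The same embed can be done without %%html, via IPython's display machinery. A usage sketch (the bare demo origin opens an empty viewer; in practice the full #! state URL from the cell above would go in src):

from IPython.display import IFrame

IFrame(src='https://neuroglancer-demo.appspot.com/', width='100%', height=700)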