Human in the viz-loop

On 14 December 2016, at Computing Insight UK with over 250 delegates and suppliers present, I had the opportunity to discuss areas of human-in-the-loop visualisation within HPC; a few tweets on the vis presentation have appeared (so far). The specific example of the IMAT beamline was used, but the past four years of usage were considered. Thanks to Srikanth Nagella, Erica Yang and Martin Turner; Peter Oliver, Callum Williams, Joe Kelleher, Genoveva Burca, Triestino Minniti and Winfried Kockelmann.

Abstract for CIUK Presentation

Energy selective imaging detectors with over 10 MP sensors have been incorporated within large science laboratories, allowing high-quality analysis of materials at the micro- (10⁻⁶ m) or nano- (10⁻⁹ m) scale, but creating a large data problem. A service for neutron analysis is being offered by the IMAT (Imaging and Materials Science & Engineering) instrument at the ISIS pulsed neutron source in the UK. ULTRA is a compute-intensive HPC platform enabling high-throughput neutron tomographic image data analysis, so that images can be scrutinised during an experiment rather than as a batch-mode post-process operation.

Dataflow Problem:

Unlike normal computed tomography (CT) scans used in hospitals, where one 2D image is acquired for each ‘shot’ (a rotation angle), in energy selective neutron imaging an image stack comprising potentially thousands of 2D images is collected at each ‘shot’. So for the MCP camera, capable of collecting 3,000 images per angle, where each image uniquely corresponds to one of the 3,000 energy bands, this results in 0.3 million images during a 100-angle experiment.
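The scale of the problem can be sketched with back-of-envelope arithmetic. The energy-band and angle counts come from the text above; the detector tile size and 16-bit pixel depth are illustrative assumptions, not IMAT specifications:

```python
# Back-of-envelope data volume for an energy-selective scan.
# 3,000 energy bands and 100 angles are quoted in the text;
# the 512 x 512 tile and 16-bit pixels are assumed for illustration.
images_per_angle = 3_000          # one image per energy band
angles = 100
total_images = images_per_angle * angles
print(total_images)               # 300000 -> the "0.3 million images" above

bytes_per_image = 512 * 512 * 2   # 16-bit pixels (assumed)
total_gb = total_images * bytes_per_image / 1e9
print(round(total_gb, 1))         # ~157.3 GB per experiment under these assumptions
```

Even with these modest assumed image sizes, a single experiment produces on the order of a hundred gigabytes, which is what makes the on-demand transfer and HPC processing described below necessary.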

New Materials Science Analysis:

The reason this is carried out is that neutron interactions can vary drastically with neutron energy for certain materials, allowing for chemistry discrimination [2]. The number of neutrons able to penetrate through a material and reach the image detectors (the neutron intensity) is strongly affected by the crystalline structure and microstructure of the material, exhibiting Bragg edges [1]. The computational reconstruction needs to be near-interactive, as each peak represents a potential energy-band region suitable for HPC reconstruction.
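A minimal sketch of the Bragg-edge effect, assuming a simple Beer-Lambert transmission model with a step change in attenuation at the edge wavelength (all numbers below are hypothetical, chosen only to show the shape of the effect, not real material data):

```python
import math

# Transmission falls exponentially with the material's attenuation
# coefficient. A Bragg edge appears as a step: coherent scattering from
# a lattice plane with spacing d_hkl switches off for wavelengths above
# 2 * d_hkl, so more neutrons get through just above the edge.

def transmission(wavelength_A, thickness_cm, d_hkl_A=2.0):
    # Step-change attenuation coefficient in cm^-1 (assumed values).
    mu = 0.5 if wavelength_A < 2 * d_hkl_A else 0.3
    return math.exp(-mu * thickness_cm)

below = transmission(3.9, 1.0)   # just below the Bragg edge
above = transmission(4.1, 1.0)   # just above: higher transmission
print(below < above)             # True
```

It is this step in measured intensity, differing per material and lattice plane, that makes energy-resolved imaging chemically and structurally discriminating.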

Creating a bespoke HPC engine:

A specific HPC-based analysis and visualisation technology is being employed to enable this new mode of operation. Traditionally, a typical 3D reconstruction takes minutes using the Filtered Back Projection (FBP) algorithm [3], one of the most common and fastest algorithms. However, in energy selective imaging the reconstructions need to be performed repetitively across selected energy ranges, and as signal-to-noise levels are lower, iterative algorithms, which are much slower than FBP, are required and can take hundreds of minutes to run.
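The cost difference can be seen in miniature: FBP is a single filtered back-projection pass, whereas SIRT-style iterative methods repeat a forward projection and a back projection every iteration. The sketch below uses a tiny 2×2 matrix as a stand-in for the projection operator; it is an illustration of the iteration structure, not the production IMAT code:

```python
# SIRT-style (Landweber) iteration on a toy system A x = b.
# Each loop pass does one forward projection (A x) and one back
# projection (A^T r) - roughly one FBP's worth of work per iteration,
# which is why iterative reconstruction is so much slower overall.

def sirt(A, b, iters=200, lam=0.1):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual: b - A x (forward projection)
        r = [bi - sum(A[i][j] * x[j] for j in range(n))
             for i, bi in enumerate(b)]
        # back-project the residual and take a damped step
        for j in range(n):
            x[j] += lam * sum(A[i][j] * r[i] for i in range(len(b)))
    return x

A = [[1.0, 0.5], [0.5, 1.0]]   # stand-in projection matrix
b = [2.0, 2.5]                 # stand-in measured sinogram values
x = sirt(A, b)
print([round(v, 2) for v in x])  # converges towards [1.0, 2.0]
```

The pay-off for the extra iterations is robustness to the lower signal-to-noise ratio of narrow energy bands, which is exactly the regime IMAT operates in.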

ULTRA has been constructed to receive the data on demand from the experimental facility into an STFC HPC cluster, a distance of about a mile away, and process the data directly. This allows for different interaction modes that give instantaneous feedback, for example through small MPEG movie clips, as well as final results transmitted by remote visualisation from the login node via ParaView. Using the Savu pipeline, a Python-based dataflow mechanism, different options for filtering, reconstruction and presentation can be incorporated. We will explain the specific cluster-based HPC hardware setup, which includes GPU-based login nodes designed to minimise data movement.
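The attraction of a dataflow mechanism like Savu is that each stage consumes the previous stage's output, so a filter or reconstruction option can be swapped without touching the rest of the chain. A minimal sketch of that idea, with hypothetical toy stages rather than real Savu plugins:

```python
# Illustrative plugin-chain dataflow: swap any stage independently.
# All three stage functions are hypothetical stand-ins.

def median_filter(frames):
    # per-frame median, a toy stand-in for a noise filter plugin
    return [sorted(f)[len(f) // 2] for f in frames]

def mean_reconstruct(values):
    # toy stand-in for a reconstruction plugin
    return sum(values) / len(values)

def to_preview(value):
    # toy stand-in for rendering a small movie-clip preview
    return f"preview:{value:.1f}"

def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)
    return data

stages = [median_filter, mean_reconstruct, to_preview]
result = run_pipeline([[3, 1, 2], [5, 4, 6]], stages)
print(result)  # preview:3.5
```

In the real system the stages are Savu plugins configured per experiment, which is what lets scientists change filtering or reconstruction choices between runs without rebuilding the workflow.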

For the scientists, the insights obtained through this analysis process are then used to steer the next experiment step: for example, to adjust sample positions and beam alignment, or to decide whether to use different reconstruction algorithms, parameters or image filters.

Video Outreach:

We created a video, which will be presented and described, to help others develop a better understanding of how to create this kind of hardware/software dataflow experiment. A dataset (the open-source SophiaBeads test dataset [4], a microCT dataset) is transferred, captured, reconstructed using the FBP algorithm from the TomoPy image reconstruction toolkit [6] running on a single node with 128 GB RAM and 12 CPU cores of STFC’s large HPC cluster, SCARF, and then segmented using the algorithms available in the commercial software package Avizo [5].
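The end-to-end flow shown in the video (transfer, reconstruct, segment) can be sketched as three chained steps. The real run used TomoPy’s FBP on SCARF and Avizo for segmentation; the functions below are hypothetical toy stand-ins showing only the shape of the workflow:

```python
# Toy end-to-end workflow: transfer -> reconstruct -> segment.

def transfer(remote_frames):
    # stand-in for the ~1 mile network copy to the HPC cluster
    return list(remote_frames)

def reconstruct(frames):
    # stand-in for TomoPy FBP: average the projections per pixel
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

def segment(volume, threshold=0.5):
    # stand-in for Avizo segmentation: binary threshold
    return [1 if v > threshold else 0 for v in volume]

frames = [[0.9, 0.2, 0.8], [0.7, 0.1, 0.9]]   # toy projection data
labels = segment(reconstruct(transfer(frames)))
print(labels)  # [1, 0, 1]
```

Keeping the three steps decoupled like this mirrors the video: each stage can run on the hardware best suited to it, with only the data handed between them.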

[1] T. Minniti et al., “Material analysis opportunities on the new neutron imaging facility IMAT@ISIS”, Journal of Instrumentation, Volume 11, March 2016, IOP Publishing Ltd and Sissa Medialab srl.
[2] J. Santisteban et al., “Time-of-Flight neutron transmission diffraction”, J. Appl. Cryst. 34 (2001) 289.
[3] Peter Toft, “The Radon Transform – Theory and Implementation”, Ph.D. thesis. Department of Mathematical Modelling, Technical University of Denmark, June 1996.
[4] Sophia Bethany Coban, “SophiaBeads Datasets Project Documentation and Tutorials”, April 2015, MIMS EPrint: 2015.26.
[5] Avizo 9. “Avizo User’s Guide”, FEI Visualisation Sciences Group.
[6] Doǧa Gürsoy, Francesco De Carlo, Xianghui Xiao, and Chris Jacobsen, “TomoPy: a framework for the analysis of synchrotron tomographic data”, J Synchrotron Radiat. 2014 Sep 1; 21(Pt 5): 1188–1193. DOI: 10.1107/S1600577514013939.
