Is VR ready for Vis?

I was at a showcase session in London – hosted by Jigsaw24 – with a range of the new VR headset creators (all except Sony, whose headset is IMO one of the easiest and most comfortable to wear, possibly because it is well balanced) and some hardware suppliers. There were 100+ attendees, with a mix of commercial and academic interest.


After a techy roadmap session:

  • EditShare (showing mainly 360 VR data capture and pipeline use) emphasised data storage issues;
  • HTC Vive with the new Nvidia cards for automatic optic distortion correction was good – as always, Nvidia themselves create great prototypes (see their circus games in VR, with PhysX and some tactile feedback).


An interesting sales pitch: as a full system can use up to 4 m x 3 m of space with multiple tracking nodes, there is an opportunity for Nvidia to sell a new high-end PC for a new location at home. This space is too big for many living rooms, but the garage is an ideal (when empty) location to host a VR studio to achieve the best experience – so you will need a new PC (with a high-end Pascal-architecture Quadro card!) for the garage.

A couple of interesting add-ons from HTC were shown but not demonstrated live. The first was the new wireless mode, which adds only a few ms of latency but lets you wear a headset and do back-flips and other gymnastic manoeuvres – a common occurrence in the future! The other was the HTC Vive Tracker, a small attachment you can fix to a physical device, say a baseball bat, model gun etc. Six attached to a body created a simple but effective motion-tracking system.

 

A developer panel was formed from Alchemy VR, Rewind, Halo and The Mill, discussing use cases after a few years of practice and experience. A key question was what did not work: mainly, thinking that you are creating a standard “framed” 2D/3D movie. Old ideas have to be rewritten, such as fading to start and focusing on a specific point-of-view object. You can move, but in sharper steps; you can cut when in context, although you should keep some fixed reference point; and you can experiment with exploration, but you may need to guide users, for example by turning down certain lights to draw the user’s point of view to the required place.

 

  • Cool examples to search for and download include the Everest exploration viewer – this takes the traditional vertigo effect seen in caves within immersive environments to a new cool location: halfway up Everest.
  • ‘Home’ is a spacewalk experience with parts that break good immersive practice (losing a frame of reference), but that is what ‘…being in space is like’.
  • For 360 video, the Great Barrier Reef experience is now a classic for its beauty and simplicity.
  • HTC claimed over 1,000 titles available just from last year, and this will expand.

 

So can you do visualisation within this medium? I will consider this in detail later, but as an experience I would say there is scope to use this sense of scale and immersion to help understand data. A couple of quick examples I would include:

  • Understanding astronomy, as it is fully 3D and huge (see below, from Manchester);
  • Seeing the ATLAS detector, a large underground building-sized structure, which showed its scale and shape.


MT

Human in the viz-loop

On 14 December 2016, at Computing Insight UK (http://www.stfc.ac.uk/ciuk), with over 250 delegates and suppliers present, I had the opportunity to discuss areas of human-centred visualisation within HPC – there have been a few tweets on the vis presentation (so far). The specific example of the IMAT beamline was used, but the past four years’ usage was considered. Thanks to Srikanth Nagella, Erica Yang and Martin Turner; Peter Oliver, Callum Williams, Joe Kelleher, Genoveva Burca, Triestino Minniti and Winfried Kockelmann.

Abstract for CIUK Presentation

Energy-selective imaging detectors with over 10 MP sensors have been incorporated within large science laboratories, allowing high-quality analysis of materials at the micro- (10⁻⁶ m) or nano- (10⁻⁹ m) scale but creating a large data problem. A service for neutron analysis is being offered by the IMAT (Imaging and Materials Science & Engineering) instrument at the ISIS pulsed neutron source in the UK. ULTRA is a compute-intensive HPC platform enabling high-throughput neutron tomographic image data analysis, so that images can be scrutinised during an experiment rather than as a batch-mode post-processing operation.

Dataflow Problem:

Unlike normal computed tomography (CT) scans used in hospitals, where one 2D image is acquired for each ‘shot’ (a rotation angle), in energy-selective neutron imaging an image stack comprising potentially thousands of 2D images is collected at each ‘shot’. So for the MCP camera, capable of collecting 3,000 images per angle, where each image uniquely corresponds to one of the 3,000 energy bands, this results in 0.3 million images during a 100-angle experiment.
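As a quick back-of-envelope check of these numbers, a minimal Python sketch (the 512 x 512 detector size and 16-bit pixel depth are illustrative assumptions, not IMAT specifications):

```python
# Back-of-envelope estimate of the energy-selective data volume,
# using the figures quoted above (3,000 energy bands, 100 angles).
energy_bands = 3000                 # images per 'shot', one per energy band
angles = 100                        # rotation angles in the experiment
images = energy_bands * angles
print(f"Images per experiment: {images:,}")        # 300,000

# Assumed detector size and depth (illustrative, not IMAT's actual values):
pixels = 512 * 512
bytes_per_image = pixels * 2                       # 16 bits per pixel
total_gb = images * bytes_per_image / 1e9
print(f"Approximate raw data: {total_gb:.0f} GB")  # ~157 GB
```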

New Materials Science Analysis:

The reason this is carried out is that neutron interactions can vary drastically with neutron energy for certain materials, allowing for chemistry discrimination [2]. The number of neutrons able to penetrate through a material and reach the image detectors – the neutron intensity – is strongly affected by the crystalline structures and microstructures of a material, exhibiting Bragg edges [1]. The computational reconstruction needs to be near-interactive, as each peak represents a potential energy-band region suitable for HPC reconstruction.

Creating a bespoke HPC engine:

A specific HPC-based analysis and visualisation technology is being employed to enable this new mode of operation. Traditionally, a typical 3D reconstruction takes minutes using the Filtered Back Projection (FBP) algorithm [3], one of the most common and fastest algorithms. However, in energy-selective imaging the reconstructions need to be performed repeatedly across selected energy ranges, and as signal-to-noise levels are lower, iterative algorithms, which are much slower than FBP, are required and can take hundreds of minutes to run.

ULTRA has been constructed to receive data on demand from the experimental facility onto an STFC HPC cluster, about a mile away, and process it directly. This allows for different interaction modes giving near-instantaneous feedback, for example through small MPEG movie clips, as well as final results transmitted by remote visualisation from the login node via ParaView. Using the Savu pipeline, a Python-based dataflow mechanism, different options for filtering, reconstruction and presentation can be incorporated. We will explain the specific cluster-based HPC hardware setup, which includes GPU-based login nodes designed to minimise data movement.
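A minimal sketch of the on-demand dataflow idea follows; the directory names, polling interval and Slurm-style job submission are all hypothetical assumptions, not the actual ULTRA configuration:

```python
# Sketch: watch for new projection files arriving from the beamline and
# submit a reconstruction job for each one. All names are hypothetical.
import subprocess
import time
from pathlib import Path

INCOMING = Path("/cluster/incoming")   # hypothetical landing directory
seen = set()

while True:
    for run in sorted(INCOMING.glob("*.nxs")):
        if run not in seen:
            seen.add(run)
            # Submit a batch reconstruction job for the new run.
            subprocess.run(["sbatch", "reconstruct.sh", str(run)], check=True)
    time.sleep(30)                     # poll every 30 seconds
```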

For the scientists, the insights obtained through this analysis process are then used to steer the next experimental step, for example to adjust sample positions and beam alignment, or to decide whether to use different reconstruction algorithms, parameters or image filters.

Video Outreach:

We created a video, which will be presented and described, to help others develop a better understanding of how to create this kind of hardware/software dataflow experiment. A dataset (the open-source test data, the SophiaBeads dataset [4], a micro-CT dataset) is transferred, captured, reconstructed using the FBP algorithm from the TomoPy image reconstruction toolkit [6] running on a single node with 128 GB RAM and 12 CPU cores on STFC’s large HPC cluster, SCARF, and then segmented using the algorithms available in the commercial software package Avizo [5].
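As a rough illustration of the reconstruction step, a minimal TomoPy sketch is shown below; it uses a synthetic phantom so it is self-contained, and the geometry is not the SophiaBeads setup:

```python
# Minimal filtered-back-projection reconstruction with TomoPy, of the
# kind used in the workflow above. Data here is a synthetic phantom.
import tomopy

obj = tomopy.shepp3d(size=128)            # synthetic 3D test object
theta = tomopy.angles(nang=180)           # 180 evenly spaced angles
proj = tomopy.project(obj, theta)         # simulate projections

# With real data you would first normalise and linearise:
#   proj = tomopy.normalize(proj, flat, dark)
#   proj = tomopy.minus_log(proj)

center = tomopy.find_center(proj, theta)  # estimate rotation centre
recon = tomopy.recon(proj, theta, center=center, algorithm='fbp')
print(recon.shape)                        # (slices, rows, cols)
```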

[1] T. Minniti et al., “Material analysis opportunities on the new neutron imaging facility IMAT@ISIS”, Journal of Instrumentation, Volume 11, March 2016, IOP Publishing Ltd and Sissa Medialab srl.
[2] J. Santisteban et al., “Time-of-Flight neutron transmission diffraction”, J. Appl. Cryst. 34 (2001) 289.
[3] Peter Toft, “The Radon Transform – Theory and Implementation”, Ph.D. thesis, Department of Mathematical Modelling, Technical University of Denmark, June 1996.
[4] Sophia Bethany Coban, “SophiaBeads Datasets Project Documentation and Tutorials”, April 2015, MIMS EPrint: 2015.26.
[5] Avizo 9, “Avizo User’s Guide”, FEI Visualisation Sciences Group.
[6] Doǧa Gürsoy, Francesco De Carlo, Xianghui Xiao, and Chris Jacobsen, “TomoPy: a framework for the analysis of synchrotron tomographic data”, J. Synchrotron Radiat. 2014 Sep 1; 21(Pt 5): 1188–1193. DOI: 10.1107/S1600577514013939.

WING and a prayer

To do interactive visualisation you need an input device – a fairly obvious statement, but one that has often been ignored, with just the humble keyboard and mouse being used. Although there have been many novel devices over the last decade, last year we bought one of the Wing devices, a combined joystick and mouse. This was linked to the Drishti volume visualiser with a Python layer that calibrates and smooths interactions, allowing for different rates of response.

This means a single hand of a user has 6DOF (degrees of freedom), allowing for x-y-z translation along with roll-pitch-yaw operations. So you can move and rotate an object in one smooth operation. The other hand can be used to hold your coffee cup.
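The calibration and smoothing layer mentioned above might look something like the following minimal sketch; the gains, smoothing factor and the idea of feeding it raw samples are illustrative, not the actual Drishti/Wing interface:

```python
# Sketch: smooth raw 6DOF samples (x, y, z, roll, pitch, yaw) with an
# exponential moving average, damping jitter while letting deliberate
# movements through. GAIN and ALPHA are hypothetical calibration values.
ALPHA = 0.2                              # lower = smoother but slower
GAIN = [1.0, 1.0, 1.0, 0.5, 0.5, 0.5]    # per-axis sensitivity

state = [0.0] * 6                        # smoothed 6DOF state

def smooth(raw):
    """Blend one raw 6DOF sample into the running state."""
    for i in range(6):
        state[i] += ALPHA * (raw[i] * GAIN[i] - state[i])
    return state

# One noisy sample: mostly a roll movement.
print(smooth([0.1, 0.0, 0.0, 10.0, 0.0, 0.0]))
```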

Details of the poster we (Martin Turner, Tim Morris and Mario Sandoval) presented are at: https://doi.org/10.5281/zenodo.154515

At the stand we asked people for various use cases that included:

  1. Control of an object’s clipping plane – rotation changes the angle of the clipping plane, and movement in the x direction translates the clipping plane along its normal, cropping the required part of the volume dataset (see the sketch after this list).
  2. Controlling a light source, where rotation controls the direction the virtual light source points and x-y movement controls the colour of the light, as described by a colour wheel.
  3. Controlling viewing between multiple transfer functions, so that moving in the x direction blends between a list of pre-defined transfer functions.
  4. Controlling various parameters on a transfer function’s curves and points.
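For the first use case, the mapping from device axes to a clipping plane might be sketched as follows; the function and parameter names are illustrative rather than any particular visualiser’s API:

```python
# Sketch of use case 1: device rotation sets the clipping-plane normal,
# and x movement translates the plane along that normal.
import math

def clip_plane(yaw, pitch, x_offset):
    """Return (normal, distance) for the plane dot(normal, p) = distance.

    yaw, pitch: device rotation in radians; x_offset: travel along normal.
    """
    normal = (
        math.cos(pitch) * math.cos(yaw),
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
    )
    return normal, x_offset

normal, d = clip_plane(yaw=0.3, pitch=0.1, x_offset=25.0)
print(normal, d)
```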

The next question was: with two Wing devices, you would have 12 DOF available simultaneously across the movements of two hands.

MT

Poster and stand at the ToScA 2016 event, held at the University of Bath, September 2016.

Being Ironman

In the Visualisation Room at the Atlas Building on the Harwell Campus, Srikanth Nagella installed the Avizo VR module, which allows full tracking, and experimented on tomographic data. It provides:

  • Full immersive 3D interaction;
  • Support for single and multiple screens;
  • Support for single and multi-pipe and for graphic clusters;
  • Flexible customization for specific display geometry;
  • Head-tracking and tracked hand-held 3D input devices;
  • Control functionality via user interface; and
  • Control of 2D user interfaces with 3D devices (virtual mouse).


Restructuring and starting a “new” Visualisation Group

The SCD within STFC is holding Away Days in April, collating a five-year plan for the complete Department. The Visualisation Group has a Technology Division strategy outline and operational plan, and has produced a list of future areas of support that include the concept of keeping the human always within the visualisation loop.

Key Activities

The Visualisation Group, part of the Technology Division within SCD, was founded to support and maintain visualisation software and skills for large projects and user communities.

This has included working:

  • with the Hartree Centre, hosting their visualisation centres and connecting outputs from some of the largest computers in the UK, making them human-understandable;
  • with Innovate UK’s Space Application Catapult and the European Space Agency (ESA), producing bespoke solutions for their data-analytical and command-and-control needs;
  • creating remote and distance data-gathering and visualisation workflows to control the computational processes; and
  • providing specialist local high-end equipment within the centres that are near to the main STFC image-capturing X-ray, neutron and laser facilities.

Human in the Loop

The group is working to support the high-end visualisation centres within STFC, with the key objective of considering the human-in-the-loop as an integral part of pre-, mid- and post-data visualisation needs from the major facilities, from archived data stores and from computational simulations. This, it is believed, is a key component in increasing the efficiency of the major STFC facilities, allowing researchers’ work-plans to be controlled, changed and even stopped on the fly.

The group has an emphasis on working in harmony with collaborators across the STFC mission. To this end there are links to partners:

  • the major imaging facilities (ISIS, the Diamond Light Source and the Central Laser Facility) and the Hartree Centre;
  • the Virtual Engineering Centre (a University of Liverpool centre based in the Daresbury Laboratory);
  • the Harwell Imaging Partnership (based on the RAL campus);
  • the Collaborative Computational Projects (CCPs, based across SCD); and
  • a range of Research Council projects with other teams and groups within SCD and STFC.


Visualisation User Needs Survey

At the end of 2014 we analysed a Visualisation Tools survey for certain HPC and computational users.


The results from over 100 respondents are being edited at:
http://www.vizmatters.cs.manchester.ac.uk/

The executive summary states that, for the global survey, there were seven key outcome results that can be acted upon:

  • Three packages account for the most-used package of 26% of respondents. Conversely, another 31 packages are each used by only one or two users, accounting for a further 26% of respondents.
  • Producing publication-quality plots is the most-used technique.
    However, the features making these packages the favourites are:
    – Software that is written specifically for their domain of interest.
    – Large datasets are handled efficiently.
    – Scripting or other ability to extend the tool is required.
  • Users’ second most favoured packages are general-purpose visualisation tools.
  • Users were given five options for selecting their most required development. None emerged as being more needed than the others.
  • Conversely, large amounts of memory were clearly the most important requirement for high-performance visualisation.
  • The main future challenges are suggested to be
    – The ability to handle large amounts of data
    – The ability to operate in a distributed environment.

A series of further surveys and follow-up questions are planned, as well as a full review next October (2015).

CCPi User Survey

A series of questions asked users about their current and future needs. Some important issues were raised over the changing use of visualisation tools for Computed Tomography results.

Executive Summary:

  1. The most popular software has now moved to Avizo, ImageJ and ParaView.
  2. There is lots of reconstruction development (filters etc.) but little segmentation work.
  3. CCPi core activities will focus on wrappers and development of community software for these three products.

 


At the CCPi Working Group meeting there was a short debate on the results, which resolved to shift effort from VolView to Avizo and follow the direction set by the user base. This was followed up at a Developers’ Workshop held at the University of Nottingham (23 July 2014).

CCPi Working Group June 2014. Included discussion on survey results.

SciVis4All – a consortium for the future, from EuroVis 2014

During EuroVis 2014, held in June at the University of Swansea, we had a post-dinner and then extended discussion meeting regarding the possibility of considering visualisation as a CCP in itself, or as a cross-service role supporting other CCPs.

1. The conference hosted about 250 people, so there is a large and vibrant community which would be active in supporting this. The UK component is a good size, and this year a significant increase in interest from a range of UK universities was reported.

EuroVis Conference Dinner – June 2014, held at Swansea.

2. There are alternative networks within the UK, including the Eurographics UK Chapter, which could support and assist in further network development. They also represent 10 or so people across the two main communities of Computer Graphics and Visualisation. A recent survey has involved trying to define the distinction between these two main groups.

The question is: what are the key software deliverables?

  1. Information visualisation or scientific visualisation – or both?
  2. A specific API toolkit or a major product including an interface etc.?
  3. Web-based portal-type programming or a stand-alone application?
  4. How localised a network of developers versus users?

 

Swansea is meant to be the wettest city in the UK – and we had to suffer the 30-minute walk across the wilderness from the hotel to the university, there and back, every day.

Gorgeous sunshine on the beach walk to the University of Swansea from our hotel during EuroVis 2014.

 

CCPi (Tomographic Imaging) Case Study

Tomographic Imaging of a Nokia 702 Mobile Phone and other Items

The Challenge

The challenge was to spot anomalies using the software available. As part of an assignment, a set of objects was scanned using an X-Tech 320 kV CT scanner. This was followed by an exploratory stage using the multiple methods available in visualisation systems.

Key items to be discovered included the following:

  • Understand how different materials and components can be separated.
  • Extract some understanding of the objects to gain insight into each object’s function.
  • Extract the shape of a visible or unknown component.
  • Analyse the density of the materials used and identify defects.

The Solution

The solution presented a range of 2D and 3D views that could be manipulated to see features. An ancient mobile phone, the Nokia 702, was investigated amongst other items. About 1,500 2k x 2k X-ray images were captured over evenly spaced rotation angles of the mobile phone. These were reconstructed to create a 3D volume of approximately 1500 x 1000 x 1000 voxels.
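For a sense of the data sizes involved, a small sketch using the figures above (the 16-bit depth is an assumption; the counts are from the text):

```python
# Rough data sizes for the phone scan described above.
projections = 1500 * 2048 * 2048 * 2   # 1,500 images, 2k x 2k, 16-bit
volume = 1500 * 1000 * 1000 * 2        # reconstructed volume, 16-bit
print(f"Projections: ~{projections / 1e9:.1f} GB")  # ~12.6 GB
print(f"Volume:      ~{volume / 1e9:.1f} GB")       # ~3.0 GB
```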

Example 1: Mobile Phone Nokia 702

The speaker mechanism (bottom) can be clearly seen, as well as loose components (right) that were not screwed down properly.

Left: Speaker components clearly visible, identifying 3D shape and measurable size and density.
Right: Loose component (an extra screw) that fell into the phone during assembly is visible and measurable.

The obvious aerial is actually an aesthetic feature, as it is not connected, with a mesh being used instead (below).

Two density transfer functions highlighting the mesh aerial.

Two density transfer functions highlighting the mesh aerial.

Very quick confirmation of discoveries was possible, but the full visualisation pipeline can be seen in the following screenshot, which views the raw data, image projections, 3D reconstruction slices and final 3D volume visualisation (left to right).

Mobile phone being viewed as raw data, image projections, 3D reconstruction and final 3D volume visualisation (Drishti).

Example 2: Golf Balls.

A Titleist 2 (high-quality) and a Srixon 4 (standard-quality) golf ball were both scanned at 2k x 2k and then investigated using identical networks to discover material characteristics non-destructively. The high-quality golf ball has a very well-defined liquid core with an injection point, multiple tight winding impressions, and far more complex layers of materials, all visible and quantifiable in terms of shape and size.


Example 3: Low Quality Padlock.

A very low-quality padlock with a simple mechanism. Although the padlock uses a key, the key’s shape has been shown to be irrelevant to opening the lock: all that is needed is for the clip (far right) to be pulled apart, so virtually any flat shape will work.

Visualisation network for the low-quality padlock. Although simple, interactive exploration can still discover new features within the volume data set.

Example 4: High Quality Padlock.

A very high-quality spherical security padlock with a complex mechanism. It requires a complex key structure, but it is still possible to retrieve the main components of the key shape.

Various views and colourmaps highlighting components within the high-quality padlock, as well as marked-up components showing a possible 2D key structure to assist in picking the lock.

Example 5: Homeland Security combination padlock

Homeland Security combination padlock. The aim was to understand how both the combination and key mechanisms work: 1. extract the correct combination code for unlocking the padlock; 2. extract the shape of the key; and 3. analyse the density of the materials used within the padlock’s manufacture and identify defects. Visualisation filters can extract specific components – in this case recovering, non-destructively, the combination values for the padlock. Alternative visualisation filters allow analysis and measurement of defects introduced during the manufacturing process.

 

How to extract the combination from a padlock: a step-by-step process extracting and examining the components.

Density measurements illustrating poor and good quality manufacture within the lock. The network shown is the complete network for the extraction of the combination.

 

The Benefits

The benefit of using visualisation in this case was to gain insight, as the human was able to see anomalies that an automated computer system could not. The question is when an automated system – say, for defect detection – would be sufficient, so that visualisation is not needed, just data analysis and mining.

Credit to the students and researchers in the past who have created these visualisations and carried out the analysis.

Impact of Visualisation – a formula

Can visualisation methods be evaluated, and should they be?

A proposal presented at EuroVis 2014, held at the University of Swansea, Wales.

V = -T + I + E + C

That is, the Value of a visualisation equals minus the Time taken by the user to understand the visualisation, plus the Insight the user gains from it, plus the Essence gained from parts of it, plus the Clarity defining the user’s overall acceptance of the global data set.

So let’s try this in action on a simple 3D example with users.

Example 1:

LiDAR visualisation for the geology community – it can show rock escarpments and cave structures at multiple scales, with augmented metadata and markers.

The previous image shows a geological structure in Egypt, Mount Sinai, in stereoscopic 3D projection mode, with added markers showing way-points, landmarks and measurement fields, as well as allowing areas of curvature, for example, to be highlighted and measured.

The Time taken to do a demonstration is very low, and the number of Insight points (sometimes called Impact or ‘wow’ elements) is reasonably high, as the user can discover anomalies, for example smooth areas of curvature in the rock face showing different strata. The Essence, or global structure and understanding, is also high, as this is an intuitive 3D structure, and from this Confidence, or Clarity, in the data can be achieved. There are a few outliers and anomalies in the LiDAR data, but the visualisation is very clean.
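As a purely illustrative quantification, the scores below are hypothetical values on a made-up 0–10 scale, chosen to match the qualitative assessment above:

```python
# Hypothetical scoring of Example 1 with V = -T + I + E + C.
def value(T, I, E, C):
    """Value = -Time + Insight + Essence + Clarity."""
    return -T + I + E + C

# LiDAR demo: low time cost; high insight, essence and clarity.
V = value(T=2, I=7, E=8, C=8)
print(f"V = {V}")   # 21 on this illustrative scale
```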

Example 2:

CCP4 software demonstration visualisation image – showing switchable components from proteins viewed in stereoscopic mode to an audience.