“Redundancy Visualisation”

This was a wonderful little term mentioned in passing at a synchrotron vis meeting. It roughly means: what can be thrown away while still conveying the message or story that the visualisation is meant to tell.

There are two places where data can be thrown away: from the original or derived data set, by selecting only the parts that are appropriate; or from the items shown in the visualisation itself, by cropping isosurfaces or streamlines, for example.

A tomography pipeline operation is worth mentioning here, since it addresses the 100GB problem and could be seen as the first half of this idea.

100GB Problem

I have a scanned 3D data set that is about 4k x 4k x 4k in size; at 16 bits per voxel of grey scale, that is 128 GB of raw data. How do we visualise this?
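To see where that number comes from, here is a quick back-of-the-envelope check in Python (the figure is exactly 128 GiB, about 137 GB in decimal units):

    # Back-of-the-envelope size of the raw 4k^3, 16-bit volume
    voxels = 4096 ** 3            # ~6.9e10 voxels
    bytes_per_voxel = 2           # 16-bit grey scale
    size_bytes = voxels * bytes_per_voxel
    print(size_bytes / 1e9)       # ~137.4 GB (decimal gigabytes)
    print(size_bytes / 2 ** 30)   # 128.0 GiB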

We can just use lots of CPUs and GPUs, and this works – although it is not necessarily straightforward. See the video from (TO Upload)

Or do a simple dataflow, in the following steps (a rough code sketch follows the list):

  • Load the complete data set into a fat-memory workstation – you have to find one of these, but there are ‘many’ 1/2 TB RAM systems out there.
  • Volume-visualise the complete data set using simple GPU-parallel code.
  • Select a volume of interest.
  • Crop to this volume – aiming for about 1–2 GB.
  • Extract this sub-volume and then possibly scale it to 8 bits per voxel.
  • You now have a data volume of about 1/2–1 GB that can go onto your laptop for normal visualisation and hand editing / markup. Simple.
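A minimal sketch of the crop-and-rescale steps in Python/NumPy, assuming the raw volume sits in a flat 16-bit file and the volume of interest has already been chosen interactively – the file names, crop bounds and intensity window below are illustrative placeholders, not from any particular pipeline:

    import numpy as np

    # Memory-map the full 4k^3, 16-bit volume so the 128 GiB file is never
    # read in one go ("full_volume.raw" and the shape are placeholders).
    full = np.memmap("full_volume.raw", dtype=np.uint16, mode="r",
                     shape=(4096, 4096, 4096))

    # Volume of interest picked from the whole-volume render (illustrative
    # bounds): ~1000^3 voxels is ~2 GB at 16 bits, ~1 GB once scaled to 8 bits.
    z0, z1, y0, y1, x0, x1 = 1500, 2500, 1200, 2200, 800, 1800
    crop = np.asarray(full[z0:z1, y0:y1, x0:x1])

    # Rescale the 16-bit grey values to 8 bits over a chosen intensity window
    # (here simply the crop's own min/max).
    lo, hi = int(crop.min()), int(crop.max())
    crop8 = ((crop.astype(np.float32) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

    # Save the laptop-sized sub-volume for normal visualisation and markup.
    crop8.tofile("subvolume_8bit.raw")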

This is not always practical, but there is a lot of cool code that only works on volumes under ~2 GB (meshing, level-set analysis), and your heart and your CPU are freed.

It is important to go back to the raw volume and check that you have reached the right conclusions.


Visualisation needs RSEs

Imaging and visualisation have coding at their heart, so the emergence of the Research Software Engineer as a career – supported by the university, industrial and Research Council sectors – is very welcome.

Vis From Manchester Central to Being On Top of The World – Still Keeping the User, via Visualisation, Deep Within the HPC Loop

I had the opportunity to present ideas on integrating human visualisation within the HPC (high-performance computing) loop. Computing Insight UK 2016 launched on 14 December 2016 with over 250 delegates and suppliers present; it was a great session in which to review the use of visualisation within the Hartree Centre and describe how important it has been to keep the human in the visualisation/computational loop. This included the use of multi-use vis and discussion spaces, as well as incorporating fat-memory GPU nodes at strategic locations; it ended with a future proposal for an ‘infinite’ job submission system that would stop only under human-visualisation control.
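That ‘infinite job submission’ idea can be caricatured in a few lines of Python: jobs keep being submitted and visualised, and the loop stops only when the human watching the output decides it should. This is only a toy sketch of the concept – the four callables are hypothetical placeholders, not any real scheduler or visualisation API:

    def keep_computing(initial_params, submit_job, render_result,
                       human_says_stop, refine_params):
        """Toy human-in-the-loop driver: keep submitting jobs and visualising
        the results, stopping only under human-visualisation control.
        All four callables are hypothetical placeholders for a real scheduler,
        renderer and steering logic."""
        params = initial_params
        while True:
            result = submit_job(params)        # run one HPC job
            render_result(result)              # update the vis the human is watching
            if human_says_stop(result):        # the human decides, from the vis
                return result
            params = refine_params(params, result)  # steer the next submission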

The need for RSEs to be integrated and employed

This talk was then modified and partly repeated for a synchrotron (X-ray imaging) workshop – an EU COST / PSI event on 9–12 January 2017 in Switzerland – which focused on the visualisation of complex imaging for a specific audience. For this we need software developers who can look at users’ specific problems and who have the dedicated time to create these solutions.

[Photo: visameet2017]

We submitted an EPSRC proposal with Manchester Research IT Services for X-ray Tomographic Imaging, which will launch as a new Flagship grant programme in April 2017. This will fund two RSEs from Manchester – Daniil Kazantsev (in the photo above, 5th from the right) and Jakob Sauer Jørgensen – to revive the reconstruction codes within the reconstruction library, specifically for complex multi-channel data. Both will be employed by the University of Manchester for three years, but Daniil will be permanently based at the RAL – Harwell Campus, next to the Diamond Light Source.


Is VR ready for Vis?

I was at a show session in London – hosted by jigsaw24 – with a range of the new VR headset creators (all except Sony [IMO one of the easiest and most comfortable to wear, possibly because it is well balanced]) and some hardware suppliers. There were 100+ attendees, with a mix of commercial and academic interest.


After a techy roadmap session, the highlights were:

  • EditShare (showing mainly 360VR data capture and pipeline use), emphasising data storage issues; and
  • HTC Vive with the new Nvidia cards for automatic optic distortion, which was good – as always, Nvidia themselves create great prototypes (see their circus games in VR, with PhysX and some tactile feedback).


An interesting sales pitch: since a full system can use up to 4 m x 3 m of space with multiple tracking nodes, there is an opportunity for Nvidia to sell a new high-end PC for a new location at home. This space is too big for many living rooms, but the garage (when empty) is an ideal location to host a VR studio and achieve the best experience – so you will need a new PC (with a high-end Pascal-architecture Quadro card!) for the garage.

A couple of interesting add-ons from HTC were shown but not demonstrated live. The first was the new wireless mode, which adds only a few ms of latency but lets you wear a headset and do back-flips and other gymnastic manoeuvres – a common occurrence in the future! The other was the HTC Vive Tracker, a small attachment you can fix to a physical device, say a baseball bat, model gun etc… Six attached to a body created a simple but effective motion-tracking system.

 

A developer panel was formed from Alchemy VR, Rewind, Halo and The Mill, discussing use cases after a few years of practice and experience. A key question was what did not work: mainly, thinking that you are creating a standard “framed” 2D/3D movie – old ideas such as fading in to start, or focusing on a specific POV object, have to be rewritten. You can move, but in sharper steps; you can cut when in context, although you should keep some fixed reference point; and you can experiment with exploration, but may need to guide users, for example by turning down certain lights to draw the user’s POV to the required place.

 

  • Cool examples to search for and download include the Everest exploration viewer – this takes the traditional vertigo effect, seen in caves within immersive environments, to a new cool location: halfway up Everest.
  • ‘Home’ is the spacewalk experience; parts of it break good immersive practice (losing a frame of reference), but that is what ‘…being in space is like’.
  • For 360 video, the Great Barrier Reef piece is now a classic for its beauty and simplicity.
  • HTC claimed over 1000 titles became available just in the last year, and this will expand.

 

So can you do visualisation within this medium? I will consider this in detail later, but as an experience I would say there is scope to use this sense of scale and immersion to help understand data. A couple of quick examples I would include:

  • Understanding astronomy, which is fully 3D and huge (see below, from Manchester)
  • Seeing the ATLAS detector and its large underground, building-sized structure, which conveyed its scale and shape

[Image: wp_20160726_006_cw]

MT