Vis within Image Reconstruction

Within our Tomographic Imaging network, Dr Edoardo Pasca of STFC’s Scientific Computing Department gave a talk, “Introducing the CCPi Core Imaging Library” (25 January 2022, 11am).

Abstract: The Collaborative Computational Project in Tomographic Imaging (CCPi) Core Imaging Library (CIL) is a versatile Python package for tomographic imaging. It is intended for CT experimentalists, giving them easy access to optimised standard algorithms and letting them create bespoke pipelines to handle novel imaging rigs, dynamic or hyperspectral data, and non-standard scan geometries, to name a few. CIL is also intended for imaging specialists, allowing the creation of novel reconstruction algorithms and their assessment against established ones. CIL seamlessly handles the CT pipeline from loading data from common lab X-ray CT machines, such as Nikon or Zeiss, through to FDK or iterative reconstruction. CIL also provides utilities for visualisation and exploration of the data. In this seminar, Dr Edoardo Pasca will talk through some examples of applications of the methods available with CIL.

This included a live demonstration – see below – where Edo reconstructed a walnut from an X-ray CT scan. The data (middle image) was severely limited, yet a modern iterative solution (right image) achieves almost the same quality as the gold-standard result (left image).

A recording will be available soon – but one extra, almost essential, part is an automated visualisation module (also written in Python) that describes the lab set-up, i.e. the geometry of image capture and object space (credit to Gemma Fardell).

So to understand the visualisation of the resulting images, you first need to understand the visualisation of the geometry of the laboratory setup.

Parallel Coordinates and other vis


Timos Kipouros – engineering vis for optimisation: the use of parallel coordinates (||-coords) for blade design and subsequent understanding – it has taken about ten years to convince managers of the practicalities.

‘… you are going to need a bigger screen’

Discussed the human-in-the-loop for the Computational Engineering Design cycle: in the future, dynamic data could be added (e.g. Rolls-Royce monitor engines as they fly).

“use optimisation to solve a problem but also to understand your problem”
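For readers new to parallel coordinates, a toy sketch of the idea: each candidate design becomes one polyline across the parameter axes, so promising designs stand out as a bundle of lines. The column names and data here are invented for illustration, not taken from the blade-design work.

```python
# Toy parallel-coordinates plot: one line per candidate "blade design".
# Parameter names and value ranges are invented for illustration.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)
designs = pd.DataFrame({
    "chord": rng.uniform(0.8, 1.2, 30),
    "twist": rng.uniform(-5, 5, 30),
    "thickness": rng.uniform(0.05, 0.15, 30),
    "efficiency": rng.uniform(0.7, 0.95, 30),
})
# Label the top decile of designs so good candidates stand out
designs["class"] = np.where(
    designs["efficiency"] > designs["efficiency"].quantile(0.9),
    "best", "rest")

ax = parallel_coordinates(designs, "class", colormap="coolwarm")
plt.savefig("designs.png")
```

With many thousands of designs and many more axes the same picture quickly outgrows a laptop display – hence “you are going to need a bigger screen”.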

Royalties – and old licensing rights: linking to ALCS

So I am a great fan of open source and royalty-free access to academic research materials. The Creative Commons movement, green/open access publishing and the ability not to be charged multiple times to produce and then read material are great. See the Tomography (3D volume) Zenodo pages of articles/data/code and links that we are starting up and maintaining for researchers; usefully, Zenodo also assigns DOI tags to all entries, meaning governance and many of the data management requirements are met.

as well as the STFC-linked GitHub site for tomography:

But if you are of a certain age, you will have built up a legacy of academic writing that is still in copyright, even if no more of these books or articles are being sold and the rights to reprint have possibly been returned to the author. So no more royalty payments are being received, and you have no indication of the number of purchases.

[Cover images of four of the author’s books]

The links on these now go to resellers, where you can buy a second-hand version or even download copies. Many people, I have been told, photocopy sections in libraries and gain the material that way – which is fine by me – and, when asked, this kind of usage can be tracked.

The ALCS was set up (details below) to link photocopying rights and other electronic rights in copyrighted material back to the authors. It was an easy process to register all the key books and then wait for the next six-month review to take place. The latest one arrived and states that the vast majority (>90%) of photocopying is from one book – the Fractal Geometry book – and thanks to the people in New Zealand for copying sections of the Geographic Visualization book.


AUTHORS’ LICENSING AND COLLECTING SOCIETY, 1st Floor, Barnard’s Inn, 86 Fetter Lane, London EC4A 1EN, +44 (0)20 7264 5716



Dealing with network software projects, many based on imaging, I have a preferred licence model: it favours the Apache License version 2.0, which allows virtually all uses of the code and can incorporate other licences, say for libraries etc. (GPL3 is then an option).

Thanks for the many discussions that shaped this policy – note it is not officially binding in any sense – including contributions from the CCP_PETMR / CCP_i / savu_DLS communities, of which this is a modified version.

Code contributed to this project (“the Software”) is contributed under the terms of the Apache License version 2.0 (AL2) or under the terms of the GNU General Public License 3.0 (GPL3). The Software provided is described as follows: {add details here}.

Any other software: unless indicated otherwise at the time of provision, such additional software shall be deemed incorporated into the definition of “the Software” under this Agreement and licensed to the owners under the same terms. Such a contribution could, for instance, take the form of a pull request or a git commit.

The ownership of the copyright in the Software licensed will remain with us, the contributors, and we request that the names of the authors, as copyright owners, are acknowledged on any copy of the Software that you may release in original or modified form.

Furthermore, as stated by the AL2 terms and without intending to affect those terms, the Software is provided without any warranty as to its quality or fitness for any general or particular purpose.

We confirm that the Software was entirely created by the authors named and that the licence granted under this letter will be valid and binding upon us.
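In practice the AL2 terms travel with the code via the standard per-file notice from the licence’s own appendix (fill in the year and copyright holder):

```
Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```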


The Apache license as used within PETMR is here (note the links to GPL):

UK-USA visit and SSI/EPSRC presentation

We have carried out initial collaborations between the Manchester and STFC researchers within CCPi in the UK and Kitware Inc. in the USA. My presentation on the early part of this UK–USA software collaboration, with poster and slides, is on Zenodo.


In our funded work both sides met and are considering research software for 3D volume reconstruction and quantitative visualisation, with similar (possibly, in the future, interchangeable) Python-based pipelines.


There were two workshops in the last month on what I would refer to as discovering ‘quality’ software through collaboration: an EPSRC SSI meeting at MOSI (Museum of Science and Industry; 24 April 2018 – image above) and then a round-table discussion with the CoSeC developers at STFC (Science and Technology Facilities Council; 30 April to 1 May 2018).

These meetings will report back on the factors that the development communities think are important; for ‘quality’ software these are not necessarily the same as the needs of the science or of the funders. A few comments and questions arose that we will think about in terms of future software collaboration activities.


“Be Intuitive and Be Brave”, C. Goble requested of the funding bodies – but this can be a useful campaign call across the divide: the divide between the science and the talent that created the tools.

  • How to value this software creation activity – when it possibly involves not just multiple organisations but multiple countries – where the sponsors/funders are based in only one part and may think in only one way.
  • Credits for software should be more like movie credits – a career path rising up through the specialisms until full recognition as a ‘director’ – then perhaps we can have the Oscars of research software development.
    • In some sense this reflects how the process often involves a large cast of stars and support staff.
  • Long-term life-cycle and sustainability are to be considered, with the issue that good software takes 20+ years to implement and maintain: should quality equate to longevity?
  • A common aim is to make code – and the theory behind the code – that is useful or essential! Then the term ‘quality code’ can easily be applied.
  • Whether you have quality code or not, when do you do a major rewrite of the software, and how can its status then be maintained (or improved)?
  • A testing strategy is essential: a national Jenkins service is still being maintained, with identity management carried out through Anvil (Shibboleth access).

To GitHub or Not To GitHub

An aside on code repositories, as this has been an ongoing discussion since the EPSRC announced that the CCPForge repository is closing down. Alternatives need to be found. There are lots of options, and in the UK we are likely to be without a cohesive national service for a period, but we need a code repository. A key choice for individual projects is either to build and use a local service – say one based at the University of Manchester or one based in SCD at STFC, where you must have a person responsible for maintenance, identity management, etc. – or to use an international service, say GitLab or Bitbucket, and then decide whether to pay for this and at what level. Taking note of the comments:

  1. Paying for GitLab or Bitbucket – an annual fee, or a monthly usage fee of say a few hundred pounds – is a popular option and is minimal in cost as long as there are only a small number of developers and only a few private projects.
  2. Using a mirror site etc. is also popular, but that does not use all the facilities and just keeps a location – this is possibly important for branding and for helping the code to be found.
  3. Issues of branding and identity – a local service can have very tight control of branding, but at the cost of having to renew and update software. International services may remove or change rights and permissions for local branding (and worse – advertisements etc.), diluting the identity and searchability of the code.
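The mirroring option in point 2 is mechanically simple; a sketch with placeholder URLs (not real project locations), assuming plain git on both ends:

```shell
# Clone the canonical repository with every branch, tag and ref...
git clone --mirror https://forge.example.ac.uk/myproject.git
cd myproject.git

# ...then register the second service and push an exact copy to it.
git remote add mirror git@gitlab.example.org:example-org/myproject.git
git push --mirror mirror

# Refreshing the mirror later is two commands:
git fetch --prune origin
git push --mirror mirror
```

This keeps one canonical location (the branding point in 3) while still making the code findable on the international service.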

The debate is ongoing, and reports should be made in the near future to guide the UK community; we also have ongoing opportunities to increase UK–USA and UK–EU collaborations – grants to be written.


Celebrations at Royal Geographical Society – as awarded SSI links to EPSRC UK-USA collaboration

Over 2017 the CCPi had an active round of networking and outreach meetings for X-ray CT, culminating with a recognition dinner held at the Royal Geographical Society.


At the start of 2017 we provided a Letter of Support for a DOE project proposed by Kitware Inc., as well as submitting our own CCP Flagship proposal. Both were awarded, creating an extra 6 FTE RSEs (Research Software Engineers) in the UK and 5.1 FTE RSEs in the USA until 2020.
Following this success we were left with a concern that there were no face-to-face networking funds; alternative electronic systems were being considered until serendipity struck – the EPSRC launched a grant, with the support of the SSI (Software Sustainability Institute), offering travel and subsistence to foster collaborations between research software engineers on both sides of the pond. We applied and had a two-week face-to-face collaboration with one of the main Kitware Inc. lead developers, Dr Marcus Hanwell. The topics focussed on their new Tomviz software product, a purpose-built open-source application that can manage data collection, noise filtering, reconstruction, visualisation and final analysis of tomography data.

Talks, facility tours (university labs, ISIS (IMAT) and Diamond (i13) beamlines) and software installation sessions took place at RAL (Atlas Visualisation Facility), DL (Hartree Centre visualisation suite) and the University of Manchester (HMXIF), enabled by the current extensive CCPi network.


The CCPi now has a presence at three major annual imaging events in the country, each with 50+ attendees. These include an X-ray user group symposium (ToScA) managed by the Natural History Museum (NHM) and the Royal Microscopical Society (RMS); a technical forum supported by RCaH and DLS; and a “dimensional XCT” conference supported by NPL that is leading to formal BSI/ISO standards.


In 2018, for this UK–USA collaboration, we are looking at new user guides to be created, an open-day software show-and-tell event at RAL, and further direct collaboration between the newly recruited software developers to share code and best practice, as well as links to other CCPs involving tomography-type data. A follow-on impact showcase event will be organised under the EPSRC RSE & ARCHER umbrella on Tuesday 24 April 2018.

TB Visualisation

In the UK some medium-sized HPC systems are being installed, funded by the Research Councils. The Tier 2 HPC launch event took place on 30 March 2017, with announcements by the EPSRC CEO Philip Nelson and by Susan Morell (Birmingham).
Peter Vincent (Imperial) presented PyFR, a higher-order polynomial CFD code that works on all architectures, achieving 13.7 petaflops on the USA’s Titan system. He described VTK-m for remote visualisation, which rendered the 1 TB-per-frame data in situ:
… it was claimed and shown that you could stream video from the 1 TB of data held in HPC memory in this virtual wind-tunnel-type simulation, without saving the data first (or, importantly, moving it anywhere). It used a form of Catalyst from Kitware’s range of tools, so could produce ParaView-type effects.
The results are very impressive, and similar systems are being used in other remote projects. It does, however, require:
  • knowledge of what you wish to view before you compute;
  • accepting that it is difficult – or impossible – to change the visualisation results (sub-parameters) afterwards;
  • an open question as to whether you should (or can) store the intermediate data – doing so would take many minutes or longer;
  • the cost of visualisation being included in – or added to – the cost of HPC, so it needs accounting for.
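The trade-off in the first two bullets shows up even in a toy sketch of the in-situ pattern (plain NumPy here, nothing to do with Catalyst or VTK-m itself): the rendering decision is baked in before the loop runs, and only the rendered frames survive.

```python
# Toy sketch of in-situ visualisation: render a small summary image at
# each simulation step instead of writing the full field to disk.
# Function names and the "physics" are illustrative stand-ins.
import numpy as np

def simulate_step(t, n=256):
    """Stand-in for one time step of a CFD solver (a travelling wave)."""
    x = np.linspace(0.0, 2.0 * np.pi, n)
    return np.sin(x[None, :] + t) * np.cos(x[:, None])

rendered_frames = []
for step in range(5):
    field = simulate_step(0.1 * step)            # exists only in memory
    # "Visualisation" reduced to an 8-bit greyscale frame; the raw
    # floating-point field is then discarded with the next iteration.
    frame = np.clip((field + 1.0) * 127.5, 0, 255).astype(np.uint8)
    rendered_frames.append(frame)

# Only the small rendered frames survive – mirroring in-situ pipelines,
# where the terabyte-per-frame data is never written out or moved.
```

If you later want a different colour map or a different slice, the raw fields are gone: you re-run the simulation, which is exactly the constraint the bullets describe.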