List of Published Research


Using Artificial Immune Systems to Sort and Shim Insertion Devices at Diamond Light Source

Joss Whittle, Mark Basham, Zena Patel, Edward Rial, Robert Oates, and Yousef Moazzam. Published in Journal of Physics: Conference Series, Volume 2380, proceedings of the 14th International Conference on Synchrotron Radiation Instrumentation (SRI 2021).

This work presents the Opt-ID software developed by the Rosalind Franklin Institute (RFI) and Diamond Light Source (DLS) in collaboration with Helmholtz-Zentrum Berlin (HZB). Opt-ID allows for efficient simulation of synchrotron Insertion Devices (IDs) and of the magnetic (B) fields produced by a given arrangement of candidate magnets. It provides an optimization framework, built on the Artificial Immune System (AIS) algorithm, for swapping and adjusting magnets within an ID to observe how these changes would affect the magnetic field of the real-world device, guiding ID builders in the steps they should take during ID tuning.

Code for Opt-ID is provided open-source under the Apache-2.0 License on GitHub: https://github.com/rosalindfranklininstitute/Opt-ID
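
As an illustration of the style of optimization Opt-ID builds on, here is a minimal Python sketch of a clonal-selection-flavoured AIS loop applied to a toy magnet-ordering problem. The objective function, mutation operator, population parameters, and all names below are illustrative assumptions for this sketch, not Opt-ID's actual implementation:

import random

# Toy stand-in for an ID field simulation: the "fitness" of a candidate
# magnet ordering is how well adjacent magnet strength errors cancel.
# This objective is illustrative only; Opt-ID evaluates real B fields.
def field_error(ordering, strengths):
    return sum(abs(strengths[a] + strengths[b])
               for a, b in zip(ordering, ordering[1:]))

def mutate(ordering, n_swaps):
    child = list(ordering)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def ais_sort(strengths, pop_size=20, clones=5, generations=200):
    n = len(strengths)
    population = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: field_error(o, strengths))
        survivors = population[:pop_size // 2]
        offspring = []
        for rank, parent in enumerate(survivors):
            # Higher-affinity (lower-error) antibodies are cloned with
            # gentler mutation; weaker ones are perturbed more strongly.
            for _ in range(clones):
                offspring.append(mutate(parent, n_swaps=1 + rank))
        population = survivors + offspring
    return min(population, key=lambda o: field_error(o, strengths))

strengths = [random.gauss(0.0, 0.05) for _ in range(16)]
best = ais_sort(strengths)
print(best, field_error(best, strengths))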

Cite as:

@article{Whittle2022,
  title       = "Using Artificial Immune Systems to Sort and Shim Insertion Devices at Diamond Light Source",
  author      = "Whittle, Joss and Basham, Mark and Patel, Zena and Rial, Edward and Oates, Robert and Moazzam, Yousef",
  journal     = "Journal of Physics: Conference Series",
  volume      = "2380",
  pages       = "012022",
  year        = "2022",
  doi         = "10.1088/1742-6596/2380/1/012022"
}

Sampling strategies for learning-based 3D medical image compression

Omniah Nagoor, Joss Whittle, Jingjing Deng, Ben Mora, and Mark W. Jones. Published in Machine Learning with Applications, Volume 8, 15 June 2022, Article 100273.

Recent achievements of sequence prediction models in numerous domains, including compression, offer great potential for novel learning-based codecs. In such models, the shape and size of the input sequence play a crucial role in learning the mapping function from the data distribution to the target output. This work examines numerous input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed Long Short-Term Memory (LSTM) model to achieve a high compression ratio and fast encoding–decoding performance.

Our LSTM models are trained with 4-fold cross-validation on 12 high-resolution CT datasets while measuring each model's compression ratio and execution time. Several sequence configurations were evaluated, and our results demonstrate that pyramid-shaped sampling represents the best trade-off between performance and compression ratio (up to 3x). We also solve the problem of non-deterministic environments, allowing our models to run in parallel without much loss of compression performance.

Experimental evaluation was carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (CT and MRI). Our new methodology allows straightforward parallelisation that speeds up the decoder by up to 37x compared to previous methods. Overall, the trained models demonstrate efficiency and generalisability for compressing 3D medical images losslessly while still outperforming well-known lossless methods by approximately 17% and 12%. To the best of our knowledge, this is the first study that focuses on voxel-wise prediction of volumetric medical imaging for lossless compression.
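
For intuition about the pyramid-shaped sampling referred to above, the following is a minimal sketch of what a pyramid-shaped causal context for one voxel might look like; the plane offsets and window sizes here are made-up stand-ins for the configurations the paper actually evaluates:

import numpy as np

# Illustrative pyramid-shaped causal context for the voxel at (z, y, x):
# take progressively wider windows from the planes above the current one,
# so nearer planes contribute proportionally denser local samples.
def pyramid_context(volume, z, y, x, half_widths=(1, 2, 3)):
    context = []
    for dz, hw in enumerate(half_widths, start=1):
        plane = volume[z - dz,
                       y - hw:y + hw + 1,
                       x - hw:x + hw + 1]
        context.append(plane.ravel())
    return np.concatenate(context)

vol = np.random.randint(0, 2**16, size=(8, 32, 32), dtype=np.uint16)
ctx = pyramid_context(vol, z=4, y=16, x=16)
print(ctx.shape)  # (9 + 25 + 49,) = (83,)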

Cite as:

@article{Nagoor2022Sampling, 
    author    = "Nagoor, Omniah and Whittle, Joss and Deng, Jingjing and Mora, Ben and Jones, Mark W.",
    title     = "Sampling strategies for learning-based 3D medical image compression", 
    journal   = "Machine Learning with Applications", 
    volume    = "8",
    pages     = "100273",
    year      = "2022", 
    doi       = "10.1016/j.mlwa.2022.100273"
}

MedZip: 3D Medical Images Lossless Compressor Using Recurrent Neural Network (LSTM)

Omniah Nagoor, Joss Whittle, Jingjing Deng, Ben Mora, and Mark W. Jones. Presented at ICPR 2020 - Milan, Italy.

As scanners produce higher-resolution and more densely sampled images, the challenge of data storage, transmission, and communication within healthcare systems grows. Since the quality of medical images plays a crucial role in diagnosis accuracy, medical imaging compression techniques are desired that reduce scan bitrate while guaranteeing lossless reconstruction. This paper presents a lossless compression method that integrates a Recurrent Neural Network (RNN) as a 3D sequence prediction model. The aim is to learn the long dependencies of a voxel's neighbourhood in 3D using a Long Short-Term Memory (LSTM) network, then to compress the residual error using arithmetic coding. Experimental results reveal that our method obtains a higher compression ratio, achieving a 15% saving compared to state-of-the-art lossless compression standards, including JPEG-LS, JPEG2000, JP3D, HEVC, and PPMd. Our evaluation demonstrates that the proposed method generalizes well to the unseen modalities CT and MRI in a lossless compression scheme. To the best of our knowledge, this is the first lossless compression method that uses an LSTM neural network for 16-bit volumetric medical image compression.
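
As a rough sketch of the predict-then-encode idea described above (not the paper's actual architecture), the PyTorch snippet below shows an LSTM mapping a sequence of neighbouring voxel values to a prediction for the next voxel; the residual between the true and predicted values is what an arithmetic coder would then compress. Layer sizes, the L1 objective, and all names are illustrative assumptions:

import torch
import torch.nn as nn

# An LSTM reads a causal context of neighbouring voxel values and
# predicts the next voxel; only the residual (true - predicted) would
# be passed on to the arithmetic coder.
class VoxelPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, context):           # context: (batch, seq_len, 1)
        out, _ = self.lstm(context)
        return self.head(out[:, -1, :])   # predict the next voxel value

model = VoxelPredictor()
context = torch.rand(32, 20, 1)           # 32 voxels, 20-sample contexts
target = torch.rand(32, 1)
prediction = model(context)
residual = target - prediction            # this is what gets entropy coded
loss = residual.abs().mean()              # illustrative L1 training objective
loss.backward()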

Cite as:

@inproceedings{Nagoor2020MedZip, 
    author    = "Nagoor, Omniah and Whittle, Joss and Deng, Jingjing and Mora, Ben and Jones, Mark W.",
    booktitle = "2020 25th International Conference on Pattern Recognition (ICPR)", 
    title     = "MedZip: 3D Medical Images Lossless Compressor Using Recurrent Neural Network (LSTM)", 
    year      = "2020", 
    pages     = "2874--2881",
    doi       = "10.1109/ICPR48806.2021.9413341"
}

Lossless Compression For Volumetric Medical Images Using Deep Neural Network With Local Sampling

Omniah Nagoor, Joss Whittle, Jingjing Deng, Ben Mora, and Mark W. Jones. Presented at ICIP 2020 - Abu Dhabi, United Arab Emirates.

Selected for the ICIP 2020 Top Viewed Q&A Paper Award (2nd place).

Data compression plays a central role in handling the bottleneck of data storage, transmission, and processing. Lossless compression requires reducing the file size whilst maintaining bit-perfect decompression, which is the main target in medical applications. This paper presents a novel lossless compression method for 16-bit medical imaging volumes. The aim is to train a neural network (NN) as a 3D data predictor that minimizes the difference from the original data values, and to compress the residuals using arithmetic coding. We compare the compression performance of our proposed models against state-of-the-art lossless compression methods, showing that our approach achieves a higher compression ratio than JPEG-LS, JPEG2000, JP3D, and HEVC, and generalizes well.

Cite as:

@inproceedings{Nagoor2020,
    author    = "Nagoor, Omniah and Whittle, Joss and Deng, Jingjing and Mora, Ben and Jones, Mark W.",
    booktitle = "2020 IEEE International Conference on Image Processing (ICIP)", 
    title     = "Lossless Compression For Volumetric Medical Images Using Deep Neural Network With Local Sampling", 
    year      = "2020",
    pages     = "2815--2819",
    doi       = "10.1109/ICIP40778.2020.9191031"
}

A Deep Learning Approach to No-Reference Image Quality Assessment For Monte Carlo Rendered Images

Joss Whittle and Mark W. Jones. Presented at CGVC 2018 - Swansea, Wales.

In Full-Reference Image Quality Assessment (FR-IQA), images are compared with ground truth images that are known to be of high visual quality. These metrics are used to rank algorithms under test on their image quality performance. Throughout the progress of a Monte Carlo rendering process we often wish to determine whether the images being rendered are of sufficient visual quality, without the availability of a ground truth image. In such cases FR-IQA metrics are not applicable, and we must instead use No-Reference Image Quality Assessment (NR-IQA) measures to make predictions about the perceived quality of unconverged images. In this work we propose a deep learning approach to NR-IQA, trained specifically on noise from Monte Carlo rendering processes, which significantly outperforms existing NR-IQA methods and can produce quality predictions consistent with FR-IQA measures that have access to ground truth images.
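
To make the setup concrete, here is a hypothetical sketch (not the paper's network) of how such an NR-IQA model could be trained: a small CNN regresses a scalar quality score from an image patch, supervised by an FR-IQA metric that has access to a converged reference only at training time. The architecture and all names are illustrative assumptions:

import torch
import torch.nn as nn

# A small CNN maps a render patch to a scalar quality score. At test
# time no reference is needed; FR-IQA scores supervise training only.
class QualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(64, 1)

    def forward(self, patch):
        return self.score(self.features(patch).flatten(1))

net = QualityNet()
patches = torch.rand(8, 3, 64, 64)    # unconverged render patches
fr_iqa_targets = torch.rand(8, 1)     # FR-IQA scores, training time only
loss = nn.functional.mse_loss(net(patches), fr_iqa_targets)
loss.backward()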

Cite as:

@inproceedings{Whittle2018, 
    author    = "Whittle, Joss and Jones, Mark W.", 
    booktitle = "Computer Graphics and Visual Computing (CGVC) 2018", 
    title     = "A Deep Learning Approach to No-Reference Image Quality Assessment For Monte Carlo Rendered Images", 
    year      = "2018", 
    month     = "Sep",
    pages     = "23--31",
    doi       = "10.2312/cgvc.20181204"
}

Analysis of reported error in Monte Carlo rendered images

Joss Whittle, Mark W. Jones, and Rafał Mantiuk. Published in The Visual Computer, June 2017, Volume 33, Issue 6–8, pp 705–713. Presented at CGI 2017 - Yokohama, Japan.

Evaluating image quality in Monte Carlo rendered images is an important aspect of the rendering process, as we often need to determine the relative quality between images computed using different algorithms and with varying amounts of computation. The use of a gold-standard reference image, or ground truth, is a common method to provide a baseline with which to compare experimental results. We show that, if not chosen carefully, the quality of reference images used for image quality assessment can skew results, leading to significant misreporting of error. We present an analysis of error in Monte Carlo rendered images and discuss practices to avoid or be aware of when designing an experiment.
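
A small numerical illustration of this effect (not taken from the paper): if the reference image still contains zero-mean noise that is independent of the image under test, the reported MSE is inflated by roughly the variance of the reference's residual noise:

import numpy as np

# For independent zero-mean noise,
#   E[MSE(x, ref_noisy)] = MSE(x, truth) + Var(reference noise),
# so an unconverged reference systematically misreports error.
rng = np.random.default_rng(0)
truth = rng.random(1_000_000)

test_image = truth + rng.normal(0.0, 0.05, truth.shape)   # image under test
noisy_ref = truth + rng.normal(0.0, 0.03, truth.shape)    # unconverged reference

mse_vs_truth = np.mean((test_image - truth) ** 2)
mse_vs_noisy = np.mean((test_image - noisy_ref) ** 2)

print(f"true error:     {mse_vs_truth:.6f}")   # ~0.0025 (= 0.05**2)
print(f"reported error: {mse_vs_noisy:.6f}")   # ~0.0034 (= 0.05**2 + 0.03**2)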

Cite as:

@article{Whittle2017,
    author  = "Whittle, Joss and Jones, Mark W. and Mantiuk, Rafa{\l}",
    title   = "Analysis of reported error in Monte Carlo rendered images",
    journal = "The Visual Computer",
    year    = "2017",
    month   = "Jun",
    day     = "01",
    volume  = "33",
    number  = "6-8",
    pages   = "705--713",
    issn    = "1432-2315",
    url     = "https://doi.org/10.1007/s00371-017-1384-7",
    doi     = "10.1007/s00371-017-1384-7"
}

Implementing generalized deep-copy in MPI

Joss Whittle, Rita Borgo, and Mark W. Jones. Published in PeerJ Computer Science, 2016.

In this paper, we introduce a framework for implementing deep copy on top of MPI. The process is initiated by passing just the root object of the dynamic data structure; our framework takes care of all pointer traversal, communication, copying, and reconstruction on the receiving nodes. The benefit of our approach is that MPI users can deep copy complex dynamic data structures without needing to write bespoke communication or serialize/deserialize methods for each object, which can present a challenging implementation problem that quickly becomes unwieldy to maintain when working with complex structured data. This paper demonstrates our generic implementation, which encapsulates both approaches. We analyze the approach with a variety of structures (trees, graphs (including complete graphs), and rings) and demonstrate that it performs comparably to hand-written implementations while using a vastly simplified programming interface. The complete source code is made available as a convenient header file.
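
The paper's framework targets C++ and MPI; as a conceptual illustration only (not the library's API), the following Python sketch shows the core traversal technique: flatten the structure from the root, recording each node exactly once by identity, so that shared references and cycles (rings, complete graphs) serialize into a pointer-free table that the receiving side can rebuild:

# Each node is recorded once by id(), so cycles terminate and shared
# references are preserved rather than duplicated on reconstruction.
class Node:
    def __init__(self, value):
        self.value = value
        self.links = []

def flatten(root):
    index, order = {}, []
    stack = [root]
    while stack:
        node = stack.pop()
        if id(node) in index:
            continue
        index[id(node)] = len(order)
        order.append(node)
        stack.extend(node.links)
    # Each record: (value, indices of linked nodes) -- a pointer-free
    # payload that could be sent over the wire as-is.
    return [(n.value, [index[id(m)] for m in n.links]) for n in order]

def rebuild(records):
    nodes = [Node(value) for value, _ in records]
    for node, (_, link_ids) in zip(nodes, records):
        node.links = [nodes[i] for i in link_ids]
    return nodes[0]

# A two-node ring: naive recursive copying would never terminate, but
# the identity table lets it serialize in one pass and round-trip intact.
a, b = Node("a"), Node("b")
a.links, b.links = [b], [a]
copy = rebuild(flatten(a))
print(copy.value, copy.links[0].value, copy.links[0].links[0] is copy)  # a b True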

Cite as:

@article{Whittle2016,
    title   = "Implementing generalized deep-copy in MPI",
    author  = "Whittle, Joss and Borgo, Rita and Jones, Mark W.",
    journal = "PeerJ Computer Science",
    year    = "2016",
    month   = "Nov",
    volume  = "2",
    pages   = "e95",
    issn    = "2376-5992",
    url     = "https://doi.org/10.7717/peerj-cs.95",
    doi     = "10.7717/peerj-cs.95"
}