
Project Proposals

The following is a list of projects available for MLBD MRes students in the academic year 2023/24.

Algorithmic optimization of deformable mirrors for high-power laser experiments.

Project code: MLBD_1 Supervisors(s): Roland Smith (Physics/Plasma Physics and Light Section)

Note

This project can host up to 1 pair(s) of students.

Deformable mirrors use multiple computer-controlled actuators to bend an optical surface, sometimes with an accuracy of just a few nanometres, and are used in instruments such as the James Webb Space Telescope to radically improve the performance of an imaging system. In a high-power laser these devices can be used to correct subtle spatial aberrations or the temporal phase of a multi-terawatt, short-pulse laser beam and significantly improve its performance, or to create new tools that allow shaping of an optical pulse on few-femtosecond timescales to "search" for new physics.

An “obvious” approach is to use a deformable mirror to simply optimize a laser focal spot in space, or a pulse shape in time. Rather counter-intuitively, many interesting and inherently non-linear processes driven by a laser (e.g. MeV particle acceleration or filamentation) can also benefit from a non-ideal spot or pulse shape. A major challenge in using this technique is that the mapping of individual “control values” from a computer to a real-world mirror surface – and then to a physical process – is inherently non-linear and in some cases not well understood. The “search space” is also very large: a 9-actuator mirror with a 12-bit control system has ~3×10^32 different configurations, and a 15-actuator system expands this to ~10^54! Finally, we also need to teach a computer to recognize “good” and “bad” results as it learns about the system, and to avoid getting stuck in a local minimum. This could use a simple algorithm that gives a single quality metric for a focal-spot image, but for complex images with lots of fine structure, or for an entire experiment with many additional control parameters, a neural net or other machine learning techniques such as Bayesian optimisation might have significant speed and “robustness” advantages. We might also need different ways to describe the problem, e.g. using a sum of Zernike polynomials to represent a complicated spatial phase profile in a “real” laser beam rather than just tweaking a collection of mirror control values.

This project will use genetic (GA), Bayesian and other algorithmic approaches to optimize the “shape” in space and / or time of single and multiple mirror systems and use this to control the focus of a low power test beam, and if time allows, extend this to optimize high-power laser experiments. We will also investigate different image analysis and recognition techniques (assorted “direct” algorithms versus neural nets) to identify “good” and “bad” laser beams. We may also use machine optimization of finite element structural models of “new” mirror systems that can potentially be built and tested during the project, e.g. to exploit new actuator configurations.
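As a concrete illustration of the search problem, a minimal genetic-algorithm sketch in Python/NumPy. The fitness function here is a stand-in (distance to a hidden "ideal" actuator configuration); in the real system it would be a quality metric computed from a camera image of the focal spot. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACT, BITS, POP = 9, 12, 40          # 9 actuators, 12-bit DACs
LEVELS = 2 ** BITS                    # 4096 levels per actuator

# Stand-in quality metric: reward proximity to a hidden "ideal"
# configuration; a real metric would score a focal-spot image.
target = rng.integers(0, LEVELS, N_ACT)
def fitness(genome):
    return -np.abs(genome - target).sum()

pop = rng.integers(0, LEVELS, (POP, N_ACT))
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]       # keep best half
    # Crossover: splice pairs of elite parents at a random actuator.
    parents = elite[rng.integers(0, len(elite), (POP, 2))]
    cut = rng.integers(1, N_ACT, POP)
    pop = np.where(np.arange(N_ACT) < cut[:, None],
                   parents[:, 0], parents[:, 1])
    # Mutation: occasionally re-randomise one actuator value.
    mutate = rng.random(POP) < 0.3
    pop[mutate, rng.integers(0, N_ACT, mutate.sum())] = rng.integers(
        0, LEVELS, mutate.sum())

best = pop[np.argmax([fitness(g) for g in pop])]
```

Even this toy version shows why the representation matters: mutating raw actuator values explores the ~10^32 configurations blindly, whereas a Zernike-coefficient encoding would mutate physically meaningful aberrations.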

Note

Any useful pre-requisite modules/knowledge? : N/A


IRIS: A New Global Tropical Cyclone Model

Project code: MLBD_2 Supervisors(s): Ralf Toumi (SPAT)

Note

This project can host up to 1 pair(s) of students.

Tropical cyclones are one of the most dangerous natural hazards today, and will be even more so in the future. Much about their fascinating genesis and evolution remains insufficiently understood (1). Modelling this phenomenon is challenging because of the wide range of scales in time and space and the vast range of physical processes involved: everything we know about atmospheric physics affects tropical cyclones. Synthetic or stochastic models are extremely powerful tools for risk assessment (2). In this project you will join the tropical cyclone research group to build a new global stochastic model of tropical cyclones called IRIS (Imperial College Storm Model), applying machine learning techniques. We also want this model to be physics-informed, not just statistical. The aim is for IRIS to make seasonal predictions and to simulate the impact of climate change. For this we need to understand and include the role of the Pacific El Niño oscillation and other variability to enable forecasts globally. A version of IRIS is now also run on smartphones by the general public to create the largest open and free database of global tropical cyclone risk (3). You will join the largest research group in Europe working on tropical cyclones. 1. link 2. link 3. link
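To make the idea of a synthetic model concrete, a deliberately simple sketch of a stochastic track-and-intensity generator: a biased random walk for position plus an AR(1) process for intensity. Every number here is invented for illustration and has nothing to do with IRIS's actual parameterisation.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_track(n_steps=40):
    """Toy synthetic tropical-cyclone track: a biased random walk for
    position plus an AR(1) wind-speed process (illustrative only)."""
    lon = 150.0 + rng.normal(0, 5)            # genesis longitude (deg)
    lat = 12.0 + rng.normal(0, 2)             # genesis latitude (deg)
    v = 20.0                                  # initial wind speed, m/s
    track = []
    for _ in range(n_steps):
        lon += -0.5 + rng.normal(0, 0.3)      # mean westward steering
        lat += 0.25 + rng.normal(0, 0.2)      # mean poleward drift
        # AR(1) intensity relaxing toward a notional potential
        # intensity that collapses at high latitude.
        pi = 60.0 if lat < 30 else 25.0
        v = max(0.0, 0.9 * v + 0.1 * pi + rng.normal(0, 2))
        track.append((lon, lat, v))
    return np.array(track)

tracks = [synthetic_track() for _ in range(100)]
```

A physics-informed version would replace the hard-coded drift and relaxation constants with terms learned from reanalysis data and constrained by known dynamics.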

Note

Any useful pre-requisite modules/knowledge? : Beneficial, but not required: Atmospheric Physics


Harnessing machine learning to determine how space weather dynamics change in global simulations

Project code: MLBD_3 Supervisors(s): Martin Archer (Physics / Space and Atmospheric Physics Group)

Note

This project can host up to 1 pair(s) of students.

Space weather describes the changing environmental conditions in near-Earth space due to the interaction of the Sun with Earth's magnetic field. This interaction forms Earth's magnetosphere - ionosphere system, a highly complex space plasma environment. The process is also incredibly dynamic, with numerous wave-like phenomena emerging from the interaction, like wind waves on water and analogues of musical instruments resonating in space. These waves circulate energy throughout geospace, leading to impacts on regions such as the radiation belts, aurorae, and our atmosphere which ultimately can critically affect our technology.

One of the ways we are developing our understanding of this interaction is through global MHD (magnetohydrodynamic) simulations. Imperial has developed the UK's only global MHD code, called Gorgon, which is being used by the Met Office and European Space Agency for forecasting space weather. Given the vast size and variability of the geospace system, however, each simulation produces masses of data.

This project will harness cutting-edge machine learning techniques of dimensionality reduction and pattern recognition (such as Dynamic Mode Decomposition originally developed for fluids) to efficiently extract the key dynamics and waves from Gorgon simulations. Over several simulation runs, you will determine how the properties of the dynamics vary under different solar driving conditions to test our understanding of the physical processes driving space weather.
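A minimal sketch of exact Dynamic Mode Decomposition, assuming the simulation output can be arranged as a state-by-time snapshot matrix. The synthetic travelling-wave data here are a stand-in for wave activity extracted from Gorgon fields.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of snapshot matrix X (state x time), rank-r truncated.
    Returns spatial modes Phi and their complex eigenvalues."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    Atilde = U.conj().T @ X2 @ V / s          # low-rank linear operator
    eigvals, W = np.linalg.eig(Atilde)
    Phi = X2 @ V / s @ W                      # exact DMD modes
    return Phi, eigvals

# Synthetic "simulation output": two travelling waves plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 64)
t = np.arange(200) * 0.05                     # dt = 0.05
data = (np.sin(x[:, None] - 5 * t[None, :])
        + 0.5 * np.sin(2 * x[:, None] - 9 * t[None, :])
        + 0.01 * rng.normal(size=(64, 200)))
Phi, eigvals = dmd(data, r=4)
freqs = np.abs(np.angle(eigvals)) / 0.05      # recovered frequencies
```

The recovered eigenvalue angles give the oscillation frequencies of each mode (here the injected 5 and 9 rad/s), and eigenvalue magnitudes indicate growth or decay, which is exactly the kind of compressed description of wave dynamics sought from the Gorgon runs.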

Dr Martin Archer is a UKRI Stephen Hawking Fellow in Space Physics in the Department of Physics. He studies the dynamics and waves that emerge from the interaction of the solar wind with Earth's magnetosphere and ionosphere. His research has been often highlighted by NASA and covered in numerous media outlets internationally, e.g. link

Note

Any useful pre-requisite modules/knowledge? : N/A


Dusty Star-Forming Galaxies in the Large Scale Structure of the Universe

Project code: MLBD_4 Supervisors(s): David Clements (Physics/Universe)

Note

This project can host up to 1 pair(s) of students.

The Herschel Space Observatory observed large regions of the sky at wavelengths of 250, 350 and 500 microns as part of the HerMES survey. Observations at these wavelengths are sensitive to dust in the interstellar medium of galaxies, which is heated by young stars. Galaxies that are luminous at these wavelengths are thus called Dusty Star Forming Galaxies, or DSFGs. The Herschel surveys have identified several hundred thousand such objects. A full understanding of these sources, however, requires combination of the Herschel data with observations at other wavelengths so that the properties of the DSFGs and the environments in which they are found can be properly determined. Do DSFGs, for example, lie in galaxy clusters or other large scale structures such as the cosmic web, or do they more often lie in less dense regions, generally referred to as the 'field'? Does the location of DSFGs change with time?

Our current knowledge of DSFGs suggests that they are generally found in the field in the local universe, but at higher redshifts, and thus at earlier times, they may be more often found in dense regions, such as galaxy clusters. These results are largely based on examining the properties of small samples of DSFGs, so larger, more comprehensive studies are needed. There is also the potential that some of the distant DSFGs are gravitationally lensed by foreground objects.

Several of the HerMES survey fields - W-CDF-S, ELAIS-S1, and XMM-LSS - are the subject of multiwavelength optical to near-IR observations which have been combined to produce a catalog of about 20 million objects in these fields (Zou et al., 2022). This catalog includes optical/near-IR derived photometric redshifts for each object as well as a wealth of other information. The aim of this project is to compare the redshift distributions of galaxies in the neighbourhood of Herschel sources with the redshift distribution of the general population. In this way we can statistically examine the large scale structures associated with DSFGs, and see how this changes with far-IR colour (a crude measure of redshift). Individual examples of DSFGs that lie inside galaxy clusters or that are strongly gravitationally lensed will also be found through this analysis.
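One simple statistical comparison of this kind is a two-sample Kolmogorov-Smirnov statistic between the photometric redshifts of galaxies near a source and those of the general field. The redshift samples below are invented toy distributions, with an over-density at z ~ 1 standing in for cluster membership.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

# Toy photometric-redshift samples: broad "field" distribution versus
# galaxies near a hypothetical Herschel source with a cluster spike.
rng = np.random.default_rng(7)
field = rng.gamma(2.0, 0.5, 5000)
near_src = np.concatenate([rng.gamma(2.0, 0.5, 800),
                           rng.normal(1.0, 0.05, 200)])
D = ks_statistic(near_src, field)
```

A large D flags a source whose neighbourhood redshift distribution differs from the field, i.e. a candidate over-density; in practice photometric-redshift uncertainties would also need to be folded in.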

The project will consist of:

1 - familiarisation with the Herschel HerMES survey catalogs and the Zou et al. catalog

2 - use of standard astronomical tools such as astropy, topcat and SAO-DS9

3 - statistical characterisation of the environments of Herschel detected DSFGs and comparison with the general galaxy population

4 - identification of individual DSFG sources of interest, that may be lensed, or lie in exceptional environments

This project represents an ideal opportunity for students interested in extragalactic astronomy to gain hands-on experience of working with large multiwavelength astronomical datasets, and to work at the forefront of far-IR space astronomy, with significant potential for important scientific results and new discoveries.

Useful reading for this project:

Infrared Astronomy: Seeing the Heat, D.L. Clements (book, available from the library); Oliver et al., 2012, MNRAS, 424, 1614 link; Casey et al., 2014, Physics Reports, 541, 45 link; Zou et al., 2022, ApJS, 262, 15 link

Note

Any useful pre-requisite modules/knowledge? : Astrophysics and Cosmology may be helpful but are not required.


Reservoir computing with nanomagnetic arrays

Project code: MLBD_5 Supervisors(s): Will Branford (Physics/Matter)

Note

This project can host up to 2 pair(s) of students.

The energy cost of machine learning doubles every 3.4 months and already exceeds the entire energy usage of Argentina; 21% of global energy production is predicted to be expended on IT by 2030. While performing machine learning on standard (von Neumann) architecture computers is exceptionally powerful, it is also very inefficient, and there is a drive towards hardware which is better suited to machine learning and can reduce this energy cost. The original machine learning algorithms were developed by Sherrington and Kirkpatrick to describe magnetic arrays. Because each magnetic dipole interacts with all its neighbours at no energy cost, a nanomagnetic array is naturally a massively parallel 'neural network'. One powerful application of nanomagnetic arrays is reservoir computing, where much of the computational heavy lifting is transferred to the physics, and a standard computational machine learning step is performed only on the outputs of the reservoir. The supervisor's group has large datasets available in which nanomagnetic arrays have been trained using encoded field sequences and multiple reservoir outputs recorded. This project would test different machine learning strategies (e.g. linear regression, ridge regression) for training the weights of the reservoir output when making predictions. An overview of the research can be seen in the SPICE seminar series talk, where the machine learning by reservoir computation is at the link
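A sketch of the readout-training step, assuming the recorded reservoir outputs are available as a samples-by-channels matrix. The "reservoir" below is a toy random-feature stand-in (tanh of random couplings to a short input history), not real array data, and the recall task is a generic benchmark rather than the group's actual target.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a driven physical reservoir: 20 fixed random
# nonlinear channels reading the last three inputs of an encoded
# field sequence.
u = rng.uniform(-1, 1, 500)                       # encoded input sequence
U = np.stack([u[2:], u[1:-1], u[:-2]], axis=1)    # delay embedding
W_in = rng.normal(0, 1, (20, 3))                  # fixed random couplings
R = np.tanh(U @ W_in.T)                           # 20 reservoir channels
y = u[:-2]                                        # target: recall u(t-2)

def train_readout(R, y, alpha=1e-2):
    """Ridge-regression readout: w = (R'R + alpha*I)^-1 R'y.
    alpha=0 recovers plain linear regression."""
    return np.linalg.solve(R.T @ R + alpha * np.eye(R.shape[1]), R.T @ y)

w = train_readout(R, y)
nmse = np.mean((R @ w - y) ** 2) / np.var(y)      # normalised error
```

Only the readout weights `w` are trained; the reservoir itself stays fixed, which is precisely what makes the scheme cheap. Comparing `alpha` values on held-out data is the linear-versus-ridge comparison the project describes.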

Note

Any useful pre-requisite modules/knowledge? : N/A


PyTorch based inverse-design of neuromorphic computing hardware.

Project code: MLBD_6 Supervisors(s): Will Branford (Physics/Matter)

Note

This project can host up to 2 pair(s) of students.

The energy cost of machine learning doubles every 3.4 months and already exceeds the entire energy usage of Argentina; 21% of global energy production is predicted to be expended on IT by 2030. While performing machine learning on standard (von Neumann) architecture computers is exceptionally powerful, it is also very inefficient, and there is a drive towards hardware which is better suited to machine learning and can reduce this energy cost. The original machine learning algorithms were developed by Sherrington and Kirkpatrick to describe magnetic arrays. Because each magnetic dipole interacts with all its neighbours at no energy cost, a nanomagnetic array is naturally a massively parallel 'neural network'. The supervisor's group in EXSS Physics makes nanoscale magnetic arrays for neuromorphic computation hardware [1]. A recent publication showed that PyTorch-based methods could be used to design the magnetic configurations with the best performance, in a package called SpinTorch [2]. The supervisor is working with the original coders of SpinTorch on further development of the package and on its use to inverse-design the magnetic pattern used as the scatterer. The SpinTorch designers were able to demonstrate the design of neural-network hardware in which all neuromorphic computing functions, including signal routing and nonlinear activation, are performed by spin-wave propagation and interference. The weights and interconnections of the network are realised by a magnetic-field pattern that is applied to the spin-wave-propagating substrate and scatters the spin waves. The project would consist of modifying the SpinTorch code so that it can simulate the exact structures made in the EXSS group, and then using the software's machine learning to inverse-design the best magnetic array geometry and magnetic pattern to perform specific neuromorphic computing tasks in hardware.
An overview of the research can be seen in the OASIS seminar series talks by Gyorgy Csaba (SpinTorch) and Kilian Stenning (supervisor's group, neuromorphic reservoir) link. [1] J. C. et al., arXiv:2107.08941 (2022; accepted, Nature Nanotechnology). [2] Papp, A. et al., Nature Communications 12, 6422 (2021).
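The inverse-design idea can be sketched with a toy differentiable forward model optimised by gradient descent: a linear "scattering" stand-in with a hand-coded gradient. SpinTorch replaces this caricature with a full differentiable spin-wave simulation and PyTorch autograd; every matrix and name below is invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy inverse design: find a "magnetic pattern" p so that a fixed
# linear wave-propagation model maps inputs X to desired outputs T.
n_in, n_out, n_px = 6, 2, 30
X = rng.normal(size=(64, n_in))                   # input signals
T = rng.normal(size=(64, n_out))                  # desired outputs
A = rng.normal(size=(n_in, n_px)) / np.sqrt(n_in)   # input -> substrate
B = rng.normal(size=(n_px, n_out)) / np.sqrt(n_px)  # substrate -> output

p = np.zeros(n_px)                                # the design variable
lr = 0.05
for step in range(500):
    H = X @ A                                     # field at each "pixel"
    Y = (H * p) @ B                               # pattern scatters wave
    G = Y - T                                     # dLoss/dY for 0.5*MSE
    grad_p = (H * (G @ B.T)).sum(0) / len(X)      # chain rule to pattern
    p -= lr * grad_p

loss = 0.5 * np.mean((((X @ A) * p) @ B - T) ** 2)
```

The point of PyTorch-based tools is that the hand-derived `grad_p` line disappears: autograd differentiates through the (much more complicated) physical simulation automatically.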

Note

Any useful pre-requisite modules/knowledge? : Concepts in Device Physics would help


ND-GAr Data Acquisition Simulation (NDAQ-SIM) Optimisation

Project code: MLBD_7 Supervisors(s): Ioannis Xiotidis (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The Deep Underground Neutrino Experiment (DUNE) is one of the world-leading particle physics experiments of the future. DUNE aims to measure neutrino oscillations very precisely; these measurements will answer some key open questions in physics, such as the origin of matter in the universe. A critical aspect for the success of DUNE is the ability to efficiently read out and reconstruct particles observed in its detectors. This reconstruction allows the properties of the observed particles, such as momentum and direction, to be measured. Prior to the construction and installation of any detector, extensive simulations on "fake" data, or on data acquired through test runs of smaller-scale versions (test-beams), are used. For this reason, large simulation packages are developed in software which try to imitate the behaviour of the final system as closely as possible by handling large numbers of simulated (Monte Carlo) events. One of the future upgrade detectors of DUNE is called ND-GAr and will be based on high-pressure argon gas, mainly optimised to provide a very precise understanding of neutrino interactions on argon nuclei. In preparation for ND-GAr, the readout and data acquisition system is being designed and tested at the Test-Stand of an Over-Pressurized Argon Detector (TOAD), which is installed at Fermilab in the US. An essential component of the system design is the ability to simulate its behaviour in software and to generate data in the same fashion as the hardware boards. For this reason the NDAQ-SIM package has been developed within the collaboration, implemented solely in Python. The goal of this project is to optimise the existing implementation by first translating the framework into a more efficient object-oriented language, and then introducing multi-threading to achieve better performance of the simulator as well as a more accurate implementation.
The pair that selects this project will have a good opportunity to work on the future upgrades of an upcoming leading particle physics experiment and to contribute to the decision-making behind a system design.
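For flavour, a toy producer/consumer sketch of the kind of multi-threaded event flow a DAQ simulator involves: "front-end board" threads push simulated hits onto a queue and a builder thread collects them. This is illustrative only; none of these names come from NDAQ-SIM, and a real port would likely use a compiled language rather than Python threads.

```python
import queue
import threading

hits = queue.Queue()
STOP = object()                      # per-board end-of-data sentinel

def board(board_id, n_hits):
    """Simulated front-end board emitting hits onto the shared queue."""
    for i in range(n_hits):
        hits.put((board_id, i))      # a "hit": (board id, time index)
    hits.put(STOP)

def builder(n_boards, events):
    """Event builder: drain the queue until every board has finished."""
    stops = 0
    while stops < n_boards:
        item = hits.get()
        if item is STOP:
            stops += 1
        else:
            events.append(item)

events = []
boards = [threading.Thread(target=board, args=(b, 100)) for b in range(4)]
collector = threading.Thread(target=builder, args=(4, events))
for t in boards + [collector]:
    t.start()
for t in boards + [collector]:
    t.join()
```

The sentinel-per-producer pattern shown here is a common way to shut a pipeline down cleanly, one of the design questions any multi-threaded rewrite of the simulator has to answer.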

Note

Any useful pre-requisite modules/knowledge? : N/A


ML-assisted design of a dark matter experiment

Project code: MLBD_8 Supervisors(s): Henrique Araujo (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

Optimising the design of complex particle detectors is challenging due to the stochastic nature of particle interactions (models are usually not analytical and not differentiable) and the high dimensionality of the parameter space (hence computationally intensive Monte Carlo campaigns are needed). We are beginning to explore the design of a future major international experiment to search for dark matter scattering and other rare interactions, which we are also proposing to host in the UK link. The centrepiece of the experiment will be a liquid xenon detector with 60-80 tonnes of active mass, contained in a titanium cryostat, surrounded by ancillary “veto” detectors and water shielding. Various hardware structures hinder the propagation of background particles from the central detector out to the “veto” systems, making these less efficient – so we need to carefully optimise the design of these elements (materials, dimensions, costs). This is traditionally done with huge Monte Carlo simulation campaigns using millions of CPU hours. We want to bring ML-assisted design techniques and frameworks to this problem.

Technically, we need to maximise an “objective function” describing the aim(s) of the experiment subject to physical and other constraints. Frameworks are now being developed to enable this using differentiable programming techniques, and this project will explore these topics. One new effort of particular relevance is described here link. You will contribute to this important design activity, joining a small team of Monte Carlo simulation experts and radiation detection physicists at Imperial and elsewhere – and help shape a major international project.
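A one-parameter caricature of such an objective, assuming a differentiable surrogate for the background (exponential attenuation with shield thickness) plus a linear material-cost term. The numbers are invented; the point is only that once the objective is differentiable, gradient descent replaces a brute-force Monte Carlo scan.

```python
import numpy as np

lam, cost = 5.0, 0.02        # attenuation length (cm), cost per cm (toy)

def objective(t):
    """Residual background plus material cost for shield thickness t."""
    return np.exp(-t / lam) + cost * t

def grad(t):
    return -np.exp(-t / lam) / lam + cost

t = 1.0
for _ in range(200):
    t -= 10.0 * grad(t)      # plain gradient descent

# Analytic optimum for comparison: exp(-t/lam)/lam = cost
t_star = lam * np.log(1.0 / (lam * cost))
```

In the real problem the "objective" is a high-dimensional, noisy Monte Carlo estimate, which is exactly why differentiable surrogates and the frameworks mentioned above are needed.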

Note

Any useful pre-requisite modules/knowledge? : Nuclear & Particle Physics and Advanced Particle Physics may be beneficial but certainly not essential.


Machine Learning in Neutrino-Nucleus Interactions

Project code: MLBD_9 Supervisors(s): Minoo Kabirnezhad (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The precise measurement of neutrino properties is among the highest priorities in fundamental particle physics, involving many experimental and theoretical efforts worldwide. Since the experiments rely on the interactions of neutrinos with nucleons within the nuclear environment, the planned advances in the scope and precision of future experiments such as DUNE and Hyper-Kamiokande require an understanding and modelling of the hadronic and nuclear physics of these interactions, which are represented as interaction 'cross-section' models in neutrino event generators. Such models are essential to every phase of experimental analyses, and their theoretical uncertainties play an important role in interpreting every result.

The hadron tensor (built from the hadron current) in neutrino-nucleus interactions is the most challenging part of the cross-section calculation. It contains important nuclear information crucial for cross-section analysis and neutrino oscillation measurements. However, its calculation is computationally prohibitively expensive. One approach currently used in event generators is to rely on pre-calculated tables for different kinematics; within this approach, we lose the ability to estimate the systematic uncertainties from the theoretical models. This project will start by exploring our options for retaining systematics information within a fast simulation. Other ways in which modern computing methods can be used to improve the accuracy and scope of neutrino-nucleus interaction modelling will also be a focus of the work.
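The trade-off can be sketched with a toy "tensor element": a pre-computed table freezes the theory parameter, while a cheap surrogate fitted jointly in the kinematics and the parameter keeps it adjustable at generator time. The function, the parameter M and the basis below are all invented stand-ins, not any real hadron-tensor model.

```python
import numpy as np

# Invented stand-in for an expensive tensor element: a function of
# energy transfer w with one theory parameter M (a nuclear scale).
def tensor_element(w, M):
    return np.exp(-w / M) * w ** 2 / M ** 3

# Approach 1: pre-calculated table at fixed M -- fast, but the
# dependence on M (hence the theory systematic) is frozen in.
w_grid = np.linspace(0.01, 5.0, 200)
table = tensor_element(w_grid, M=1.0)
def lookup(w):
    return np.interp(w, w_grid, table)

# Approach 2: cheap surrogate fitted jointly in (w, M), so M can
# still be varied at generator time to propagate systematics.
M_grid = np.linspace(0.8, 1.2, 9)
W, Mm = np.meshgrid(w_grid, M_grid)
Z = np.log(tensor_element(W, Mm))
basis = np.stack([np.ones_like(W), W, np.log(W), Mm, W * Mm],
                 axis=-1).reshape(-1, 5)
coef, *_ = np.linalg.lstsq(basis, Z.ravel(), rcond=None)

def surrogate(w, M):
    f = np.stack([np.ones_like(w), w, np.log(w),
                  np.full_like(w, M), w * M], axis=-1)
    return np.exp(f @ coef)
```

A learned emulator plays the same role as `surrogate` here: nearly table-like speed while keeping the theory parameters live, so their uncertainties can still be propagated.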

Note

Any useful pre-requisite modules/knowledge? : Advanced particle physics or something equivalent would be beneficial.


Using machine learning to constrain climate projections

Project code: MLBD_10 Supervisors(s): Paulo Ceppi (Physics/SPAT)

Note

This project can host up to 2 pair(s) of students.

Clouds are one of the main uncertainties for future climate change. With global warming, clouds change, and so does their impact on the radiation budget (the difference between absorbed sunlight and emitted infrared), which has a knock-on effect on climate change known as cloud feedback. This feedback is poorly understood as climate models cannot simulate clouds reliably. The project will involve applying statistical learning techniques (e.g. ridge regression) to cloud-radiative data from satellite observations and climate model simulations. The aim will be to better quantify how clouds respond to environmental changes from present-day data, so as to more accurately predict how clouds will change with global warming, and hence constrain global warming projections. The project will build on a successful initial study by Ceppi and Nowack (2021), published in PNAS.
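A sketch of the statistical core, using synthetic stand-ins for a handful of correlated "cloud-controlling" predictors and a cloud-radiative target, with the ridge penalty chosen by simple hold-out validation. The data and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 8
# Correlated predictors (e.g. temperature, stability indices, toy only).
X = rng.normal(size=(n, p)) @ np.linalg.cholesky(0.5 * np.eye(p) + 0.5)
beta_true = np.array([1.0, -0.5, 0.3, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(0, 0.5, n)         # radiative anomaly

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# Choose the penalty on held-out data, then refit on everything.
Xtr, Xva, ytr, yva = X[:200], X[200:], y[:200], y[200:]
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
errs = [np.mean((Xva @ ridge_fit(Xtr, ytr, a) - yva) ** 2) for a in alphas]
best_alpha = alphas[int(np.argmin(errs))]
beta_hat = ridge_fit(X, y, best_alpha)
```

The regularisation matters precisely because cloud-controlling factors are mutually correlated: unpenalised regression would trade large opposing coefficients between them, making the fitted sensitivities physically uninterpretable.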

Note

Any useful pre-requisite modules/knowledge? : Prior experience in atmospheric physics is useful but not essential.


Learning the dynamics of immune response

Project code: MLBD_11 Supervisors(s): Barbara Bravi (Mathematics/Biomathematics)

Note

This project can host up to 1 pair(s) of students.

The dynamics of immune response in cancer and during infections is poorly understood. Characterising responding immune cells is a sought-after target in therapy and vaccine design. In this project, our goal is to develop machine learning methods to learn the equations that describe responding immune cells from longitudinal data. These data map the abundance of different populations of immune cells at different time points, which is informative about response because only the populations of immune cells recognizing cancer or an infection proliferate and increase in number. We will resort to Neural Differential Equations (NDEs)[1,2] to learn a response equation for each immune cell population, which will be an important tool to identify genuinely responding immune cells.

Potential questions to address: Identifying responding immune cell populations is challenging, because the dynamics of response is stochastic, and data are noisy and subject to technical bias [3]. To disentangle true response from fluctuations due to noise/bias, one potential direction is to design a transfer-learning approach within NDEs to learn the differences of the responding cells dynamics relative to the dynamics purely due to noise/bias, which can be learnt using replicates of the same sample. We could also use synthetic data, generated from a cell population dynamics model including sources of noise/bias, to thoroughly benchmark the method.

References: [1] P. Kidger, link (2022). [2] A. Alaa et al., link (2022). [3] M. Puelma Touzel et al., link (2020).
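The simplest version of "learning the dynamics" above can be sketched with one clone obeying exponential growth, recovering its response rate by regressing finite differences of log abundance. A neural differential equation generalises exactly this step, replacing the constant rate with a neural network; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
# Toy longitudinal data: abundance of one responding immune-cell clone
# sampled at 10 time points, following dN/dt = r*N with
# multiplicative measurement noise.
r_true, dt = 0.8, 0.5
t = np.arange(10) * dt
N = 100 * np.exp(r_true * t) * rng.lognormal(0, 0.05, len(t))

# "Learned dynamics", minimal version: d(log N)/dt estimated by finite
# differences; its mean is the per-clone response rate r.
dlogN = np.diff(np.log(N)) / dt
r_hat = dlogN.mean()
```

A non-responding population would give r_hat consistent with zero, which is the basic discrimination the project aims to make robust against noise and technical bias.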

Note

Any useful pre-requisite modules/knowledge? : N/A


Digital Pathology and AI for the Image Analysis of Breast Cancer Biopsies

Project code: MLBD_12 Supervisors(s): Chris Phillips (Physics/Matter)

Note

This project can host up to 2 pair(s) of students.

We have developed a technology, Digistain, that has proven very effective at diagnosing breast cancer and informing treatment choices. It uses spectroscopy to measure chemical changes caused by the onset of the disease. At the moment, though, we still need a human to select the area of the tissue (the "region of interest", RoI) we want to analyse. New open-source software is emerging that can outline the cells in such images and perform a range of statistical analyses on them. This project will examine the extent to which this software can itself select the RoI for us.
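As a minimal example of automated region selection, Otsu's classical threshold applied to a toy "tissue" image. The emerging cell-outlining software is far more sophisticated; this only illustrates the basic idea of separating tissue from background without a human in the loop.

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Otsu's method: pick the intensity threshold that maximises
    the between-class variance of the image histogram."""
    hist, edges = np.histogram(img, bins=n_bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * edges[:-1])            # class-0 partial mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return edges[np.nanargmax(sigma_b)]

# Toy "tissue section": a bright blob on a dark background plus noise.
rng = np.random.default_rng(8)
yy, xx = np.mgrid[:64, :64]
img = (np.hypot(xx - 32, yy - 20) < 12).astype(float)
img = img * 0.7 + 0.1 + 0.05 * rng.normal(size=img.shape)
thr = otsu_threshold(img)
roi = img > thr                               # candidate region of interest
```

Real Digistain images would need cell-level segmentation and morphology statistics on top of this, but a global threshold is the usual first step of such pipelines.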

Note

Any useful pre-requisite modules/knowledge? : N/A


Developing a high-throughput processing workflow for state-of-the-art microscopy

Project code: MLBD_13 Supervisors(s): Andrew Rose (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

Single molecule localization microscopy (SMLM) won a Nobel Prize in 2014 and has helped take fluorescence microscopy beyond the diffraction limit by imaging fluorescent molecules labelling a sample one at a time, enabling their position to be determined with a precision <30 nm. It is, however, historically a labour- and computationally intensive technique with low throughput that has predominantly produced “pictures” rather than quantitative readouts.

We are working to develop automated SMLM instruments for applications including basic research and drug discovery, and are exploring cluster analysis of the single (fluorescent) molecule localizations to obtain quantitative readouts of biomolecular interactions in cells.

An interdisciplinary collaboration between the High Energy Physics and Photonics Groups in the Physics Department at Imperial has been exploring a Bayesian approach to cluster-finding that potentially offers superior results compared to other algorithms, with a high-performance, multithreaded C++ implementation mitigating its higher computational cost and offering an over 18,000-fold speed-up over previous implementations. However, many aspects of the workflow, before and after the clustering, still rely heavily on manual partitioning, classification and interpretation of data. We aim to develop a scalable workflow handling the vast SMLM data sets generated from high-throughput SMLM assays in 96-well sample plates, where manual processing is infeasible.
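For orientation, a naive density-based (DBSCAN-style) clustering of 2-D localizations. It is O(n²) and purely illustrative next to the Bayesian C++ implementation described above; the point data are synthetic.

```python
import numpy as np
from collections import deque

def cluster_localizations(pts, eps=0.05, min_pts=5):
    """Naive DBSCAN-style clustering of 2-D localizations.
    Returns one integer label per point (-1 = noise)."""
    n = len(pts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    neighbours = d2 < eps ** 2
    core = neighbours.sum(1) >= min_pts
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # Flood-fill the density-connected component around core point i.
        queue, labels[i] = deque([i]), cluster
        while queue:
            j = queue.popleft()
            if not core[j]:
                continue           # border points do not expand further
            for k in np.flatnonzero(neighbours[j]):
                if labels[k] == -1:
                    labels[k] = cluster
                    queue.append(k)
        cluster += 1
    return labels

# Toy data: two tight clusters of localizations plus uniform background.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal([0.3, 0.3], 0.01, (50, 2)),
                 rng.normal([0.7, 0.6], 0.01, (50, 2)),
                 rng.uniform(0, 1, (40, 2))])
labels = cluster_localizations(pts)
n_clusters = labels.max() + 1
```

The downstream quantitative readouts then come from per-cluster statistics (counts, radii, densities) rather than from the rendered image itself.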

The aim of this project is 1) to develop ML models to assist with and eventually replace the manual processing steps; 2) to further validate, profile and optimize the core C++ code as required, and 3) to deploy the workflow on the Imperial College HPC cluster to analyse experimental SMLM data acquired in multidisciplinary collaborations between the Photonics Group and the Department of Surgery & Cancer at Imperial College School of Medicine.

Note

Any useful pre-requisite modules/knowledge? : N/A


New approaches to estimating greenhouse gas emissions in London

Project code: MLBD_14 Supervisors(s): Heather Graven (Physics, SPAT)

Note

This project can host up to 2 pair(s) of students.

This project will use new computational approaches and new data sources to estimate greenhouse gas (GHG) emissions in London, particularly CO2 and CH4. The project will address new needs and opportunities to better understand GHG emissions for London city, for companies, and for citizens, particularly as there are new reporting rules for emissions and policies to achieve net zero emissions. New data sources include satellite data, atmospheric measurements, and other big data such as internet or map searches. Academic, government and industrial partners will be involved. See work from ClimateTrace for examples in this field.

Note

Any useful pre-requisite modules/knowledge? : Atmospheric Physics would be beneficial


Estimating Earth Radiation Budget Components

Project code: MLBD_15 Supervisors(s): Jacqui Russell, Helen Brindley (Physics/Space and Atmospheric Physics)

Note

This project can host up to 1 pair(s) of students.

Earth’s climate is largely determined by its energy budget: the amount of sunlight absorbed by Earth and the amount of infrared energy emitted to space. Broadband measurements of the top of the atmosphere reflected shortwave and outgoing longwave energy, along with knowledge of the incoming energy from the Sun, allow us to determine this energy budget and are essential to our understanding of the Earth system. These measurements are difficult to make and rely on a few satellite instruments capable of making observations across the wide wavelength range required.
Sensors designed for monitoring clouds, the surface and atmospheric components such as greenhouse gases observe in narrow spectral bands with wavelengths targeted at specific features. These sensors are the mainstay of meteorological passive remote sensing and are mounted on all the meteorological geostationary satellites, as well as on many satellites in low Earth orbit. Being able to accurately infer broadband radiance from these narrowband observations is an important problem in Earth Radiation Budget science, as such estimates can supplement the direct observations.

The geostationary METEOSAT Second Generation meteorological satellites (METEOSAT-8, 9, 10 and 11) each carry an advanced narrowband imager (SEVIRI) along with a broadband radiometer (GERB). The SEVIRI instrument observes in 11 channels distributed through visible, near-infrared and infrared wavelengths. Imperial College provides the scientific leadership for GERB, which is the only instrument in the world designed to measure the Earth's radiation budget from geostationary orbit. The instrument makes broadband observations in two channels covering the reflected solar and outgoing longwave wavelength range from the ultraviolet to the far infrared (0.34 to >100 μm). Top-of-the-atmosphere radiation budget fluxes are derived from these GERB observations every 15 minutes for the region 60°E to 60°W and 60°N to 60°S.

The goal of this project is to develop an appropriate ML approach and apply it to multi-year coincident observations from SEVIRI and GERB in order to derive estimates of broadband radiances from the SEVIRI measurements. This has been done previously using a classical, regression-based approach, but we anticipate that an improvement in speed and accuracy may be possible using newer tools from the ML field (e.g. link). This work is particularly timely because the GERB instruments will reach the end of their operational life over the next few years and there are no plans for a continuation.
However, narrowband observations from the METEOSAT platforms will continue: this work can thus lay the foundations for extending geostationary Earth Radiation budget estimates into the future while also facilitating improvements to the current record by enabling gaps in the existing GERB record to be bridged.
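The narrowband-to-broadband task can be caricatured with synthetic spectra: a classical least-squares fit from 11 channels to the broadband integral, which is the baseline an ML model (e.g. a small network or gradient-boosted trees) would aim to beat. The spectra and band responses below are randomly generated, not SEVIRI's.

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic stand-in: each "scene" has a hidden 40-bin spectrum; the 11
# narrowband channels are weighted integrals of it, and the broadband
# radiance is its total integral.
n_scenes, n_bands = 2000, 11
spectra = rng.gamma(2.0, 1.0, (n_scenes, 40))       # hidden spectra
band_weights = rng.dirichlet(np.ones(40), n_bands)  # band response fns
narrow = spectra @ band_weights.T                   # 11 channels/scene
broad = spectra.sum(axis=1)                         # broadband target

# Classical baseline: linear least squares with an intercept.
design = np.column_stack([narrow, np.ones(n_scenes)])
coef, *_ = np.linalg.lstsq(design, broad, rcond=None)
pred = design @ coef
rel_rmse = np.sqrt(np.mean((pred - broad) ** 2)) / broad.mean()
```

In the real problem the mapping also depends on scene type and viewing geometry, which is where nonlinear ML models are expected to improve on a single global regression.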

Note

Any useful pre-requisite modules/knowledge? : Atmospheric Physics module beneficial but not a pre-requisite


Interpretable Machine Learning for Space Weather Forecasting

Project code: MLBD_16 Supervisors(s): Mike Heyns (Space Physics)

Note

This project can host up to 1 pair(s) of students.

This project aims to develop novel interpretable machine learning techniques to forecast space weather drivers at mid-latitudes, and to do so in an operational context. Space weather has become increasingly relevant with our dependence on modern infrastructure, in particular affecting the power grids and satellite operations that we rely on daily. With a physical understanding of the driving near-Earth current systems involved and in-situ measurements of the solar wind, machine learning approaches provide a great platform for operational modelling. Using solar wind data from the ACE and DSCOVR satellites link and ground magnetometer measurements link, we will focus on forecasting the ground geomagnetic field response. Emphasis will be placed on creating an interpretable framework that allows uncertainty in the model to be quantified and the output ultimately acted upon.

Interested students can choose from two distinct open research questions. The first is to extend an existing framework to include physically inspired basis functions describing the near-Earth equivalent current systems and their ground geomagnetic field responses. Upendran et al. (see below) made use of spherical harmonics to achieve global prediction in an initial model, but variations using Spherical Fourier Neural Operators (SFNOs) or Dynamic Mode Decomposition (DMD) may provide novel insight that captures not only the spatial response of the system but also its temporal dependence. The second research question lies in creating a generalizable approach to utilizing real-time in-situ solar wind data for downstream models that require processed, science-quality data. This is a particular challenge for AI/ML-based models, where training is often restricted to science-quality data. Here the aim would be to develop real-time in-situ solar wind embeddings and to train a pipeline that includes propagation time, plasma interactions, and basic interpolation for gaps in the data.
The result would be generally applicable to a host of downstream models, including physics-based models, such as the Gorgon global MHD code used in space weather forecasting deployments at the UK Met Office and ESA.
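As a toy illustration of the basis-function idea, the sketch below fits low-degree real spherical harmonics to a snapshot of ground-station measurements by least squares. Everything here (station positions, coefficients, noise level) is invented for the example; a real framework would drive the coefficients from solar wind inputs.

```python
import numpy as np

def real_sh_basis(theta, phi):
    """Low-degree real spherical harmonics (unnormalised), evaluated at
    colatitude theta and longitude phi (radians). Degrees l = 0, 1 only."""
    return np.stack([
        np.ones_like(theta),              # Y_0,0
        np.cos(theta),                    # Y_1,0
        np.sin(theta) * np.cos(phi),      # Y_1,1
        np.sin(theta) * np.sin(phi),      # Y_1,-1
    ], axis=1)

rng = np.random.default_rng(0)
theta = rng.uniform(0.2, np.pi / 2, 40)   # synthetic station colatitudes
phi = rng.uniform(0, 2 * np.pi, 40)       # synthetic station longitudes

# Synthetic "ground response" built from known coefficients plus noise
true_coeffs = np.array([5.0, -2.0, 1.0, 0.5])
B = real_sh_basis(theta, phi)
obs = B @ true_coeffs + 0.1 * rng.normal(size=40)

# Least-squares fit of the basis coefficients to the station observations
coeffs, *_ = np.linalg.lstsq(B, obs, rcond=None)
print(coeffs)
```

In a forecasting setting the fitted coefficients become the (interpretable) targets that an ML model predicts from upstream solar wind measurements.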

Useful references:

  1. Camporeale contextualises the current state of machine learning in space weather and highlights some of the challenges faced in producing actionable operational forecasts.

  2. Upendran et al. make use of a spherical harmonic basis to drive ground geomagnetic field estimation from in-situ satellite measurements.

  3. Bonev et al. present a new library for Spherical Fourier Neural Operators, which has interesting applications in modelling global spatial and temporal dynamics at arbitrary resolutions.

  4. Schmid presents a detailed review of Dynamic Mode Decomposition (DMD) applications in various contexts, including the use of sparse observations that would be relevant here.

  5. Baumann et al. make use of random forests and gradient boosting machines to compute the appropriate propagation from L1 to the bow shock, i.e., a step that is needed to convert real-time data to the science-quality data found in the OMNI dataset.

  6. Cameron et al. take the approach of propagating the solar wind from L1 to the bow shock using a 1.5D MHD model, which allows plasma interactions to be included; this approach may form the basis for initial training.

Note

Any useful pre-requisite modules/knowledge? : Space Physics (PHYS70019) would be beneficial


Applying deep learning to particle identification in the Water Cherenkov Test-beam Experiment at CERN

Project code: MLBD_17 Supervisors(s): Nick Prouse (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The Water Cherenkov Test-beam Experiment (WCTE) is currently being prepared to take data next year with particles from a beam at CERN. This experiment will allow measurements of muons, electrons, protons, gamma rays, and other particles, to understand how these particles interact in a water Cherenkov detector and to test and develop the detector technology. Identifying the type of particles interacting in the detector is essential to make physics measurements of those particles. Applications of deep-learning technologies, including convolutional neural networks and graph networks, are now being explored as an advancement over existing methods. The WCTE provides a unique opportunity to test these new techniques with data from a well-characterised particle beam, which has not previously been possible for water-Cherenkov detectors.

In this project, deep learning methods developed in preliminary simulation-based studies will be adapted to real WCTE data. In particular, the classification of muons vs electrons could be used as the first ever demonstration of these techniques in a physical detector. Prior to entering the water-Cherenkov detector, electrons and muons are already identified, allowing the performance of our deep learning classification technique to be measured. There will be opportunities to explore different network architectures and to understand how training on simulation might affect their ability to work on real data. There could also be opportunities to further develop the network architectures to include adversarial training techniques for robustness against imperfect simulated training data.

Note

Any useful pre-requisite modules/knowledge? : The NPP and APP courses would be beneficial but not required


Machine learning for reconstructing charged particles in the Hyper-Kamiokande experiment

Project code: MLBD_18 Supervisors(s): Nick Prouse (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The Hyper-Kamiokande experiment is currently being constructed in Japan, a successor to the Nobel prize-winning Super-Kamiokande. With a volume eight times larger, it will measure the properties of neutrinos to a greater precision than ever before, searching for differences between neutrinos and anti-neutrinos to help understand the dominance of matter over anti-matter after the big bang. As a water-Cherenkov detector, it will observe Cherenkov light produced by charged particles, so reconstructing the properties of charged particles produced by neutrino interactions is an essential step in measurements of neutrinos. Traditional maximum likelihood reconstruction methods are limited by approximations and assumptions in the underlying likelihood model, and by the computational complexity of calculating the likelihoods. Modern machine learning approaches allow improvements to both the computational performance and the reconstruction precision.

Machine learning methods have started to be successfully used in studies of smaller water Cherenkov detectors and this project will focus on scaling up these approaches to the much larger Hyper-Kamiokande detector. This will involve exploring neural network architectures that can handle sparse high-resolution geometric data, to extract particle energies, positions and directions. Existing CNN-based architectures would be adapted and analysed to understand their performance, with the opportunity to also explore graph networks or point-cloud based networks as well as other techniques to improve performance through data transformations and augmentation.

Note

Any useful pre-requisite modules/knowledge? : The NPP and APP courses would be beneficial.


Using Generative Adversarial Networks to enhance particle detector performance

Project code: MLBD_19 Supervisors(s): Nick Prouse (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

In this project deep learning techniques will be explored for enhancing the data analysis of the water Cherenkov detectors of the Hyper-Kamiokande project. The Hyper-Kamiokande detector will consist of a monolithic water volume, an order of magnitude larger than its predecessor Super-Kamiokande. Both the far detector and a new intermediate detector will observe neutrinos coming from an upgraded high-power beam, providing high neutrino interaction rates. Combined with the reduced statistical errors from this high rate, methods to reduce systematic uncertainties are needed to enable the experiment's unprecedented precision for neutrino measurements.

Particle physics measurements are often limited in precision by uncertainties in understanding and simulating the physical processes taking place in the detector and the response of the detector itself. Imperfect detector calibration and modelling used in Monte-Carlo simulations will limit the ability to correctly analyse its data. A novel approach to controlling these uncertainties would be to use machine learning models to identify and fix deficiencies in simulated data used as part of the analysis of actual detector data. This project will investigate the use of generative adversarial networks to take imperfect simulated detector data and modify the data to correct for deficiencies. Procedures would be developed using semi-supervised training combining simulated data and detector calibration and control samples, to build models to be later applied to physics data. Challenges to address would include understanding biases that the model may learn and ensuring the deficiency correction procedure minimises these issues.

Note

Any useful pre-requisite modules/knowledge? : The NPP and APP courses would be beneficial.


Inference aware optimisation for Higgs boson measurements

Project code: MLBD_20 Supervisors(s): Nicholas Wardle (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

Data collected during Run 2 of the Large Hadron Collider (2016-2018) has seen us make significant strides in understanding the Higgs boson and how it interacts with other particles. In particular, the clean final-state signature of the diphoton decay channel has been extremely useful in measuring some of the rarer Higgs boson processes that occur at the LHC.

Run 3 of data-taking, which began in 2022, will see us at least double our current dataset. Despite this, the gain in sensitivity from purely increasing the statistics will not be a game-changer in the field. Instead, we must turn our attention to more sophisticated analysis techniques to isolate Higgs boson events. In this project we will use inference-aware machine learning optimisation algorithms to classify Higgs boson events in the diphoton decay channel at the CMS experiment. The aim is to develop a novel "all-in-one" ML classifier to improve our simultaneous measurement of many Higgs boson cross sections, ultimately improving the experimental sensitivity to physics beyond the Standard Model.

Note

Any useful pre-requisite modules/knowledge? : Nuclear and particle physics courses would be beneficial


Machine learning enhanced background methods for high-energy physics analyses using CMS data

Project code: MLBD_21 Supervisors(s): Daniel Winterbottom (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

This project focuses on improving the accuracy and robustness of high-energy physics analyses. A critical aspect of these analyses involves effectively accounting for background processes that can imitate the signal. This is achieved by either simulating these background processes or employing data-driven methodologies for their estimation. Both approaches require the derivation of correction factors to address discrepancies in the background process description. Traditionally, these correction factors have been determined based on a limited set of O(2) variables, owing to the complexity of handling higher-dimensional parameter spaces. Unfortunately, this simplification often leads to an imperfect description of the background characteristics, resulting in substantial systematic uncertainties. These uncertainties can significantly impact the precision and sensitivity of the analyses.

Machine learning (ML) offers a unique advantage in reducing dimensionality, making it an ideal candidate for enhancing the existing methods. This project will involve working with proton-proton collision datasets collected by the CMS experiment at the LHC. The primary objective is to develop innovative ML-based algorithms capable of deriving correction factors in high-dimensional parameter spaces. This project will involve a comprehensive exploration of various ML algorithms, including boosted decision trees and neural networks. Both supervised and unsupervised learning approaches will be investigated to uncover the most effective strategies for deriving accurate correction factors. By systematically comparing and evaluating the performance of these ML algorithms, we aim to identify the optimal technique for addressing the limitations of traditional methods.
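One commonly used ML construction for this kind of problem (offered here only as an illustration, not as the method the analysis will necessarily adopt) is the density-ratio trick: train a classifier to separate the two samples and convert its output p into a per-event correction weight p/(1-p), which works in arbitrarily many variables at once. A minimal numpy sketch with invented two-variable toy datasets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy samples: "data" control region vs a mismodelled "simulation" (2 variables)
data = rng.normal([0.0, 0.0], 1.0, size=(4000, 2))
sim = rng.normal([0.4, -0.3], 1.0, size=(4000, 2))

X = np.vstack([data, sim])
y = np.concatenate([np.ones(len(data)), np.zeros(len(sim))])

# Logistic regression by gradient descent: p(event is data | x)
Xb = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Per-event correction factor for the simulation: p / (1 - p)
p_sim = 1.0 / (1.0 + np.exp(-np.hstack([sim, np.ones((len(sim), 1))]) @ w))
weights = p_sim / (1.0 - p_sim)

# The reweighted simulation mean moves towards the data mean
print(sim.mean(axis=0), np.average(sim, axis=0, weights=weights))
```

In practice the logistic regression would be replaced by a boosted decision tree or neural network, exactly the comparison this project sets out to make.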

Note

Any useful pre-requisite modules/knowledge? : Courses about experimental particle physics would be beneficial.


Measuring the top quark mass with simulation-based inference using the CMS detector

Project code: MLBD_22 Supervisors(s): George Uttley (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The project is to measure the mass of the top quark with boosted top jets collected by the CMS experiment. The CMS experiment is one of two general purpose detectors at the Large Hadron Collider (LHC). The top quark's properties play a pivotal role in the Standard Model (SM) of particle physics. A precision measurement of the mass is essential for testing the SM's predictions, refining fundamental parameters and detecting potential signs of new physics beyond our current understanding.

The project aims to develop novel data analysis methods using simulation-based inference. An example of this is using machine learning tools such as normalising flows to learn probability density functions. The relatively clean signatures of boosted top jets are an excellent playground for trialling these innovative techniques. The additional data from the ongoing Run 3 of the LHC and the use of simulation-based inference could lead to a significantly improved measurement of this fundamental quantity in nature.

Note

Any useful pre-requisite modules/knowledge? : None


Machine learning-empowered protein sequencing by two-dimensional partial covariance mass spectrometry

Project code: MLBD_23 Supervisors(s): Vitali Averbukh (Physics/Light)

Note

This project can host up to 1 pair(s) of students.

The goal of the proposed project is to maximise the protein sequencing power of the recently developed 2D partial covariance mass spectrometry (see T. Driver et al., PRX 10, 041004 (2020); see also Physics Today, DOI:10.1063/PT.6.1.20201023a). This will be achieved by formulation of the optimal self-correcting partial covariance mapping using machine learning (ML) approaches, such as Bayesian optimisation.

The principle of detecting collision fragments to study the structure of colliding projectiles and the mechanisms of their decomposition is applied generally across 12 orders of magnitude of collision energy, from organic and biochemical mass spectrometry (~10 eV) to particle physics (up to ~10 TeV). Especially valuable information is provided by the coincidence detection of two or more fragments, proving their origin to be in the same decomposition event. However, coincidence measurements in molecular physics, e.g. using the gold-standard COLTRIMS or “reaction microscope” setup, are only possible under the idealised conditions of detecting a single decomposition at a time, and a vast number (>~10^5) of such individual measurements are required to reach reliable conclusions.

For atoms and small molecules, these stringent requirements can be circumvented using the statistical technique of covariance mapping which, instead of requiring a true coincidence detection, focuses on the statistics of the signal fluctuations in a regular, non-coincidence measurement. Application of covariance mapping to larger species had not been successful until recently, because of the large number of spurious signals. In 2020, we introduced the method of self-correcting partial covariance, a conceptually new type of covariance mapping spectroscopy based on a single readily available parameter extracted from the measured spectrum itself (the total ion count – TIC, PRX 10, 041004 (2020)). We have constructed a new type of analytical mass spectrometric measurement based on the self-correcting TIC partial covariance: two-dimensional partial covariance mass spectrometry (2D-PC-MS). We have demonstrated that it successfully resolves correlations between fragments of macro-molecules in the mass range up to and above 10^4 Da, enabling high fidelity characterisation of a biopolymer sequence, e.g. of peptides, proteins and oligonucleotides.
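The partial covariance construction itself is compact enough to sketch. Below is a minimal numpy illustration of a TIC partial covariance map on synthetic shot-to-shot spectra; the spectra, bin counts and fluctuation model are all invented for the demo.

```python
import numpy as np

def partial_covariance(spectra):
    """Self-correcting TIC partial covariance map over shot-to-shot spectra.
    spectra: (n_shots, n_bins) array. Returns the (n_bins, n_bins) map
    pCov(X_i, X_j; I) = cov(X_i, X_j) - cov(X_i, I) cov(I, X_j) / var(I),
    where I is the total ion count (TIC) of each shot."""
    tic = spectra.sum(axis=1)
    dx = spectra - spectra.mean(axis=0)
    di = tic - tic.mean()
    n = len(spectra)
    cov = dx.T @ dx / n
    cov_xi = dx.T @ di / n          # covariance of each bin with the TIC
    return cov - np.outer(cov_xi, cov_xi) / di.var()

# Synthetic demo: two bins fed by the same fluctuating yield (a true fragment
# pair), with every bin modulated by a common shot-to-shot intensity that
# would fill a plain covariance map with spurious correlations.
rng = np.random.default_rng(2)
n_shots, n_bins = 20000, 30
common = rng.normal(1.0, 0.3, n_shots)
pair = rng.poisson(5.0, n_shots)
spectra = rng.poisson(3.0, (n_shots, n_bins)) * common[:, None]
spectra[:, 1] += pair
spectra[:, 4] += pair

pc = partial_covariance(spectra)
print(pc[1, 4], pc[0, 3])  # true pair stands out; unrelated bins near zero
```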

The TIC-based partial covariance is an approximation that is in no way guaranteed to lead to the fastest convergence of the true correlations with the number of measured spectra, or to the maximal number of revealed correlations. There is therefore strong motivation to seek an optimised single parameter, derived from the spectrum itself, that differs from the TIC. We assume that the optimal parameter, W(X), is a weighted sum of the fragment intensities X_i: W(X) = \sum_i w_i X_i, where the weights w_i are the optimisation parameters and W(X) = TIC if w_i = 1 for all i, with i spanning the physical range of possible fragment mass-to-charge ratios. Besides being a natural mathematical generalisation of the TIC, this form is physically motivated: the fragmentation-to-detection efficiency of any experimental apparatus depends on the mass-to-charge ratio of the fragment. Optimising the weights w_i on the basis of the large number of available measurements is a typical problem for ML algorithms, provided one can formulate a suitable "loss function" to be minimised in the course of the optimisation. In this project we will formulate a series of such physically motivated "loss functions" and apply various ML algorithms to optimise the weights w_i, so as to reach the highest quality of peptide sequence reconstruction from the 2D-PC-MS datasets.
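A sketch of how the weight optimisation could be framed. Both the loss function (the residual spurious correlations among bins that are uncorrelated by construction) and the crude random search (standing in for Bayesian optimisation) are illustrative assumptions, as are all datasets and parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)

def partial_cov(spectra, corrector):
    """Partial covariance map of the bins with respect to a scalar
    corrector variable (one value per shot)."""
    dx = spectra - spectra.mean(axis=0)
    dc = corrector - corrector.mean()
    n = len(spectra)
    cov = dx.T @ dx / n
    cov_xc = dx.T @ dc / n
    return cov - np.outer(cov_xc, cov_xc) / dc.var()

# Synthetic spectra: common-mode fluctuation plus one true fragment pair
n_shots, n_bins = 20000, 30
common = rng.normal(1.0, 0.3, n_shots)
pair = rng.poisson(5.0, n_shots)
spectra = rng.poisson(3.0, (n_shots, n_bins)) * common[:, None]
spectra[:, 1] += pair
spectra[:, 4] += pair

def loss(w):
    """Illustrative loss: residual spurious correlations among bins 10-15,
    which are uncorrelated by construction, with corrector W(X) = sum_i w_i X_i."""
    pc = partial_cov(spectra, spectra @ w)
    block = pc[10:16, 10:16]
    return float(np.sum(np.triu(block, k=1) ** 2))

w = np.ones(n_bins)                 # start from the TIC (all weights one)
best = loss(w)
for _ in range(200):                # crude random search over the weights;
    cand = np.clip(w + 0.05 * rng.normal(size=n_bins), 0.0, None)
    c = loss(cand)                  # a real study would use e.g. Bayesian optimisation
    if c < best:
        w, best = cand, c

print(best, loss(np.ones(n_bins)))
```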

Note

Any useful pre-requisite modules/knowledge? : None


Ionacoustic imaging in vivo

Project code: MLBD_24 Supervisors(s): Kenneth LONG (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The LhARA collaboration is developing ionacoustic imaging as an innovative solution to the problem of real-time dose measurement in situ in particle-beam therapy. Ionacoustic imaging exploits the shock wave generated from the energy deposited by the particle beam to provide the acoustic signal that is used to generate an image of the dose profile. Present reconstruction techniques are capable of reconstructing the dose distribution in homogeneous media. The reconstruction of the dose profile in vivo is a complicated problem, as the reconstruction process must account for tissue inhomogeneity and the presence of blood vessels, bones, and organ boundaries. The full power of the technique will be accessed when the ionacoustic image can be co-registered with the images used to plan the treatment.

In this project you will work with medical physicists and radiation biologists from the Institute of Cancer Research and the Oxford Institute of Radiation Oncology to develop the first prototype ionacoustic image-reconstruction and co-registration algorithm. This will involve developing algorithms to generate the ionacoustic image, and to process and co-register that image with planning computed tomography (CT) images. The co-registration of the dose map with tissue topology will require the development and adaptation of advanced computer vision techniques. The outcome will be a first evaluation of the feasibility of ionacoustic dose-profile imaging.

Note

Any useful pre-requisite modules/knowledge? : The Physics of Medical Imaging and Radiation Therapy


Inference methods for varying dimensional model spaces

Project code: MLBD_25 Supervisors(s): Carlo Contaldi (Physics/Universe)

Note

This project can host up to 1 pair(s) of students.

This project will investigate methods for posterior distribution approximation in applications where the dimension of the parameter space is unknown. An example of this problem is timestream analysis when an unknown number of transient signals are contained in noisy data. The target application is the analysis of Gravitational Wave observations by the LISA satellite. We will start with the Reversible Jump Markov Chain Monte Carlo (RJMCMC) method, which is gaining traction, but we will also explore other alternatives.

The project will start by building simple timestream simulations as a testbed for comparing different techniques. Complexity will be added progressively, and if time permits, methods developed will be tested on extensive LISA Data Challenge simulations.
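A toy birth-death RJMCMC for a timestream containing an unknown number of Gaussian-shaped transients can be sketched as below. The priors and proposals are illustrative assumptions: event parameters are proposed from their uniform priors, so with a Poisson(λ) prior on the number of events k the acceptance ratios reduce to the likelihood ratio times λ/(k+1) for a birth and k/λ for a death (the small boundary correction at k = 0 and within-model parameter moves are omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic timestream: two Gaussian transients in white noise
T, sigma, width = 200, 1.0, 3.0
t = np.arange(T)

def model(events):
    """Sum of Gaussian pulses; events is a list of (time, amplitude)."""
    y = np.zeros(T)
    for t0, a in events:
        y += a * np.exp(-0.5 * ((t - t0) / width) ** 2)
    return y

truth = [(60.0, 5.0), (140.0, 4.0)]
data = model(truth) + rng.normal(0, sigma, T)

def loglike(events):
    r = data - model(events)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

lam, a_max = 2.0, 10.0    # Poisson prior rate on k; uniform amplitude prior
events, ll = [], loglike([])
chain_k = []
for _ in range(5000):
    if rng.random() < 0.5 or not events:          # birth move
        new = events + [(rng.uniform(0, T), rng.uniform(0, a_max))]
        ll_new = loglike(new)
        # proposing from the prior: acceptance = (L'/L) * lambda/(k+1)
        if np.log(rng.random()) < ll_new - ll + np.log(lam / len(new)):
            events, ll = new, ll_new
    else:                                         # death move
        i = rng.integers(len(events))
        new = events[:i] + events[i + 1:]
        ll_new = loglike(new)
        # reverse of the birth move: acceptance = (L'/L) * k/lambda
        if np.log(rng.random()) < ll_new - ll + np.log(len(events) / lam):
            events, ll = new, ll_new
    chain_k.append(len(events))

print(np.bincount(chain_k).argmax())  # posterior mode of the number of events
```

The chain hops between model dimensions, so the histogram of chain_k approximates the posterior over the number of transients, which is exactly the quantity of interest when the dimension is unknown.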

Note

Any useful pre-requisite modules/knowledge? : Information theory


Characterising and removing magnetic spacecraft interference

Project code: MLBD_26 Supervisors(s): Tim Horbury (Physics/SpAt)

Note

This project can host up to 1 pair(s) of students.

Solar Orbiter is a spacecraft, launched in 2020, that is travelling closer to the Sun than Mercury and exploring near-Sun space. It carries a magnetic field instrument, built in the Physics Department at Imperial, that measures the small magnetic fields carried from the Sun by the solar wind. Unfortunately, these fields are so small that those generated by the spacecraft, for example by currents in wires or thermoelectric currents due to temperature gradients, are significant. We currently remove these signals using unsophisticated methods which are not very accurate, and this can impact the quality of the science data.

In this project you will use in-flight data to characterise the spacecraft signals, and develop new algorithms to remove them from the magnetic field signals. This project is fairly open-ended: we have done some preliminary work but there is a lot more we can do. You will work with the engineering team and we expect the output of this project to become an operational product used in generating data for the worldwide science community.
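As a generic illustration of the idea (the actual Solar Orbiter processing is more involved), spacecraft-generated fields that scale with telemetered housekeeping quantities can be estimated by least squares and subtracted. Everything in this sketch, including the linear coupling model and the invented "housekeeping current" channels, is an assumption for the demo:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical example: the measured field is the ambient solar-wind field
# plus contamination proportional to spacecraft housekeeping currents.
n = 5000
currents = rng.normal(size=(n, 3))           # e.g. heater / wheel currents
coupling = np.array([0.8, -0.3, 0.5])        # unknown couplings to recover
ambient = np.cumsum(rng.normal(0, 0.05, n))  # slowly varying real signal
measured = ambient + currents @ coupling + rng.normal(0, 0.02, n)

# Least-squares estimate of the couplings; the ambient field acts as noise
# here, so in practice one would fit over magnetically quiet intervals.
A = np.hstack([currents, np.ones((n, 1))])   # allow a constant offset
fit, *_ = np.linalg.lstsq(A, measured, rcond=None)
cleaned = measured - currents @ fit[:3]

print(fit[:3])   # close to the true couplings
```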

Note

Any useful pre-requisite modules/knowledge? : No


Identifying phenomena at the proton gyroscale in the solar wind

Project code: MLBD_27 Supervisors(s): Tim Horbury (Physics/SpAt)

Note

This project can host up to 1 pair(s) of students.

We have recently identified, by eye, a new class of small phenomena in the solar wind: they are most obvious in the magnetic field, whose magnitude increases while its direction rotates. These seem to be associated with instabilities in the plasma, and they might be important in driving turbulence and heating throughout the heliosphere - and potentially in other stellar winds.

These are short - less than a second long, which is the proton gyroscale - so are hard to spot in the data. We have new measurements closer to the Sun with Parker Solar Probe and Solar Orbiter: as we go in, they are even smaller, but become far more common.

We want to develop a new method of identifying and classifying these objects, then look at their properties statistically to determine their effects. If the project goes well, I would expect this to turn into a refereed paper.

Note

Any useful pre-requisite modules/knowledge? : Space physics or plasma physics would be helpful but are not required.


Quantum control with statistical machine learning

Project code: MLBD_28 Supervisors(s): Florian Mintert (Physics/Light)

Note

This project can host up to 1 pair(s) of students.

All tasks of quantum information processing require accurate control over the quantum mechanical hardware. Given the high level of imperfection of existing hardware, it is a very challenging task to learn how to control an actual device.

Note

Any useful pre-requisite modules/knowledge? : quantum information or foundations of quantum mechanics is helpful, but not mandatory


Machine learning for extreme ultrashort laser pulse measurement

Project code: MLBD_29 Supervisors(s): Helder Crespo (Physics)

Note

This project can host up to 1 pair(s) of students.

In this project you will apply machine learning to the exciting field of ultrafast science, where you will be looking at some of the shortest physical events ever produced and measured in a systematic way. We’re talking about ultrashort light pulses with temporal durations in the few femtosecond range (1 fs = 10^{-15} seconds). These extreme light flashes are produced by special lasers and have unique and very desirable properties for a number of applications, including materials processing, medical imaging, microscopy, spectroscopy and scientific research. They are also behind the exciting field of attosecond science, where the delicate ultrafast dynamics of molecules, atoms, and electrons can be observed and controlled in real time up to the limits imposed by quantum uncertainty. A key element in ultrafast science and technology is the temporal characterisation of ultrashort laser pulses, which is done indirectly because the duration of these pulses is way beyond the bandwidth of current electronics. This usually requires measuring both the amplitude (the easy bit) and phase (the difficult bit) of the electric field of the pulses, for which several techniques based on nonlinear optics, such as FROG and SPIDER, have been devised over the years (see, e.g., [1]). A more recent technique, dispersion scan (d-scan) [2], has greatly simplified the notoriously difficult task of measuring few- and single-cycle pulses. Both FROG and d-scan are based on measuring a 2D data set. This is very important because ultrashort pulse measurement with these techniques is a phase retrieval problem, which is ill-posed in 1D but solvable in 2D. Both methods usually rely on nonlinear optimisation algorithms to iteratively find the phase that produces a given 2D measurement. Recently, deep neural networks have been applied to FROG [3] and to d-scan [4] measurements, while applications of machine learning in ultrafast photonics have been receiving increased attention [5]. 
However, as pulses become shorter, their spectra become larger and their phases increasingly complex, which poses additional problems. In this project you will develop and explore machine learning approaches to phase retrieval of extremely short pulses (see, e.g., [6]) at the edge of present-day technology.
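To make the 2D nature of the measurement concrete, here is a minimal numpy sketch of an SHG dispersion-scan forward model: the map from an assumed spectral amplitude and phase to the 2D trace that a retrieval algorithm or neural network would learn to invert. The pulse parameters, grids and units are illustrative, not those of any particular experiment:

```python
import numpy as np

# Minimal SHG dispersion-scan forward model: for each glass insertion step,
# add quadratic spectral phase (GDD), transform to the time domain, square
# the field (second-harmonic generation), and record the SHG spectrum.
N = 256
w = np.fft.fftfreq(N, d=0.5) * 2 * np.pi       # angular frequency grid (rad/fs)
spectrum = np.exp(-w ** 2 / (2 * 0.5 ** 2))    # Gaussian spectral amplitude
phase = 20.0 * w ** 3 / 6                      # illustrative third-order phase

gdd_scan = np.linspace(-100, 100, 64)          # fs^2, one value per insertion
trace = np.empty((len(gdd_scan), N))
for i, gdd in enumerate(gdd_scan):
    Ew = spectrum * np.exp(1j * (phase + 0.5 * gdd * w ** 2))
    Et = np.fft.ifft(Ew)                       # complex field in time
    shg = np.fft.fft(Et ** 2)                  # second-harmonic field
    trace[i] = np.abs(shg) ** 2                # measured SHG spectrum

# 'trace' is the 2D data set (insertion x frequency) from which iterative or
# deep-learning methods retrieve the unknown spectral phase.
print(trace.shape)
```

Because the unknown phase enters this forward model nonlinearly, retrieval is posed as inverting the 2D trace, which is exactly where the neural-network approaches of [3]-[4] come in.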

References: [1] Ian A. Walmsley and Christophe Dorrer, “Characterization of ultrashort electromagnetic pulses,” Adv. Opt. Photon. 1, 308-437 (2009) [2] M. Miranda, T. Fordell, C. Arnold, A. L’Huillier, and H. Crespo, “Simultaneous compression and characterization of ultrashort laser pulses using chirped mirrors and glass wedges,” Opt. Express 20, 688-697 (2012). [3] T. Zahavy, A. Dikopoltsev, D. Moss, G. I. Haham, O. Cohen, S. Mannor, and M. Segev, “Deep learning reconstruction of ultrashort pulses,” Optica 5, 666-673 (2018). [4] S. Kleinert, A. Tajalli, T. Nagy, and U. Morgner, “Rapid phase retrieval of ultrashort pulses from dispersion scan traces using deep neural networks” Opt. Lett. 44, 979–982 (2019). [5] G. Genty, L. Salmela, J.M. Dudley et al., “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2021). [6] H. M. Crespo, T. Witting, M. Canhota, M. Miranda, and J.W.G. Tisch, “In situ temporal measurement of ultrashort laser pulses at full power during high-intensity laser–matter interactions,” Optica 7, 995-1002 (2020).

Note

Any useful pre-requisite modules/knowledge? : Beneficial: Lasers (6) and Laser Technology (7)


Using swarm minimisation algorithms to measure neutrino oscillations

Project code: MLBD_30 Supervisors(s): Mark Scott (Physics/Particles)

Note

This project can host up to 1 pair(s) of students.

The T2K experiment measures the quantum mechanical phenomenon of neutrino oscillations by firing a beam of muon neutrinos across Japan. The number and "flavour" of the neutrinos in the beam are measured when the beam is produced and then sampled by the so-called "far" detector after the neutrinos have oscillated into a different set of flavour eigenstates.

As is common in particle physics, we extract the interesting information from our data by minimising a likelihood. At T2K this likelihood has approximately 800 parameters that we try to minimise simultaneously, which is computationally challenging. Swarm minimisation algorithms are a novel class of minimisers that use the natural behaviour of animals to efficiently search for minima in a likelihood space. This can provide significant benefits compared to traditional minimisation methods, such as more robust convergence and the ability to minimise non-continuous likelihoods.

In this project you will review existing swarm minimisation methods to choose an appropriate set of algorithms for the neutrino oscillation likelihood minimisation. You will then code these algorithms and test them on multi-dimensional functions before implementing them into the T2K oscillation fitting framework. By the end of the project you should have a working swarm minimiser that can then be compared to the traditional minimisation methods currently used by T2K.
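As a flavour of the method, a basic global-best particle swarm minimiser on a toy 5-dimensional function (standing in for the ~800-parameter oscillation likelihood) might look like the sketch below; the inertia and attraction parameters are common illustrative choices, not a tuned configuration:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=300, seed=0):
    """Basic global-best particle swarm minimiser (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # each particle's best point
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()         # swarm's global best point
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull towards personal best + pull towards global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy stand-in for a likelihood surface: a 5-dimensional quadratic bowl
best_x, best_val = pso(lambda p: np.sum((p - 1.0) ** 2), dim=5)
print(best_x, best_val)
```

Variants differ mainly in how the attraction terms and neighbourhood topology are chosen; comparing such variants on multi-dimensional test functions is the first stage of the project.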

Note

Any useful pre-requisite modules/knowledge? : Computational Physics is useful, but not a pre-requisite.


Detecting magnetic reconnection in the solar wind

Project code: MLBD_31 Supervisors(s): Naïs Fargette (Physics/SPAT)

Note

This project can host up to 1 pair(s) of students.

What do a coronal mass ejection and a geomagnetic storm have in common? Both are large-scale, spectacular events triggered by a powerful energy conversion mechanism called magnetic reconnection. As many astrophysical systems are dominated by magnetic field dynamics, magnetic reconnection is a fundamental process: it converts magnetic energy into kinetic and thermal energy. This conversion occurs on kinetic scales, and allows large-scale remodelling of the magnetic field topology of stars, planets and the interplanetary medium.

In the past few years, magnetic reconnection has been measured in situ by several plasma-physics missions probing the near-Earth environment and the inner heliosphere. While the signature of magnetic reconnection in spacecraft data is now well known, most recent statistical studies still rely on visual inspection of the data or on threshold-based detection. In this project, we propose that the students develop an ML algorithm to automatically detect magnetic reconnection in the solar wind, using an existing database as a training set. This has not been achieved to date, and detecting numerous new events in spacecraft data will bring insights into the physics of magnetic reconnection, especially close to the Sun with the Parker Solar Probe and Solar Orbiter missions.
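Before any ML, part of the known in-situ signature (a field rotation across a current sheet) can be encoded as sliding-window features of the kind a supervised detector could be trained on. A minimal numpy sketch on a synthetic magnetic field time series; the signal model, window length and feature choices are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic 3-component magnetic field: steady background with one embedded
# current-sheet-like reversal of Bx at sample 1200, plus noise
n = 2000
B = np.stack([np.full(n, 4.0), np.full(n, 1.0), np.full(n, 0.5)], axis=1)
B[:, 0] *= np.tanh((np.arange(n) - 1200) / 20.0)   # Bx reverses sign
B += 0.3 * rng.normal(size=(n, 3))

def window_features(B, half=50):
    """Per-sample features: rotation angle of the mean field across the
    window, and the normalised jump in Bx (a crude current-sheet score)."""
    feats = np.zeros((len(B), 2))
    for i in range(half, len(B) - half):
        before = B[i - half:i].mean(axis=0)
        after = B[i:i + half].mean(axis=0)
        cosang = before @ after / (np.linalg.norm(before) * np.linalg.norm(after))
        feats[i, 0] = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        feats[i, 1] = abs(after[0] - before[0]) / np.linalg.norm(B[i])
    return feats

feats = window_features(B)
print(feats[:, 0].argmax())   # rotation angle peaks near the reversal
```

A classifier trained on labelled events from the existing database would consume features like these (or the raw windows directly) rather than a single hand-tuned threshold.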

Note

Any useful pre-requisite modules/knowledge? : N/A


Tracking with Quantum Computers in HEP (High Energy Physics)

Project code: MLBD_32 Supervisors(s): Ioannis Xiotidis, Christopher Brown (Physics/DUNE-CMS)

Note

This project can host up to 1 pair(s) of students.

A crucial component of high energy physics experiments is the ability to measure particle properties with the best possible accuracy. Such measurements allow experimentalists to identify rare processes that occurred in the detectors and so reveal the secrets of the Universe. One of the most important parameters to be measured is the momentum of a particle as it traverses a detector, as this is intricately linked to the energy of the particle when it is produced. Performing the measurement of a particle's momentum is what physicists refer to as particle tracking. In modern experiments (such as CMS, DUNE, etc.) particle tracking is a complicated task which requires a lot of computational power. The main limitation arises from the size of the data that has to be processed so that rare occurrences can be observed. An innovative solution to the problem, which utilises future technologies, is based on quantum computers. Such devices are inherently parallelised and hence can perform tedious computational tasks very efficiently. As this is a field in its infancy, most current applications rely on simulating quantum devices on conventional machines. The selected candidate will work on measuring the efficiency of quantum computers in tasks like particle tracking, capitalising on the group's extensive experience on the classical side.

Note

Any useful pre-requisite modules/knowledge? : N/A