Research degree opportunities in Computing Science and Mathematics
A PhD in computing or mathematics can be the first step into an academic career and a passport to some of the most interesting technology jobs in the world. We welcome students for study towards a PhD or MPhil degree in data science, artificial intelligence, mathematics, biological modelling and other areas of computing science and mathematics.
We have a limited number of funded places each year, and these are advertised on FindAPhD.
We also welcome students from the UK and abroad who have their own funding or who wish to develop a proposal to apply for a scholarship. We will help you develop your research question and your proposal with a view to you studying at Stirling. You may have your own ideas for a research question and we would be happy to help you shape them into a high quality PhD proposal. Alternatively, you may find one of our existing research projects the perfect fit for your own interests. We also offer a professional doctorate programme, in which you can work on a project for your employer (who covers the costs) and earn a PhD at the same time.
The list below describes some PhD opportunities that are available right now. If you have a scholarship opportunity or private funding, please contact the supervisor listed for the project that interests you.
Computing Science PhD opportunities
Title: Fair artificial intelligence for reducing peak electricity consumption
Supervisor: Dr Simon Powers
The UK and the EU have both recently updated their legislation to put net zero emissions targets in place for 2050. This requires moving away from fossil fuels for energy generation and towards renewable sources such as photovoltaic cells and wind turbines. To exploit these renewable energy sources effectively, we need to reduce the peak demand for electricity. Traditional approaches have been based on time-of-use pricing: a utility company sets peak and off-peak hours and charges households more to use their appliances in peak hours, with the aim of discouraging them from doing so. However, this approach has not been successful in widely shifting energy usage patterns in the UK (e.g. we are still facing the prospect of blackouts this winter because peak consumption is too high). Moreover, time-of-use pricing inherently discriminates against households on lower incomes.
This project will develop alternative approaches, drawing on theory from social science to design agent-based protocols that reduce peak electricity consumption in a way people perceive as treating them fairly. These protocols are based on the idea that each household has an agent running on its smart meter, into which the household inputs its preferences for when it would like to run its appliances. The agent then negotiates with the agents of other households to arrive at an allocation of times that satisfies each household's preferences as far as possible while reducing peak consumption. The project will develop and test several such protocols in simulation, and perform online user studies to test how fair people find them.
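To give a flavour of such an allocation, here is a minimal sketch in Python. The household data, preference scores, hourly slot granularity and greedy allocation rule are all illustrative assumptions, not the project's actual mechanism; a real protocol would involve negotiation between agents and a careful treatment of fairness.

SLOTS = 24  # one time slot per hour of the day

# Hypothetical households: a flexible appliance load and scored slot preferences.
households = {
    "A": {"load_kw": 2.0, "preference": {18: 1.0, 19: 0.8, 13: 0.3}},
    "B": {"load_kw": 3.0, "preference": {18: 1.0, 20: 0.6, 10: 0.4}},
    "C": {"load_kw": 1.5, "preference": {19: 1.0, 21: 0.7, 11: 0.5}},
}

def allocate(households, peak_weight=1.0, pref_weight=1.0):
    """Greedily assign each household's appliance run to one slot, balancing the
    load already scheduled in that slot against the household's stated preference."""
    slot_load = [0.0] * SLOTS
    allocation = {}
    for name, h in households.items():
        best_slot, best_cost = None, float("inf")
        for slot in range(SLOTS):
            satisfaction = h["preference"].get(slot, 0.0)
            cost = (peak_weight * (slot_load[slot] + h["load_kw"])
                    + pref_weight * (1.0 - satisfaction))
            if cost < best_cost:
                best_slot, best_cost = slot, cost
        allocation[name] = best_slot
        slot_load[best_slot] += h["load_kw"]
    return allocation, max(slot_load)

if __name__ == "__main__":
    alloc, peak = allocate(households)
    print("allocation:", alloc, "| resulting peak load (kW):", peak)

Even this toy version exposes the core tension the project studies: weighting peak reduction more heavily lowers the peak but pushes some households away from their preferred times, which is exactly where perceived fairness comes in.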
Title: Do trustworthy AI techniques actually have an effect on people's trust?
Supervisor: Dr Simon Powers
Why do people trust or distrust artificial intelligence (AI)? To predict the effects that different techniques for trustworthy AI, such as explainable AI, might have on people's trust, we need to be able to measure trust objectively. But current work is very limited because it relies either on subjective survey results or on measures of trust that apply to only one application of AI. This limits our ability to generalise and build predictive models. To address this, we will use game theory to model the trust decision a person takes when they interact with various AI systems, including recommender systems and machine learning models.
By using game theory, we will be able to account for the fact that the interests of AI system designers and the people using the AI systems are often not fully aligned (indeed, if they were there would be little need for regulations such as the EU AI Act). For example, an AI recommender tool designed by a retailer might recommend more expensive products than the user wants or needs, or might use an end user’s data in ways that are not in the user’s best interest. Likewise, conversational AI tools such as ChatGPT benefit from gaining as many users as possible, and may do this by trying to please the user by providing answers on a vast range of topics, even when these answers are incorrect. This strategic nature of human-AI interactions is likely to have a large effect on whether people trust different applications of AI, and what signals of trustworthiness affect their decision, but has been ignored in previous behavioural experiments that measure trust in applications of AI. And crucially, by accounting for the potential for misalignment of interests, we can determine which signals of trustworthiness may mislead people to trust systems that are untrustworthy.
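As one illustrative starting point (not a commitment to a particular model), the classic trust game from behavioural economics captures how misaligned interests shape a decision to trust:

\[
u_{\text{user}} = e - x + y, \qquad u_{\text{designer}} = m x - y, \qquad 0 \le x \le e, \quad 0 \le y \le m x .
\]

The user risks an amount x of an endowment e (money, data, or attention), the interaction multiplies its value by a factor m > 1, and the system's designer chooses how much value y to return. With a purely self-interested designer the equilibrium is y = 0 and therefore x = 0: no trust at all. Signals of trustworthiness, such as explanations, regulation or reputation, matter precisely because they change the user's beliefs about y, and the experiments in this project would measure whether they do so appropriately.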
Title: Multimodal Generative AI in Medical Imaging
Supervisor: Dr Hazrat Ali
Large Language Models (LLMs) are revolutionising the field of Medical Artificial Intelligence, primarily through their advanced capabilities in processing textual and tabular data within healthcare. Despite these advancements, the application of LLMs in the Medical Imaging domain remains underexplored. There is significant potential to harness the multimodal data processing capabilities of LLMs to develop innovative AI tools for medical imaging that integrate diverse forms of data, which could lead to enhanced diagnostic accuracy and improved patient outcomes. This research seeks to develop novel AI-driven solutions that can more accurately analyse and interpret complex medical images. The outcomes have the potential to significantly advance the field, offering new tools that support clinicians in making more informed decisions and ultimately lead to better patient care.
Title: Generative AI for synthetic aperture radar (SAR) data processing
Supervisor: Dr Vahid Akbari
The project aims to develop generative AI methods, such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs), for synthetic aperture radar (SAR) data processing, including image synthesis, resolution enhancement, and noise reduction. These relatively new techniques have shown impressive results on optical imagery, enabling, for instance, the generation of very convincing fake images. The student will investigate how these methods can be used or adapted for similar data generation in radar imaging. The student will follow the progressive development path of GANs and VAEs and explore applications to SAR image domain transformations, evaluating their performance for monitoring purposes.
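For reference, the original GAN formulation (Goodfellow et al., 2014) trains a generator G against a discriminator D via the minimax objective

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\]

so that G learns to produce samples the discriminator cannot distinguish from real data. How this objective, and the corresponding VAE training loss, should be adapted to the particular statistics of SAR imagery (for example speckle noise) is part of what the student would investigate.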
Title: Language models for synthetic aperture radar data
Supervisor: Dr Vahid Akbari
This research will explore the application of large language models (LLMs) to understand scattering mechanisms in synthetic aperture radar (SAR) data. By leveraging the advanced capabilities of LLMs, the research aims to improve the interpretation of SAR images and the identification of diverse scattering phenomena. This innovative approach has the potential to significantly advance SAR data analysis, leading to more accurate environmental monitoring and remote sensing applications.
Title: Synthetic Data for Trustworthy AI
Supervisor: Dr Paulius Stankaitis
Synthetic data has great potential for addressing data scarcity in AI. A key challenge in generating good synthetic data with physics simulators for training deep learning models is ensuring that it captures enough of the complexity and variability of real-world scenarios to train neural networks effectively. This project will investigate the use of realistic synthetic data and optimisation techniques to improve the trustworthiness of AI models.
Title: Deepfake and Fake news detection
Supervisor: Dr Leonardo Bezerra
With the growing presence of social media and social networking sites, people are more digitally connected than ever. This empowers citizens to express their views on a multitude of topics, from government policies and everyday events to simply sharing their emotions. However, the growing influence exerted by fake news propaganda is now a cause for concern across all walks of life. Election results are argued, on some occasions, to have been manipulated through the circulation of unfounded and sometimes doctored stories on social media. In addition to fake text, there has been huge growth in AI-based image and media manipulation algorithms, commonly known as 'deepfakes'. Near-realistic fake videos are being generated that contribute significantly to the spread of misinformation. This project will research and develop new algorithms that combine deep-learning-based Natural Language Processing (NLP) and Computer Vision (CV) techniques to detect fake news and prevent misinformation from spreading.
Title: Predicting the Performance of Backtracking While Backtracking
Supervisor: Dr Patrick Maier
Backtracking is a generic algorithm for computing optimal solutions of many combinatorial optimisation problems such as travelling salesman or vehicle routing. Unfortunately, the time a backtracking solver requires to find an optimal solution, to prove optimality, or to prove infeasibility is very hard to predict, which limits the practicality of such solvers for real-world problems.
Research in algorithms has mainly focused on specific problem classes and on identifying characteristic features of hard problem instances. Instead, this project aims to mine a generic backtracking solver for performance data at runtime (that is, while solving a particular problem instance) and to build statistical models that can be used to estimate the future performance of the solver on the current problem. Interesting estimates include: How likely is it that the current solution is optimal? Assuming the current solution is optimal, how long will it take to prove optimality? Can the search be parallelised, and if so, how many CPUs would be required to get the answer in one hour?
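To illustrate the kind of raw data such statistical models could be fitted to (a toy sketch in Python, not the project's solver), the fragment below runs a small branch-and-bound search on a 0/1 knapsack instance and records, each time the incumbent solution improves, how many nodes have been explored so far. Features gathered while the search is still running, like this trace, are what a runtime performance model would consume; the instance, bounding rule and logged features are illustrative assumptions only.

def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for 0/1 knapsack, logging simple runtime statistics."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    best = {"value": 0, "solution": []}
    trace = []        # (nodes_explored, incumbent_value) pairs: raw data for a model
    nodes = 0

    def bound(idx, value, room):
        # Fractional relaxation: optimistic estimate of what this subtree can achieve.
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def search(idx, value, room, chosen):
        nonlocal nodes
        nodes += 1
        if value > best["value"]:
            best.update(value=value, solution=list(chosen))
            trace.append((nodes, value))               # incumbent improved
        if idx == len(order) or bound(idx, value, room) <= best["value"]:
            return                                     # prune: cannot beat the incumbent
        i = order[idx]
        if weights[i] <= room:                         # branch: take item i
            search(idx + 1, value + values[i], room - weights[i], chosen + [i])
        search(idx + 1, value, room, chosen)           # branch: skip item i

    search(0, 0, capacity, [])
    return best, trace, nodes

if __name__ == "__main__":
    best, trace, nodes = knapsack_bb([10, 7, 12, 4], [3, 2, 4, 1], 6)
    print(best, "| nodes explored:", nodes, "| incumbent history:", trace)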
Topic: The application of cognitive computational methods to enhance vocational rehabilitation
Supervisor: Dr Sæmundur Haraldsson
Vocational Rehabilitation (VR) is a field within healthcare which aims to assist long-term sick-listed and unemployed individuals to enter the workforce or education. VR has yet to fully embrace the use of cognitive computer systems, including Artificial Intelligence (AI) approaches. As such, it offers numerous avenues of research for inquisitive minds, e.g. predicting future regional demand for VR, optimising VR pathways for maximum probability of success, and many more. Potential PhD candidates would collaborate with international partners of the ADAPT consortium to exploit state-of-the-art AI and Data Science methods to improve decision making and planning in VR. The projects would form the foundation for the field of VR informatics, with international real-world impact on people's health and wellbeing as well as on current societal issues.
Topic: Bio inspired Peer-to-Peer Overlay algorithms
Supervisor: Dr Mario Kolberg
Peer-to-Peer (P2P) overlay networks are self-organising, self-managing, and hugely scalable networks without the need for a centralised server component. Drawing inspiration from biological processes to construct and maintain P2P overlays has attracted some research interest to date. The majority of related solutions focus on providing efficient resource discovery mechanisms using swarm intelligence techniques. Such techniques have proven performance benefits for routing and scheduling in dynamic networks, and they also have inherent support for adaptability and robustness in the face of node failures. By contrast, apart from a very few examples, such techniques have hardly been exploited for topology management. This project will investigate the use of bio-inspired solutions for topology management, addressing some of the techniques' challenges, such as relatively high computational and messaging complexity.
Topic: Machine Learning approaches to tackle Cyber Attacks
Supervisor: Dr Mario Kolberg
The range of internet services has increased dramatically in recent years; at the same time, cyber-attacks have grown in both number and sophistication, endangering user trust in and uptake of such services. Researchers therefore need to develop defences that keep pace, since these attacks continually evolve as attackers change their approaches.
Security measures such as firewalls are put in place as the first line of network defence to safeguard these networks, but attackers are still able to exploit vulnerabilities. Intrusion Detection Systems (IDS) have shown potential as a successful countermeasure against attacks. However, there are still many open issues, such as their efficiency and effectiveness in the presence of large amounts of network traffic. Several IDS have been proposed that can differentiate between attacks and benign network traffic and raise an alarm when a potential threat is detected. However, these systems must be able to analyse large quantities of data in real time to be applicable in modern networks, and the larger the data volume, the more irrelevant information it contains. One solution may be to extract key features and apply Machine Learning (ML) techniques to detect attacks. This project will investigate using ML approaches to detect intrusion attacks at runtime.
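As a flavour of the ML step (a minimal sketch only: the file name flows.csv, the feature columns and the choice of a random forest are assumptions for illustration, not part of the project), one might train a classifier on labelled flow features and then inspect which features carry the most signal:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

flows = pd.read_csv("flows.csv")      # hypothetical labelled network-flow records
features = ["duration", "src_bytes", "dst_bytes", "packet_count"]  # assumed columns
X, y = flows[features], flows["label"]               # label: 'attack' or 'benign'

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
print(dict(zip(features, clf.feature_importances_)))  # which features matter most?

The research challenge is everything this sketch glosses over: choosing and extracting features from raw traffic, coping with heavily imbalanced and evolving attack classes, and doing all of it fast enough to run at line rate.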
Topic: Understanding and Visualising the Landscape of Multi-objective Optimisation Problems
Supervisor: Prof. Gabriela Ochoa
In commerce, industry and science, optimisation is a crosscutting, ubiquitous activity. Optimisation problems arise in real-world situations where resources are constrained and multiple criteria are required or desired, such as in logistics, manufacturing, transportation, energy, healthcare, food production, biotechnology and other domains. Most real-world optimisation problems are inherently multi-objective. For example, when evaluating potential solutions, cost or price is one of the main criteria, and some measure of quality is another, often in conflict with the cost. The analysis of multi-objective optimisation surfaces is thus of paramount importance, yet it is not well developed. This project will look at developing and applying network-based models of fitness landscapes and search trajectories to multi-objective optimisation problems. The ultimate goal is to provide a better understanding of algorithms and problems, and to demonstrate that better knowledge leads to better optimisation across a number of domains.
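For context, the standard notion underpinning all of this is Pareto dominance: for a minimisation problem with objectives f_1, ..., f_k, a solution x dominates a solution y if

\[
f_i(x) \le f_i(y) \ \text{for all } i
\qquad \text{and} \qquad
f_j(x) < f_j(y) \ \text{for some } j .
\]

The mutually non-dominated solutions form the Pareto front, and it is the structure of the search space around this front, rather than around a single optimum, that the network-based landscape and trajectory models in this project would aim to capture and visualise.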
Topic: Artificial Intelligence Sight Loss Assistant
Supervisor: Dr Kevin Swingler
The Artificial Intelligence Sight Loss Assistant (AISLA) project aims to use state-of-the-art computer vision and artificial intelligence to develop personal assistant technology for people with sight loss. Topics within the project include computer vision, natural language processing and human-AI interfaces. A PhD in AI and computer vision can lead to an academic career or to jobs in industries such as automotive (building self-driving cars), digital assistant design or security. Companies like Google, Amazon and Facebook are at the forefront of commercial AI.
Topic: Interpretable Machine Learning for Time Series Analysis
Supervisor: Dr Yuanlin Gu
In challenging scenarios marked by strong uncertainty or limited data, the performance and reliability of predictive models can be negatively affected. This project aims to develop interpretable machine learning models along with methods for generating and selecting explainable features. This will uncover the relationship between system outputs and the complex, changing impacts of inputs, facilitating easier model fine-tuning based on the insights and knowledge gained. The developed methods will be applied in multidisciplinary areas such as engineering, finance and the environment.
Topic: Efficient search techniques for large-scale global optimisation problems in the real world
Supervisor: Dr Sandy Brownlee
Optimisation problems become very difficult at large scales: allocating thousands of skilled engineers to jobs, for example, or prioritising where to spend public money to improve the energy efficiency of thousands of homes. This project will look at how to learn the structure of these problems, allowing us to intelligently divide them up so they can be solved efficiently, and how to present the outcomes intuitively to decision makers so they can make informed choices.
Topic: Search-based software improvement
Supervisor: Dr Sandy Brownlee
Software is everywhere, and more efficient software has enormous benefits (e.g. more responsive mobile apps and a reduced environmental impact for data centres). In many cases there is a trade-off between functionality and efficiency, yet improving existing code is difficult because it is easy to break functionality, and there is a lot of noise when we measure performance, whether run time, memory consumption, or energy use. This project will explore how search-based approaches like genetic algorithms can be integrated with the latest large language models and best practice from software engineering to improve the efficiency of code, accounting for these difficulties.
Topic: Building Smaller but More Efficient Language Models
Supervisor: Dr Burcu Can Buglalilar
Large Language Models (LLMs) are data hungry and require massive computational and energy resources. This has two implications. First, they are less effective when applied to low-resource languages that lack the data needed to build Natural Language Processing (NLP) models, leaving out a large part of the world's population. Second, training such LLMs is currently extremely energy intensive, which has a negative impact on the environment. In this research, we aim to build smaller but more efficient language models using theories from other fields, including but not limited to linguistics, psycholinguistics and cognitive science.
Topic: Small Data Learning
Supervisor: Dr Keiller Nogueira
The recent impressive results of deep learning methods for computer vision applications have brought fresh air to the research and industrial communities. Although extremely important, deep learning has a relevant drawback: it needs a lot of labelled data in order to learn patterns. However, some domains do not usually have large amounts of labelled data available, which in turn makes the use of such techniques unfeasible. This project will research strategies to exploit deep learning better and more efficiently using few annotated samples, including self-supervised learning, meta-learning, and related approaches.
Topic: Automatic Open-World Segmentation
Supervisor: Dr Keiller Nogueira
Semantic segmentation is the task of assigning a semantic category to every pixel in an image. Current methods for dealing with this task can learn in a simple way, but cannot replicate the human ability to learn progressively and continuously. This project will research novel segmentation approaches capable of evolving over time by automatically identifying, pseudo-labelling, and incrementally learning from samples of unknown classes seen during inference.
Mathematics PhD opportunities
Topic: Sports statistics using generalized Bradley-Terry likelihood methods
Supervisor: Dr Robin Hankin
Many sports such as football, test cricket, and chess have the possibility of a draw and dealing with this statistically is not straightforward. This project will compare and contrast different methods of addressing draws in the context of sports statistics including reified Bradley-Terry and weighted likelihood functions.
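To make the modelling question concrete, one well-known extension of the Bradley-Terry model that accommodates draws (due to Davidson) assigns each player i a strength \pi_i > 0 and a draw parameter \nu \ge 0, and sets

\[
P(i \text{ beats } j) = \frac{\pi_i}{\pi_i + \pi_j + \nu\sqrt{\pi_i \pi_j}},
\qquad
P(i \text{ draws with } j) = \frac{\nu\sqrt{\pi_i \pi_j}}{\pi_i + \pi_j + \nu\sqrt{\pi_i \pi_j}} .
\]

Formulations such as this would be compared with reified Bradley-Terry and weighted likelihood approaches on real results data from football, test cricket and chess.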
Topic: Born rigidity
Supervisor: Dr Robin Hankin
In classical mechanics, an object's being "rigid" has a very clear definition. This definition needs to be altered when relativistic considerations become important, the relevant concept being 'Born rigidity'. This project will generalize Born rigidity to cover inelastic string under various kinematic scenarios. One application of these ideas might be to understand the behaviour of light inelastic string in the Kerr metric.
Topic: Stability in eco-evolutionary meta-community models
Supervisor: Dr Gavin Abernethy
In theoretical ecology, meta-community models simulate the interactions between several discrete populations of multiple species located in different patches of a spatial environment. We can study how the spatial patterns of species occurrence depend on the physical environment and dispersal behaviour, and how interactions between competitors or between predators and prey can enable co-existence or lead to extinctions. Simulated experiments predict the biodiversity impact of habitat destruction, climate change or habitat fragmentation to reveal principles for conservation and landscape management. Eco-evolutionary modelling further incorporates rules for speciation, so in this project you will explore how evolutionary, ecological, and spatial mechanisms inform each other to shape the emergent ecosystem and influence its stability against perturbation.
Topic: Optimising disease control measures for novel outbreaks
Supervisor: Dr Anthony O'Hare
This project will use census, demographic, and travel network data to model a disease outbreak in a country, given high-level inputs such as the incubation period and R0 value, and will use Artificial Intelligence to determine the optimal disease control measures, e.g. closing schools or rail lines. For a given amount of vaccine, you will also determine how best to distribute it.
Title: Modelling Pathologies in Cardiac Cells
Supervisor: Dr Anya Kirpichnikova
Cardiac modelling serves as a crucial tool in comprehending the mechanisms of pathophysiology in both healthy and afflicted hearts. The prospective PhD project centres around cardiac modelling, specifically focusing on creating and examining models of both healthy and diseased ventricular cells. As part of this project, the candidate will acquire proficiency in sophisticated techniques for model development and analysis. These techniques will encompass virtual population methodology and sensitivity analysis, aimed at identifying cardinal cellular attributes influencing disease manifestation.
Title: Designing obstacles for the Network Simulator 3
Supervisor: Dr Anya Kirpichnikova
In the realm of network simulation, integrating obstacles plays a pivotal role in achieving realistic and reliable results. This project combines various techniques for modelling wave propagation in the presence of obstacles with their implementation in code, i.e. implementing obstacles within Network Simulator 3 (NS-3), a popular tool used extensively for network research and development. The approach considers the physical characteristics of real-world barriers and their impact on signal propagation, enabling more accurate simulations of varied environmental conditions. By incorporating variables such as the material type, size, and location of obstacles, the model should emulate their effects on signal strength through reflection, refraction, diffraction, and absorption.
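As a concrete, if simplified, starting point (one standard empirical form, not necessarily the model the project would adopt), path loss over a link of length d that crosses a set of obstacles can be written as a log-distance law with per-obstacle attenuation terms:

\[
PL(d) = PL(d_0) + 10\, n \log_{10}\!\left(\frac{d}{d_0}\right) + \sum_{k} L_k ,
\]

where d_0 is a reference distance, n the path-loss exponent, and L_k the attenuation (in dB) contributed by the k-th obstacle, depending on its material and thickness. Implementing a model of this kind as a propagation loss component within NS-3, and extending it towards reflection and diffraction effects, is the sort of work the project entails.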
Title: Novel methods for ECG classification
Supervisor: Dr Anya Kirpichnikova
Electrocardiogram (ECG) classification is critical in diagnosing cardiac abnormalities, offering an essential tool in preventative health measures. Despite advancements in this field, there remain significant opportunities for improving the accuracy and reliability of ECG classification methods. This research proposal aims to explore novel mathematical and signal processing techniques to enhance ECG classification and support timely intervention for cardiac patients.
Topic: How to avoid tipping points in the food system
Supervisor: Prof Rachel Norman
The way we produce, distribute and purchase food is referred to as the food system, and it contains many non-linearities. In this project we will look at the role of tipping points and, in particular, ways in which we could avoid them. The project will use mathematical models to describe aspects of the food system, and we will take a theoretical approach to the analysis alongside considering particular case studies of previous tipping points, for example the collapse of some cod populations.
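As a toy illustration of what a tipping point means mathematically (not the project's food-system model), consider the canonical saddle-node normal form

\[
\frac{dx}{dt} = r + x^{2}.
\]

For r < 0 the system has a stable equilibrium at x = -\sqrt{-r}; as r is pushed slowly upwards this equilibrium collides with an unstable one and disappears at r = 0, after which the state runs away abruptly. This is the simplest mathematical picture of a tipping point, and the project asks how structures of this kind arise in models of the food system and which interventions keep the system safely away from such thresholds.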
Topic: When does a new infectious disease outbreak occur and when does it die out?
Supervisor: Prof Rachel Norman
There have been a significant number of emerging infections, which are either diseases we have not seen before or ones entering a new region. For example, Covid-19 had not been seen until the end of 2019 and seems to have come from wildlife. We are challenged by these new infections more frequently as the way we interact with our environment changes. This project will use mathematical models to look at what features cause an outbreak to occur or the disease to die out. It will use stochastic SIR-type models, which are coupled non-linear differential equations, to understand what happens at the start of an outbreak. If a small number of individuals become infected (for example if a pathogen passes from wildlife into humans), what pathogen characteristics are more likely to result in an outbreak? The approach will be a combination of theoretical exploration of parameter space for different models and consideration of specific diseases.
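For reference, the deterministic skeleton of such models is the classic SIR system

\[
\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I,
\]

with transmission rate \beta and recovery rate \gamma, giving a basic reproduction number R_0 = \beta S(0)/\gamma. In the stochastic version an introduction can fizzle out by chance even when R_0 > 1: under the simplest branching-process approximation, an outbreak seeded by I_0 infected individuals dies out with probability roughly (1/R_0)^{I_0}. Characterising how such probabilities depend on pathogen characteristics, and how robust they are across model variants, is the kind of question this project will address.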
Topic: Sparse multidimensional exponential analysis in computational science and engineering
Supervisor: Dr Wen-shin Lee
Exponential analysis might sound remote, but it touches our lives in many surprising ways, even if most people are unaware of just how important it is. For example, a substantial amount of effort in the field of signal processing is essentially dedicated to the analysis of exponential functions of which the exponents are complex. The analysis of exponential functions whose exponents are very near each other is directly linked to super-resolution imaging. As for exponential functions with real exponents, they are used to portray relaxation, chemical reactions, radioactivity, heat transfer, and fluid dynamics.
Since exponential models are vital for describing physical as well as biological phenomena, their analysis plays a crucial role in advancing science and engineering. This project investigates several multidimensional uses of exponential analysis in practice.
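Concretely, the basic one-dimensional problem is to recover the parameters of a sparse exponential sum

\[
f(t) = \sum_{j=1}^{n} \alpha_j \, e^{\phi_j t}, \qquad \alpha_j, \phi_j \in \mathbb{C},
\]

from a limited number of samples f(t_k): the number of terms n, the coefficients \alpha_j and the exponents \phi_j. Complex exponents correspond to frequencies (signal processing, super-resolution imaging), while real negative exponents correspond to decay rates (relaxation, radioactivity, heat transfer). In the multidimensional setting the exponents become vectors and new questions arise about how to sample and how to separate nearby terms.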
Topic: Exponential analysis meets Wavelet theory
Supervisor: Dr Wen-shin Lee
In recent years, sparse representations have been realised as linear combinations of several trigonometric functions, Chebyshev polynomials, spherical harmonics, Gaussian distributions and more. Recently, the paradigm of dilation and translation was also introduced for use with these basis functions in sparse exponential analysis or sparse interpolation. As a result, high-resolution models can be constructed from sparse and coarsely sampled data, and several series expansions (Fourier, Chebyshev, ...) can be compactified. The above are available in one as well as higher dimensions. The similarity with wavelet theory remains largely unexplored.
Topic: Improving Antibiotic Dosage Regimens: Mitigating the Risks Associated with Varying Patient Compliance
Supervisor: Dr Andy Hoyle
The rise of antibiotic resistance is putting increasing pressure on our health service, and it is estimated that over 30,000 deaths per year across the EU are associated with resistant bacteria. Yet there is little movement away from conventional antibiotic regimens, whereby we apply a constant daily dosage, e.g. X mg (or 1 tablet) per day for N days. This project will combine mathematical modelling and Artificial Intelligence to find optimal antibiotic regimens which maximise host survival, minimise the emergence of resistance and mitigate against uncertain patient compliance.
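As a minimal illustration of why compliance matters (an assumed one-compartment pharmacokinetic model, not the project's), the drug concentration C(t) in a patient who takes doses of size D at times t_i can be written as

\[
\frac{dC}{dt} = -k_e\, C(t) + \frac{D}{V}\sum_i \delta(t - t_i),
\]

where k_e is the elimination rate and V the volume of distribution. A missed or delayed dose lets C(t) decay exponentially, and time spent below the minimum inhibitory concentration is a window in which partially resistant bacteria are more likely to be selected. Optimising a regimen therefore means choosing dose sizes and timings that keep the infection controlled even when some doses are missed, which is where the combination of modelling and AI comes in.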