Unveiling insights, building trust

Explore my ongoing research in explainable and trustworthy artificial intelligence. My work investigates how machine learning systems can remain accurate while also being transparent, interpretable, and defensible in high-stakes settings such as healthcare and finance.

Through rigorous experimentation, structured validation, and explainability techniques, I aim to design AI systems that can be trusted, audited, and meaningfully understood by decision-makers.

Research Interests

My research interests lie in the development of trustworthy and causally informed AI systems for high-stakes decision-making. I am particularly interested in bridging machine learning with causal inference to move beyond correlation-based prediction toward intervention-aware, policy-relevant modelling.

Key themes include:

  • Causal representation learning

  • Counterfactual reasoning in clinical AI

  • Robustness and failure analysis of predictive systems

  • Evaluation frameworks for explainability in regulated environments

  • Assurance and defensibility of AI systems in healthcare and finance

I am especially motivated by research that strengthens the connection between technical modelling, real-world deployment, and accountability.

Pioneering trustworthy AI

My research focuses on explainable and trustworthy AI for high-stakes decision systems in healthcare and finance. I investigate how predictive models can remain accurate while also being interpretable, auditable, and defensible in real-world settings.

My MSc dissertation examined model interpretability and transparent evaluation practices, and I am currently extending this work under academic supervision. The work combines SHAP-based explainability, structured validation frameworks, robustness analysis, and responsible AI evaluation methods.

I am particularly interested in bridging rigorous machine learning research with deployable, real-world systems that decision-makers can genuinely trust.

A methodical research approach

My research is grounded in structured problem definition, rigorous validation, and transparent evaluation. I prioritise interpretability and robustness alongside performance, ensuring models are not only accurate but defensible.

I formulate testable research questions, select appropriate evaluation metrics, stress-test models under realistic conditions, and analyse failure cases systematically. This approach reflects an emphasis on intellectual honesty and responsible AI development.
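To make this concrete, the sketch below shows one way such a stress test and failure review might be set up: a classifier is evaluated under increasing input noise and its misclassified cases are collected for systematic inspection. The model, perturbation scheme, and synthetic dataset are illustrative assumptions, not the exact protocol used in my studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative setup: a synthetic binary risk-prediction task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Stress test: add increasing Gaussian noise to the held-out features and
# track how discrimination (AUC) degrades at each perturbation level.
rng = np.random.default_rng(0)
for noise_scale in [0.0, 0.1, 0.5, 1.0]:
    X_noisy = X_test + rng.normal(scale=noise_scale, size=X_test.shape)
    auc = roc_auc_score(y_test, model.predict_proba(X_noisy)[:, 1])
    print(f"noise={noise_scale:.1f}  AUC={auc:.3f}")

# Failure analysis: retain the cases the model gets wrong on clean data
# so they can be examined systematically (e.g. by subgroup or feature range).
preds = model.predict(X_test)
failure_idx = np.where(preds != y_test)[0]
print(f"{len(failure_idx)} misclassified cases retained for inspection")
```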

My goal is to build systems that are technically sound, auditable, and suitable for deployment in sensitive domains such as healthcare and finance.

My signature research: Explainable AI for healthcare risk prediction

My MSc dissertation focuses on developing and evaluating interpretable machine learning models for diabetes risk prediction. The research examines how SHAP-based explanations can enhance transparency, clinical understanding, and trust in predictive systems.

Rather than optimising performance alone, the research investigates model reliability, robustness, and failure modes through structured experimental design, careful feature engineering, and cross-validation. The work emphasises accountability in high-stakes AI applications.
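As a rough illustration of this kind of workflow, the sketch below pairs cross-validated evaluation with SHAP's TreeExplainer on synthetic, diabetes-flavoured features; the data, model choice, and feature names are placeholders rather than the dissertation's actual pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative features loosely modelled on common diabetes risk factors;
# the real study uses its own curated dataset and feature engineering.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "bmi": rng.normal(28, 5, 1000),
    "fasting_glucose": rng.normal(100, 20, 1000),
    "age": rng.integers(25, 80, 1000),
    "physical_activity": rng.normal(3, 1.5, 1000),
})
# Synthetic outcome for demonstration only.
logit = (0.08 * (X["bmi"] - 28)
         + 0.05 * (X["fasting_glucose"] - 100)
         + 0.02 * (X["age"] - 50))
y = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Structured validation first: cross-validated discrimination, not one split.
print("CV AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

# Then per-prediction transparency: TreeExplainer attributes each prediction
# to the input features, supporting case-level review by decision-makers.
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(np.asarray(shap_values).shape)  # attribution array; exact shape depends on the SHAP version
```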

I am currently extending this research with my MSc dissertation supervisor, refining the methodology and exploring pathways toward academic publication.

This project reflects my broader research interest in trustworthy AI, decision-support systems, and the responsible deployment of machine learning in healthcare and finance.

Causal Counterfactual Explainability for Diabetes Prediction (C-CEP)

This research extends my MSc dissertation on diabetes risk prediction by integrating temporal deep learning with causal inference. The project develops a novel framework that combines LSTM-based prediction models with causal effect estimation to generate actionable counterfactual explanations for high-stakes clinical decision-making.
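For concreteness, a minimal sketch of the temporal prediction half of such a framework is shown below, assuming sequences of routine clinical measurements and a single risk output per patient; the architecture, dimensions, and variable names are illustrative, not the project's actual implementation.

```python
import torch
import torch.nn as nn

class RiskLSTM(nn.Module):
    """Toy LSTM risk predictor over sequences of clinical measurements."""

    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # single risk logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features), e.g. repeated BMI/glucose readings
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)  # one risk logit per patient

# Illustrative usage: 8 patients, 12 visits, 5 measurements per visit.
model = RiskLSTM(n_features=5)
x = torch.randn(8, 12, 5)
risk = torch.sigmoid(model(x))
print(risk.shape)  # torch.Size([8])
```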

Moving beyond correlation-based explanations

Traditional explainability tools such as SHAP and LIME attribute predictions to features on the basis of statistical association rather than causal effect. In contrast, this work explicitly models causal relationships among risk factors such as BMI, fasting plasma glucose, and lifestyle variables.

By constructing a clinically informed causal DAG and estimating treatment effects using modern causal ML frameworks (e.g., DoubleML, Causal Forests, EconML), the system can simulate realistic “what-if” interventions, such as:

  • What happens to predicted risk if BMI decreases by 2 units?

  • How does fasting glucose improvement alter long-term risk?

This transforms explanations from descriptive to intervention-aware.
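As one hedged illustration of an intervention-aware query, the sketch below uses EconML's CausalForestDML on synthetic data to estimate how a hypothetical two-unit reduction in BMI would change risk for individual patients; the data-generating process, confounder set, and variable names are assumptions made for the example, not the project's specification.

```python
import numpy as np
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestRegressor

# Synthetic cohort for illustration: BMI is the "treatment", age and activity
# act as confounders/effect modifiers, and Y is a continuous risk score.
rng = np.random.default_rng(0)
n = 2000
age = rng.normal(50, 12, n)
activity = rng.normal(3, 1.5, n)
bmi = 25 + 0.1 * age - 0.8 * activity + rng.normal(0, 2, n)
risk = 0.03 * bmi + 0.01 * age - 0.02 * activity + rng.normal(0, 0.1, n)

X = np.column_stack([age, activity])   # effect modifiers
T = bmi                                # continuous treatment of interest
Y = risk                               # outcome (observed or predicted risk)

# Double machine learning with causal forests: flexible nuisance models for
# E[Y|X] and E[T|X], then a forest over the residuals for heterogeneous effects.
est = CausalForestDML(
    model_y=RandomForestRegressor(n_estimators=100, random_state=0),
    model_t=RandomForestRegressor(n_estimators=100, random_state=0),
    random_state=0,
)
est.fit(Y, T, X=X)

# "What if BMI decreased by 2 units?" -- per-patient counterfactual risk change.
delta = est.effect(X[:5], T0=T[:5], T1=T[:5] - 2.0)
print(delta)  # estimated change in risk for the first five patients
```

Because double machine learning residualises both the treatment and the outcome on the observed confounders, the resulting per-patient effect estimates are less sensitive to correlated risk factors than a purely associational feature attribution would be.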

Current Status

This project is under continued development with my MSc dissertation supervisor. The work focuses on refining the causal framework, extending validation across additional datasets, and preparing a manuscript for submission to a medical AI or explainable ML journal.