Research Notes on Trustworthy & Explainable AI
I use this page to share short, practical research notes extending my MSc dissertation work, developed under academic supervision. The focus is explainable and trustworthy machine learning for high-stakes settings in healthcare and finance: model interpretability, robustness evaluation, calibration, validation design, and decision accountability. The recurring questions are how to evaluate models properly, how to make explanations defensible, and what breaks in real deployment. These notes reflect an active research process; they are working documents, not polished publications.

Intended Audience
These research notes are written for AI recruiters, research supervisors, and data scientists who value methodological rigor and responsible AI development. They are also intended for practitioners interested in moving beyond raw model performance toward transparency, robustness, and defensible deployment.
My aim is to demonstrate structured analytical thinking and a research mindset that connects academic rigor with applied machine learning.

Trustworthiness by design: A structured approach
The central message of these notes is that trustworthy AI is a design principle, not an afterthought. High-performing models are not enough: systems must also be interpretable, rigorously validated, and defensible in real-world settings.
My work takes a structured approach to problem definition, evaluation, and transparency. Each note states the research question, methodology, validation strategy, results, and limitations. The goal is simple: to build AI systems that are not only accurate but also auditable, responsible, and practically deployable.
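As a concrete taste of the kind of evaluation these notes cover, here is a minimal sketch of a binned calibration check for a binary classifier. The function name and the toy data are my own for illustration; the metric is the standard Expected Calibration Error (ECE), computed here with plain NumPy.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned Expected Calibration Error for binary predictions.

    Splits predicted probabilities into equal-width confidence bins,
    then averages the gap between mean predicted probability and
    observed positive rate in each bin, weighted by bin size.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so probs == 1.0 are counted.
        mask = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example of perfect calibration: a model that predicts 0.8 on
# items that are positive exactly 80% of the time.
probs = np.array([0.8] * 10)
labels = np.array([1] * 8 + [0] * 2)
print(round(expected_calibration_error(probs, labels), 6))  # 0.0
```

A low ECE alone does not make a model trustworthy, but it is the kind of cheap, reportable diagnostic each note tries to pair with its results and limitations.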