Artificial Intelligence
This resource hub offers insights, frameworks, and tools to support the ethical and responsible use of artificial intelligence (AI) across healthcare education and academic settings. It brings together materials that explore key principles, practical approaches, and strategic considerations for integrating AI into teaching, research, administration, and technology management. Designed for educators, clinicians, researchers, administrators, and technology leaders, this collection provides guidance on topics such as AI integration, procurement, system assurance, data ethics, and governance. New resources will be added over time to reflect emerging developments and evolving priorities in the field.

FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? | Data Intelligence | MIT Press
This article presents a practical framework for making large language model (LLM) datasets FAIR—Findable, Accessible, Interoperable, and Reusable. It’s a useful resource for researchers and developers looking to build ethically responsible AI, offering a checklist, case study, and tested methods for bias reduction and transparency throughout the model training process.
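As a loose illustration of what FAIR compliance can mean in practice (a sketch, not the article's own framework), the snippet below records minimal FAIR-style metadata for an LLM training corpus in machine-readable form. Every field name, identifier, and value is a placeholder.

```python
# Minimal sketch of FAIR-style metadata for an LLM training corpus.
# All field names and values are illustrative placeholders, not the
# article's framework.
import json

dataset_record = {
    "identifier": "doi:10.xxxx/example-llm-corpus",  # Findable: persistent identifier
    "title": "Example LLM Training Corpus",
    "license": "CC-BY-4.0",                          # Reusable: explicit licence
    "access_url": "https://example.org/datasets/llm-corpus",  # Accessible: retrieval point
    "format": "application/jsonl",                   # Interoperable: open, documented format
    "provenance": {                                  # Reusable: documented lineage
        "sources": ["web-crawl-2023", "public-domain-books"],
        "preprocessing": ["deduplication", "PII removal"],
    },
    "bias_audit": {"method": "demographic term frequency", "date": "2024-01-15"},
}

# Publishing the record alongside the data keeps it machine-actionable.
print(json.dumps(dataset_record, indent=2))
```

Keeping such a record under version control next to the dataset is one simple way to make transparency claims checkable.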
A buyer's guide to AI in health and care | GOV.UK
The UK Government’s “A Buyer’s Guide to AI in Health and Care” is a concise, practical resource designed to help healthcare organizations make informed, ethical, and legally sound decisions when procuring AI technologies. It outlines ten key questions covering areas such as regulatory compliance, data protection, product performance, and ethical governance. This guide is especially useful for procurement teams, NHS leaders, and digital transformation professionals seeking to assess AI solutions responsibly and align with standards like the Data Ethics Framework and NHS Code of Conduct.
The Model for Responsible Innovation | GOV.UK
The UK government's "Model for Responsible Innovation" is a practical framework developed by the Department for Science, Innovation and Technology's Responsible Technology Adoption Unit (RTA). It assists public and private sector teams in responsibly developing and deploying AI and data-driven technologies.
Introduction to AI Assurance | GOV.UK
The UK government's "Introduction to AI Assurance" guide offers a clear and accessible starting point for organizations aiming to develop and deploy AI systems responsibly. It explains key concepts and situates assurance within the broader AI governance landscape, supporting safe and ethical AI adoption. This resource is particularly valuable for those new to AI assurance, providing foundational knowledge to navigate the complexities of AI system evaluation and oversight.
Portfolio of AI Assurance Techniques | GOV.UK
The UK government's "Portfolio of AI Assurance Techniques" is a valuable resource for organizations involved in the design, development, deployment, or procurement of AI systems. Developed by the Department for Science, Innovation and Technology's Responsible Technology Adoption Unit, the portfolio offers real-world case studies demonstrating various AI assurance methods. These examples illustrate how different techniques can be applied across sectors to ensure AI systems are trustworthy, ethical, and compliant with relevant standards. By mapping these techniques to the UK's AI regulatory principles, the portfolio serves as a practical guide for implementing effective AI assurance practices.
Unlocking Public Sector Artificial Intelligence | World Economic Forum
The World Economic Forum's "Unlocking Public Sector Artificial Intelligence" initiative offers a practical framework to assist governments in responsibly adopting AI technologies. Central to this initiative is the "AI Procurement in a Box" toolkit, which provides guidelines and tools for ethical and effective AI procurement. By embedding these principles into procurement processes, the initiative aims to enhance transparency, foster innovation, and set a baseline for harmonized AI standards across the public sector.
Understanding artificial intelligence ethics and safety | GOV.UK
The UK government's guidance on "Understanding Artificial Intelligence Ethics and Safety" serves as a foundational resource for public sector teams developing or deploying AI systems. It offers a high-level overview of AI ethics, emphasizing the importance of integrating ethical considerations throughout the AI project lifecycle. The guidance outlines key ethical building blocks and principles to ensure responsible AI delivery, aiming to prevent unintended harms and promote positive societal impact. This resource is particularly beneficial for data scientists, engineers, and policymakers seeking to align AI initiatives with ethical standards and public trust.
Explaining decisions made with AI | ICO
The ICO and The Alan Turing Institute’s “Explaining Decisions Made with AI” guide helps organizations explain AI-driven decisions clearly and lawfully under UK GDPR. It outlines types of explanations, practical steps to implement them, and organizational responsibilities—making it a key resource for building transparent and trustworthy AI systems.
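The guide is about governance rather than implementation, but a toy example may help anchor its idea of a rationale explanation: telling an individual which factors drove an automated decision. The linear scoring model, feature names, and weights below are invented for illustration and are not drawn from the guide.

```python
# Minimal sketch of a "rationale explanation" for one automated decision.
# The toy linear scorer, features, and weights are invented for illustration.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_at_address": 0.2}
applicant = {"income": 0.8, "debt_ratio": 0.9, "years_at_address": 0.3}

# Each feature's contribution is its weight times the applicant's value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score > 0 else "declined"

print(f"Decision: {decision} (score {score:.2f})")
# Present the factors that drove the decision, largest effect first.
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {c:+.2f}")
```

For a real deployed system the explanation would also need the contextual elements the guide describes, such as who is responsible for the decision and how to contest it.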
Ada Lovelace Institute
The Ada Lovelace Institute is an independent research organization dedicated to ensuring that data and AI technologies serve people and society. Through rigorous research, policy analysis, and public engagement, the Institute addresses critical issues such as AI governance, public trust, and ethical data use. Its resources are invaluable for policymakers, technologists, and researchers seeking to develop AI systems that are fair, transparent, and aligned with societal values.
Asilomar AI Principles | Future of Life Institute
The Asilomar AI Principles, developed by the Future of Life Institute in 2017, provide a foundational framework for the ethical development and governance of artificial intelligence. Comprising 23 principles, they address key areas such as research goals, transparency, safety, value alignment, and the equitable distribution of AI benefits. Endorsed by leading AI researchers and public figures, these principles have significantly influenced global discussions on AI ethics and policy. They serve as a valuable resource for anyone involved in AI development, policymaking, or ethics, offering guidance to ensure AI technologies are developed responsibly and for the benefit of all.
IEEE SA | Autonomous and Intelligent Systems (AIS)
The IEEE's Autonomous and Intelligent Systems (AIS) initiative provides a comprehensive framework for the ethical development and deployment of AI technologies. It offers globally recognized standards, such as the IEEE P7000™ series, which address key socio-technical issues in AI, including transparency, accountability, and human well-being. Additionally, the initiative promotes education and training programs to empower stakeholders in designing responsible AI systems. These resources are invaluable for organizations seeking to align their AI projects with ethical principles and international standards.
IEEE P7000 Series Standards | IEEE Xplore
The IEEE P7000 series is a comprehensive suite of standards developed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The series offers a structured approach to integrating ethical considerations into the design and deployment of AI systems, helping organizations build technology that is ethical, transparent, and aligned with societal values. It is a valuable resource for policymakers, technologists, and researchers committed to responsible AI development.