It’s a cliché at this point to say AI (artificial intelligence) is the future; there are people still struggling to work their smartphones who could tell you that. AI tools are too capable, and the commercial advantage too significant, for them not to become a sizeable part of our work and personal lives.
But the clinical trials industry isn’t like most commercial sectors. AI adoption here isn’t just a matter of efficiency or speed; it carries implications for privacy, safety, and ethics, as well.
We have a huge responsibility when adopting and integrating this technology in this sector. The impact of a single system-level error isn’t just operational. It could introduce bias into a patient-facing workflow or create a compliance risk that takes months to resolve.
That means AI’s main strength – its ability to increase efficiency at scale – is also its greatest weakness in this setting. Most end users only see a final output; they’re divorced from the data, rules, and reasoning behind it, which makes it harder to catch errors, bias, or data misuse.
Responsible AI Use in Clinical Trials
When we first started exploring the use of AI tools at Velocity, it quickly became apparent that we needed an AI governance framework by which to assess the appropriateness of models for use in our sector. Integrating AI into clinical trials requires a framework that is robust, responsible, and designed for complex, highly regulated, and sensitive environments.
We started with the basics and reviewed established frameworks that were already available. The team looked at Gartner’s AI TRiSM model, Google’s Secure AI Framework (SAIF), and McKinsey’s AI at Scale. Each offered its own advantages: Gartner’s focused on trust, risk, and security management; Google’s on infrastructure-level security; and McKinsey’s on enterprise-wide AI integration.
Ultimately, we adopted Gartner’s TRiSM (Trust, Risk, and Security Management) model for its simplicity, coverage, and adaptability. It emphasizes operationalizing AI governance through three pillars:
- Trust: Transparency, fairness, privacy
- Risk: Operational, legal/regulatory, and safety failures
- Security: Protection against unauthorized access and data leakage
To strengthen this foundation, we mapped each TRiSM pillar to Microsoft’s Responsible AI principles. Every AI model must meet Velocity’s criteria before it can be included in the approved AI list:
Trust:
- Can the model provide a human-interpretable explanation for its output?
- Does the model demonstrate consistent performance across demographic groups?
- Is the model prevented from using our data for training or unintended retention?
Risk:
- Can the model’s operational, legal, and compliance failures be anticipated and mitigated?
Security:
- Is the model protected against unauthorized access and data leakage?
This is now the required framework for any AI at Velocity; all models must pass each of these assessments before they are approved for use.
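To make the checklist concrete, here is a minimal sketch of how criteria like these could be encoded as a machine-readable assessment record. The class, field names, and pass/fail logic are hypothetical illustrations under our assumptions, not Velocity’s actual tooling or review process.

```python
from dataclasses import dataclass

# Illustrative sketch only: a hypothetical encoding of the TRiSM-based criteria.
@dataclass
class TrismAssessment:
    model_name: str
    # Trust
    explainable_output: bool = False              # human-interpretable explanation for outputs
    consistent_across_demographics: bool = False  # comparable performance across groups
    no_training_on_our_data: bool = False         # no use of our data for training or retention
    # Risk
    failures_anticipated_and_mitigated: bool = False
    # Security
    protected_against_access_and_leakage: bool = False

    def approved(self) -> bool:
        """A model joins the approved AI list only if every criterion passes."""
        return all([
            self.explainable_output,
            self.consistent_across_demographics,
            self.no_training_on_our_data,
            self.failures_anticipated_and_mitigated,
            self.protected_against_access_and_leakage,
        ])


# Example: a candidate model that fails the data-retention criterion is rejected.
candidate = TrismAssessment(
    model_name="example-llm",
    explainable_output=True,
    consistent_across_demographics=True,
    no_training_on_our_data=False,
    failures_anticipated_and_mitigated=True,
    protected_against_access_and_leakage=True,
)
print(candidate.approved())  # False: one failed criterion blocks approval
```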
From Policy to Deployment
Our AI governance framework is managed by a cross-functional governance group that includes leaders from Technology, Legal, Operations, and People. As Raghu Punnamraju, Velocity’s CTO, says, “All AI models used by Velocity must comply with our integrated TRiSM model. The AI governance team reviews assessments of each of the models and either approves or rejects them. We are continuously reviewing and improving the framework as we learn more.”
Our TRiSM governance framework operates as a shared layer across every level of AI adoption at Velocity. It is our mechanism for enforcing explainability, identifying failure points, and maintaining control of data and output quality. It applies to:
- Data flows: Whether it’s vendor systems, patient data, or operational inputs, everything entering our environment is subject to controls and accountability.
- Embedded tools: From Microsoft Copilot to plug-ins and assistants, we evaluate them through the TRiSM framework and assess how these tools behave inside our workflows before rollout.
- Third-party AI: Teams bringing in their own, external AI tools follow the same vetting process.
- Built models: In-house systems, including VISION, must meet all trust and compliance standards.
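As a rough illustration of what this shared layer can look like, the sketch below funnels every adoption category through a single review gate. All names and the decision logic are assumptions made for illustration; they are not drawn from Velocity’s systems.

```python
from enum import Enum

# Illustrative sketch only: hypothetical names, not Velocity's implementation.
# The point is that every adoption category passes through one review step.

class AdoptionCategory(Enum):
    DATA_FLOW = "data flow"            # vendor systems, patient data, operational inputs
    EMBEDDED_TOOL = "embedded tool"    # Copilot, plug-ins, assistants
    THIRD_PARTY_AI = "third-party AI"  # external tools brought in by teams
    BUILT_MODEL = "built model"        # in-house systems such as VISION

def review(name: str, category: AdoptionCategory, criteria_passed: dict) -> str:
    """Shared review gate: approve only when every TRiSM criterion passes."""
    decision = "approved" if criteria_passed and all(criteria_passed.values()) else "rejected"
    return f"{name} ({category.value}): {decision}"

# The same gate applies regardless of where the AI enters the business.
print(review("Microsoft Copilot", AdoptionCategory.EMBEDDED_TOOL,
             {"trust": True, "risk": True, "security": True}))
print(review("external summarizer", AdoptionCategory.THIRD_PARTY_AI,
             {"trust": True, "risk": False, "security": True}))
```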
Fit for Purpose
There’s no shortage of AI tools on the market. What’s rare is the ability to evaluate them with rigor and decide what belongs in your business and, critically, what doesn’t. That’s what our governance model is built for: to make sure the tools we use actually work for the world we operate in.