Raghu Punnamraju, Velocity CTO, On AI in Clinical Research

Velocity is using multimodal LLMs, like OpenAI’s GPT-4o and others, to enhance productivity. At the forefront of this shift is Raghu Punnamraju, Velocity’s Chief Technology Officer. In this article he shares his thoughts on the rapid evolution of LLMs, and how quickly new tools are being applied to improve quality and efficiency.

We have been anticipating the democratization of multimodal large language model (LLM) capabilities for some time and are glad to see the LLM giants accelerating this commoditization. As a clinical research site organization that handles sensitive patient data, we find it encouraging and comforting that these models treat privacy, security, and compliance as critical factors.

Aspects of GPT-4o comply with the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). However, it falls short of complying with 21 CFR Part 11, which requires assurance of the authenticity of electronic records. The onus is on consumers of these models to ensure compliance, and we anticipate that those working on life-sciences-specific applications will address this key gap soon.

The competition among OpenAI’s GPT-4o, Google’s Gemini and Project Astra, Anthropic’s Claude 3, and others enables Velocity and other players in the life sciences space to orchestrate the best use of these models, creating solutions that borrow from each platform to meet business needs.

GPT-4o expands on its multimodal capabilities, which we believe will benefit Velocity and the life sciences sector at large in three ways: more efficient pre-processing of data, increased use of co-piloted code in software development, and a positive impact on a company’s return on investment through higher productivity.

Firstly, the advent of LLM agents, or AI systems, coupled with their newly attained speed and effectiveness in handling unstructured data and multimodal content, will open new possibilities for improving operational quality, workflows, and patient pathways. The immediate benefits of releases like GPT-4o are operational quality and efficiency gains, which Velocity is actively exploring to streamline accounts receivable processes, analyze patient data, and even help build study protocols.
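To make that concrete, the sketch below shows one way a multimodal model such as GPT-4o could turn a scanned remittance document into structured data for accounts receivable reconciliation. This is an illustrative example, not Velocity’s implementation; the file name, prompt wording, and field names are assumptions.

```python
# Hypothetical sketch: extract structured fields from a scanned remittance
# document with a multimodal model, for downstream A/R reconciliation.
# The file name, prompt wording, and field names are illustrative assumptions.
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the scanned document so it can be passed as an image to the model.
with open("remittance_advice.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "Extract payer, invoice_number, amount, and payment_date "
                "from the document. Respond with a single JSON object."
            ),
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract the remittance details."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        },
    ],
)

# Parse the model's JSON output into a record a reconciliation workflow can use.
record = json.loads(response.choices[0].message.content)
print(record)
```

In practice, a human reviewer would verify the extracted record before it is posted, consistent with the human-in-the-loop approach discussed below.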

Secondly, when it comes to Velocity’s own tech stack, our pace of development has increased twofold. With the launch of GPT-4o and the multimodal capabilities it offers, we believe this will increase even further. Today, roughly 50% of Velocity’s code is generated by AI. The quality of this co-piloted code is significantly better than code written from scratch, and organizations not embracing such an approach will lose out on time, budget, and quality gains.

Finally, increased productivity means digital transformation can be expedited. Multimodal LLMs like GPT-4o can have a commercial impact on organizations thanks to the time and cost savings they create. In fact, we estimate that, used in the right way, these models could improve operations anywhere from two- to tenfold. For example, study builds can now happen in hours instead of days, and payment reconciliations could occur in hours instead of weeks. That means studies can be delivered faster and participants can be reimbursed sooner.

While this is exciting, it is equally important to apply AI responsibly. We strongly recommend Human-in-the-Loop (HITL) practices to enforce this responsibility as we embrace the continuous and exponential evolution of LLMs.

