Using Analytics and AI to Improve Data Quality in Clinical Trials

This webinar was held on Tuesday, July 9. A summary of the Q&A and the Audience Poll Questions is included below.
Audience Poll Questions:
Where do you think AI can help you?
How often do you use the data you currently collect in a clinical trial to impact patient behavior DURING the trial?
Which aspect of AI do you find most promising for improving clinical trial data quality?
Can AI be used to standardize, harmonize, and partially tokenize the data?
The answer is yes! This is an effective use of smart data mapping. The domain will be important: the AI predictions will be better if you can limit the domain. For example, if your intent is medical history or concomitant medications (conmeds), which are a pain point for data aggregation, be sure to select a training set that ingests drug information from reliable sources rather than the open web. There are good models and bad models for every challenge. You need to think through your domain and carefully select the training data and the algorithm.
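As a rough illustration of domain-limited data mapping (not something presented in the webinar), the sketch below maps free-text conmed entries to a small, hypothetical drug dictionary. A production system would use a trained model and a curated reference source; here Python's built-in difflib stands in for the learned similarity function purely to show the shape of the workflow, including routing low-confidence matches to human review.

```python
# Toy sketch: domain-limited "smart mapping" of free-text conmed entries
# to a controlled drug dictionary. The dictionary and cutoff are made up;
# difflib stands in for a trained similarity model.
from difflib import get_close_matches
from typing import Optional

# Hypothetical curated dictionary (the "limited domain" discussed above).
DRUG_DICTIONARY = ["acetaminophen", "ibuprofen", "metformin", "atorvastatin"]

def map_conmed(raw_entry: str, cutoff: float = 0.8) -> Optional[str]:
    """Return the best dictionary match for a free-text conmed entry,
    or None if no candidate clears the cutoff (route to human review)."""
    matches = get_close_matches(raw_entry.strip().lower(), DRUG_DICTIONARY,
                                n=1, cutoff=cutoff)
    return matches[0] if matches else None

if __name__ == "__main__":
    for entry in ["Acetominophen", "ibuprofin", "unknown herbal remedy"]:
        print(f"{entry!r} -> {map_conmed(entry)}")
```

The cutoff plays the role of a model confidence threshold: anything below it is sent to a data manager rather than being auto-coded.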
How welcoming are the regulatory bodies to the use of AI in any aspect across the clinical trial phases?
The regulatory bodies are getting there, and they are trying to help and clarify how to use AI, but they move slowly. Their focus remains what is referred to as “Responsible AI,” defined as: Safe & Reliable; Trustworthy; Explainable; Ethical & Reliable. We interpret that as their intent to make sure the data and the tools are fit-for-purpose. In clinical trials, this is key for any technology, including AI.
What obstacles do you anticipate in using AI-derived data for regulatory submissions?
The safe use of artificial intelligence is key. Delegating to a black box creates risk: users must understand the purpose for which the models are used. There is also a challenge of ethics and bias. If you only feed in data from one group of patients, you can introduce bias; many AI engines that provide diagnoses were trained on US-based patients and do not include patients from Europe, Africa, or Asia. Being fit-for-purpose is important for compliance. Don’t try to boil the ocean; pick out what you need and apply the models to that.
When will AI-assisted platforms show sponsors a positive ROI?
We tend to imagine a singular AI implementation that will take a trial from beginning to end cheaper and faster. That may happen someday, but it is far off. Instead, look at applications of AI in analytics or operations, for example, and identify tasks that are repetitive in nature or very labor-intensive, such as data management (DM), biostatistics, or medical writing. A very impressive AI application in medical writing is converting tables and data into prose and narratives; by using AI, you avoid the laborious implementation and maintenance of rule-based engines, and, more importantly, the algorithm refines itself over time if you construct an appropriate feedback loop. Of course, you will always need the human in the loop. Another avenue is computer vision AI, which reduces the need for in-person visits to a clinic. This has been deployed in medication adherence and digital biomarker technologies to date, and these applications are growing in capability.
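To make the table-to-narrative idea concrete, here is a minimal sketch (not from the webinar) in which a plain template stands in for a language model, so the human-in-the-loop step is easy to see. The record fields and function names are hypothetical.

```python
# Minimal sketch of turning one table row into a draft prose narrative,
# with a placeholder human-in-the-loop review step. A real system would
# use a trained model; all field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class AdverseEvent:
    subject_id: str
    term: str
    severity: str
    onset_day: int
    resolved: bool

def draft_narrative(ev: AdverseEvent) -> str:
    """Convert a structured adverse-event record into a draft sentence."""
    outcome = "resolved" if ev.resolved else "was ongoing at last contact"
    return (f"Subject {ev.subject_id} experienced {ev.severity.lower()} "
            f"{ev.term.lower()} with onset on study day {ev.onset_day}, "
            f"which {outcome}.")

def human_in_the_loop(draft: str) -> str:
    """Placeholder review step: a medical writer edits or approves the draft.
    Corrections collected here are the signal that would feed the feedback loop."""
    return draft  # accepted as-is in this sketch

if __name__ == "__main__":
    row = AdverseEvent("1001-004", "Headache", "Mild", 12, True)
    print(human_in_the_loop(draft_narrative(row)))
```

The point of separating the draft step from the review step is that the writer's corrections are exactly what an appropriately constructed feedback loop would learn from over time.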
With all the AI solutions being developed, what do you think the workforce should do to be ready to use the solutions?
The Greek philosopher Heraclitus is credited with the idea that the only constant in life is change. In the same way that the tech/internet revolution at the turn of the century changed how we live, work and play, so will AI. Bear in mind that AI will not replace humans, but humans using AI will replace other humans in the workforce.
Our company culture is “be curious.” We believe that brings self-improvement for the workforce and innovation for the industry. A few additional thoughts:

It would help to have a high-level understanding of how AI works. There are a number of free online accredited courses that can help you.
Recognize that ChatGPT is only one application of AI, and large language models (LLMs) are only one flavor of AI. There are many other approaches that may be better suited to your challenge.
Begin to prepare and curate your data. Make sure that you have proper continuity and “ownership” across all your data, as it will be critical for whatever AI applications you may later identify.
In which area do you see the application of AI to data increasing?
The application of AI to data is a big topic. The most likely use today is understanding what is going on with participants in clinical trials. We are conducting scientific experiments on participants, and historically we have collected data in a very fixed manner. AI allows us to work with more exploratory data than we have had before, and this lets us gain insights that would be difficult to discern using traditional analytics.
How would you suggest we think about trust and AI? For example, vetting the results of an AI-based ETL could take as much or more time than building the same ETL programmatically.
That’s an excellent question. Trust is crucial when using AI in any domain, especially in AI-powered integration tools. Here are some points to consider:
AI systems should be transparent about their operation. It’s important to understand how the AI arrives at particular results, ensuring the decision-making process is clear and explainable.
The results produced by the AI should be regularly tested against automated benchmarks and validated through manual spot checks. Incorporating more human-in-the-loop (HITL) oversight initially can help build trust until the system proves reliable.
While the initial implementation of AI-powered integration tools may require significant effort to build trust, comparable to creating a one-time ETL process programmatically, the AI system will become more efficient over time as it learns and improves.
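As a hedged illustration of the testing point above, the sketch below scores an AI-powered mapping against a small, hand-curated benchmark and pulls a random sample for manual spot checks. It is not any particular vendor's API; the record identifiers and values are made up.

```python
# Sketch of the validation idea discussed above: score AI-mapped records
# against a gold-standard benchmark, and route a random sample to a human
# reviewer (the HITL step). Record keys and values are hypothetical.
import random

def benchmark_accuracy(ai_output: dict, gold: dict) -> float:
    """Fraction of benchmark records the AI mapped to the expected value."""
    hits = sum(1 for key, expected in gold.items() if ai_output.get(key) == expected)
    return hits / len(gold)

def spot_check_sample(ai_output: dict, n: int = 5) -> list:
    """Random subset of mapped records routed to manual review."""
    keys = random.sample(list(ai_output), k=min(n, len(ai_output)))
    return [(key, ai_output[key]) for key in keys]

if __name__ == "__main__":
    gold = {"rec1": "mg", "rec2": "mL", "rec3": "mg"}
    ai_output = {"rec1": "mg", "rec2": "mL", "rec3": "g", "rec4": "mg"}
    print("benchmark accuracy:", benchmark_accuracy(ai_output, gold))
    print("for manual review:", spot_check_sample(ai_output, n=2))
```

The benchmark run gives the automated regression signal, while the spot-check sample is where the heavier human-in-the-loop oversight sits until the system has earned trust.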
Do you use sociobehavioral data to predict patient adherence to prescribed therapy in clinical trials? If so, how do you use it?
We don’t use behavioral data to prescribe therapy; we use it to manage interventions and to ensure that participants comply with the protocol. There is potential in adaptive protocol design, whereby a participant’s behavior may point to more favorable outcomes under one study regimen versus another, such as fewer or more in-clinic visits, or an injectable versus an orally taken pill.
Could AI eventually progress to the point where entire trials could be run using only AI subjects rather than human subjects or a hybrid of both?
This is a tough one. We have a longer horizon for realizing anything close to that. Synthetic control arms are the closest we have today: using real-world data (RWD) and extrapolating it into an artificial control subject, with mixed results so far. That is possible now. In the context of a clinical trial, we are still working through the AI models and understanding the pharmacokinetics of compounds. The idea of an AI human model is transformative, but from a general public-perception standpoint, most people would be hesitant to take a medication that has only been researched through AI models. Capability, technology, and societal pressure will make this a long road, though incremental gains may be impactful in the short term.
The biology side of AI (using biological models) is on the far horizon, but we are more optimistic about the procedural and regulatory side of the trial process, which includes automating data management and improving productivity. AI is more here-and-now in this area.
Let’s zoom in: what are the incremental gains, and what are the hugely repetitive tasks that a model could take over? That is where to look for gains. There are companies working on simulations, but those are supercomputer activities that must be validated against existing human models and data.