The five-step guide to AI adoption in clinical practice

Jeroen van Duffelen proposes a five-step programme for adoption of artificial intelligence in clinical practice.

Adoption of medical imaging AI is about getting your hospital or screening programme ready to implement the right solution for a clinical need. Running into speed bumps along the way is common for early adopters. How do you define the needs, budget, and outcomes? Which boxes should you check when selecting vendors? How do you manage internal stakeholders? The adoption curve is steep. Luckily, you don’t have to climb it alone.

Drawing from our experience deploying AI in clinical practice and lung cancer screening, I’ve designed a five-step guide to streamlined adoption. If you’re looking to adopt artificial intelligence but don’t know where to start, these actionable tips and advice will see you through. For the video breakdown of the steps, watch this presentation from ECR 2020.

1. Consider

Where do you start working with AI? First, look away from all the solutions out there, and focus on your organisation. Bring together all the stakeholders into a project team that includes the sponsor, if applicable, as well as IT and legal representatives. Involving them from the beginning will expedite the process.

Start with defining the challenge you are looking to solve, or the specific clinical question that is relevant to your workflow. Some hospitals are looking to experiment with the technology, while others aim to solve a particular issue. Over the past years, I have seen the latter getting more out of AI, which is why my advice is to start from a clinical challenge.

When considering this challenge, make sure to already determine your expected outcomes. When is the adoption a success? Are you aiming to have an AI solution in use? Should it apply to a certain patient population, or yield specific results like time or cost savings?

Although it may seem early, this is also the stage to organise a budget dedicated to the AI solution. The size of this budget should relate to the cost or time savings the solution is expected to bring. Both the amount secured and its source will impact the next steps. For example, they will determine whether you look for PhD researchers to build a solution or seek a vendor that offers a mature one.

2. Evaluate

The AI in healthcare space is densely populated; a Google search or a look at the list of vendors at the RSNA can confirm that. To weigh the existing options against your scope, do your (desk) research using this high-level checklist for each solution:

How was the AI solution validated?

It is important that the claim for which the AI solution has been validated covers the use case you identified in the previous step. Take the time to understand whether the manufacturer has done studies confirming this claim.

How does it integrate into the workflow?

Try to get a feel for the amount of effort needed to add an AI system into your workflow. A good practice is to start with an AI solution that is easy to integrate with the current workflow and IT infrastructure. Workflow integration is of utmost importance for the radiologist; in this article, we explained why that is and how it works.
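To make the integration effort tangible, below is a minimal, hypothetical sketch of one common pattern: a routing step forwards a CT study from the PACS to the AI service over standard DICOM. It uses the open-source pydicom and pynetdicom libraries; the host name, port, and AE titles are illustrative assumptions, not a description of any particular vendor's interface.

```python
# Hypothetical sketch: forward a CT study to an AI node via DICOM C-STORE.
# Host, port, and AE titles are assumptions for illustration only.
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

def forward_study_to_ai(dicom_files, host="ai-gateway.hospital.local", port=104):
    """Send each DICOM file of a study to the AI service's DICOM node."""
    ae = AE(ae_title="PACS_FORWARDER")
    ae.requested_contexts = StoragePresentationContexts  # standard storage SOP classes

    assoc = ae.associate(host, port, ae_title="AI_NODE")
    if not assoc.is_established:
        raise ConnectionError("Could not associate with the AI DICOM node")
    try:
        for path in dicom_files:
            status = assoc.send_c_store(dcmread(path))
            if status and status.Status != 0x0000:
                print(f"Store of {path} returned status 0x{status.Status:04X}")
    finally:
        assoc.release()
```

The point of the sketch is not the code itself, but the questions it raises for a vendor: who runs this forwarding step, where do the results appear, and does anyone have to leave their usual workstation to see them?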

What regulations does the solution fulfill for use in clinical practice?

Commercialising a medical device requires a CE mark in the EU and FDA clearance in the US; note that additional local regulations may apply in other countries. Again, pay attention to which claim is covered by the certification obtained.

3. Choose

By this stage, you should have narrowed your search down to a few vendors. This is the moment to go in-depth into the workflow and test if a specific solution is a good fit from both a clinical and a technical standpoint. A well-integrated AI system should not create hurdles for physicians, such as requiring them to leave their workstation to upload studies. It should further blend within the existing IT infrastructure.

There are two checks that are vital to make the right choice:

Validate the accuracy

Legitimate vendors will have conducted studies and can provide clinical evidence for their accuracy claims. To know whether the solution performs the defined task well for your organisation, ask questions about the datasets used to develop and test it.

There are three datasets required to build an AI model: a training dataset, a validation dataset, and a test dataset.

The test dataset is the most relevant to look at because it is what the accuracy is based on. The performance on this dataset should be applicable to your hospital, with its specific protocols, type or number of scanners, and patients. To achieve this, the test set must cover the patient population your organisation serves (e.g. types of patients, comorbidities distribution, etc.). Thus, inquire about the specifics of the test set and the performance of the AI model on this dataset.

Secondly, ask about the size of the training dataset and how it was labelled. Both quantity and quality are important to train an accurate AI model. Labelling the data should be done by experienced radiologists, preferably with multiple readers per study.
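For readers who want to probe this point with a vendor, here is a minimal sketch of what a sound three-way split can look like. It assumes a hypothetical pandas DataFrame of studies with a patient_id column; the names and proportions are illustrative, not a description of any vendor's pipeline. The key idea is to split by patient, so that no patient appears in more than one set and the reported test accuracy is not inflated.

```python
# Illustrative patient-level train/validation/test split (names are assumptions).
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(studies: pd.DataFrame, seed: int = 42):
    """Split studies into train/validation/test sets with no patient overlap."""
    # Hold out ~20% of patients as the test set used to report accuracy.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=seed)
    dev_idx, test_idx = next(outer.split(studies, groups=studies["patient_id"]))
    dev, test = studies.iloc[dev_idx], studies.iloc[test_idx]

    # Split the remainder into training (~70%) and validation (~10%) sets.
    inner = GroupShuffleSplit(n_splits=1, test_size=0.125, random_state=seed)
    train_idx, val_idx = next(inner.split(dev, groups=dev["patient_id"]))
    return dev.iloc[train_idx], dev.iloc[val_idx], test
```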

Check the regulatory compliance

In Europe, medical devices are classified by risk into Class I, Class IIa/b, and Class III. If looking for a solution for clinical practice, be wary of Class I medical devices. The new Medical Device Regulation, which will come into force in May 2021, will require many AI products currently classified as Class I devices to update their classification. For instance, software that supports diagnostic decisions should fall under Class IIa at a minimum. For more guidance on the new regulation, read our recent expert piece.

Apart from the regulatory approval, check if the vendor also has a quality management certification (e.g. ISO 13485). Reviewing the data processing policy and the cybersecurity measures in place will further help you understand if the AI company is going the extra mile in regard to safety.

A bonus tip for the choosing stage: do a reference check. Ask other organisations how they are working with the AI solution you have chosen. You may get the insights you need to make the final decision.

4. Approve

Approving the chosen solution internally requires the involvement of, and coordination between, IT and PACS administrators, procurement officers, physicians, and often also privacy and legal officers. If you have had a project team in place since the first step, you should be well on track.

To move forward and avoid delays, assign an internal AI champion responsible for driving the project. This may be an executive sponsor, a budget holder, or a department manager. One of my learnings from past deployments is that the risk of failure is high without a person fulfilling this role. What I have further learned as a vendor is the importance of empowering the AI champion by providing the necessary information and documentation in a timely manner.

Furthermore, make sure end users are trained to use the new medical device; if they do not embrace it, the impact of the AI solution will be limited. Additionally, setting up a feedback mechanism with the AI vendor from the get-go will help improve the AI product.

5. Deploy (& evaluate)

All the paperwork is signed – well done! To make the deployment work, create a clear project plan, including actions, timelines, and owners. Depending on the type of deployment – on-premise or cloud-based – different actions will be needed. As outcomes, set the deployment and acceptance dates, make agreements on the service levels, fixes, and upgrades, and discuss post-market surveillance.

The initial or trial phase of using the AI solution should show if it answers the problem you were trying to solve. It is a good moment to revisit step one and start evaluating the results to decide if you will continue using the solution.

A common question I get at this stage is: “Do I need to do a full clinical study?” The answer fully depends on the purpose of using the product. It is necessary for research, but not for other use cases. What matters is validating that the AI solution is adding value to your clinicians and their patients.

Make it better

AI adoption does not end with deployment. Service and maintenance are essential, and their quality is often a differentiating factor between AI vendors. The implementation process is usually a good test of whether an AI company fulfils its promises and handles requests promptly.

Beyond these five steps, you and your organisation play a role in improving the chosen AI solution through valuable feedback and feature suggestions. The collaboration between humans and software allows us to achieve much more than humans would on their own. If done right, it can be transformative for patients.

Are you ready to start the AI journey? Get in touch!

Jeroen van Duffelen, COO & Co-Founder

Jeroen van Duffelen is COO and co-founder of Aidence. Jeroen’s entrepreneurial spirit led him to teach himself software engineering and start his own company commercialising an online education platform. He then tried his hand in the US startup ecosystem, where he joined a rapidly scaling cloud company. Jeroen returned to Amsterdam to run a high-tech incubator for academic research institutes; it was here that he first got a taste for applying AI to healthcare.

Is artificial intelligence the key to effective and sustainable lung cancer screening?


Dr Lizzie Barclay explores how artificial intelligence can influence lung cancer screening.

Radiology as the starting point

Imaging plays a fundamental role in lung cancer screening programmes. So, when it comes to improving technology to support the programmes, the radiology department is a good place to start.

The goal of screening is to pick up early cancers which can be treated and potentially cured, therefore improving patient outcomes (as outlined in the NHSE long term plan). Low dose CT has been shown to provide sufficient image quality for detection of early disease, whilst minimising radiation dose in asymptomatic individuals. Thoracic radiology expertise is required to determine which lung nodules may be malignant and therefore require invasive investigation, and which are likely benign and can be monitored with intermittent imaging. Appropriate follow-up recommendation helps avoid unnecessary invasive procedures, such as biopsies, and minimise patient anxiety, which are important measures of the efficacy of lung cancer screening programmes.

End-to-end lung cancer screening involves input from many healthcare professionals, and intelligent computer systems across specialities would benefit multidisciplinary teamwork. Thus, beyond image analysis, there are many opportunities for technology to add further support for effective and sustainable screening programmes. For instance, it could aid in the optimisation of image acquisition, access to imaging reports and relevant clinical details, tracking of patient follow-up, or communication between patients and GPs.

Where AI-based image analysis makes a difference

Reading and reporting CT scans is time-consuming, and within a workforce which is already under strain, introducing a new CT-screening programme seems like a tall order. AI-driven solutions can support radiologists and contribute to successful lung cancer screening by bringing improvements in three areas:

  1. Performance

Computer intelligence can increase the performance and productivity of CT reporting, freeing up time for radiologists to spend on clinical decision making and complex cases. Specifically, AI software is well-suited for precise:

  • Detection of elusive lung nodules, and differentiation of subtle changes
  • Automatic volume measurements, to help determine the appropriate frequency of monitoring (e.g. stable vs growing nodule, according to the BTS guidelines).
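For context on what "stable vs growing" means in practice, volume doubling time (VDT) is the growth measure commonly used alongside the BTS guidelines. The short sketch below is illustrative only, with made-up numbers, and is not taken from the guidelines or from any product.

```python
# Illustrative sketch of nodule volume doubling time (VDT); numbers are made up.
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """VDT = t * ln(2) / ln(V2 / V1), in days."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule growing from 300 mm^3 to 450 mm^3 over 90 days has a VDT of ~154 days;
# broadly, the BTS guidance treats shorter doubling times (around 400 days or less)
# as more suspicious, and very long ones (over ~600 days) as reassuring.
print(round(volume_doubling_time(300, 450, 90)), "days")
```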

What further distinguishes computers from humans is the absolute consistency in their high performance, without being impacted by common external stressors to which a radiologist would be exposed (e.g. time-pressure, workload and interruptions).

  2. (E)quality

Having a ‘second pair of eyes’ looking at the scan can increase the radiologist’s confidence in their own assessment. Additionally, making accurate, AI-driven measurements available regardless of the level of expertise of the reporting radiologist could benefit not only quality assurance, but also equality within the radiology department. The use of AI would reduce the need for all scans to be reported by the most experienced thoracic radiologists with an interest in early lung cancer detection, and instead facilitate spreading the workload across the workforce.

Another use case concerns quality assurance when outsourcing to teleradiology companies. AI-based image analysis can improve consistency of reporting, drive the recommended terminology use, and, essential for lung cancer screening, ensure access to relevant prior imaging for comparison and change assessment over time.

  3. Efficiency (via integration)

An intelligent computer system should not slow down reporting turnaround times, but improve efficiency, as well as quality, to ultimately minimise time to diagnosis (for example, the NHSE long term plan introduces a 28-day standard from referral to diagnosis or rule-out).

Older CAD technology was often described as ‘clunky’, requiring images to be uploaded to separate systems for analysis. Additional manual steps between image acquisition and the radiology report make the process time-consuming, and often require radiology support staff to manage the workflow. It is also important to consider allocative and technical efficiency, which play important roles in the evaluation of screening programmes and their impact on healthcare systems.

AI-driven image analysis software that is fully integrated into the radiologist’s existing workflow can provide automatic results without needing additional departmental resources. An additional benefit of fully integrated AI solutions is that their use is not restricted by time or place, therefore supporting flexible and remote working. In the context of the COVID-19 pandemic, it has been encouraging to see the increase in remote reporting, whilst maintaining a functioning department, in many hospital trusts. Going forward, it will be interesting to see whether radiologists will have the option to continue to work remotely where possible.

Valuing input from healthcare professionals

New lung cancer screening programmes will be monitored regularly to evaluate their effectiveness and determine areas for review. Commitment from all parties to work together will facilitate optimisation of the pathway to achieve better patient outcomes and positive impacts on healthcare systems.

In our experience, close collaboration between medtech and healthcare professionals is important for learning lessons along the way. Understanding radiologists’ needs helps tech teams develop a clinically valuable tool.

For example, Aidence’s interactive lung nodule reporting tool, Veye Reporting, was designed based on the needs of radiologists involved in reporting lung screening scans. From our conversations with them, we understood that following the detailed and complex reporting protocols in lung cancer screening programmes makes for labour-intensive, repetitive work.


To help them produce reports that follow the standardised NHSE proforma and facilitate audit for quality assurance, we added Veye Reporting as a feature to Veye Chest, focusing on making it easy to use and efficient. With this tool, radiologists also have control over which nodules to include in the report, different sharing options, and the choice to add incidental findings.

What’s next?

Cancer services have been impacted by the COVID-19 health emergency. In the UK, screening has been paused and is planned to (re)start at the end of 2020 or the beginning of 2021. Talks of introducing screening are ongoing in various European countries, as are concerns about catching up with the backlog of screening scans.

The British Society of Thoracic Imaging and the Royal College of Radiologists released these considerations for optimum lung cancer screening roll-out over the next five years. Their statement below is a reminder of why it is worth overcoming challenges and leveraging technology to make screening programmes a success:

[BSTI/RCR statement]

Dr Lizzie Barclay, Medical Director

Dr Lizzie Barclay’s areas of interest are thoracic radiology and medicine, innovation, and improving patient outcomes and healthcare professionals’ wellbeing.

Lizzie is originally from Manchester, UK. After graduating from the University of Leeds Medical School (MBChB), and Barts and the London School of Medicine (BSc sports & exercise medicine), Lizzie spent four years working as a doctor in Manchester and Liverpool NHS Trusts, including two years in Clinical Radiology. She has presented her work on lung cancer imaging at national/international conferences, and recently contributed to Lung Cancer Europe’s “Early Diagnosis and Screening” event at the EU Parliament in Brussels.

https://www.aidence.com/

You may be interested in the BIR Lung Cancer Imaging: Update for the not-so-new normal on 11 September 2020. This will be available for members in the BIR online learning library after the live virtual event.