From 2010 to 2017, the ImageNet project annually released large annotated visual datasets as part of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)(1); the idea was to advance the development of computer vision programs that could accurately detect, localize and classify objects. A significant advance was the ‘AlexNet’ architecture, which won the 2012 ILSVRC by a considerable margin. This helped kickstart a surge of interest that spread into many different industries, a metaphorical ‘big bang’ as people imagined all the potential applications of convolutional neural networks, enabled by ever more powerful graphics processing units and the increasing availability of large-volume datasets.
Radiology in particular has proved fertile ground for the development of computer vision applications, so much so that as of August 2023, 77%(2) of FDA-cleared AI-enabled medical devices are within radiology. This is in part due to the transition from conventional film-screen to digital radiography and the storage of images brought about by the widespread implementation of PACS from the late 1990s, which has led to an abundance of imaging datasets from which to train deep learning algorithms. It is estimated that approximately 90%(3) of healthcare data is imaging data.
A combination of clinicians, researchers, engineers, entrepreneurs and venture capitalists has further driven the development of AI solutions to help address global problems facing radiology, such as ever-increasing workloads and workforce shortages. As a result, there are currently over 200 radiology AI vendors and over 350 regulatory-cleared (FDA or CE) radiology AI applications available today (2,4,5).
The entire ecosystem cannot support such large numbers of AI applications and vendors, which is leading to market consolidation. According to Signify Research(6), the number of deals has fallen from a high of 61 in 2021 to 24 so far in 2023 (Q1–Q3); however, the average deal size has increased from $18 million (2021) to $20.9 million (2023), suggesting funding is being distributed more selectively to fewer, later-stage companies.
Tough competition in saturated market segments adds further pressure, forcing AI vendors to pursue ongoing product differentiation and avoid commoditization whilst remaining competitively priced. There are, for example, over 20 FDA/CE-cleared, commercially available AI algorithms designed to detect intracranial haemorrhage (brain bleed) on CT imaging, as well as over 20 separate FDA/CE-cleared AI applications designed to analyse a chest X-ray for abnormal findings. As each product continues to improve iteratively through new version updates, new and useful functionality is being unlocked; for instance, AI is increasingly able to compare current imaging with relevant prior studies to provide outputs showing change in pathology over time – a key component of image interpretation.
AI vendors looking to quickly round out their offering and increase their use-case coverage may consider partnering with other AI vendors whose applications complement their existing product line. Alternatively, vendors may focus on expanding their own native product portfolio or consolidating into an all-in-one solution, which reduces the need for the healthcare provider to procure and integrate multiple separate, narrow point solutions.
The development of comprehensive AI solutions that detect multiple pathologies has, however, generally been constrained by the approaches of different regulatory bodies. For example, in Europe there are CE-cleared chest X-ray AI applications that can detect and localize over 100 findings, whereas in the USA currently available chest X-ray AI applications are typically FDA cleared for only a handful of findings, and usually for case-level triage only.
The discrepancies in product feature set and availability amongst different geographic regions present opportunities for vendors to be first in a market where there is no pre-existing predicate device on which to base a 510(k) regulatory filing. As an example, paediatric bone age assessment on X-ray using AI is a fairly well-established(7) use case in Europe, where there are multiple commercially available CE-cleared products; however, the same cannot be said for the USA, where there are currently no FDA-cleared equivalents.
Value and efficiency can also be delivered in other components of the radiology imaging lifecycle, which may arguably have a lower barrier to acceptance and adoption, e.g. ordering imaging studies in line with appropriate use criteria, workforce and patient scheduling, modality optimisation, faster reporting and ensuring follow-up recommendations are actioned. Whilst use cases focusing on the operational aspects of radiology practice do not typically garner as much attention as pixel-based deep learning AI models, they can nonetheless offer healthcare providers a concrete return on investment, such as increased scanner throughput and increased revenue generation via follow-up exams.
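To make the return-on-investment argument concrete, the back-of-the-envelope sketch below estimates the annual value of a modest throughput gain. Every figure in it (scan volumes, reimbursement, gain, licence cost) is a hypothetical assumption for illustration, not data from this article, and any real evaluation would use a provider's own numbers.

```python
# Hypothetical back-of-the-envelope ROI estimate for an operational radiology AI tool.
# All figures below are illustrative assumptions, not values from the article.

baseline_scans_per_day = 40        # assumed current scanner throughput
throughput_gain = 0.08             # assumed 8% gain from AI-assisted scheduling/protocolling
revenue_per_scan = 400.0           # assumed average reimbursement per scan (USD)
operating_days_per_year = 250
annual_ai_cost = 100_000.0         # assumed licence + integration cost (USD)

extra_scans = baseline_scans_per_day * throughput_gain * operating_days_per_year
extra_revenue = extra_scans * revenue_per_scan
simple_roi = (extra_revenue - annual_ai_cost) / annual_ai_cost

print(f"Extra scans per year: {extra_scans:.0f}")
print(f"Extra revenue per year: ${extra_revenue:,.0f}")
print(f"Simple ROI: {simple_roi:.0%}")
```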
Over time we expect to see more best-in-class products emerge, with first-mover advantage paying dividends to vendors who have generated traction and are able to deliver ongoing value to customers. It may therefore become harder for newer vendors to displace incumbents with a large pre-existing install base and significant funding behind them. Healthcare providers' expectations are also growing around how much clinical evidence is sufficient, how AI should be integrated within the workflow and how much it is worth paying for.
To reduce friction and facilitate AI adoption, platforms are increasingly being seen as a solution to the technical and contractual overheads involved in procuring and integrating multiple discrete AI applications. Beyond workflow orchestration and scalability, platforms are evolving in how they can continue to provide value to healthcare providers. This could involve supporting the technical deployment of in-house developed applications, facilitating custom workflows, or supporting evaluations of new AI products to prove ROI and efficacy, as well as helping to manage and monitor AI products post-deployment.
There is also a growing ecosystem of vendors, platforms and big tech companies providing the tools and infrastructure for individuals and healthcare providers to develop their own machine learning operations. This can range from data curation and labelling to low-code model development, deployment and monitoring. While this has typically been of most interest to academic medical centres and research organisations, there is increasing awareness that healthcare providers hold large volumes of valuable, untapped imaging data which could be better utilised; examples include federated platforms that support aggregated dataset curation(8), research or validation of AI algorithms, internal data analytics, or even monetisation of healthcare data.
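As a rough illustration of how such federated platforms can let sites collaborate without images ever leaving a hospital, the sketch below shows federated averaging, one common approach in which only locally trained model weights are shared and combined centrally, weighted by local dataset size. The site names, weight values and sample counts are purely illustrative assumptions.

```python
import numpy as np

def federated_average(site_updates):
    """Combine per-site model weights into a global model.

    site_updates: list of (weights, n_samples) pairs, where each `weights`
    array has the same shape and `n_samples` is the local dataset size.
    Returns the sample-size-weighted average of the local weights.
    """
    weights = np.array([w for w, _ in site_updates])
    counts = np.array([n for _, n in site_updates], dtype=float)
    return np.average(weights, axis=0, weights=counts)

# Three hypothetical hospitals contributing locally trained weight vectors.
site_updates = [
    (np.array([0.10, 0.50, -0.20]), 1_000),  # hospital A
    (np.array([0.12, 0.45, -0.25]), 4_000),  # hospital B
    (np.array([0.08, 0.55, -0.15]),   500),  # hospital C
]

global_weights = federated_average(site_updates)
print(global_weights)  # dataset-size-weighted mean of the local weights
```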
Gaps in the commercial market for AI algorithms could also potentially be addressed by the development of in-house algorithms guided by clinicians and researchers. National initiatives such as the NHS COVID-19 imaging database(9) and Trusted Research Environments(10) in the UK help provide the data environments researchers can use to develop and improve AI algorithms. Consortiums or networks of hospitals can similarly help pool resources under shared data governance to co-create and deploy AI models across participating sites.
There are multiple differences between in-house developed and commercially available algorithms, which relate to the different incentive structures and to which key stakeholders are driving development. Several specialties and use cases are over-represented commercially, while other areas lack regulatory-cleared algorithms; this is perhaps most noticeable in paediatric imaging, where the American College of Radiology (ACR) has set up a working group(11) to help address the gap. With the growing democratisation of AI model development, clinicians and researchers may be more empowered to create their own solutions in scenarios where it may not be economically feasible or technically possible (through lack of access to datasets) for commercial vendors.
AI in radiology has come a long way since AlexNet dramatically improved the performance of computer vision object detection in 2012; however, more work remains to be done before we reach widespread adoption.
Payment for AI applications remains of paramount importance for both vendors and healthcare providers. Whilst in the USA there have been separate reimbursements via NTAP (New Technology Add-on Payments), New Technology APC (Ambulatory Payment Classifications) and certain CPT codes, the proposal of a set of clear criteria(12) for when an AI application may be eligible for separate reimbursement may help incentivise AI developers to generate robust evidence of improved efficiency or better patient outcomes.
Ongoing developments in regulation, such as the FDA draft guidance(13) on Predetermined Change Control Plans for Artificial Intelligence/Machine Learning-Enabled Medical Devices, help to lay out the intended processes for AI algorithms to improve performance and quickly adapt to local or new data without requiring a completely new regulatory submission. This could improve health equity by enabling AI applications to adapt positively to the different patient characteristics and demographics of the settings where they are deployed. More significantly, proposed legal frameworks such as the European Union AI Act(14) and the recently issued USA Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence(15) are set to have a drastic impact on the regulation of radiology AI by placing additional safety and transparency requirements on developers of AI. Robust regulation is vital for the maturation of AI in healthcare; however, in the short to medium term this may lead to further geographic discrepancies in the availability of commercial radiology AI as vendors prioritise markets(16) with a perceived lower barrier to entry.
Recent developments in generative AI, and more specifically the large language models popularised by ‘ChatGPT’, have also bolstered interest in their use within radiology as a way to improve operational efficiency, e.g. reviewing Appropriate Use Criteria to support the vetting and protocolling of imaging studies (a sketch of this idea follows below). Whilst a regulatory framework for managing the use of large language models in healthcare is still needed, they could pave the way for the next evolutionary phase of radiology AI applications based on multi-modal AI models(17) which can process data from a diverse range of sources to generate more meaningful and accurate predictions or outputs.
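As a minimal sketch of the order-vetting idea mentioned above, the snippet below asks a general-purpose LLM to compare an imaging request against locally defined criteria. The criteria text, the example request, the model name and the prompt wording are all illustrative assumptions, and any such output would need clinician review and appropriate governance before clinical use.

```python
from openai import OpenAI  # assumes the openai package and an API key are already configured

client = OpenAI()

# Hypothetical, locally defined appropriate use criteria (illustrative only).
APPROPRIATE_USE_CRITERIA = """
CT head without contrast: appropriate for acute head trauma with focal neurology,
GCS < 15, or anticoagulant use. Usually not appropriate for uncomplicated headache
without red-flag features.
"""

# Hypothetical imaging request to be vetted.
imaging_request = "CT head requested for a 25-year-old with 3 weeks of mild headache, no red flags."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting with radiology order vetting. Using only the criteria "
                "provided, state whether the request appears appropriate and why. "
                "Flag any uncertainty for human review."
            ),
        },
        {
            "role": "user",
            "content": f"Criteria:\n{APPROPRIATE_USE_CRITERIA}\nRequest:\n{imaging_request}",
        },
    ],
)

print(response.choices[0].message.content)  # draft assessment for a clinician to review
```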
In the future this could mean an AI model that takes on more of the spectrum of cognitive work typically expected of a radiologist: a model that can understand the clinical context conveyed textually in the patient history and correlate it with the findings of pixel-based imaging AI on previous and current imaging to derive the most likely diagnosis. Just as the move to digital radiology and PACS reporting fundamentally changed the way radiologists worked, further changes can be expected as the ongoing evolution of new technologies continues to shape, but not replace, the radiologist of tomorrow.
Last updated: 11/9/23
Jamie Chow trained as a doctor in the NHS where he completed his radiology specialty training in 2021. Now working for Blackford as Clinical Lead, Jamie leverages his expertise to assist healthcare providers in implementing effective AI adoption strategies that enhance patient outcomes and streamline clinical workflows.