---
title: "When AI enters hospitals, risks such as \"surgical errors and misidentification of organs\" also arise"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/275537054.md"
description: "According to U.S. data, after the introduction of AI in Johnson & Johnson's TruDi system, the number of failure reports surged tenfold, even leading to risks of death due to \"navigation misguidance\"; data shows that the recall rate in the first year after AI devices were approved reached as high as 43%, far exceeding that of traditional devices. However, the key team at the U.S. Food and Drug Administration (FDA) has seen significant staff reductions, and the traditional regulatory framework is nearly ineffective in the face of a massive number of applications, leaving patients as unwitting \"test subjects.\""
datetime: "2026-02-11T01:54:32.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/275537054.md)
  - [en](https://longbridge.com/en/news/275537054.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/275537054.md)
---

> Supported Languages: [简体中文](https://longbridge.com/zh-CN/news/275537054.md) | [繁體中文](https://longbridge.com/zh-HK/news/275537054.md)

# When AI enters hospitals, risks such as "surgical errors and misidentification of organs" also arise

Medical device companies are rapidly embedding AI into surgical and diagnostic equipment, using "intelligence" as a growth selling point while also **bringing new failure modes and liability risks into hospitals; reports of suspected injuries and malfunctions received by regulatory agencies are on the rise.** According to a Reuters review of safety and legal records, as well as interviews with doctors, nurses, scientists, and regulators, the U.S.
Food and Drug Administration (FDA) has received reports in recent years of issues such as **misleading surgical navigation, missed arrhythmia alerts, and prenatal ultrasounds "misidentifying body parts."**

In one specific case, Acclarent, a medical device company then under healthcare giant Johnson & Johnson, announced in 2021 that it had introduced machine learning algorithms into its sinus surgery navigation system, TruDi. Citing unverified FDA reports, Reuters stated that **after the introduction of AI, reports of failures and adverse events related to TruDi have reached at least 100, far exceeding the single-digit levels before AI was added,** and multiple lawsuits alleging injuries have been filed.

**As risks are exposed, regulatory capacity is also under pressure.** Five current and former FDA scientists told Reuters that, with a wave of AI medical device applications flooding in, it has become harder for the FDA to keep pace after cuts to key teams.

## **TruDi Case: Surge in Reports After AI Integration, Lawsuits Point to "Misleading Navigation"**

Acclarent introduced machine learning into the TruDi Navigation System in 2021 to assist ENT doctors in sinus-related surgeries. According to Reuters, the AI functionality was added about three years after the device's market launch. Before AI was added, the FDA had received **seven unverified reports of device failures** and one report of patient injury; after, it received at least **100 unverified reports of failures and adverse events.** Reports cited by Reuters indicate that at least 10 people were injured between late 2021 and November 2025, with most incidents reportedly involving TruDi incorrectly indicating the intracranial position of instruments to the surgeon.
**Consequences described in the reports include cerebrospinal fluid leaking from the nasal cavity, unintended penetration of the skull base, and strokes caused by accidental damage to major arteries.** Two stroke patients filed lawsuits in Texas, alleging that the AI in the TruDi system contributed to their injuries. One complaint stated that the product "might have been safer" before AI was integrated. Integra LifeSciences, which now owns Acclarent, said there is "no credible evidence" of a causal link between the TruDi system, its AI technology, and the alleged injuries.

## **Signals such as "misidentifying body parts": FDA Reports Point to Various AI-Enhanced Devices**

The FDA emphasized that adverse event and failure reports have inherent limitations: they may lack detail, be redacted to protect trade secrets, or duplicate the same event across multiple filings, so causation cannot be attributed from reports alone. Nevertheless, Reuters' tally shows that between 2021 and October 2025, at least 1,401 reports submitted to the FDA involved the 1,357 AI-enabled products on the FDA's list (which the agency notes is not exhaustive), with at least 115 mentioning software, algorithm, or programming issues.

Reuters cited a report submitted to the FDA in June 2025 stating that Sonio Detect, used for prenatal ultrasound, has an algorithm issue of **"incorrectly labeling fetal structures and associating them with incorrect body parts."** The report did not mention any harm to patients. The manufacturer, Samsung Medison, said the report "does not indicate any safety issues."

Another type of signal comes from heart rhythm monitoring. Reuters reported that at least 16 reports claimed that Medtronic's **AI-assisted cardiac monitoring devices failed to identify abnormal rhythms or cardiac arrest**; these reports did not mention any injuries.
Medtronic told Reuters that, after review, it believes the device missed only one abnormal event, "which did not result in patient harm," and said some incidents were related to data display issues rather than the AI itself, declining to elaborate on individual cases.

## **Recall Study: AI devices are recalled at twice the overall device rate, with defects exposed more quickly**

Beyond individual case reports, recall data is also sharpening investors' focus on the "post-market risk curve." Reuters cited a research letter published in August 2025 in the _JAMA Health Forum_: researchers from Johns Hopkins, Georgetown, and Yale found that **60 FDA-authorized AI medical devices were associated with 182 product recalls, 43% of which occurred within a year of approval.** The study said **this recall rate is roughly twice that of all devices approved under similar FDA regulations.**

## **Approval Pathways and Safeguards: Most AI devices do not require patient trials for approval, and the traditional framework is being questioned**

Reuters pointed out that while the FDA typically requires clinical trials for new drugs, medical devices follow different review pathways. Dr. Alexander Everhart, a lecturer at Washington University in St. Louis and a medical device regulatory expert, told Reuters that **most AI-enabled devices entering the market do not need to be tested on patients; instead, they meet regulatory requirements by referencing previously authorized devices that lack AI capabilities.** Everhart believes the uncertainties brought by AI are challenging existing practices.
He told Reuters that the FDA's traditional regulatory approach to medical devices is "inadequate" for ensuring the safety and effectiveness of AI devices, and that in practice **there is greater reliance on manufacturers' self-regulation, raising questions about whether regulatory safeguards are sufficient.**

## **Regulatory Capacity Under Pressure: Authorizations have more than doubled while key teams have been downsized**

Reuters reported that there are currently at least 1,357 FDA-authorized **medical devices using AI, more than double the number before 2022.** Reuters cited insiders as saying that early last year, **the Trump administration began dismantling the FDA's AI teams in a cost-cutting initiative led by Elon Musk (DOGE)**: about 15 of the approximately 40 AI scientists in the Division of Imaging, Diagnostics and Software Reliability (DIDSR) were laid off or chose to leave, and the Digital Health Center of Excellence, responsible for AI device policy, also lost about one-third of its staff, approximately 30 people.

Some former employees said that **after the layoffs, the workload of some reviewers nearly doubled, and "when resources are insufficient, problems are more likely to be overlooked."** HHS spokesperson Andrew Nixon told Reuters that the FDA applies the same strict standards to AI-assisted medical devices, such as those using machine learning, as it does to other products, and emphasized that patient safety is the highest priority.
The FDA is still recruiting and training talent in digital health and AI.

### Related Stocks

- [Johnson & Johnson (JNJ.US)](https://longbridge.com/en/quote/JNJ.US.md)
- [iShares Global Healthcare ETF (IXJ.US)](https://longbridge.com/en/quote/IXJ.US.md)
- [iShares US Medical Devices ETF (IHI.US)](https://longbridge.com/en/quote/IHI.US.md)
- [SPDR® S&P Health Care Equipment ETF (XHE.US)](https://longbridge.com/en/quote/XHE.US.md)
- [Vanguard Health Care ETF (VHT.US)](https://longbridge.com/en/quote/VHT.US.md)
- [State Street® HlthCrSelSectSPDR® ETF (XLV.US)](https://longbridge.com/en/quote/XLV.US.md)

## Related News & Research

- [BUZZ-RadNet rises as it acquires France's radiology AI firm Gleamer](https://longbridge.com/en/news/277451303.md)
- [10:10 ET LRS SECURES IDE TO STUDY ITS THERMOSUIT COOLING DEVICE IN A PIVOTAL TRIAL OF ISCHEMIC STROKE PATIENTS](https://longbridge.com/en/news/277488497.md)
- [Samsung Electronics to Introduce 2026 Bespoke AI AirDresser](https://longbridge.com/en/news/277395349.md)
- [C3.ai Plummets 20% After Earnings. Should You Buy the Dip in AI Stock Now?](https://longbridge.com/en/news/277084149.md)
- [Mizuho Financial Group to Replace 5,000 Administrative Jobs with AI in Productivity Push](https://longbridge.com/en/news/277189019.md)