Artificial intelligence (AI) promises to revolutionize the health care industry. AI’s efficiency and accuracy — combined with human creativity and empathy — could usher in a new era of medical treatment, in which diseases and conditions that once seemed untouchable can finally be addressed. Already, AI systems have produced marked improvements in diagnostic imaging, helping clinicians interpret X-rays and CT scans more accurately. These gains may come at a cost, however, as the many opportunities presented by artificial intelligence in medicine are accompanied by a few significant risks.
In a perfect world, AI would only be used for good. The unfortunate reality remains, however, that these advanced systems can easily be manipulated to harm the patients they’re charged with protecting. Even unintentional flaws, such as unconscious bias, hold a greater potential to cause issues when AI is involved. This could complicate future DC medical malpractice cases, as the source of negligence can be difficult to determine when AI is involved.
Below, we shed light on these fears, as well as potential solutions that can safeguard the type of systems that integrate artificial intelligence into medicine.
A 2019 paper published in the journal Science reveals the danger of manipulations known as adversarial attacks. During these incidents, attackers make small, often imperceptible changes to digital data in order to alter how AI systems behave. These shifts may lead to missed or inaccurate diagnoses, which can be deadly for vulnerable patients.
According to Harvard Medical School and M.I.T. researcher Samuel Finlayson, many of the greatest concerns related to AI and medicine stem from the industry’s financial incentives, which may lead to the manipulation of billing codes. Finlayson explains that “the inherent ambiguity in medical information…allows for high-stakes decisions to swing on very subtle bits of information.” This could impact how payouts are handled in hospitals or even whether medical device manufacturers are able to achieve regulatory approval.
A key example of this danger is highlighted in a New York Times story from 2019: subtle shifts in the language fed to AI diagnostic systems can change their conclusions in ways that benefit insurers. In turn, businesses may adopt practices that bring in greater profits, rather than focusing on those that are more effective from a treatment standpoint. Some doctors already manipulate billing to influence payouts, and this practice could accelerate as AI systems become more prominent.
Bias is a huge problem in the health care sector. Already, inequality can be seen in every area of medicine, with minority patients receiving a lower standard of care and, as a result, experiencing worse outcomes. While AI may initially seem like a great solution for overcoming human biases, the opposite appears to be the reality: this technology can easily perpetuate existing issues.
This problem can be attributed to algorithmic bias, in which algorithms exacerbate existing inequalities related to race, gender, sexual orientation, disability, and socioeconomic status. A 2019 paper published in the Journal of Global Health highlights the implications of algorithmic bias for the health care sector, explaining that this form of bias represents far more than a simple engineering conundrum. Rather, it reflects harmful worldviews which, if not properly addressed, can wreak havoc on entire populations.
A key example of the societal roots of this problem is heart disease risk prediction. Cardiovascular risk calculators such as the Framingham Risk Score may deliver reliable results for Caucasian patients, but they do little for African Americans. This disparity stems from underrepresentation in genomic and genetic research, which has long been dominated by Caucasian subjects. Similarly, women of childbearing age were long shut out of medical studies.
Take these existing problems and add an algorithm that streamlines the research process, and it’s easy to see how biases can quickly create huge problems. Bias can creep in at any point in the process and tends to have a trickle-down effect that impacts the remainder of research efforts. Hence the importance of including people from a variety of backgrounds on data science teams. Proactive measures for promoting diversity, equity, and inclusion in health care and research will also be critical moving forward.
AI solutions may be efficient, but there’s no true replacement for the level of empathy that human health care providers bring to every interaction with their patients. We tend to underestimate the extent to which human connection influences long-term outcomes, but a paper from the journal AI & Society suggests that it’s foolish to neglect this element of the health care experience. According to Berkeley Public Health bioethics professor Jodi Halpern, empathy plays a huge role in diagnostic accuracy, as patients are far more likely to disclose sensitive but critical information to trusted doctors who demonstrate genuine concern.
Halpern adds that strong relationships with empathetic doctors also make patients more likely to stick with treatment protocols and use positive coping mechanisms when they receive bad news. She fears that reliance on AI could erode the expectation of building empathetic relationships between doctors and patients. As such, she recommends that AI be used when it’s practical, but never to “replace primary doctor-patient relationships as sources of therapeutic empathy.”
Artificial intelligence provides a great deal of potential for a health care industry in dire need of every technical advantage possible. Still, it would be foolish to ignore the many risks associated with AI. If these are addressed head-on, issues such as algorithmic bias and adversarial attacks can be avoided. The result could be a more equitable and secure health care system — and a brighter future for those who have previously been underserved.
Do you suspect that a health care provider’s misapplication of artificial intelligence in medicine caused you or a loved one unnecessary suffering? Or have other forms of modern technology been used negligently as you pursued medical treatment?
At Regan Zambri Long PLLC, we understand both the opportunities and negative implications of cutting-edge technology. We can help you secure medical malpractice damages. Contact us to learn more about your options.