The healthcare industry, like all industries, is experimenting with AI tools. As we have commented before, the legal risks that accompany the use of AI tools apply across industries, and organizations should give careful consideration to mitigating them.

Another consideration for the healthcare industry was recently thoughtfully outlined by Carrie Pallardy of InformationWeek in her post entitled “How AI Ethics Are Being Shaped in Health Care Today.” She posits that as AI is used in health care decisions, there is a “clear potential for harm.” Although a study in JAMA Internal Medicine found that ChatGPT outperformed physicians in answering patients’ questions and could “ease the burden on clinicians and [make] patient care better,” her interviews with providers led her to the conclusion that the use of AI tools may harm patients. One of her interviewees concluded: “Will patient harm be inevitable? Yes, the question is how much.”

Those in the healthcare industry who are contemplating the use of AI tools in the clinical setting should be aware of a number of resources Pallardy lists, including guidelines from the European Union, the American Medical Association, the World Medical Association, the World Health Organization, and the Coalition for Health AI. All of these publications should be considered when determining how to govern the use of AI tools in a clinical setting. Pallardy concludes, and I wholeheartedly agree, that the development of AI tools is far outpacing the ability of organizations and regulators to monitor, put guardrails around, evaluate, and implement appropriate regulation. This leaves the governance and ethical considerations of the use of AI tools in the healthcare industry largely with healthcare organizations themselves. That is all the more reason for healthcare organizations to lead the effort now to determine the appropriate strategy, ethical constraints, and governance of AI tools in patient care for the well-being of patients.

