Blog posts

The ethics of artificial intelligence in healthcare
Published on 22 July 2023

Artificial intelligence holds great promise in various sectors, including healthcare. As the use of artificial intelligence (AI) technologies continues to infiltrate the healthcare space, the security of patient information, electronic health records (EHRs) and communications related to patients’ medical care has become an increasingly critical focus point for healthcare organizations.

The proliferation of AI in healthcare has brought with it many questions about AI ethics that are being hotly debated in academia, industry, and the media.

As the healthcare industry moves, albeit slowly, away from paper-based documentation, AI technology and machine learning programs will be increasingly relied upon to capture and manage highly sensitive patient information.

While we have all seen some of the amazing things that AI technology has achieved, such as ChatGPT, there are drawbacks beyond questions about security risks.

One fact that may come as a surprise to some is that the data used to train the chatbot has not been updated in over 18 months.

Different perceptions of ethics

There has been a growing chorus of voices calling for a globally binding standard. The European Commission’s expert group on artificial intelligence presented a set of Ethics Guidelines in late 2018.

In June 2023 the European Parliament agreed to a first draft of the law on artificial intelligence.

Numerous other guidelines and expert opinions exist on the issue. However, a systematic analysis conducted by the Health Ethics and Policy Lab at ETH Zurich, the Swiss federal institute of technology, found that no single ethical principle was common to all 84 reviewed documents on ethical AI.

Given the considerable differences between the ethical principles of individual states, it is uncertain whether common international AI regulations will ever be agreed upon. However, five principles were mentioned in more than half of the 84 reviewed documents: transparency; justice and fairness; preventing harm; responsibility; and data protection and privacy.

Heated discussions about AI are also reported at tech giants like Google. In 2021, the company fired two ethics experts following a dispute, a decision that calls into question whether a moral code surrounding AI is really a priority for Big Tech.

The need to establish guidelines and enforce compliance

One way that organizations can optimize their AI strategy is to establish definitive guidelines and enforce compliance with them. By setting specific policies on the strategic and ethical use of patient data that has been generated using AI technologies, health plans and the companies they partner with can then train their workforce to ensure a secure and efficient internal data management process.

It’s also important to recognize that, as the collection of data becomes more automated and less human-managed, AI strategies that are implemented incrementally will reap the benefit of addressing snags at early stages rather than potentially too late.

Establishing achievable milestones that each occur in relatively short time frames will likely produce better long-term results as this allows time to identify and resolve issues before they can turn into bigger problems.