As the world of technology continues to change, the use of Artificial Intelligence (AI) has become inevitable, and the health sector has become part and parcel of the transformation.
While a variety of ethical concerns surround the implementation of AI in healthcare, it has been argued that the technology can potentially enable solutions to some of the challenges faced by healthcare systems around the world.
AI generally refers to a computerized system (hardware or software) that is equipped with the capacity to perform tasks or reasoning processes that we usually associate with the intelligence level of a human being.
Role of AI in health care
In healthcare, AI has the potential to drive scientific breakthroughs, aid in diagnosis, treatment, and management of diseases, accelerate drug development and delivery, control costs, and support health equity.
The usage of AI in healthcare involves a variety of technologies that allow machines to accomplish tasks that normally entail human intelligence, such as problem-solving, learning, and decision-making.
Artificial intelligence may help scientists discover genetic differences that explain why, for example, some people with high blood pressure respond well to a particular medication while others do not.
It can also aid in explaining why certain people experience specific side effects when they undergo a treatment that’s generally well-tolerated.
Researchers, on the other hand, can use AI to discover new therapeutic uses for existing medicines.
Researchers at eprints argue that a challenge, and a prerequisite, for implementing AI systems in healthcare is that the technology meet quality expectations so that it supports healthcare professionals in their practical work: having a solid evidence base, being thoroughly validated, and meeting requirements for equality.
Considering that health care is fundamentally about people and their lives, it has become critical for the implementation of AI in healthcare to be sensitive to ethics.
Ethical considerations for implementing AI in health care
While there is no generally recognised ethical framework, a number of ethical principles are important in developing AI in health care.
These include fairness, transparency, trustworthiness, accountability, privacy, and empathy.
Fairness
According to Bukowski et al. (2020), AI health systems must ensure that access to health care is equitable, so as not to contribute to health disparities or discrimination.
This implies that AI models should be trained on appropriate and representative datasets so that they reduce biases, make accurate clinical predictions, avoid discrimination, and give a fair representation of the facts.
Governing bodies and healthcare institutions should develop normative standards for AI in healthcare which should inform how AI models will be designed and deployed in the healthcare context.
AI design should thus ensure that fair processes and fair allocation of resources are applied consistently, with a view to safeguarding against adversarial attacks or the introduction of biases or errors through self-learning or malicious intent.
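One simple way to act on the call for representative training data is to audit how each patient group's share of the dataset compares with its share of the target population. The sketch below is illustrative only, assuming records are plain dictionaries and the attribute names and population figures are hypothetical:

```python
from collections import Counter

def representation_gap(records, attribute, population_shares):
    """Compare a group's share of the training set against its
    expected share of the patient population.

    records: list of dicts, e.g. [{"sex": "F"}, ...]
    attribute: the field to audit, e.g. "sex"
    population_shares: expected shares, e.g. {"F": 0.51, "M": 0.49}
    Returns {group: dataset_share - population_share}; a negative
    value means the group is under-represented.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in population_shares.items()}

# Hypothetical dataset that under-represents female patients
records = [{"sex": "M"}] * 80 + [{"sex": "F"}] * 20
gaps = representation_gap(records, "sex", {"F": 0.51, "M": 0.49})
# gaps["F"] is -0.31: female patients are 31 points under-represented
```

An audit like this flags skew before training; deciding what gap is acceptable, and how to correct it, remains a clinical and governance question rather than a purely technical one.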
Transparency
Transparency has often been referred to as a key challenge for acceptance, regulation, and deployment of AI in healthcare.
It focuses on the ability to explain and verify the behaviour of AI algorithms and models, according to Blobel et al. (2020), who further assert that transparency and explanations of clinical decisions are essential for medical imaging analysis and clinical risk prediction.
Where patient data may be shared with AI developers, there must be a process to seek fully informed consent from patients; where seeking approval is impractical, the data must be anonymized so that individual patients cannot be identified by the developers.
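In practice, anonymizing a record before it reaches developers means dropping direct identifiers and replacing any stable patient ID with a one-way pseudonym. The following is a minimal sketch, not a complete de-identification scheme; the field names and the salted-hash approach are assumptions for illustration:

```python
import hashlib

# Hypothetical direct identifiers to strip before sharing
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}

def anonymize(record, salt):
    """Return a copy of a patient record with direct identifiers
    removed and the patient ID replaced by a salted one-way hash,
    so developers cannot link the record back to an individual."""
    out = {k: v for k, v in record.items()
           if k not in DIRECT_IDENTIFIERS}
    out["patient_id"] = hashlib.sha256(
        (salt + str(record["patient_id"])).encode()).hexdigest()[:16]
    return out

record = {"patient_id": 1042, "name": "Jane Doe",
          "address": "12 Elm St", "diagnosis": "hypertension"}
shared = anonymize(record, salt="site-secret")
# 'name' and 'address' are gone; the diagnosis and a pseudonymous ID remain
```

Note that stripping identifiers alone does not guarantee anonymity; rare diagnoses or small populations can still re-identify patients, which is why governance processes must sit alongside any technical measure.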
Trustworthiness
To build trustworthiness in AI, both clinicians and patients need to be engaged.
Clinicians must explain how AI works and what its advantages and limitations are, and patients need to be willing to accept AI and engage in AI-driven healthcare (Reddy et al., 2020).
To address the lack of trust from both clinician and patient perspectives, the authors further propose a governance model with a multipronged approach that includes technical education, health literacy, and clinical audits.
In relation to clinicians' trust, AI applications need to be designed to respect the autonomy of patients (Bukowski et al., 2020).
Accountability
The principle of accountability encompasses safety, whereby consideration is given so that the AI system does not cause harm or danger to its users or third parties (Blobel et al., 2020).
In AI development, this implies that it must be possible to account for a system's actions and to assign responsibility for them.
Challenges to implementing AI in health care
While AI presents vast opportunities, there are also vast challenges associated with implementing AI, especially in health care.
A study published in BMC Health Services Research, carried out in Sweden, highlighted several implementation challenges in relation to AI within and beyond healthcare systems in general and in organisations.
The challenges comprised conditions external to the healthcare system, internal capacity for strategic change management, along with transformation of healthcare professions and healthcare practice.
The study results pointed to the need to develop implementation strategies across healthcare organisations to address challenges to AI-specific capacity building.
While laws and policies are needed to regulate the design and execution of effective AI implementation strategies, the research also highlighted the need to invest time and resources in implementation processes, in collaboration across healthcare organisations, county councils, and industry partners.