Review Article
Creative Commons, CC-BY
Ethical Considerations in AI Applications for Biomedical Science and Patient Care
*Corresponding author: Bolanle Abimbola, University of Oviedo, C. San Francisco, Spain.
Received: November 22, 2024; Published: December 02, 2024
DOI: 10.34297/AJBSR.2024.24.003269
Abstract
This paper discusses the ethical issues raised by the use of AI in biomedical science and patient care. It reviews the potential benefits and challenges of AI adoption, especially regarding data privacy, algorithmic bias, and accountability in decision-making, and examines the evolving regulatory landscape and legal responsibilities associated with AI in healthcare. Case studies highlight real ethical dilemmas and underline the need for proactive ethics assessment, ongoing monitoring, and interdisciplinary collaboration. The paper concludes with a way forward for ethical AI, including a standardized set of moral guidelines, a robust data governance framework, and explainable AI methods.
Keywords: Artificial intelligence, Healthcare, Ethics, Bias, Privacy, Accountability, Regulation, Explainable AI, Interdisciplinary collaboration
Introduction
Artificial Intelligence is rapidly reshaping biomedical science and patient care, affording unprecedented opportunities in diagnosing, treating, and preventing disease. AI algorithms can study vast volumes of data, recognize patterns within them, and predict a wide range of medical outcomes with remarkable precision, contributing to better patient care and more seamless healthcare systems. At the same time, applying AI in basic biomedical science and direct patient care raises serious ethical issues. This paper examines the opportunities and challenges of AI in health, the related ethical considerations (data privacy, algorithmic bias, and accountability in decision-making processes), and the regulatory and legal landscape surrounding AI in health. Finally, the paper outlines how AI can be designed and implemented more ethically, underlining that this requires interdisciplinary collaboration and continuous oversight so that AI technologies are rolled out with morally and socially commendable values.
The Role of AI in Biomedical Science
Artificial intelligence today is being used in a wide range of biomedical science applications, including:
Diagnostics
AI algorithms can interpret medical imaging tests, such as X-rays and MRI scans, to identify abnormalities (Figure 1).
Personalized Medicine
AI can analyze patient data at the individual level and predict treatment responses.
Drug Development
AI can accelerate the identification of drug candidates and the evaluation of their efficacy and safety.
Patient Monitoring
AI-powered wearable devices track vital signs and trigger warnings about a patient's possible health issues (Figure 2). AI algorithms, especially Convolutional Neural Networks (CNNs), are game changers in medical image analysis. Loosely inspired by the human visual system, they are very good at identifying patterns and features in images. They apply successive sets of filters to an image, extracting relevant features at each layer; these features are then used to classify the image, for example, identifying a tumor in an X-ray or detecting signs of diabetic retinopathy in a retinal scan. Applications such as these could lead to massive improvements in patient outcomes, significant reductions in medical errors, and enhancements in healthcare's overall efficiency. Yet weighing the ethical implications of AI in healthcare will be crucial to ensuring that patients benefit safely and that moral codes are not violated.
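To make this pipeline concrete, the following is a minimal sketch of such a classifier in PyTorch. It is illustrative only: the grayscale 128x128 input, the layer sizes, and the binary tumor/no-tumor label are assumptions, not a validated clinical model.

```python
# Minimal sketch of a CNN classifier for medical images (PyTorch).
# Assumptions: grayscale 128x128 inputs, binary label (e.g., tumor / no tumor).
# Layer sizes are illustrative, not tuned for any real dataset.
import torch
import torch.nn as nn

class MedicalImageCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # filters extract low-level features (edges)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper filters capture textures/shapes
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, num_classes),         # map extracted features to class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = MedicalImageCNN()
scan = torch.randn(1, 1, 128, 128)   # one synthetic "scan" standing in for a real image
logits = model(scan)
print(logits.softmax(dim=1))         # class probabilities, e.g., P(abnormal)
```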
Ethical Challenges in AI Implementation
Data Privacy and Patient Consent
AI algorithms in healthcare require vast amounts of patient data to learn from and make predictions. This includes Electronic Health Records (EHRs), genomic information, and even lifestyle information collected by wearables, all of which is sensitive and private. Patient privacy must therefore be guaranteed [1]: tight data security must prevent breaches and unauthorized access. Just as importantly, consent to use data should be informed; how data is accessed, by whom, and for what purpose are critical questions on which patients should be able to raise concerns. Data ownership and control have emerged as further issues with the growing use of AI in health, and there is sustained tension between using data to advance healthcare and protecting patient privacy.
Algorithmic Transparency and Decision-Making Accountability
Most AI algorithms, particularly deep learning models, are 'black boxes': even experts cannot fully trace their inner mechanics or decision-making processes [2,3]. A deep neural network designed to predict patient outcomes, for instance, may have many layers of interconnected nodes that obscure the logic behind its predictions. This lack of transparency creates fundamental accountability concerns, because it is hard to attribute responsibility and fix liability when adverse events occur, and it complicates the identification of potential biases in the system. Consequently, Explainable AI (XAI) has emerged as a field of investigation directed toward making AI decision-making more transparent and understandable. Full explainability, however, remains a distant goal, particularly for complex models.
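XAI spans many techniques; one simple, model-agnostic example is permutation importance, which measures how much a model's accuracy drops when each input feature is shuffled. The sketch below uses scikit-learn on synthetic data with hypothetical feature names; explaining deep models in practice usually requires richer methods such as saliency maps or SHAP.

```python
# Minimal XAI sketch: model-agnostic permutation importance (scikit-learn).
# Synthetic data and hypothetical feature names, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # columns: age, blood_pressure, noise
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by the first two features

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["age", "blood_pressure", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```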
Potential Biases in AI Systems
Artificial Intelligence learns from whatever data it encounters during training. That data may well reflect existing biases in society or healthcare systems, in which case the AI system will perpetuate and amplify those biases, producing unequal treatment and outcomes among different patient groups and compounding existing health disparities. Algorithm development must therefore take diversity and representativeness of data seriously (Table 1); see the sketch below.
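One concrete, low-tech safeguard is to audit how subgroups are represented in the training data relative to the population the model will serve. A minimal sketch, with all group names and numbers invented:

```python
# Minimal sketch: audit subgroup representation in training data against
# the served population. Group names and percentages are made up.
training_counts = {"group_a": 8200, "group_b": 1300, "group_c": 500}
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    train_share = count / total
    gap = train_share - population_share[group]
    flag = "UNDERREPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: train={train_share:.1%} population={population_share[group]:.1%} {flag}")
```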
Regulatory and Legal Considerations
The rapid development of AI and its integration into health have produced a very complex regulatory landscape globally [4]. Legal frameworks were generally drafted before such advanced technologies existed, and they struggle to address the unique challenges AI has raised, leaving developers, practitioners, and patients operating under uncertainty.
Current Regulations Governing AI in Healthcare
In the United States: The FDA regulates AI-driven medical devices and provides guidance concerning Software as a Medical Device, including AI-powered diagnosis and treatment recommendations. It focuses heavily on safety, effectiveness, and rigorous clinical validation that addresses bias and maintains transparency. Separately, the Health Insurance Portability and Accountability Act (HIPAA) standardizes the privacy and security protection of patient health information [5,6]. It requires security measures within AI systems, patient consent for the use of their data, and controlled access to patient health information.
In Europe: The General Data Protection Regulation (GDPR) establishes a high standard for safeguarding personal data and privacy. Its principles of transparency, purpose limitation, and data minimization are essential cornerstones for AI applications in the health sector, which process large volumes of patient data. Beyond legislation, ethical policies and guidelines have been issued by bodies ranging from the WHO to national medical societies. These policies ground the ethical assessment of AI in healthcare in the principles of beneficence, non-maleficence, autonomy, and justice (Table 2).
Compliance Challenges for AI Developers and Practitioners
There are significant challenges in navigating this dynamic regulatory landscape.
Dynamic Nature of AI: AI algorithms are continuously updated and retrained, so keeping them compliant with ever-changing regulations is a burden [7,8]. Proactive regulatory compliance requires developers to stay current with the latest guidelines and design their systems accordingly.
Lack of Harmonization: AI regulations in healthcare often differ across jurisdictions, which burdens companies operating across borders and raises the complexity and cost of compliance.
Legal Responsibilities and Liabilities: Many vital questions about legal responsibility and liability in the event of errors or other adverse events arise when applying AI to healthcare. Determining who is liable when an AI system makes an improper diagnosis or proposes inappropriate treatment is intricate and carries high legal and ethical stakes. The line of responsibility blurs when AI intervenes: does responsibility rest with the algorithm developer, the health institution that deployed the system, or the clinician who relied on the AI's recommendation? AI also raises the problem of product liability: if an AI-powered system malfunctions and harms a patient, who compensates that patient? This becomes even more complicated given that algorithms change and learn continuously, which may make it impossible to establish a fixed point of responsibility.
Balancing Innovation and Responsibility
The rapid development of AI in health demands a delicate balancing act: capturing its potential to transform healthcare while ensuring that this crucial technology is developed and deployed ethically and responsibly [9]. This requires active debate across many dimensions: patient interests, developer responsibility, and consequences for society.
Strategies for Ethical AI Design and Implementation
Embed Ethics from the Beginning: Ethical considerations should be part of every step in the AI life cycle, from design and development to deployment and assessment. This involves thorough ethical impact assessments, identification of risks, and the building of mitigation strategies.
Ensure Data Privacy and Security: Implement sound data governance mechanisms that protect patient privacy and comply with the relevant data protection regulations.
Role of Interdisciplinary Teams in Creating Fair Algorithms: An interdisciplinary team is a prerequisite for developing just and responsible algorithms, especially in a domain as sensitive as healthcare. The following points break down their importance and how they help mitigate bias and promote ethics in AI.
Diverse Perspectives: AI algorithms do not emerge from a vacuum. They tend to carry the biases, suppositions, and beliefs of those creating the technology. An interdisciplinary team brings together individuals from different backgrounds: computer scientists, healthcare professionals, social scientists, and legal experts who can challenge these biases and ensure that algorithms are developed from a broader range of perspectives.
Domain Expertise: Each discipline brings unique knowledge and expertise to the table. Clinicians understand the practical challenges real patients face, knowledge necessary to ensure any AI solution is relevant and beneficial in clinical practice [10-12]. Social scientists can help identify how social and cultural factors introduce bias into health data and algorithms. Legal experts ensure AI systems adhere to all relevant regulations and legal frameworks.
Holistic Approach: AI development needs to be holistic; it must consider not just the technical but also the social, ethical, and legal aspects. Interdisciplinary teams can nurture a deeper understanding of the challenges and opportunities of AI in healthcare.
Enhanced Problem-Solving: Diverse teams do a better job of identifying problems and solving complex ones. The various skills and perspectives in interdisciplinary teams can devise highly innovative and effective solutions.
How Interdisciplinary Teams Contribute to Fair Algorithms
Identifying and Mitigating Bias:
a) Data Diversity: Teams can ensure that the training data they use is representative and diverse, reducing the risk of biased outcomes from AI systems.
b) Bias Detection: Multidisciplinary experts can provide valuable input on possible sources of bias in algorithms, whether in the data, the algorithm itself, or the surrounding social context.
c) Fairness Metrics: Incorporating fairness metrics into AI development helps teams understand how an algorithm would impact the different groups involved (see the sketch after this list).
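For instance, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates) can be computed directly from validation predictions. A minimal sketch on synthetic data, with the two-group split and rates invented for illustration:

```python
# Minimal sketch: two common group-fairness metrics computed by hand.
# Synthetic labels/predictions; in practice these come from a validation set.
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    # Difference in true-positive rates (recall) between the two groups.
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return tpr(0) - tpr(1)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
# A toy "model" that flags group 0 slightly more often, simulating bias.
y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

print("demographic parity diff:", demographic_parity_diff(y_pred, group))
print("equal opportunity diff:", equal_opportunity_diff(y_true, y_pred, group))
```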
Promoting Ethical Design and Implementation:
a) Ethical Frameworks: Ethics experts and social scientists can help inform the development of AI systems based on moral principles grounded in societal values.
b) Transparency and Explainability: Teams can work together to make AI systems more transparent and to explain the decisions they make, instilling trust and accountability.
Importance of Ongoing Oversight and Ethical Committees
The dynamic nature of AI requires auditing and constant review [13]. Ethics committees and review boards should establish policies through which AI systems are harnessed for responsible use [14]. By reviewing case studies of deployed applications, these committees can watch for unintended outcomes, deliberate on emerging ethical challenges, and recommend improvements.
Case Studies and Real-world Ethical Dilemmas
Real-life case studies illustrate the ethical considerations involved in translating AI into real-world healthcare settings. The cases discussed here highlight that balancing innovation and ethics requires careful planning, continuous monitoring, and proactive steps toward resolving ethical conflicts.
Biased Risk Assessment Algorithms
For example, a study in Science reported that a widely used algorithm for predicting health needs significantly underestimated the health risks of Black patients relative to white patients with comparable conditions [15,16]. Because the algorithm relied on healthcare costs as a proxy for health needs, it effectively preserved existing racial disparities in access to care.
Lessons Learned: Data diversity and representation are cardinal, and bias detection and mitigation must be carried out systematically. Algorithms must be trained on diverse datasets representative of the populations they will serve.
Data Sharing Raises Privacy Concerns
An app to detect renal failure, developed in cooperation between Google DeepMind and the Royal Free London NHS Foundation Trust, raised severe privacy concerns. The project shared data from 1.6 million patients without explicit consent [17,18], leaving serious doubts about who owned the data, how transparent the arrangement was, and whether sensitive health information might be misused.
Lessons Learned: Transparency and consent lie at the heart of data-sharing agreements. Such agreements should respect patient privacy from the outset: every patient should be informed of the purpose for which their data is used and given the opportunity to consent. Returning to the earlier case study on biased risk assessment: an algorithm of that kind would typically use a machine learning model, such as logistic regression or a random forest, to predict health needs from patient data. These models extract statistical patterns from the data to make predictions, but in doing so they may also propagate the biases the data contains; the sketch below reproduces this proxy effect on synthetic data.
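When a model is trained to predict spending rather than illness, a group with reduced access to care (and therefore lower spending at the same level of illness) receives systematically lower risk scores. A hedged sketch of that dynamic, with all numbers invented:

```python
# Minimal sketch reproducing the proxy problem: training on healthcare COST
# instead of NEED under-ranks a group with reduced access to care.
# All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)        # 1 = group with reduced access to care
illness = rng.normal(size=n)              # true health need, same in both groups
access = np.where(group == 1, 0.5, 1.0)   # reduced access -> less utilization/spending
visits = access * (illness + 2) + rng.normal(scale=0.2, size=n)  # observed utilization
cost = access * (illness + 2) + rng.normal(scale=0.2, size=n)    # spending tracks access

X = np.column_stack([visits])             # the model sees utilization, not true illness
model = LinearRegression().fit(X, cost)   # label is cost: the flawed proxy
risk_score = model.predict(X)

# At the same true illness level, the low-access group gets lower "risk" scores:
sick = illness > 1.0
print("mean score, group 0 (sick):", round(risk_score[sick & (group == 0)].mean(), 2))
print("mean score, group 1 (sick):", round(risk_score[sick & (group == 1)].mean(), 2))
```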
Approaches to Resolving Ethical Conflicts: Resolving ethical conflicts in AI applications involves addressing many aspects, including:
Proactive Ethical Assessment: Extensive ethical reviews of AI systems should be undertaken to identify potential risks and develop mitigation strategies.
Interdisciplinary Collaboration: Experts in ethics, law, healthcare, and computer science should be involved in developing guidelines on ethical AI and its application.
Public Engagement and Dialogue: Comprehensive public communication and discussion on the ethical implications of AI in health, including diversified perspectives, should be facilitated.
Future Directions of Ethical AI
As AI technologies evolve and become further integrated into health systems, new ethical challenges will arise that demand ongoing reflection, discussion, and adaptation [17,18]. Proactive engagement with these challenges is necessary to ensure that AI development and implementation remain responsible and beneficial. Some critical future directions in ethical AI for biomedical science and patient care follow.
Ethical Considerations for Emerging AI Applications
Future work should anticipate the ethical burden of newer AI applications, such as prosthetics, brain-computer interfaces, and gene-editing technologies, through proactive ethics discourse and the development of guidelines addressing their unique challenges (Table 3).
Robust Frameworks for Data Governance and Privacy
Future efforts should build all-inclusive data governance policies that address ownership, access, sharing, and security concerns [18]. This may involve studying new data de-identification and anonymization methods and providing secure data-sharing platforms that protect patient privacy while enabling collaboration in research and development; a minimal sketch of pseudonymization appears below.
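As one small illustration, pseudonymization with a keyed hash lets records be linked across datasets without exposing raw identifiers. The sketch below uses only the Python standard library; the field names, the record, and the generalization rules are invented, and real de-identification regimes (e.g., HIPAA Safe Harbor) require far more.

```python
# Minimal sketch of record pseudonymization before data sharing.
# Stdlib only; field names and the record are invented.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"   # held by the data custodian only

def pseudonymize_id(patient_id: str) -> str:
    # Keyed hash: stable for linkage across datasets, irreversible without the key.
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def deidentify(record: dict) -> dict:
    out = dict(record)
    out["patient_id"] = pseudonymize_id(out["patient_id"])
    out.pop("name", None)              # drop direct identifiers
    out["age"] = min(out["age"], 90)   # cap extreme ages (re-identification risk)
    out["zip"] = out["zip"][:3] + "XX" # generalize geography
    return out

record = {"patient_id": "MRN-001", "name": "Jane Doe", "age": 93, "zip": "33101", "dx": "I10"}
print(deidentify(record))
```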
Algorithmic Bias and Fairness
In the future, effort needs to be channeled into sophisticated techniques for bias detection and mitigation, fairness metrics integral to the AI development process, and the use of more representative, diverse datasets. This also means addressing the social and cultural causes of biased health data and algorithms.
Recommended Practices for Developers and Clinicians
a) Emphasize a Human-Centered Approach to AI Development: Center the patient's well-being and ethics at the heart of the AI development lifecycle.
b) Continuous Ethical Reflection and Learning: Stay aware of current ethical guidelines on technology and take part in the ongoing debate on the moral issues of AI in health.
How AI Can Exacerbate Health Disparities
a) Biased Data: AI algorithms learn from the data they see during training. If this data reflects existing biases in healthcare access, treatment, or outcomes, the AI system can perpetuate and magnify these disparities (Figure 3). For example, an algorithm trained on data from a population receiving limited preventative care might misinterpret symptomatology or underestimate the risk of specific diseases in underserved populations.

Figure 3: AI algorithms can perpetuate existing disparities in healthcare access if they are trained on biased data or reflect societal inequalities.
b) Lack of Diversity in Development: If the teams developing AI algorithms lack diversity, their unconscious biases might unknowingly inform how these systems are designed and implemented. This could lead to less accurate or effective algorithms for specific populations.
c) Unequal Access to AI Technologies: The benefits of AI in healthcare might not be equitably available to all, widening the gap between those who have access to advanced technologies and those who do not. This could result in further disparities in health outcomes.
How AI Can Mitigate Health Disparities
a) Identifying and Addressing Bias: AI can be used to analyze large datasets and identify patterns of bias in healthcare delivery. This can raise awareness of disparities and help inform interventions to address them.
b) Improving Access to Care: AI-powered telehealth platforms and remote monitoring tools can improve access to care for underserved populations, especially those in rural or remote areas.
c) Personalized Interventions: AI can analyze patient data to identify those at the highest risk of given conditions and help tailor appropriate interventions, reducing disparities in prevention and treatment.
d) Empowerment of Patients: AI-driven technologies can deliver personalized information and support to patients so they can make evidence-based decisions about their health and navigate the healthcare system.
Conclusion
AI integrated into biomedical science and patient care can revolutionize healthcare as we know it. From accelerating drug discovery and enabling personalized medicine to increasing diagnostic precision, AI can change how we prevent, diagnose, and treat diseases. Yet this potential cannot be fully realized without confronting the array of ethical issues such a powerful technology raises. Lasting success for AI in health requires sustained engagement with ethical values and best practices. With transparency, accountability, and fairness, the transformative power of AI can be harnessed for good: improving patient care, furthering medical knowledge, and building a more just and ethical healthcare system for all.
Acknowledgements
None.
Conflict of Interest
None.
References
- CHARI (2024) Understanding Bias in Artificial Intelligence Algorithms Intended for Clinical Care. Stanford.
- Moustafa Abdelwanis, Hamdan Khalaf Alarafati, M Saleh, M Can (2024) Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis. Journal of Safety Science and Resilience 5(5).
- T Hagendorff (2020) The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 30(1): 99-120.
- M Ryan, BC Stahl (2020) Artificial intelligence ethics guidelines for developers and users: Clarifying their content and normative implications. Journal of Information, Communication and Ethics in Society 19(1).
- MD Dubber, F Pasquale, S Das (2024) The Oxford Handbook of Ethics of AI. Oxford University Press.
- Nithesh Naik, BM Zeeshan Hameed, Dasharathraj K Shetty, Dishant Swain, Milap Shah, et al. (2022) Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg 9: 862322.
- B van Giffen, D Herhausen, T Fahse (2022) Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. J Business Res 144(1): 93-106.
- S Wachter, B Mittelstadt, C Russell (2021) Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review 41: 105567.
- I Habli, T Lawton, Z Porter (2020) Artificial intelligence in health care: Accountability and safety. Bull World Health Organ 98(4): 251-256.
- S Fazelpour, D Danks (2021) Algorithmic bias: Senses, sources, solutions. Philosophy Compass 16(8).
- E Ferrara (2024) GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Journal of Computational Social Science.
- Andy Nguyen, Ha Ngan Ngo, Yvonne Hong, Belle Dang, Bich Phuong Thi Nguyen (2023) Ethical framework for Artificial Intelligence and Digital technologies. International Journal of Information Management 62(1): 102433.
- Jessica Morley, Caio CV Machado, Christopher Burr, Josh Cowls, Indra Joshi, et al. (2020) The ethics of AI in health care: A mapping review. Soc Sci Med 260: 113172.
- T Grote, P Berens (2020) On the ethics of algorithmic decision-making in healthcare. J Med Ethics 46(3): 205-211.
- C Wang, S Liu, H Yang, J Guo, Y Wu, J Liu, et al. (2023) Ethical Considerations of Using ChatGPT in Health Care. J Med Internet Res 25(1): e48009.
- A Basu (2023) Use of race in clinical algorithms. Science Advances 9(21).
- M Schedl, VW Anelli, E Lex (2024) Transparency. The Springer International Series on Information Retrieval: 69-102.
- H Zohny, J McMillan, M King (2023) Ethics of generative AI. J Med Ethics 49(2): 79-80.