Implementing Artificial Intelligence in Health Care: Data and Algorithm Challenges and Policy Considerations

Artificial Intelligence (AI) is already driving fundamental changes in health care operations and patient care, and is showing promise to significantly advance the quadruple aim: enhancing the patient experience, improving population health, reducing costs, and improving the provider experience. However, along with these impressive advances and the realistic potential to transform health care in the near term come thorny questions about liability and accountability, algorithmic bias and representative data, and the ability to accurately interpret and explain data. We address the challenges of implementing AI in routine health care practice by looking at the role of data and algorithms and the implications for medical malpractice. We then summarize ongoing efforts to create AI policy and regulation globally to address these challenges in order to enable the mainstreaming of AI in health care.

The responsibility of establishing guidelines and policies lies under the aegis of government and regulating bodies, ideally with input from key stakeholders that include physicians, administrators, public and private institutions and patients. Along with developing new and retooled laws and regulations, these bodies should design policies to encourage helpful innovation, generate and transfer expertise, and foster broad corporate and civic responsibility to address critical issues raised by AI technologies.
Even as AI systems become mainstream in health care, they remain susceptible to errors and failures. In a recent study, an AI system trained to learn which pneumonia patients had a higher risk of death inadvertently classified patients with asthma as being at lower risk. The system had no way to know that people with pneumonia and a history of asthma were admitted directly to the hospital and received treatment that significantly reduced their risk of dying; it simply learned that pneumonia combined with asthma correlated with a lower risk of death [7].
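The confounding at work here can be illustrated with a toy calculation (the numbers are invented, not from the study): when asthma patients systematically receive more aggressive care, their observed mortality drops, and a model trained only on outcomes concludes that asthma looks protective.

```python
# Toy illustration of outcome confounding (hypothetical numbers, not the
# study's data). Asthma patients routinely got aggressive care, so their
# observed mortality is lower even though their true risk is higher.
records = [
    # (has_asthma, aggressive_care, died)
    *[(True,  True,  False)] * 95,   # asthma -> aggressive care -> mostly survive
    *[(True,  True,  True)]  * 5,
    *[(False, False, False)] * 80,   # no asthma -> usual care -> higher mortality
    *[(False, False, True)]  * 20,
]

def mortality(rows):
    # fraction of patients in `rows` who died
    return sum(r[2] for r in rows) / len(rows)

asthma    = [r for r in records if r[0]]
no_asthma = [r for r in records if not r[0]]

# A model trained only on (has_asthma -> died) sees asthma as "protective":
print(round(mortality(asthma), 2))     # 0.05
print(round(mortality(no_asthma), 2))  # 0.2
```

The treatment variable, not asthma itself, drives the difference; a model that never sees the treatment pathway cannot distinguish the two.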
Another potential obstacle to AI going mainstream is the possibility of deliberately biasing [8] the data to cause misdiagnosis. Adversarial attacks engineer a model's inputs so that it misclassifies them. Image-heavy specialties such as dermatology, ophthalmology and radiology are particularly susceptible, and the financial incentives involved are enormous [9]. For example, under CMS guidelines, insurance companies must pay for vitrectomy surgery when a diagnosis of diabetic retinopathy is confirmed. Could an insurer, seeking to reduce the number of procedures without changing CMS policy, use adversarial noise to bias the positive images?
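A minimal sketch of the idea, using a toy linear classifier rather than a real deep network (the weights, features, and the deliberately exaggerated perturbation size are all hypothetical): an attacker who knows the model's parameters can add a small, targeted perturbation that flips the classification.

```python
# Toy adversarial perturbation against a linear classifier.
# score > 0 => "retinopathy present". The attack nudges each input
# feature against the sign of its weight (a gradient-sign-style attack).
weights = [0.9, -0.4, 0.7]   # hypothetical learned weights
image   = [0.8, 0.1, 0.6]    # features of a true-positive image

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

def perturb(x, eps):
    # move each feature opposite to its weight's sign; eps is the step size
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

assert score(image) > 0                 # classified positive
adversarial = perturb(image, eps=0.6)   # eps exaggerated for illustration
assert score(adversarial) < 0           # same image, flipped label
```

Real attacks on deep networks use perturbations small enough to be invisible to a human reader of the image, which is what makes them hard to detect.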
Ultimately, the impact on end-users depends on how they perceive, interpret and tolerate these shortcomings. As AI becomes more embedded in our daily lives, mistakes could have serious if not deadly consequences. Whether the mistake is an accident caused by a self-driving car [10] or a misdiagnosis of a health condition, AI systems are and will increasingly be put under heavy scrutiny.
In this paper, we address some of the challenges of implementing AI in routine health care practice by looking at the role of data and algorithms and the implications for medical malpractice. We then share ongoing efforts to create AI policy and regulation that address these challenges, in order to enable the mass adoption of AI in health care.

Challenges for Artificial Intelligence in Health Care
AI is becoming an increasingly advanced, sophisticated, and meaningful field, and its uses and implications are far-reaching. For example, researchers at Stanford recently developed an algorithm that can detect pneumonia from chest x-rays at a level exceeding the ability of practicing radiologists [11]. Scientists at Google also developed a deep learning approach that could predict inpatient mortality, unexpected readmissions, and long length of stay more accurately than existing models by mining data from the electronic health records [12].
At the heart of AI's value proposition is the ability to process vast amounts of data and then act on that data through algorithms using techniques such as machine learning and cognitive computing [13].
Harnessing the power of machines to process data much faster than any human being can, AI technologies have the potential to identify health care diagnoses, treatment plans and trends quickly by sifting through information and analyzing patient histories. These findings could support, inform, and enable physician decision-making.
Two factors that present challenges to implementing AI in health care are the role of algorithms and data and the potential ramifications for medical malpractice.

Role of Algorithms, Data
Computer algorithms are at the core of AI and are part of applications in education, financial services, health care, navigation, and manufacturing. These algorithms are being used to make health care decisions, such as prioritizing activities for staff members or triggering interventions for admitted patients, as was reported at Johns Hopkins Hospital [14]. In these situations, how can one trust the algorithm to do the right thing, every time? Take the case of 'Deep Patient', where researchers at Mount Sinai Hospital applied deep learning to 700,000 patient records. Without expert guidance, the tool identified patterns and predicted the onset of diseases such as liver cancer. It also predicted the onset of schizophrenia but offered no clue as to how it did so.
Schizophrenia is very difficult to predict even for the most experienced psychiatrist, and the system's inability to show how it arrived at its prediction is known as the "black box" problem [15]. So how can we trust such a system? Along with algorithms, AI relies heavily on patient data. Training a machine to identify a specific condition requires hundreds of thousands of data elements. Google, for example, is using 46 billion data points to predict the medical outcomes of hospital patients [16].
The challenge here is that this need for data runs up against current models of patient privacy, consent, and control. Implementing AI in health care could violate HIPAA (Health Insurance Portability and Accountability Act) policies [17].
Although HIPAA does not specifically address technologies such as AI, HIPAA was put in place to protect individuals' medical records and other personal health information. And there is a rapidly emerging move by consumers and patients to stop the "gold rush" mentality of industry to toss aside privacy and consent models [18].
Companies like Luna DNA are using blockchain-based models to empower consumers and patients to share their data while retaining control over it [19].
Patients are now understanding the importance of meaningful notice, consent, and control over their data, both HIPAA-covered data and data outside of HIPAA jurisdiction.

Implications for Medical Malpractice
As algorithms ingest large quantities of high-quality datasets from across the health care ecosystem, the use of AI will, over time, result in fewer misdiagnoses and errors [20]. The AI machine will be able to predict a diagnosis based on complex relationships between the patient and expected treatment results, without explicitly identifying or understanding those connections. We are now entering an era in which the medical decision-making burden shifts from the physician to an algorithm. What happens when the physician, relying on an algorithm, pursues an improper treatment that results in an error?
In the US, medical malpractice is a professional tort system that holds physicians liable when the care they provide to patients deviates from accepted standards so much as to constitute negligence or recklessness [21]. The system has evolved around the conception of the physician as the trusted expert, and presumes for the most part that the diagnosing or treating physician is entirely responsible for his or her decisions and thus accountable if the care provided is negligent or reckless. In England and Wales, medical liability falls under the law of tort, specifically negligence [22]. In cases of clinical negligence, it is general practice that National Health Service (NHS) Trusts and Health Authorities, rather than individual clinicians, are the bodies that are liable and are sued. The NHS has provided guidance that it will accept full financial liability where negligent harm has occurred, and will not seek to recover costs from the health care professional involved [23].
Who is liable for erroneous care based on a decision made by a machine is a hard problem to solve. Future malpractice guidelines should address this by covering not only health care professionals but also the software companies that created the algorithm.
In 2018, IDx-DR became the first and only FDA-authorized AI system used for the autonomous detection of diabetic retinopathy using deep learning algorithms [24]. IDx, the company that developed the system, carries medical malpractice and liability insurance [25]. The autonomous diagnostic AI is responsible for performing within specification for on-label use of the device, while in an off-label situation, the liability for an erroneous treatment decision typically would be with the physician using it off-label.
As AI evolves rapidly, the adoption and diversity of applications in medical practice are outpacing the establishment of the standards and guidelines needed to protect the health care community.
There is a need for key stakeholders from both the public and private sector to collaborate and recommend policy guidelines to enable the safe use of AI.

Ongoing Efforts in Creating AI Policy
A systematic search revealed multiple initiatives underway in governments, academia, international consortiums, organizations and technology companies where policy guidelines and recommendations are being identified to address both the data and algorithm components of AI. Subject matter experts in these groups include members from science, engineering, economics, ethics, regulation and policy. While the majority of these recommendations do not directly address the health care industry, many could be applicable to the use of AI in health care.
In the following table, we briefly summarize activities, in chronological order starting in 2016, that have gained significant momentum based on their impact, and note whether each consists of guidelines and recommendations intended to drive policy or measures enacted into legislation. We also call out whether these recommendations apply to health care (Table 1).

General Data Protection Regulation (GDPR)
The GDPR [26], adopted by the European Parliament in 2016 and in force since May 2018, moves beyond recitals [35] to address algorithmic discrimination through two key principles. The first, data sanitization, is the removal of special categories from datasets used in automated decision making. This principle is introduced by Article 9: Processing of special categories of personal data, which establishes a prima facie prohibition against "the processing of data revealing racial or ethnic origin" and other "special categories". It is strengthened under Article 22: Automated individual decision-making, including profiling, which specifically prohibits "a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her" that is "based on the special categories of personal data referred to in Article 9" (art. 22(2)).
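The data-sanitization principle amounts to a pre-processing step before records feed an automated decision system. A minimal sketch (the field names are hypothetical, and Article 9 contains health-specific exemptions that this toy ignores):

```python
# Sketch of GDPR-style "data sanitization": dropping Article 9 special
# categories from a record before automated decision making.
# Field names are hypothetical illustrations, not a GDPR-defined schema.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin",
    "religious_beliefs",
    "genetic_data",
    "biometric_data",
}

def sanitize(record):
    # keep only fields that are not special categories
    return {k: v for k, v in record.items() if k not in SPECIAL_CATEGORIES}

patient = {
    "age": 54,
    "postcode": "NW1",
    "racial_or_ethnic_origin": "…",
    "genetic_data": "…",
}
print(sorted(sanitize(patient)))  # ['age', 'postcode']
```

In practice sanitization is harder than dropping columns, since special categories can often be inferred from remaining fields (e.g. postcode as a proxy for ethnicity), which is one reason the regulation pairs this principle with transparency obligations.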
The second principle, algorithm transparency, introduces the "right to explanation" [30], whereby data subjects are entitled to "meaningful information about the logic involved, as well as the significance and the envisaged consequences" when automated decision making or profiling takes place (art. 13(2)(f); art. 14(2)(g)).

The One Hundred Year Study on Artificial Intelligence
In the fall of 2014, a long-term investigation of the field of AI and its influence on people, their communities, and society, called "The One Hundred Year Study on Artificial Intelligence", was commissioned [28]. The study considered the science, engineering, and deployment of AI-enabled computing systems. The study panel reviewed AI's progress in recent years, envisioned the potential advances that lie ahead, and described the technical and societal challenges and opportunities the field raises in areas of ethics, economics, and the design of systems compatible with human cognition.
To help address the concerns about the individual and societal implications of rapidly evolving AI technologies, the study panel offers three general policy recommendations [28]. The first is to define a path toward accruing technical expertise in AI at all levels of government. Effective governance requires more experts who understand and can analyze the interactions between AI technologies, programmatic objectives, and overall societal values. The second focuses on removing the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems. The third recommends increasing public and private funding for interdisciplinary studies of the societal impacts of AI, along with better understanding of national AI R&D workforce needs [29].

National Science and Technology Council
In 2016, the US National Science and Technology Council released a seven-point AI research and development strategy. For health care, its recommendation was to improve the explainability and transparency of AI algorithms: because many are based on deep learning, they are opaque to users, with few existing mechanisms for explaining their results. From a liability perspective, physicians need to know why a decision was suggested and need explanations to justify a diagnosis or a course of treatment.

Royal Statistical Society (RSS)
In April 2017, the Royal Statistical Society (UK) [30] submitted recommendations to the House of Commons Science and Technology Select Committee inquiry into the use of algorithms in decision-making. Its recommendations included setting up an independent data ethics council to provide advice to government and the public and private sectors on the use of algorithms.

Association of Computing Machinery (ACM)
In May 2017, the ACM US Public Policy Council and the ACM Europe Council Policy Committee issued a set of seven principles designed to address: (a) potential harmful biases generated by algorithms that are not transparent; and (b) biased input data used to train these algorithms [31]. These recommendations were not health care specific, but laid the foundation to provide context for what algorithms are, how they make decisions, and the technical challenges and opportunities to prevent and mitigate potential harmful bias.
The seven principles for algorithmic transparency and accountability are:
• Awareness of biases and the potential harm they can cause
• Access and redress for individuals who are affected
• Accountability for decisions made using algorithms
• Explanation of the procedures the algorithm follows and the decisions it makes
• Data provenance
• Auditability of models, algorithms, data and decisions
• Validation and testing of methods and results
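The auditability and data-provenance principles can be made concrete by wrapping a decision function so that every automated decision is recorded with its inputs and model version for later review. A minimal sketch (all names here are illustrative, not from the ACM text):

```python
# Sketch of auditability/provenance: every automated decision is logged
# with the model version and the inputs that produced it.
AUDIT_LOG = []

def audited(model_version):
    """Decorator that records each call to the wrapped decision function."""
    def wrap(decide):
        def inner(features):
            outcome = decide(features)
            AUDIT_LOG.append({
                "model": model_version,   # which model made the decision
                "inputs": features,       # data provenance for this decision
                "outcome": outcome,
            })
            return outcome
        return inner
    return wrap

@audited("triage-v1.2")   # hypothetical model identifier
def triage(features):
    return "urgent" if features["risk_score"] > 0.8 else "routine"

print(triage({"risk_score": 0.9}))   # urgent
print(AUDIT_LOG[0]["model"])         # triage-v1.2
```

A log of this shape gives an auditor exactly what the principles ask for: which algorithm decided, on what data, and with what result.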

Organization for Economic Co-operation and Development (OECD)
In September 2017, the OECD published a report on algorithms.

Group of Seven (G-7)
At its 2016 ministerial meeting on information and communication technology, the G-7 proposed setting up an international set of basic rules for developing AI [27].

The One Hundred Year Study on Artificial Intelligence (2016; US)
Military applications were deemed to be outside the scope of the study's initial report [28]. This is not to minimize the importance of careful monitoring and deliberation about the implications of AI advances for defense and warfare, including potentially destabilizing developments and deployments. The report is designed to address four intended audiences. For the general public, it aims to provide an accessible, scientifically and technologically accurate portrayal of the current state of AI and its potential. For industry, it describes relevant technologies and legal and ethical challenges, and may help guide resource allocation. It is also directed to local, national, and international governments to help them better plan for AI in governance.

National Science and Technology Council (2016; US)
The council's seven-point strategy included recommendations specific to health care on explainability and transparency.

Royal Statistical Society (2017; UK)
The RSS submitted recommendations to the House of Commons Science and Technology Select Committee inquiry into the use of algorithms in decision-making [30], including setting up an independent data ethics council to provide advice to government and the public and private sectors on the use of algorithms.

American Medical Association (AMA) Guidance for Health Care Stakeholders
The AMA issued the first health care policy on augmented intelligence. The report stated that the overarching goal of AI in health care is to be human-centered, augmenting human intelligence and advancing the quadruple aim: improve population health; improve health outcomes and patient satisfaction; increase value; and improve health care team satisfaction (Table 2).

International Telecommunications Union (ITU) and World Health Organization (WHO) Joint Focus Group on Artificial Intelligence for Health
The International Telecommunication Union (ITU) has established a new Focus Group on "Artificial Intelligence for Health" (FG-AI4H) in partnership with the World Health Organization (WHO) [34]. FG-AI4H will identify opportunities for international standardization of AI for health-relevant data, information, algorithms, and processes, which will foster the application of AI to health issues on a global scale. In particular, it will establish a standardized assessment framework with open benchmarks for the evaluation of AI-based methods for health, such as AI-based diagnosis, triage or treatment decisions.
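A standardized assessment framework of the kind FG-AI4H envisions would score a model's predictions against open benchmark labels using agreed metrics. A minimal sketch using sensitivity and specificity, two metrics such a benchmark would plausibly report (the data are made up):

```python
# Sketch of a standardized assessment: score an AI diagnostic's
# predictions against benchmark ground-truth labels.
# 1 = condition present, 0 = condition absent. Data are invented.
truth = [1, 1, 1, 0, 0, 0, 0, 1]   # benchmark labels
preds = [1, 1, 0, 0, 0, 1, 0, 1]   # model outputs

tp = sum(t and p for t, p in zip(truth, preds))            # true positives
tn = sum((not t) and (not p) for t, p in zip(truth, preds))  # true negatives
fn = sum(t and (not p) for t, p in zip(truth, preds))      # missed cases
fp = sum((not t) and p for t, p in zip(truth, preds))      # false alarms

sensitivity = tp / (tp + fn)   # fraction of real cases the model caught
specificity = tn / (tn + fp)   # fraction of healthy cases correctly cleared
print(sensitivity, specificity)  # 0.75 0.75
```

Publishing the benchmark labels and the metric definitions together is what makes results comparable across vendors, which is the point of an open assessment framework.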

Summary and Conclusion
AI has the potential to drive valuable transformation in health and in the health care ecosystem. However, several concerns continue to impede the assimilation of AI into the mainstream of health care and multiple other fields. These concerns include algorithmic transparency, liability, accountability, algorithmic bias, representative data, interpretability and explainability.
Numerous governments, consortiums and academic or scientific groups have assembled expert stakeholders from multiple disciplines to work toward recommending action steps to alleviate these concerns. A critical component of these recommendations is making long-term investments in AI research along with understanding and addressing the ethical, legal and societal implications of AI. Additional steps include creating awareness of biases and potential harm along with explaining the procedures used by an algorithm and how a decision was made.
Establishing independent councils to provide impartial advice to the government, the private sector and the public on topics related to use of algorithms will also help alleviate concerns with taking AI mainstream.
For health care professionals, the recommendations by the American Medical Association (AMA) may hold the most appeal, as they advance the concept that physicians and a machine working in combination have the greatest potential to improve clinical decision-making and patient health outcomes.
Finally, this field is still evolving, and the industry as a whole may have to take a wait-and-see approach before settling on the right set of policies and regulations to mainstream AI in health care.