Medical Ethics in Artificial Intelligence 

August 16, 2022

The past century has seen remarkable advances in science and technology, with Alan Turing's 1950 paper "Computing Machinery and Intelligence," which opens by asking whether machines can think, paving the way for artificial intelligence (AI) [1]. Current AI algorithms are largely specialized to a single task; Deep Blue, for example, was programmed to play chess extraordinarily well [2]. If technology advances to the point that AI is equipped with broader reasoning abilities, however, it could eventually perform as well as or better than humans in many domains, such as governance and medicine. As artificial intelligence is increasingly used in health and medicine, attention to the ethics of its use is critical.

Although powerful, AI algorithms may contain flawed structures or logic that lead to unfair outcomes based on race or socioeconomic status. To define fairness and ensure just behavior by AI, scientists and researchers from fields such as biomedical science, psychology, law, philosophy, and economics need to communicate with the computer scientists and engineers who build the systems themselves. Ethical standards differ by geographical location and cultural background; however, all societies can agree on honesty, justice, non-maleficence, and respect for autonomy [3]. Creating thinking machines more intelligent than humans raises the question of how to ensure these AIs will use their advanced intelligence for good rather than ill.

Neuro-technical developments have revolutionized treatment for patients with neuromuscular disorders. Brain-computer interfaces (BCIs) gather neural signals and, often with the help of AI, transform them into commands that external devices can carry out; the main goal of BCIs is thus to restore function or improve quality of life for disabled individuals [4]. BCIs are a rapidly growing field that holds immense potential for the future; even in their current state, they are exciting scientists and clinicians around the world. The justice principle demands, however, that neurological interventions serve the patient's own interests and needs, and future BCIs must be developed with that in mind [5]. It has also been argued that most BCI literature treats disability as a medical issue rather than a social one, meaning that the voices of disabled people themselves often go unheard [6].

Another controversial topic at the intersection of medical ethics and AI is the possibility of selectively erasing the memories of patients with post-traumatic stress disorder or other neuropsychological disorders to improve their quality of life. While the potential therapeutic benefit is large, the advent of technologies that would allow this also raises the question of how they could be used for harm [5].

The individual's right to privacy and doctor-patient confidentiality are among the most contested aspects of the AI debate. This essential human right is often threatened by AI and big data, as machine learning programs require the storage and maintenance of extremely large datasets, and data de-identification is still an area of active research [3]. AI systems should be held to strict regulations, such as "data minimization," under which an AI collects only what is needed and deletes the data when possible [7]. In her novel Frankenstein, Mary Shelley concluded that science in an anarchic world can be a destructive force; ethical oversight is needed to ensure the safety and wellbeing of the public [3]. Better governance and surveillance are the recommended means of preventing possible misuse of biotechnologies [8].
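The data-minimization principle can be sketched as a simple filter over patient records. This is a minimal illustration, not a compliance implementation; the field names and the example record are hypothetical.

```python
# Sketch of "data minimization": keep only the fields a model actually
# needs, and delete the full record when it is no longer required.
# NEEDED_FIELDS and the record below are hypothetical examples.

NEEDED_FIELDS = {"age", "diagnosis_code"}

def minimize(record):
    """Strip a patient record down to the fields required for analysis."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

record = {"name": "Jane Doe", "ssn": "000-00-0000",
          "age": 47, "diagnosis_code": "G35"}
minimized = minimize(record)
print(minimized)  # identifying fields (name, ssn) are gone
del record        # discard the full record once it is no longer needed
```

The design point is that minimization happens before storage or training, so the identifying fields never enter the large datasets that make de-identification so difficult.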

Amid the rapid advancement of artificial intelligence, medical ethics and neuroethics gain significance. Moving forward, it may become necessary to modify the current principles, rules, and regulations of ethics to match our perpetually changing world.

References 

  1. Church, A. (1937). A. M. Turing, On Computable Numbers, With an Application to the Entscheidungsproblem (Proceedings of the London Mathematical Society, 2nd ser., Vol. 42, 1936–1937, pp. 230–265) [Review]. The Journal of Symbolic Logic, 2(1), 42–43. https://doi.org/10.1017/S002248120003958X
  2. Hsu, F., Campbell, M. S., & Hoane, A. J. (1995). Deep Blue System Overview. Proceedings of the 9th International Conference on Supercomputing – ICS '95, 240–244. https://doi.org/10.1145/224538.224567
  3. Keskinbora, K. H. (2019). Medical Ethics Considerations on Artificial Intelligence. Journal of Clinical Neuroscience, 64, 277–282. https://doi.org/10.1016/j.jocn.2019.03.001
  4. Glannon, W. (2006). Neuroethics. Bioethics, 20(1), 37–52. https://doi.org/10.1111/j.1467-8519.2006.00474.x
  5. Keskinbora, K. H., & Keskinbora, K. (2018). Ethical Considerations on Novel Neuronal Interfaces. Neurological Sciences, 39(4), 607–613. https://doi.org/10.1007/s10072-017-3209-x
  6. Wolbring, G., & Diep, L. (2016). The Discussions Around Precision Genetic Engineering: Role of and Impact on Disabled People. Laws, 5(3), 37. https://doi.org/10.3390/laws5030037
  7. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2016). Ethically Aligned Design: A Vision for Prioritizing Human Well-being With Artificial Intelligence and Autonomous Systems. IEEE.
  8. Andorno, R. (2005). The Oviedo Convention: A European Legal Framework at the Intersection of Human Rights and Health Law. Journal of International Biotechnology Law, 2(4), 133–143. https://doi.org/10.1515/jibl.2005.2.4.133