A surgical robot performs surgery during 2023 World Artificial Intelligence Conference at Shanghai World Expo Exhibition and Convention Centre on July 6, 2023 in Shanghai, China. (Photo by VCG/VCG via Getty Images)
During the 11th International Philosophy of Medicine Roundtable (2024), I presented a paper titled "Should Autonomous Surgical Robot be Regarded as a Surgeon?" I argued that autonomous surgical robots should not be regarded as surgeons: a surgical operation requires more than following procedures, and these robots lack the creativity that such work involves.
During the presentation, a member of the audience made a comment, which triggered different lines of questioning and discussion. She said: “Regardless of whether these robots are creative or not, I would not trust a surgical robot to perform a surgical operation on me.” This comment raises the question of whether we should trust AI technologies, particularly in the healthcare sector.
There have been various arguments for why we should not trust AI technologies in healthcare. For instance, some argue that because these technologies are not transparent about how they arrive at their outputs, they should not be trusted. Others argue that AI technologies are essentially epistemic technologies, so the only trust we should place in them is epistemic trust; as such, we should not rely on them for surgical operations or other essential work in the healthcare sector. A further major argument is that these technologies are not accountable, and that without accountability we should not rely on them for surgical operations or any other medical emergency.
These arguments look plausible on the surface, but a deeper look shows that they are not as sound as their proponents claim.
For starters, let us consider the claim that AI technologies should not be relied upon for surgical operations because they are merely epistemic technologies. This argument holds that AI technologies' essential role is to provide information, analyse data, or recognise patterns, and nothing else.
However, confining AI technologies to an epistemic role ignores their broader possibilities and capacities beyond knowledge improvement. For instance, care robots are used to administer drugs to patients and even to provide comfort. These tasks amount to more than providing information.
Also, while artificial intelligence does play a significant part in offering knowledge and supporting decision-making procedures, it has the power to affect many facets of human existence, including social interaction, creativity and problem-solving in non-epistemic domains.
Furthermore, the position that AI technologies should not be trusted because they are not transparent is also difficult to sustain. This position holds that AI technologies lack openness about their algorithms, data-handling techniques, and decision-making processes, and that since transparency is a prerequisite for trust, particularly in situations where primary human goods are at risk, AI technologies should not be trusted. However, while openness is crucial for building confidence in artificial intelligence systems, it does not follow that transparency must exist before one can extend trust.
For example, the human brain comprises billions of neurons linked by trillions of synapses, and as such, many facets of how the brain works, especially in areas like awareness, decision-making, and emotion, remain not entirely understood even with many studies in neuroscience.
Nevertheless, in many spheres of life, we rely on human judgments even when we do not entirely grasp how people arrive at them. This prompts a vital question: why should we not treat artificial intelligence systems with the same confidence we extend to people, given that both lack transparency? The force of this question suggests that trust can exist without transparency.
Next is the issue of accountability. As mentioned, the case against trusting artificial intelligence because of its lack of accountability rests on the theory that, being only algorithmic and data-driven entities, AI systems lack the moral and ethical agency required for appropriate responsibility.
Critics argue that these AI technologies cannot comprehend or accept the results of their actions or make deliberate, moral judgments; without moral agency, AI cannot be held accountable in any meaningful sense, and so it cannot be a trusted entity. However, trust can exist without accountability, and accountability can exist without trust.
For instance, consider the possibility that accountability in AI technologies can exist independently of trust. Critical applications like autonomous cars and healthcare diagnostics usually follow strict accountability policies comprising regulatory compliance, audits, and openness criteria.
Notwithstanding these steps, public confidence in these technologies may stay low because of malfunctions, ethical questions, or ignorance of how artificial intelligence decision-making works. Similarly, in financial services, artificial intelligence systems may be subject to strong oversight and accountability to monitoring organisations; still, many individuals may mistrust these systems out of concern about bias, mistakes, or lack of control. This demonstrates that accountability and trust can come apart: accountability measures by themselves cannot guarantee trust.
Conversely, trust in AI technologies can exist independently of accountability measures. For example, despite the lack of thorough accountability policies, many consumers rely on consumer-grade artificial intelligence products, including virtual assistants like Siri and Alexa.
Though they are not wholly aware of the accountability mechanisms governing these technologies, users depend on these systems because of their ease, perceived dependability, and good prior experiences. Likewise, people may trust AI recommendation systems on social media platforms or e-commerce sites on the basis of good prior experiences, user-friendly interfaces, and perceived efficacy, without necessarily thinking through the accountability processes in place. This shows that trust can exist without any accountability regulation in place.
So, to answer the question of whether we should trust AI technologies: on the argument presented above, trust in AI systems does not depend on factors like accountability, openness, or regulatory compliance, but on elements like perceived dependability, ease of use, efficacy, and prior experience.
What, then, are we to do to increase trust in these AI technologies? Developers and legislators should understand that increasing trust in artificial intelligence calls for more than putting accountability systems into effect. They have to engage with people to address ethical questions, worries, and misconceptions about AI decision-making. Users, in turn, should evaluate artificial intelligence technologies according to their experiences, the performance of the system, and the ethical standards of the developers, instead of assuming that adherence to accountability criteria is enough for trustworthiness.
Ukpaka Paschal is a PhD candidate at the University of Johannesburg. He was a Research Fellow at the Global Studies Center, Gulf University for Science & Technology. He is a researcher at the UJ Centre for Philosophy of Epidemiology, Medicine, and Public Health, and at the UJ Metaverse Research Unit. Paschal's PhD project investigates whether Large Language Models are authors.