Is society ready?


With the rapid evolution of technology, artificial intelligence (AI) is playing a growing role in decision-making. Humans are increasingly dependent on algorithms to process information, recommend behaviors, and even take action on their behalf. A research team studied how humans react to the introduction of AI decision-making. Specifically, they explored the question "Is society ready for ethical AI decision-making?" by studying human interaction with self-driving cars.

The team published its findings on May 6, 2022, in the Journal of Behavioral and Experimental Economics.

In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario they created, the driver had to decide whether to hit one group of people or another; the collision was unavoidable. The crash would seriously harm one group but spare the lives of the other. Subjects had to rate the driver's decision both when the driver was a human and when it was an AI. This first experiment was designed to measure the bias people might hold against ethical AI decision-making.

In the second experiment, 563 human subjects answered the researchers' questions. The researchers wanted to determine how people react to the debate over ethical AI decisions once those decisions become part of social and political discussion. This experiment involved two scenarios. One concerned a hypothetical government that had already decided to allow self-driving cars to make ethical decisions; the other allowed subjects to "vote" on whether to permit self-driving cars to make ethical decisions. In both cases, subjects could choose to be for or against the decisions made by the technology. This second experiment was designed to test the effect of two alternative ways of introducing AI into society.

The researchers observed that when subjects were asked to rate the ethical decisions of either a human or an AI driver, they showed no definite preference for one or the other. However, when subjects were asked explicitly whether a driver should be allowed to make ethical decisions on the road, they held stronger opinions against AI-driven cars. The researchers believe the discrepancy between the two results is caused by a combination of two elements.

The first element is that individuals believe society as a whole does not want ethical AI decision-making, and so they assign a positive weight to these beliefs when asked for their own opinion on the matter. "Indeed, when participants are explicitly asked to separate their responses from those of society, the difference between the acceptability of AI and human drivers disappears," said Johann Caro-Burnett, assistant professor at the Graduate School of Humanities and Social Sciences, Hiroshima University.

The second element is that when introducing this new technology to society, allowing discussion on the subject has mixed results depending on the country. "In regions where people trust their government and have strong political institutions, information and decision-making power improve how subjects evaluate ethical AI decisions. In contrast, in regions where people do not trust their government and have weak political institutions, decision-making power worsens how subjects evaluate ethical AI decisions," Caro-Burnett said.

"We find that there is a social fear of ethical AI decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe to be the opinion of society," said Shinji Kaneko, a professor at Hiroshima University's Graduate School of Humanities and Social Sciences and the Network for Education and Research on Peace and Sustainability. So when not asked explicitly, people show no signs of bias against ethical AI decision-making. However, when asked explicitly, they show an aversion to AI. Moreover, where there is more discussion and information on the subject, acceptance of AI improves in developed countries and worsens in developing countries.

The researchers believe this rejection of a new technology, driven mainly by individuals incorporating what they believe to be society's opinion, is likely to apply to other machines and robots. "Therefore, it will be important to determine how to aggregate individual preferences into a single social preference. Moreover, this task will also need to differ across countries, as our results suggest," Kaneko said.

Story Source:

Materials provided by Hiroshima University. Note: Content may be edited for style and length.
