Today, machine learning can help determine our eligibility for loans, the jobs we get, and even who goes to jail. But can computers make fair judgments in these potentially life-changing decisions? Researchers in Germany have found that, with human supervision, people perceive a computer's decision to be as fair as a decision made by humans. The study was published on September 29 in Patterns, a Cell Press journal.
“A lot of the discussion about fairness in machine learning has focused on technical solutions, such as how to fix unfair algorithms and how to make systems fair,” said co-author Ruben Bach, a computational social scientist at the University of Mannheim, Germany. “But our question is, what do people think is fair? It’s not just about developing algorithms; they need to be socially accepted and conform to normative beliefs in the real world.”
Automated decision-making, in which computers alone reach conclusions, excels at detecting patterns in large data sets. Computers are often considered objective and neutral compared with humans, whose biases can influence judgment. However, when computer systems learn from data that reflect discriminatory patterns in the human world, those biases can “creep” into the systems. Understanding fairness in both computer and human decision-making is critical to building a fairer society.
To understand how people perceive the fairness of automated decision-making, the researchers surveyed 3,930 participants in Germany. They presented hypothetical scenarios related to banking, jobs, prisons, and unemployment systems. Within each scenario, they further compared different situations, including whether the decision led to a positive or negative outcome, where the data used for the evaluation came from, and who made the final decision: a human, a computer, or both.
“Unsurprisingly, we found that fully automated decision-making was unpopular,” said co-first author Christoph Kern, a computational social scientist at the University of Mannheim. “But interestingly, when there is human oversight in automated decision-making, people perceive the fairness to be similar to that of human-centred decision-making.” The results show that when humans are involved, people perceive decisions to be fairer.
People were more concerned about fairness when a decision involved the criminal justice system or employment prospects. Perhaps because the potential losses weigh more heavily, participants also felt that decisions with positive outcomes were fairer than those with negative outcomes. Systems that draw on additional, irrelevant data from the internet were considered less fair than those relying only on relevant data, confirming the importance of data transparency and privacy. Overall, the findings suggest that context matters.
The researchers say automated decision-making systems must be designed carefully when fairness is at stake.
While the hypothetical scenarios in the survey may not fully translate to reality, the team is already brainstorming next steps to better understand fairness. They plan to study how different people define fairness, and they want to use similar surveys to ask further questions about justice, such as fairness in the distribution of community resources.
“In a way, we hope that those in the industry can take these results as food for thought, as something they should check before developing and deploying automated decision-making systems,” Bach said. “We also need to make sure that people understand how the data are processed and how decisions are made based on them.”
Read the original paper: https://www.cell.com/patterns/fulltext/S2666-3899(22)00209-4