May 5, 2022

Ethical AI: Now Is the Time to Act

Businesses that don’t address bias and other ethical AI issues could face legal challenges and regulatory compliance headaches, said panelists at the Index AI Summit conference.

Stan Gibson

The unprecedented power of artificial intelligence is raising thorny ethical questions that are giving rise to regulatory action. To avoid compliance issues and legal challenges, businesses leveraging AI should take steps now. 

“When technologies achieve scale, they come under massive scrutiny. At first, AI was not regulated. But questions are coming up now,” said Mehran Sahami, professor of computer science at Stanford University.

“When [AI] models affect humans, it’s a huge impact,” said Adam Wenchel, CEO and co-founder of Arthur, a company that sells software that monitors and detects bias in AI algorithms.

Sahami and Wenchel shared their observations during the online session “AI and Ethics” at Index AI Summit 2022. You can find a video of the discussion, moderated by Jonathan Vanian, technology writer and Brainstorm AI co-chair at Fortune magazine, here.

To continue developing AI responsibly, it is essential to enable consumers, academics, and business leaders to push for legislation that is practical, technologically informed, and effective in protecting people who might be at risk, the panelists agreed.

The questions raised during the session take on particular urgency because spending on AI is ramping up rapidly. In the United States, AI expenditures will grow at a compound annual growth rate of 26% from 2021 to 2025, reaching $120 billion annually by 2025, according to a new report by IDC.

Even Small Biases in Algorithms Can Create Widespread Unfairness

Machine learning models trained on historical data can develop biases that might hurt minority groups, the panelists noted. “AI brings up questions whether people are being treated equitably. Is there historic bias? We don’t want to reinforce those things in AI,” said Sahami.

Wenchel agreed: “AI is replacing a lot of human decision making with automated decision making. There can be biases incorporated into the decision making,” he said. 

For example, if a database of facial images used to train an ML algorithm for an automated driver’s license system mainly contains people of one ethnicity, those of other ethnicities could be unable to use the system. 
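That kind of skew can be caught before training. Below is a minimal sketch of a representation check, assuming the training set lives in a pandas DataFrame; the `ethnicity` column name, the 5% threshold, and the toy data are illustrative assumptions, not details from any system discussed on the panel.

```python
# Minimal sketch: flag demographic skew in a training set before a model
# is trained on it. The "ethnicity" column and the 5% threshold are
# illustrative assumptions, not from any system discussed on the panel.
import pandas as pd

def check_representation(df: pd.DataFrame, group_col: str,
                         min_share: float = 0.05) -> pd.Series:
    """Report each group's share of the data; warn on underrepresented groups."""
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: '{group}' is only {share:.1%} of the training data")
    return shares

# Toy data: one group dominates, as in the driver's-license example above.
faces = pd.DataFrame({"ethnicity": ["A"] * 940 + ["B"] * 50 + ["C"] * 10})
print(check_representation(faces, "ethnicity"))
```

A check like this surfaces only raw counts; balanced representation does not by itself guarantee equal model accuracy across groups, which requires separate per-group evaluation.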

Such examples have stirred social justice activists and led to the founding of the Algorithmic Justice League. The FTC, meanwhile, has moved to rein in biased algorithms in the consumer credit industry, warning that their use can violate consumer protection law.

Embedded Biases Can Lead to Lawsuits 

A company that develops ML models without testing for bias and then deploys a biased system on a large scale is opening itself up to significant liability and business risk, said Sahami. “From a pure business standpoint, this is part of the price of building AI systems,” he said. 

Tweaking algorithms for more equitable outcomes might be desirable, but it requires the participation of multiple parties. “It would involve getting different stakeholders to give their input into the process regarding the outcome we want to see societally. Then we would have to adjudicate among the different value preferences to come up with a solution. It will become, not just a matter of technology, but of politics,” said Sahami. 

With financial risk in play, shareholders are wary of the downside risks facing companies that rely on AI. “Investors want to know what measures are being put in place to prevent biases,” said Wenchel. “Algorithms can be used safely, but if you don’t monitor for bias, the odds are they won’t be very ethical and they will open up a company to liability,” he said, adding, “It’s absolutely critical to have guardrails in place.”

Embed AI Ethics into Corporate DNA

In December 2021, the New York City Council passed a law barring companies from using automated employment-decision tools unless the technology has passed a bias audit. In addition, companies will have to notify job applicants if the tool is used to make hiring decisions. The law takes effect in January 2023, with fines ranging from $500 to $1,500 per violation.
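The law does not prescribe code, but the kind of comparison a bias audit performs can be sketched. Below is a minimal illustration of a selection-rate (adverse-impact) check, a standard measure in employment analytics; the column names, toy data, and the classic four-fifths (0.8) threshold are assumptions for illustration, not requirements quoted from the NYC law.

```python
# Minimal sketch of a selection-rate (adverse-impact) comparison of the kind
# a bias audit might include. Column names, toy data, and the four-fifths
# (0.8) threshold are illustrative assumptions, not the text of the NYC law.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str,
                  selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Toy data: group A is selected 60% of the time, group B only 30%.
applicants = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 30 + [0] * 70,
})
ratios = impact_ratios(applicants, "group", "selected")
print(ratios[ratios < 0.8])  # group B: 0.30 / 0.60 = 0.5, below threshold
```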

Federal and state legislatures are also considering AI regulations.

“The industry needs to understand it will be regulated,” Sahami said. “That will create guardrails that will be good for us, assuming they are made with an understanding of the technology.” In the face of such regulations and with more on the way, Sahami’s advice to startups is simple: “Build ethics into your DNA now.” 

Wenchel added, “Don’t just build a system and then think about the impact. Think before the project is approved. Make sure there is an ethical lens.”

Get Going with AI Governance 

As AI matures, AI governance, which includes bias testing, is growing in importance. Meanwhile, pressure from investors and activists is pushing lawmakers to enact legislation that creates regulatory guardrails.

To stay on the right side of these new laws, business leaders will need to show that they have put ethical considerations at the forefront of their corporate strategies and that they are actively working to eliminate bias in their AI algorithms. 

