November 15, 2021

Responsible AI: 6 Principles Every Team Should Follow

Doing AI in a responsible way is key to achieving your business goals.

Every company in every vertical industry is becoming an artificial intelligence company, said David Carmona, general manager of AI and innovation at Microsoft. Going forward, AI will be a core competitive differentiator.

A change of this magnitude comes with plenty of risks and challenges. A few years ago, the big worries around AI included learning about the technology and seeking out hard-to-find skills. Today, ethical challenges and risks are high on the list, said Carmona, but the good news is that many company leaders are serious about responsible AI.

Carmona recently spoke about responsible AI innovation at the AI and machine learning (ML) conference Scale TransformX. He shared six principles and identified four steps organizations must take to develop AI responsibly.


The journey to becoming an AI company, he said, requires understanding what responsible AI means, identifying principles to guide your organization, and having a plan.

What AI companies will look like

Among other things, an AI company will move beyond measuring input metrics such as accuracy and performance and focus on business results such as cost savings, new revenue streams, customer satisfaction, and employee productivity. Such companies will also empower every employee and every subject-matter expert so they can “co-reason” with AI, said Carmona. That’s the natural next step, he added.

For example, pharmaceutical giant Novartis uses AI to expand its domain knowledge across the company, involving more than 50,000 employees from research to manufacturing to distribution, Carmona said. To do so, it has brought together critical information across data sources. Combining that scientific knowledge with co-reasoning on top of AI models can accelerate discovery, said Shahram Ebadollahi, Chief Data and AI Officer at Novartis.

The company is building elemental blocks of AI that its associates can put together in new ways. “This will allow them to reason, innovate, and ultimately, augment their expertise and creativity,” Ebadollahi said.

Six principles of responsible AI

Putting responsible AI into action starts with principles that reflect intentions, values, and goals. They are the foundation of a company’s AI program and will guide every stage of the software development cycle.

Organizations need to reflect on AI’s challenges and define an approach to each. Microsoft started its AI journey early in 2016 and has since developed six core principles for responsible AI:

  • Fairness
  • Reliability and safety
  • Transparency
  • Inclusiveness
  • Accountability
  • Privacy and security

Use these as inspiration, Carmona suggested, but he also encouraged companies to go through the same exercise that Microsoft did to develop their own principles.

Four steps for developing AI responsibly

To make responsible AI principles actionable, organizations must evolve them into practices that can be applied throughout the development cycle, said Carmona. He summarized responsible AI development in four steps:

  • **Define your principles.** This is the starting point for anything related to responsible AI, said Carmona. Principles must reflect intentions, values, and goals. They must come from the top of the organization and be communicated internally and externally. These principles are relevant for every vertical, he added.

  • **Create practices across every discipline and process.** Once you define your principles, don't stop there, said Carmona. This second step brings those principles to life: responsible AI must be infused into every activity involved in the development and operation of any AI system. Microsoft has created many of these practices for its own development process and has made them available in its responsible AI resource center, said Carmona, including inclusive design guidelines and conversational AI guidelines. (A brief sketch after this list illustrates how such a practice might be expressed in code.)

  • **Establish a governance process tailored to the organization.** Centralize oversight and guidance within the organization, and make sure you truly adhere to your principles. Microsoft uses a hub-and-spoke model for its system of governance, which balances accountability with authority, said Carmona. At the center is the Office of Responsible AI, which operationalizes Microsoft’s principles across the entire organization.

  • **Expand the conversation beyond the organization, across industry, and throughout society.** Work together to maximize the potential of AI for positive change.
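
To give a rough sense of how a principle becomes a practice in the development cycle, here is a minimal, hypothetical sketch of a fairness check a team could run as an automated gate during model evaluation. The metric, threshold, and function names are illustrative assumptions, not Microsoft’s published guidance or tooling.

```python
# Illustrative only: one way to encode a fairness practice as an automated check.
# The metric, threshold, and names below are assumptions for this sketch.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of positive predictions for each demographic group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: binary approval decisions for applicants from two groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_gap(preds, groups)
MAX_ALLOWED_GAP = 0.2  # illustrative threshold a team might agree on

print(f"Selection rates: {selection_rates(preds, groups)}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > MAX_ALLOWED_GAP:
    raise SystemExit("Fairness gate failed: review the model before release.")
```

A check like this could run in a model’s evaluation pipeline so a release is blocked, or at least flagged for review, whenever the gap exceeds the threshold the team has agreed on.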

“People who design and deploy AI systems must be accountable for how their systems operate,” said Carmona. “You cannot relegate accountability to an algorithm.”

Learn more

Watch David Carmona's TransformX talk, “Driving Competitive Edge with Responsible AI Innovation,” and read the transcript for more real-world advice.
