November 29, 2021

The Responsible Way to Use AI


Here's how LinkedIn is building fairness and privacy into its algorithms.

Esther Shein

AI plays an increasingly important role in business, but organizations face immense challenges if people don’t use it responsibly, said Ya Xu, vice president of engineering at LinkedIn.

Xu’s team of more than 300 technologists asks itself daily what it means to build AI and technology that puts LinkedIn’s members first, she said during a recent talk at Scale TransformX, a conference on artificial intelligence (AI) and machine learning (ML).


Here are the key takeaways from her talk.

Responsible Design

As they build LinkedIn’s AI systems, Xu’s team follows responsible design, whose core pillar is the set of responsible AI principles developed by Microsoft: fairness, reliability and safety, inclusiveness, privacy and security, accountability, and transparency (see Figure 1). Responsible AI considers both intent and impact and uses training data with appropriate demographic representation.



Figure 1: LinkedIn’s responsible AI efforts are driven by responsible design. Source: LinkedIn

Xu said that fairness and privacy are the two areas where LinkedIn has made the most progress. Fairness goes beyond using an algorithm: putting it into practice at LinkedIn means auditing and assessing existing products and systems, mitigating unfairness on the platform, building fairness into the development process as a default, and practicing continuous detection and monitoring.

Representation

LinkedIn is working to improve fairness in its connection recommendation feature, which it calls “PYMK” (people you may know) and which accounts for 40% of all connections made on LinkedIn.

Ensuring that the algorithm makes effective recommendations involves analyzing gender parity in the pool of generated candidates and measuring how many of the resulting invitations are accepted.

One challenging aspect of measuring whether representation is fair to all lies in determining the reference distribution. LinkedIn addresses that with what Xu called the “funnel survival ratio,” which checks whether a group’s representation changes as candidates move through the process.
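To make that concrete, here is a minimal sketch of how a funnel-style representation check could be computed. The stage names, counts, and the `survival_ratios` helper are illustrative assumptions, not LinkedIn’s actual pipeline; the idea is simply to compare a group’s share at each stage of the funnel against its share at the top.

```python
# Illustrative funnel representation check. Stages and counts are invented;
# the pattern is to compare a group's share at each stage with its share
# at a reference stage (here, the candidate pool).
FUNNEL = {
    "candidate_pool": {"women": 4_000_000, "men": 6_000_000},
    "recommended":    {"women": 1_200_000, "men": 1_800_000},
    "invited":        {"women":   500_000, "men":   600_000},
    "accepted":       {"women":   200_000, "men":   300_000},
}

def group_share(stage_counts, group):
    """Fraction of a funnel stage belonging to one group."""
    return stage_counts[group] / sum(stage_counts.values())

def survival_ratios(funnel, group, reference_stage="candidate_pool"):
    """Ratio of a group's share at each stage to its share at the reference
    stage. A ratio near 1.0 means representation survived that stage."""
    ref = group_share(funnel[reference_stage], group)
    return {stage: group_share(counts, group) / ref
            for stage, counts in funnel.items()}

for stage, ratio in survival_ratios(FUNNEL, "women").items():
    print(f"{stage:15s} survival ratio = {ratio:.2f}")
```

With these toy numbers, women’s share holds steady through recommendation, rises at the invitation stage, and falls back at acceptance, the same qualitative pattern Xu described.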

LinkedIn found that its algorithms don’t introduce disparities in PYMK: men and women are equally likely to appear on the recommended list. However, women are more likely to receive invitations and less likely to accept them, Xu said.

Outcome

The best evidence LinkedIn has for the effectiveness of its recommendations is invitation acceptance rates. In the ranking setting, the question becomes whether LinkedIn sees equal outcomes for equal scores: candidates the model scores equally should see their invitations accepted at similar rates, regardless of group.
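A hedged sketch of what such an outcome test might look like: bucket candidates by model score, then compare acceptance rates across groups within each bucket. The records, field names, and bucket width below are invented for illustration.

```python
from collections import defaultdict

# Toy (group, model_score, accepted) records, invented for illustration.
records = [
    ("women", 0.91, True),  ("men", 0.93, True),
    ("women", 0.87, True),  ("men", 0.88, True),
    ("women", 0.58, True),  ("men", 0.57, False),
    ("women", 0.55, False), ("men", 0.52, True),
]

def outcome_by_score_bucket(records, bucket_width=0.1):
    """Acceptance rate per (score bucket, group). Large gaps between groups
    within the same bucket suggest unequal outcomes for equal scores."""
    tallies = defaultdict(lambda: [0, 0])  # (bucket, group) -> [accepted, total]
    for group, score, accepted in records:
        bucket = round(int(score / bucket_width) * bucket_width, 1)
        tallies[(bucket, group)][0] += int(accepted)
        tallies[(bucket, group)][1] += 1
    return {key: acc / tot for key, (acc, tot) in tallies.items()}

for (bucket, group), rate in sorted(outcome_by_score_bucket(records).items()):
    print(f"score≈{bucket:.1f}  {group:6s}  acceptance={rate:.2f}")
```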

That measurement reflects the “intent vs. impact” framework’s strong focus on impact, she said. LinkedIn also measures intent, evaluating how the models themselves perform by gender. For the most recent PYMK model, LinkedIn found the differences to be small.

Mitigation

LinkedIn takes three approaches to mitigation: adjusting the training data, adjusting the training model, and adjusting the model scores after training is finished.

Xu said that although reweighting the training data might seem the most obvious approach, it is not as effective at improving the model as expected. Re-rankers, which fall into the third mitigation bucket, tend to be the easiest to implement across different models and are effective at passing the outcome test, she said.
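As an illustration of that third bucket, here is a generic fairness-aware re-ranker sketch: it post-processes a score-sorted list so that every prefix carries at least a floor of a target share for a protected group. This is a common post-processing pattern under assumptions of our own choosing, not LinkedIn’s production algorithm.

```python
import math

def rerank(candidates, target_share, group_key="group", score_key="score"):
    """Greedy post-processing re-ranker. `candidates` must be sorted by
    model score, descending. Every prefix of length k of the output holds
    at least floor(target_share * k) protected-group members."""
    protected = [c for c in candidates if c[group_key] == "protected"]
    others = [c for c in candidates if c[group_key] != "protected"]
    out, n_protected = [], 0
    while protected or others:
        quota = math.floor(target_share * (len(out) + 1))
        if protected and (n_protected < quota or not others):
            pick = protected.pop(0)   # quota short: promote a protected candidate
            n_protected += 1
        elif protected and protected[0][score_key] > others[0][score_key]:
            pick = protected.pop(0)   # protected candidate wins on score anyway
            n_protected += 1
        else:
            pick = others.pop(0)      # best remaining candidate by score
        out.append(pick)
    return out

cands = [
    {"id": 1, "group": "other",     "score": 0.95},
    {"id": 2, "group": "other",     "score": 0.90},
    {"id": 3, "group": "other",     "score": 0.85},
    {"id": 4, "group": "protected", "score": 0.80},
    {"id": 5, "group": "protected", "score": 0.75},
]
print([c["id"] for c in rerank(cands, target_share=0.5)])  # -> [1, 4, 2, 5, 3]
```

Because it only reorders model output, a re-ranker like this can sit in front of many different models without retraining any of them, which is why this bucket tends to be the easiest to roll out.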

Fairness, monitoring, and detection need to be built into the product development cycle as a default, Xu said.

LinkedIn also relies heavily on “the most powerful tool we have, experimentation.” Xu explained that all of the platform’s features go through testing, whether it's a UI feature or an algorithm change. Experimentation provides the ability to introduce “fairness awareness in every change that we make.”
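One way to make an experiment “fairness aware” in the sense Xu describes is to report a group gap next to the primary metric for every arm. The sketch below assumes a toy event log and metric names of our own; it is not LinkedIn’s experimentation platform.

```python
# Toy A/B readout: report the primary metric and a between-group gap per arm.
def arm_readout(events):
    """events: iterable of (group, converted) pairs for one experiment arm.
    Returns the overall conversion rate and the between-group rate gap."""
    by_group = {}
    for group, converted in events:
        conv, total = by_group.get(group, (0, 0))
        by_group[group] = (conv + int(converted), total + 1)
    rates = {g: conv / total for g, (conv, total) in by_group.items()}
    overall = (sum(c for c, _ in by_group.values())
               / sum(t for _, t in by_group.values()))
    gap = max(rates.values()) - min(rates.values())
    return overall, gap

control   = [("women", True), ("men", True),  ("women", False), ("men", True)]
treatment = [("women", True), ("men", False), ("women", True),  ("men", True)]
for name, events in [("control", control), ("treatment", treatment)]:
    overall, gap = arm_readout(events)
    print(f"{name:9s} conversion={overall:.2f} group gap={gap:.2f}")
```

A change that improves the headline metric while widening the group gap would then be visible before launch, not after.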

How to Address Data Privacy

LinkedIn collects a lot of sensitive data that it uses to measure and improve fairness on the platform, which means Xu’s team must balance data utility with privacy. LinkedIn uses “differential privacy,” which Xu said has become the new standard for data privacy protection. Because the research community has begun sharing so much work in this area, LinkedIn is creating new algorithms to make differential privacy a reality on its site.
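Differential privacy itself is well documented; a textbook version is the Laplace mechanism, sketched below. This is a generic illustration of the technique, not LinkedIn’s implementation.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with noise of scale sensitivity/epsilon. A counting
    query has sensitivity 1: adding or removing one member changes the true
    answer by at most 1, so the noise masks any individual's presence."""
    return true_count + laplace_noise(sensitivity / epsilon)

# Smaller epsilon -> stronger privacy guarantee -> noisier released answer.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:5.1f}  noisy count = {private_count(12_345, eps):.1f}")
```

The epsilon parameter is the privacy budget: it quantifies the trade-off Xu describes between data utility and privacy protection.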

Learn More

To hear more about LinkedIn’s approach to responsible AI, watch Xu’s presentation, “A Responsible Approach to Creating Global Economic Opportunities with AI,” and read the full transcript here.

 
