Using Deep Learning to Detect Abusive Sequences of Member Activity on LinkedIn
Posted Nov 05 | Views 161
# Tech Talk
SPEAKER
James Verbus
Staff Machine Learning Engineer @ LinkedIn

James Verbus is a staff machine learning engineer on the Anti-Abuse AI Team at LinkedIn. His current focus includes the development of advanced, scalable modeling techniques and improving AI developer productivity. Before he began using AI to prevent abuse at LinkedIn, he spent his days looking for a different type of rare event while building and operating the world’s most sensitive dark matter detector a mile underground in an abandoned gold mine. James received his Ph.D. in experimental particle astrophysics from Brown University.

SUMMARY

The Anti-Abuse AI Team at LinkedIn creates, deploys, and maintains models that detect and prevent many types of abuse, including the creation of fake accounts, member profile scraping, automated spam, and account takeovers. Bad actors use automation to scale their attempted abuse.

There are many unique challenges associated with using machine learning to stop abuse on a large professional network, including maximizing signal, keeping up with adversarial attackers, and covering many heterogeneous attack surfaces. In addition, traditional machine learning models require hand-engineered features that are often specific to a particular type of abuse and attack surface. To address these challenges, we have productionized a deep learning model that operates directly on raw sequences of member activity, allowing us to scalably leverage more of the signal hidden in the data and stop adversarial attacks more effectively. Our first production use case of this model was the detection of logged-in accounts scraping member profile data.
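The core idea of a sequence model over raw activity can be sketched as follows. This is a minimal, illustrative recurrent network in NumPy with randomly initialized weights standing in for trained parameters; the event vocabulary, dimensions, and `score_sequence` function are hypothetical and are not taken from the talk.

```python
import numpy as np

# Hypothetical vocabulary of raw member-activity event types (illustrative only).
EVENTS = {"login": 0, "profile_view": 1, "search": 2, "connect_request": 3}

rng = np.random.default_rng(0)
EMBED_DIM, HIDDEN_DIM = 8, 16

# Random weights stand in for parameters learned from labeled abuse data.
E = rng.normal(size=(len(EVENTS), EMBED_DIM))          # event embeddings
W_xh = rng.normal(size=(EMBED_DIM, HIDDEN_DIM)) * 0.1  # input-to-hidden
W_hh = rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM)) * 0.1 # hidden-to-hidden
w_out = rng.normal(size=HIDDEN_DIM) * 0.1              # readout

def score_sequence(events):
    """Run a simple RNN over a raw event sequence; return an abuse score in (0, 1)."""
    h = np.zeros(HIDDEN_DIM)
    for name in events:
        x = E[EVENTS[name]]
        h = np.tanh(x @ W_xh + h @ W_hh)  # update hidden state per event
    return float(1.0 / (1.0 + np.exp(-(h @ w_out))))  # sigmoid readout

# A burst of rapid profile views: the kind of pattern a scraper might produce.
score = score_sequence(["login"] + ["profile_view"] * 20)
print(round(score, 3))
```

Because the model consumes event sequences directly, the same architecture can in principle be pointed at different attack surfaces without hand-engineering surface-specific features.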

We will present results demonstrating the promise of this modeling approach and discuss how it helps to solve many of the unique challenges in the anti-abuse domain.
