
Are Transformers Becoming the Most Impactful Tech of the Decade?

Posted Oct 06, 2021 | Views 3.1K
# TransformX 2021
# Breakout Session
Clement Delangue
Co-Founder & CEO @ Hugging Face

Clement Delangue is co-founder and CEO of Hugging Face, a 100,000+ member community advancing and democratizing AI through open source and open science. Clement started his career in product at Moodstocks, a machine learning startup for computer vision (acquired by Google in 2016). He is passionate about building AI products.

SUMMARY

Clement Delangue, CEO of Hugging Face, walks through the evolution of Transformer models. These foundational models have generated unprecedented excitement in academia and industry, enabling huge advances in NLP such as next-sentence prediction, question answering, and text summarization. He shares why it's important to ensure they are used in an ethical and transparent manner, and with that in mind, he discusses the work being done to ensure AI fairness in foundational models.

TRANSCRIPT

Nika Carlson (00:15): Next up, we're excited to welcome Clement Delangue. Clement is the co-founder and CEO of Hugging Face, a 100,000-plus member community advancing and democratizing AI through open source and open science. Hugging Face has more than a thousand companies using their library in production. Clement started his career in product at Moodstocks, a machine learning startup for computer vision, which was acquired by Google. He is passionate about building AI products. Clement, over to you.

Clement Delangue (00:50): Hi everyone. Super happy to be here for the Scale TransformX conference. I'm going to start this keynote with some of the latest important mainstream features that you might have noticed in some of the most popular products out there, and afterwards I'll tell you what the common denominator between all of them is.

Clement Delangue (01:18): The first one: you might have noticed that Google search has gotten better and better in the last few months, and that they're starting to release new features, like, for example, the ability to get an answer to your question right away on the results page. Instead of showing you a list of links where you can see some answers, they find the one answer they think is the right one and show it to you right away, before all the results.

Clement Delangue (01:54): Something else that you might have noticed is that when you're writing an email in Gmail, or when you're writing in Google Docs or Microsoft Word, you now get very personalized auto-completions. It's not just what you had on your phone, auto-completing one word, the same for everyone. It takes into account what you wrote before, how you've been writing, and some of your information to suggest very good, fitting text so you can write much faster.

Clement Delangue (02:37): Speaking of text, you might have seen that translation is now present on most social networks and platforms. It's super important for me, I'm French, as you might have heard from my accent, because content is starting to be translated automatically on Facebook, on Twitter, on LinkedIn, on most platforms, in hundreds of languages. Last, you might have heard of something called GitHub Copilot. Obviously GitHub is the leading software engineering platform, and what it introduced with Copilot is a way for developers to get an auto-complete of their code, a little bit like what you have in text editors.

Clement Delangue (03:26): You'll ask me, "What's the relation between all of these? Why am I talking about all these new features?" It's because they're all powered by transformers, or transformer models, as we call them. This is one of the reasons why, during this keynote, I want to talk to you about why I believe transformers are becoming the most impactful tech of the decade.

Clement Delangue (04:03): Let's go back to the beginning first, to see how all this started. In 2017, few people noticed the release of the paper called "Attention Is All You Need" by Vaswani et al., coming out of Google Brain, which proposed a new network architecture called the Transformer, based solely on attention mechanisms. It would change the field of machine learning forever. Directly inspired by the paper, BERT, or Bidirectional Encoder Representations from Transformers, was released a year later by a team from Google.

Clement Delangue (04:52): The same year, Hugging Face Transformers was created and released on GitHub. It became, in just a few months, the most popular library for machine learning; more than 5,000 companies are using it today. Here, for example, you can see a graph of the growth of the number of GitHub stars for Hugging Face Transformers, in red, compared to Apache Spark, which gave birth to the company Databricks, Apache Kafka, which gave birth to the company Confluent, and Mongo from MongoDB.

Clement Delangue (05:38): Hugging Face Transformers is arguably the fastest-growing open source project of the past three years. It has even been described by Sebastian Ruder, from DeepMind, as the most impactful tool for current NLP. NLP, if you don't know it, is the machine learning field that relates to text, as compared, for example, to vision, time series, or speech.

Clement Delangue (06:10): So how did that happen? To me, this was made possible mostly by three major trends. First, hardware has gotten exponentially better, especially GPUs and TPUs. Leading companies like Nvidia or Google have made compute not only cheaper, but also more powerful and more accessible. Previously, only large companies with special hardware had access to the high levels of computing power needed to train very large transformer models.

Clement Delangue (06:55): Second, we've seen that the web, mostly composed of text but also of images and voice, provides a large, open, and diverse data set. Sites like Wikipedia or Reddit provide gigabytes of text and data sets for models to be properly trained.

Clement Delangue (07:20): And the last missing piece of the puzzle was transfer learning, which finally made it possible to create these new transformer models that are going to change the field forever. Before going further, I want to focus a little bit on that last piece, transfer learning, to understand how different it is from traditional machine learning techniques. I think it explains some of their success, why they're so promising, and why they could be the major technology trend of the decade.

Clement Delangue (07:59): The traditional way we train a machine learning model is to gather a data set, initialize our weights from scratch, and then train our model on that data. When faced with a second task, task two here, you gather a new data set and randomly initialize your weights from scratch again. Same thing if you have a third task. Now, the way transfer learning works is quite different.

Clement Delangue (08:30): We use all the knowledge from the past task to learn a new task. As you can see here, you have the source task, the knowledge, and the learning system. It works especially well to move from one task to another while taking advantage of all the learning and knowledge that has been gathered. It helps you learn from a very limited number of new data points, get more accurate results, and fill in the gaps.
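
To make the contrast concrete, here is a minimal sketch in Python with the Hugging Face Transformers library: one model starts from randomly initialized weights, the other reuses pre-trained weights and only re-initializes a small classification head. The checkpoint name and label count are illustrative choices, not something prescribed in the talk.

```python
# A minimal sketch of the transfer-learning idea with Hugging Face Transformers.
from transformers import AutoConfig, AutoModelForSequenceClassification

# Traditional approach: every weight is randomly initialized and must be
# learned from scratch on your task's data.
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
scratch_model = AutoModelForSequenceClassification.from_config(config)

# Transfer learning: reuse the knowledge already stored in the pre-trained
# weights; only the small classification head on top starts from random.
pretrained_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
```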

Clement Delangue (09:12): There are many different ways to do transfer learning, but the way we are doing it now with transformers is called sequential transfer learning, because there are two steps. The first step is called pre-training. You take a base model and train it on a very large corpus, usually a slice of the web. You spend millions of dollars of compute and weeks, not even days, weeks of training. And you get a pre-trained language model, or pre-trained transformer model.

Clement Delangue (09:56): Once you have your pre-trained model, the second step becomes much, much easier: adaptation, or fine-tuning, on a small dataset, your own data, for example for your own task, for your own domain, or for your own language. At the end, you get your own fine-tuned model that you can use in your application. For Hugging Face, for example, over 5,000 companies are using these fine-tuned language models.
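
A rough sketch of that second step with the Trainer API, where a small public labeled dataset stands in for "your own data"; the dataset, checkpoint, and hyperparameters below are placeholder choices, not recommendations from the talk.

```python
# Fine-tuning a pre-trained checkpoint on a small labeled dataset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")  # stand-in for "your own data"
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

args = TrainingArguments(output_dir="finetuned-bert",
                         num_train_epochs=1,
                         per_device_train_batch_size=8)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=tokenized["test"].select(range(500)))
trainer.train()
```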

Clement Delangue (10:35): This new architecture, and the fact that it's super easy to fine-tune them on your own domain, your own task, your own language, led to the emergence not only of BERT, but of tons of similarly architected transformer models, like GPT, RoBERTa, and XLNet, and to thousands of fine-tuned models that have been published on the Hugging Face model hub. Here, for example, you can see the most popular ones: RoBERTa, BERT, T5, GPT-2, DistilBERT, and many, many more. There are more than 20,000 models that have been shared on the Hugging Face model hub.
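
Whichever architecture you pick, loading a checkpoint from the model hub follows the same pattern: you reference it by name. The names below are a few of the public checkpoints, used here only to illustrate the loading pattern.

```python
# Loading different hub checkpoints is just a matter of changing a name string.
from transformers import AutoModel, AutoTokenizer

for checkpoint in ["bert-base-uncased", "roberta-base", "distilbert-base-uncased"]:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModel.from_pretrained(checkpoint)
    # Print the parameter count to see how the architectures differ in size.
    print(checkpoint, sum(p.numel() for p in model.parameters()), "parameters")
```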

Clement Delangue (11:30): So the next question is, why have these models become so popular? I think part of the answer, first, is that they constantly elevated the state of the art. What that means for your use case in the industry is much more accuracy for your predictions. For example, on SQuAD, which we're seeing here, the Stanford Question Answering Dataset: question answering is the task we've seen on my first slides from Google. You give the model a text as context, then you ask a question, and it finds the answer to your question in the context text. Here, you can see that the exact match score, so the right-answer score, went from 70 to almost 90 in just six months, thanks to transformer models.

Clement Delangue (12:30): The funny thing is that, as the authors realized the benchmark might be too easy for transformer models, they decided to design a new version, SQuAD 2.0, that was supposed to be harder. However, these new models just nailed that benchmark as easily as the first one. Here, you can see, they went from 70 to 90 exact match in just a matter of months, again.
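
As an illustration of the task being measured, here is a small extractive question-answering sketch with the pipeline API; the checkpoint named below is one public SQuAD-fine-tuned model, chosen only as an example.

```python
# Extractive question answering: find the answer span inside a context passage.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(
    question="What does the Transformer architecture rely on instead of recurrence?",
    context="The Transformer is a network architecture based solely on attention "
            "mechanisms, dispensing with recurrence and convolutions entirely.",
)
print(result["answer"], result["score"])
```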

Clement Delangue (13:00): Interestingly, this progress didn't just apply to one type of task, here question answering, but across the board: first to most of the NLP tasks, the text ones, and then to computer vision, biology, chemistry, time series, and other domains.

Clement Delangue (13:23): The interesting thing is that these models learn sufficient general knowledge to perform well on a huge range of benchmarks. For example, GLUE, the General Language Understanding Evaluation benchmark, which evaluates models on a collection of tasks, was topped right away by BERT when it was released. In just a year, models were surpassing human baselines, promisingly moving us from a world where only humans could understand natural language, to a world where algorithms can start to help in classifying, understanding, and generating natural language, too. I'm not saying that we are at human level for all tasks, but the fact that we're getting there for specific benchmarks, for specific tasks like that, is very exciting.
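
As a rough sketch of what evaluating on one GLUE task looks like in practice, the snippet below scores an example sentiment checkpoint on a small slice of SST-2. It uses a naive accuracy count rather than the official GLUE evaluation script, and the model name is an illustrative assumption.

```python
# Quick-and-dirty accuracy check on a slice of GLUE's SST-2 validation split.
from datasets import load_dataset
from transformers import pipeline

sst2 = load_dataset("glue", "sst2", split="validation")
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

n = 200  # small slice to keep the example quick
correct = 0
for example in sst2.select(range(n)):
    pred = classifier(example["sentence"])[0]["label"]
    correct += int((pred == "POSITIVE") == (example["label"] == 1))
print("accuracy on the slice:", correct / n)
```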

Clement Delangue (14:27): And what we've seen is that these models progressively went from being useful in the tasks that we know, like text classification, text generation, and information extraction, to new use cases that weren't possible at all before. Some of those are multimodal models, right? The ability to use transformer models not only for text, but sometimes for text with image, video with text, or speech plus video. For example, you can see here, from the DALL·E model from OpenAI, which has an open-source version on the Hugging Face model hub, that you give it a text prompt, an armchair in the shape of an avocado, and it's going to create a brand new image based on that. This opens the door to so many new use cases in the industry, so many new contexts where we'll be able to apply these transformer models. And as we start to apply them, something I want to talk about is their ethics.
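
Text-to-image generation with the open-source DALL·E variant needs its own codebase, but as a small taste of a multimodal transformer going in the other direction (image to text), here is an image-captioning sketch; the pipeline task and checkpoint name are assumptions about publicly available hub models, not something shown in the talk.

```python
# Image captioning: a vision encoder plus a text decoder in one transformer model.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
# Pass a local path or a URL to any image you have on hand.
print(captioner("photo.jpg"))
```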

Clement Delangue (15:55): We can't talk about the impact of transformer models without talking about ethics. Because the truth is, these models are biased. They're biased because of their data sets, and because of the way we're building them. An example of this is gender bias. Here, you can see, for example, the BERT model used for next word prediction. If you give the BERT model the prompt "this man works as" and ask it to predict the next word, you'll see that it's going to give you lawyer, carpenter, doctor, waiter, mechanic. Whereas if you prompt it with "this woman works as", you're going to get nurse, waitress, teacher, maid, prostitute. You can see a clear gender bias here. This is not acceptable, and it's time for the field not only to think about it, but also to take action.
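
The probe described above can be reproduced in a few lines with the fill-mask pipeline; exact outputs vary a little across library versions, but the skew is easy to see.

```python
# Probe BERT's masked-word predictions for the two prompts from the talk.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
print([r["token_str"] for r in unmasker("This man works as a [MASK].")])
print([r["token_str"] for r in unmasker("This woman works as a [MASK].")])
```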

Clement Delangue (17:07): This is one of the reasons why we've been lucky to have Meg Mitchell, one of the most renowned researchers on the topic, join us. She will help Hugging Face create tools for AI fairness. We've already started working on it. The most interesting thing that I've seen in the past few months is model cards. What are they? They are a standardized way for researchers to communicate and share the potential limitations and biases of their models.

Clement Delangue (17:50): On the Hugging Face model hub, we've tried to incentivize contributors to add model cards, with quite a lot of success. Now we have 5,000 data set and model cards that have been added to the Hugging Face hub. Why is that important? Because, for example, now that we know BERT has some gender bias issues and that is communicated well in the model card, the companies or the engineers using BERT know, for example, not to use it for resume filtering, because it's going to be biased. This is very important.
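
On the hub, a model card is simply the repository's README.md, usually with metadata at the top and a section on limitations and bias. A minimal sketch is below; the content is made up for illustration, not an official template.

```python
# Write a minimal, illustrative model card as the repository's README.md.
model_card = """---
language: en
license: apache-2.0
---
# my-finetuned-bert

## Intended use
Sentiment classification of English product reviews.

## Limitations and bias
Inherits the gender and other social biases of bert-base-uncased;
do not use for people-ranking tasks such as resume filtering.
"""

with open("README.md", "w") as f:
    f.write(model_card)
```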

Clement Delangue (18:35): Another initiative that we've been taking is to try to democratize access to information by bringing these new models to more languages. As we know, English is very dominant in the world today, and a lot of the research resources are dedicated to English. This is something we need to change. So something that we've organized is a big community sprint, with over 500 scientists all over the world, working on bringing these new transformers to other languages, especially what we call "low-resource languages", which are languages that have very small datasets and that very few people are working on. It allowed us, for example, to create more speech models and more translation models in, I think, over a hundred languages, thanks to this community sprint, and through translation, to give access to more information to more people. And this is just the beginning. I think that for a lot of today's societal problems, from climate change to vaccines to the toxicity of public platforms, transformers need to help, and they could help.
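
Once such translation models are on the hub, using them follows the same pattern as everything else. The checkpoint below is one public English-to-French example; models for lower-resource language pairs are loaded the same way, just with a different name.

```python
# Machine translation with a public checkpoint from the hub.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("Transformers help democratize access to information."))
```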

Clement Delangue (20:13): For example, on social networks, with all the toxicity that you have, with all the violent debates and the fake news comments, you can't have humans check all of them. First, because they would be depressed after a few weeks. And second, because the quantity of it is too big. We need machine learning and transformers to help. That's why every single company today needs to be intentional about driving more social impact with transformers. This is why I'm super excited about the impact of transformer models: for them not only to be the most impactful tech of the decade, but also perhaps the most useful one.

Clement Delangue (21:04): If you're interested in keeping the conversation going, feel free to check out Hugging Face or reach out to me directly on Twitter or LinkedIn. Thanks everyone, and enjoy the rest of the event. Bye-bye.

