February 7, 2022

Building the Next Generation of NLP Applications: Why Research and Applied ML Are Both Essential


Richard Socher explains why it's important to strike a balance between research and applied ML, so you can iterate with customers and impact the actual product and revenue.

Esther Shein

The future of AI is here—it’s just not equally distributed. And while AI has the potential to fundamentally change some industries, there will be, at some point, a cutoff between the have-nots and the haves. 

That was the message from Richard Socher, the fifth-most-cited researcher in natural language processing (NLP) and the CEO of You.com, an ad-free, privacy-preserving search engine. 

In a sweeping fireside chat with Scale AI CEO Alexandr Wang at the TransformX conference, Socher, who was previously the chief scientist at Salesforce, discussed how corporate leaders should apply research to production applications. He also spoke about the shrinking gap between pure research and the building of revenue-generating products, and how enterprises in all industries can avoid the common pitfalls of AI adoption.

“What's fascinating about AI nowadays is that the gap between pure, publishable academic research and actual products is actually getting smaller and smaller, largely thanks to deep learning, where you have amazing libraries and you can hack up a quick prototype. You can play around in the research world with new ideas.” —Richard Socher

Two Buckets for Using AI

Thanks to a lot of tooling, people can very quickly get those ideas into production. Socher said he learned to strike a balance between research and applied ML by focusing on how to iterate with customers very quickly and eventually have impact on the actual product and generate revenue.

You gain the freedom to do pure research by solving applied problems, such as improving the capabilities of chatbots. “There are a bunch of simple things that you can already do in a product right now that would help companies massively save time and money,” he said.

Wang asked Socher why most companies have yet to become early adopters of AI and machine learning (ML), despite the incredible amount of progress that has been made from a research perspective compared to a decade ago. Every industry is going to be changed by AI, but companies will fall into two buckets, Socher said: those whose core business will be transformed by applying AI to most of their processes (self-driving cars, for example), and those that will use AI mainly to gain some efficiencies.

Pick an AI Core Competency

One big trend in ML and deep learning has been creating larger models and getting more data to fuel those algorithms. Socher believes that, over time, algorithms will matter less and less. What becomes more important is how you train a model, make it better, and fine-tune it.

Once a company decides whether it sits in the bucket where AI will transform its entire industry or the one where it will just give it greater efficiencies, it can figure out how it will be impacted by the current advancements in AI research.

Socher said companies must determine the level of AI they will need for their core competency. For example, an insurance company will need top AI people who can identify and classify risk at very high levels of accuracy, because that’s the most important area of its business. But if you just need a service chatbot or marketing automation, you might be able to rely on an outside partner.

He expects there will be cases where companies apply a mix of internal and external AI expertise.

The Future of Software Engineering Is in Using Libraries

Software development has reached a point where it’s less about writing new kinds of algorithmic code and more about using the libraries that are already available, Socher said. Also on tap is increased use of SaaS and B2B offerings that help with testing and with the automated scaling of models and analytics.

Companies have to decide when they can use external services, when they can rely on an existing, open-source software package, and when they have to innovate and build something themselves. Services such as the OpenAI Codex, which translates natural language to code and is in private beta-testing mode, will amplify that, Socher said.

Fewer people will need to address low-level, difficult algorithmic questions, he said. More will be needed to craft packages and create something that has no bugs and can still function at a high level despite layers of abstraction.
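To make that concrete, here is a minimal sketch of what relying on an existing, open-source package can look like in practice. It assumes the Hugging Face transformers library and its pipeline API, which Socher did not name; the point is only that an off-the-shelf model stands in for what once required custom algorithmic code.

    # Minimal sketch: an off-the-shelf NLP capability from an existing
    # open-source library (Hugging Face transformers, used here only as
    # an illustration), with no custom algorithmic code.
    from transformers import pipeline

    # Downloads a pretrained sentiment model on first use.
    classifier = pipeline("sentiment-analysis")

    print(classifier("The onboarding flow was painless and support was fast."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.999}]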

It’s Hard to Build Something That’s Just Drag and Drop

Today, it’s easier to build on top of new abstractions—for example, creating a web app that scales reasonably well even without coding, Socher said. This will be the case with AI as well, but it’s still not there yet.

Right now, it’s hard to build something that’s just drag and drop. Socher said he believes that, eventually, with just a few clicks, “all these leaky abstractions will get less leaky and you’ll … have automated away more and more of that complexity.”

Already, a lot of AI tooling companies are able to abstract away a lot of complex NLP models and provide an out-of-the-box NLP system.

“We see that in the ML tooling space already. And this is actually one area that I love about AI. … It’s a very forward-looking community that always tries new experiments on how to publish papers.” —Richard Socher

That’s a beautiful way to just speed up innovation across the whole community, he added.

More ML Tools Will Be Available

Socher made a few predictions about the future of AI. One is that AI will become a more common, general tool, with more automation. Tools will provide the ability to run experiments and collect and label data. The tools will also help users understand problems in the data, biases that may exist in their datasets, and issues with distribution shifts, he said.
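What such a distribution-shift check might look like under the hood is sketched below. This is a generic statistical test written in Python, not a feature of any particular tool Socher mentioned, and the column names are illustrative.

    # Minimal sketch: flag features whose production distribution has
    # drifted from the training distribution, using a two-sample
    # Kolmogorov-Smirnov test per numeric column.
    import numpy as np
    from scipy.stats import ks_2samp

    def shifted_features(train, prod, names, alpha=0.01):
        """Return names of columns that differ significantly between sets."""
        flagged = []
        for i, name in enumerate(names):
            _, p_value = ks_2samp(train[:, i], prod[:, i])
            if p_value < alpha:
                flagged.append(name)
        return flagged

    # Synthetic example: the second feature drifts upward in production.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(5000, 2))
    prod = np.column_stack([rng.normal(0.0, 1.0, 5000),
                            rng.normal(0.8, 1.0, 5000)])
    print(shifted_features(train, prod, ["age", "claim_amount"]))
    # ['claim_amount']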

There will also be tools aimed at specific verticals to develop applications in healthcare, automotive, B2B, and enterprise software.

Challenges in Reaching Artificial General Intelligence

Over the long term, Socher also predicted, people will continue making strides toward solving interesting artificial general intelligence (AGI) problems. But there are three major roadblocks, and not enough people are working on them, he added. Socher’s three calls to action for the AI community to move toward AGI are:

  • Better multitask learning
  • Better objective functions or more research on objective functions, to ultimately get to the point where AI or ML generates its own objective functions
  • Greater ability to perform fuzzy logic and other logical reasoning

The AI community is set up to do one task better than everyone else, and ignore everything else, as opposed to doing 10 tasks well, Socher said.
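To make the multitask idea concrete, the sketch below shows one common pattern: a single shared encoder feeding several task-specific heads, so every task’s gradient improves the shared representation. It is written in PyTorch purely for illustration; Socher did not prescribe a framework or an architecture.

    # Minimal multitask-learning sketch: one shared encoder, two task heads.
    import torch
    import torch.nn as nn

    class MultiTaskModel(nn.Module):
        def __init__(self, vocab_size=10000, hidden=128,
                     n_sentiment=2, n_topics=10):
            super().__init__()
            # Shared text encoder used by every task.
            self.embed = nn.EmbeddingBag(vocab_size, hidden)
            self.encoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            # Lightweight task-specific heads.
            self.sentiment_head = nn.Linear(hidden, n_sentiment)
            self.topic_head = nn.Linear(hidden, n_topics)

        def forward(self, token_ids):
            shared = self.encoder(self.embed(token_ids))
            return self.sentiment_head(shared), self.topic_head(shared)

    # Hypothetical training step: summing the per-task losses means
    # gradients from both tasks shape the shared encoder.
    model = MultiTaskModel()
    tokens = torch.randint(0, 10000, (32, 20))      # batch of token ids
    sent_y = torch.randint(0, 2, (32,))
    topic_y = torch.randint(0, 10, (32,))
    sent_logits, topic_logits = model(tokens)
    loss = (nn.functional.cross_entropy(sent_logits, sent_y)
            + nn.functional.cross_entropy(topic_logits, topic_y))
    loss.backward()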

Learn More

Watch the full conversation: Building the Next Generation of NLP Applications With Richard Socher (video, 41:39, recorded October 6, 2021).