Scale Events
AI Research and Industry Trends
March 16, 2022

The Week In AI: Robots Shape Shift, Synthetic Data, Weaviate, and 1,000-Layer DeepNet

# Robotics
# Data Augmentation
# The Week in AI

Find out how shape-shifters draw the map for the future of soft robots, witness the rise of data augmentation, hear about the launch of an AI-centric database, and explore a 1,000-layer transformer model.

Greg Coquillo

The Week in AI is a roundup of key AI/ML research and news to keep you informed in the age of high-tech velocity. This week, soft robots change shape to manipulate delicate materials, data augmentation techniques aim to reduce bias, machine learning finds a home in vector databases, and DeepNet scales to 1,000 stable layers. 

Shape-Shifters Will Define the Future of Robots

Physicists have discovered a new way to coat soft robots in materials that allow them to move and function in a more purposeful way. The research, led by the United Kingdom’s University of Bath, was described in Science Advances.

Drawing on a field of physics known as “active matter,” the authors of the study wrapped an elastic ball in a layer of tiny robots to program its shape and behavior. The modeling breakthrough could be a turning point in the way robots are designed. Imagine being able to customize the shape and size of drug delivery capsules based on patient use cases. 

To expand on this concept, researchers may find a way to influence the shape, movement, and behavior of a soft solid not by its natural elasticity but by human-controlled activity on its surface. 

The discovery also unlocks many possibilities for the next generation of machines, whose function will come from the bottom up. In other words, instead of being governed by a central controller (as robotic arms in factories are), these future machines would be made of modular units that cooperate with each other to determine the machines’ ultimate role, movement, and function. 

Based on the research’s progress, this proof of concept promises future technology where soft robots are squishier and better at picking up and manipulating delicate materials. In the next phase, which has already begun, researchers are testing collective behaviors, as in an army of soft robots banded together to act on a common task. 

Synthetic Data Addresses Gap Caused by Imbalance

Generative adversarial networks (GANs) may have gotten a bad reputation due to the rise of deepfakes, but they are also responsible for the rise of synthetic data generation. The technique is used not only to correct the imbalance often present in datasets, but also to create training data when real examples are scarce. 

To train computer vision algorithms last year, researchers at Data Science Nigeria used AI to create synthetic images of African fashion because the wealth of existing datasets featured only Western clothing. 

Although these computer-generated samples exhibit the same statistical characteristics as the genuine article, they will be only as unbiased as the real data used to produce them. For example, synthetic data may help increase the proportion of minority faces in a dataset, but those faces can end up being less natural due to limited ground truth data. 

Nonetheless, there is growing discussion about whether synthetic data for AI deserves its place among the top 10 breakthrough technologies of 2022. AI-based image generation has evolved from driverless cars being trained on virtual streets to supplying digital human faces on demand and helping to create more accurate finance and insurance use cases. 

It doesn’t stop there. Text data augmentation techniques are also being used to address the lack of data from minority languages, known as the “Big Data Wall,” when training deep neural networks. Techniques including textual noise injection, spelling-error injection, and word replacement using a thesaurus or paraphrase generation have increased model accuracy by up to 21.6% on supervised binary classification tasks. 
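To make the word-replacement idea concrete, here is a minimal sketch of thesaurus-based augmentation. The `augment` helper and its `synonyms` dictionary are hypothetical illustrations, not part of any library named in the article; real pipelines typically combine several such perturbations.

```python
import random

def augment(sentence, synonyms, p=0.3, seed=0):
    """Replace words with thesaurus synonyms with probability p.

    A seeded RNG keeps the augmentation reproducible, which matters
    when regenerating a training set.
    """
    rng = random.Random(seed)
    words = []
    for w in sentence.split():
        # Only words present in the (hypothetical) thesaurus are candidates.
        if w in synonyms and rng.random() < p:
            w = rng.choice(synonyms[w])
        words.append(w)
    return " ".join(words)

# Toy usage: expand a tiny corpus with paraphrased variants.
thesaurus = {"quick": ["fast", "rapid"], "happy": ["glad"]}
print(augment("the quick fox", thesaurus, p=0.9))
```

Noise and spelling-error injection follow the same pattern, perturbing characters instead of whole words.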

Weaviate, the AI-Centric Database with High-Performance Search, Debuts

SeMI Technologies launched a vector database called Weaviate that lets users store and search unstructured data as vectors across text, audio, and images, unlocking incredibly powerful use cases. 

Weaviate is part of a wave of AI-centric databases that merge machine learning with data storage. 

The database is built around a vector-based core that leverages AI algorithms to broaden search scope by removing the constraints of exact matching. For example, while traditional databases require attributes to be spelled correctly or include the exact code for locating records, Weaviate can find entries using “similar” or “nearby” search methods. 

Machine-learned embeddings let the open-source database define what “nearby” means in a multidimensional vector space. 
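The “nearby” idea boils down to ranking stored embeddings by their angle to a query embedding. This is a minimal sketch of that ranking step using cosine similarity, not Weaviate’s actual implementation (which uses approximate nearest-neighbor indexing for scale); the `nearest` helper is illustrative.

```python
import numpy as np

def nearest(query, vectors, k=3):
    """Return indices of the k stored vectors closest to the query.

    Cosine similarity measures the angle between embeddings, so a
    misspelled or paraphrased query can still land near the right entry.
    """
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ q                      # one dot product per stored vector
    return np.argsort(-sims)[:k]      # highest similarity first

# Toy usage: three stored embeddings, one query.
store = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(nearest(np.array([1.0, 0.05]), store))
```

At production scale, an exact scan like this is replaced by an approximate index so that search stays fast as the collection grows.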

Some of Weaviate’s prebuilt models include Deepset’s Haystack for semantic search and Jina.ai’s document-based search. This technology supports the demands of computation-heavy AI algorithms as they plow through larger and more complex sets of data. 

DeepNet Scales to 1,000 Layers

Microsoft researchers developed an effective deep-transformer stabilizer, called DeepNorm, that has allowed DeepNet to scale to 1,000 layers. Recent years have witnessed a trend toward large-scale transformer models, with capacity growing exponentially from millions of parameters to billions and even trillions. Large-scale models yield high performance on a wide range of tasks and show impressive abilities in few-shot and zero-shot learning. 

Despite their enormous parameter counts, these models’ depth has been limited by the training instability of transformers. DeepNorm is a new normalization function that combines the best of both worlds: the good performance of post-layer normalization and the stable training of pre-layer normalization.  

This makes DeepNorm a good alternative for both stabilizing models and allowing them to scale. With 2,500 attention and feed-forward network sublayers, DeepNet is now one order of magnitude deeper than previous deep transformers. 

To test their scaling theory, the researchers showed that a 200-layer DeepNet with 3.2 billion parameters outperformed a 48-layer, 12-billion-parameter multilingual model covering 7,482 translation directions by 5 BLEU points. 

In this first study, the researchers focused their experiments on machine translation. In the future they will look to extend DeepNet to more diverse tasks, such as language model pretraining, protein structure prediction, and BERT pretraining of image transformers.

Why These Stories Matter

Over the years, robots have been a physical manifestation of AI. We will continue to see them evolve through integration with nanotechnology that will help humans live healthy lives. 

On the data front, more users will leverage augmentation techniques to appease some of the lingering concerns around model bias. However, there’s much room for improvement. 

In addition, the emergence of AI workloads directly in databases using high-performance computing is a sign that data-hungry AI is less likely to “run short” of supply. 

And as AI continues to transform into trillion-parameter multitask models, we should continue to collaborate with researchers, domain experts, regulators, and end users to ensure a safe and ethical application of the technology. 

Until next time, stay informed and get involved!
