
The Week in AI: Rapid Robots, Quantum Leaps Roadblock, Generative Trains ML, AI Forecasts

MIT robot outruns humans, quantum ML gets a boost, generative models produce more realistic synthetic data, and AI improves weather forecast accuracy. Greg Coquillo gets you up to speed.
Greg Coquillo
April 6, 2022
The Week in AI is a roundup of key AI/ML research and news to keep you informed in the age of high-tech velocity. This week: MIT’s Mini Cheetah learns how to run faster than the average human, quantum machine learning solves a data scalability roadblock, generative models train machine learning, and statistics and AI combine to help weather forecasts become more accurate.

MIT’s Mini Cheetah Teaches Itself How to Run

Robotic researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used AI-powered simulations to help their Mini Cheetah robot quickly learn how to run, resulting in a new gait that lets the robot move faster. Engineering and programming robots to handle every situation they might encounter in the real world has proven prohibitively difficult; adaptability is the key to making them move faster and more confidently across varying terrains.

Previous approaches to terrain adaptation relied on careful, time-consuming manual engineering, which inevitably set robots up for failure when they encountered something new. MIT’s improved approach instead lets robots learn by trial and error. However, as with a human toddler, letting robots simply run wild to gather these experiences on their own isn’t safe.

Researchers accelerated Mini Cheetah’s development by skipping the robot’s “childhood,” full of the random learning experiences that most humans go through, and turned to AI and simulations. In just three hours’ time, Mini Cheetah experienced 100 days’ worth of virtual adventures over a diverse variety of terrains and learned countless new techniques for modifying its gait so that it can effectively locomote from Point A to Point B no matter the terrain. 
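The core idea, randomizing the simulated terrain and keeping only the gait changes that improve performance, can be sketched as a toy trial-and-error loop. The reward model and parameters below are hypothetical stand-ins, not MIT's actual training setup:

```python
import random

def simulate_episode(gait_param, terrain_difficulty):
    """Hypothetical reward: raw speed, penalized when the gait
    mismatches the terrain (all numbers are illustrative)."""
    return gait_param - terrain_difficulty * (gait_param - 1.0) ** 2

def train(episodes=1000, seed=0):
    rng = random.Random(seed)
    gait = 0.0
    for _ in range(episodes):
        terrain = rng.uniform(0.5, 2.0)       # domain randomization
        candidate = gait + rng.gauss(0, 0.1)  # trial-and-error perturbation
        # Keep the change only if it improves this episode's reward.
        if simulate_episode(candidate, terrain) > simulate_episode(gait, terrain):
            gait = candidate
    return gait

print(train())  # converges toward a gait that works across terrains
```

Because every episode samples a fresh terrain, the surviving gait parameter is one that performs well across the whole distribution, which is the intuition behind compressing "100 days" of varied experience into hours of simulation.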

The newly acquired skills also allow the robot to monitor how its components are functioning, helping it run more efficiently. Mini Cheetah can now reach a top speed of over 8.7 mph, faster than the average human.

However, teaching a robot how to run based on this new approach isn’t the endgame. Scientists plan to teach robot hands how to safely handle thousands of different objects they’ve never touched before and teach autonomous drones how to fly in inclement weather, by using safe simulations instead of sending them out in the real world to learn by trial and error. 

Qubit Entanglement Gives Quantum Machine Learning a Boost

Scientists have discovered a way to eliminate the exponential training-data overhead in quantum machine learning (QML) by proving a quantum version of the no-free-lunch theorem. The new study, from Kunal Sharma et al., finds that the strange quantum phenomenon of entanglement, which Einstein dubbed “spooky action at a distance,” may be the answer to successfully implementing QML. Prior to this discovery, the requirement that training data scale exponentially with model size was seen as a roadblock to full QML application.

The no-free-lunch theorem implies that an ML algorithm’s average performance depends on how much data it has, suggesting that the amount of data ultimately limits ML’s performance. Applied to QML, the theorem indicates that the exponential growth of required training data could eliminate the edge that quantum computing might otherwise have over classical computing.

The scientists’ findings, verified on quantum hardware startup Rigetti’s Aspen-4 quantum computer, imply that adding more entanglement to QML can yield an exponential scale-up in computing power. An extra set of entangled qubits, known as “ancillas,” lets the QML circuit interact with many quantum states in the training data at the same time, producing a speedup even with relatively few ancillas.
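Entanglement itself is easy to illustrate numerically. The sketch below is illustrative only, not the study's QML construction: it shows that a Bell state cannot be factored into two independent single-qubit states, which is the property entangled ancillas exploit to touch many states at once.

```python
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Product state |0>|0> factors trivially into two single-qubit states.
product = np.kron(zero, zero)

# Bell state (|00> + |11>) / sqrt(2): the canonical entangled pair.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

def schmidt_rank(state):
    """Count nonzero singular values of the 2x2 amplitude matrix.
    Rank 1 means separable; rank > 1 means entangled."""
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-12))

print(schmidt_rank(product))  # 1 -> separable
print(schmidt_rank(bell))     # 2 -> entangled
```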

Scientists describe one potential futuristic application of this work as “black box uploading.” For example, they could use black box uploading to analyze the standard model, currently the best explanation for how all the known elementary particles behave, based on data collected from entangled protons being collided inside the atom smashers at CERN, the largest particle physics lab in the world. 

Generative Model Produces Realistic Synthetic Data for ML Training

MIT researchers developed a method for training ML models that employs a generative model to produce highly realistic synthetic data, which can then train another model for downstream vision tasks. Datasets can cost millions of dollars to create, if usable data exists at all. And even the best ones can contain biases that negatively impact a model’s performance.

To address these constraints, many scientists have evaluated the use of synthetic data from a generative model instead of from real data, while getting around some of the privacy and usage-rights problems that limit how actual data may be distributed. Moreover, generative models can be configured to eliminate particular attributes, such as race or gender, to overcome biases in traditional datasets. 

One of the benefits of generative models lies in learning how to modify the underlying data on which they are trained. For example, a model trained on pictures of vehicles can “imagine” how cars would look in new scenarios or situations it hasn’t seen before, thus generating images with cars in different positions, colors, or sizes. When they paired a contrastive model with a pretrained generative model, the researchers discovered that their system outperformed several image classification models trained on real data.
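One way such a pairing can work is sketched below with toy numpy stand-ins for the generator and encoder (all weights, dimensions, and functions are illustrative assumptions, not the paper's architecture): perturbing a latent code makes the generator emit two views of the same content, giving free positive pairs for an InfoNCE-style contrastive loss.

```python
import numpy as np

rng = np.random.default_rng(0)
W_gen = rng.normal(size=(8, 16))   # toy generator weights: latent -> "image"
W_enc = rng.normal(size=(16, 4))   # toy encoder weights: "image" -> embedding

def generate(z):
    return np.tanh(z @ W_gen)      # stand-in for a pretrained generator

def encode(x):
    e = x @ W_enc
    return e / np.linalg.norm(e, axis=-1, keepdims=True)

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive loss: each anchor should match its own positive
    against all other positives in the batch."""
    logits = anchors @ positives.T / temperature
    labels = np.arange(len(anchors))
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

z = rng.normal(size=(32, 8))
views_a = encode(generate(z))
views_b = encode(generate(z + 0.05 * rng.normal(size=z.shape)))  # latent jitter
print(info_nce(views_a, views_b))
```

In a real pipeline the encoder would be trained to minimize this loss, and the generator's latent jitter plays the role that random crops and color shifts play in ordinary contrastive learning.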

Further experiments showed that increasing the number of uniquely generated samples used to train the contrastive model resulted in even higher performance. The researchers warn that these models can pose privacy concerns because they can reveal source data in some situations. They plan to address this issue in the near future, in addition to applying the novel training method to edge cases, such as a dog and its owner sprinting down a highway, which are rarely present in real data.

Statistics and AI Methods Help Correct Systematic Error of Weather Models

Researchers from Karlsruhe Institute of Technology (KIT) are using statistical and machine learning methods to increase the accuracy and reliability of wind gust forecasts, enabling more effective warnings. Extreme weather phenomena such as wind gusts and winter storms require precise forecasts to prevent severe damage to the environment and better protect humans and animals.

The new study finds that taking geographical information and meteorological variables such as temperature and solar radiation into account significantly improves forecast quality, especially when using neural network–based AI models. As KIT doctoral researcher Benedikt Schulz told Phys.org, “wind gusts are difficult to model, and their predictability with the numerical weather forecast models used by weather services is limited and subject to uncertainties.”

Today, despite efforts to better estimate uncertainties, ensemble weather forecasts still contain systematic errors. To improve forecast accuracy, Sebastian Lerch and Schulz applied various statistical and AI methods to the postprocessing of ensemble forecasts of wind gusts, achieving a 36% error reduction. In addition, they analyzed German Weather Service forecasts at 175 observation stations and found that AI methods produced better forecasts at more than 92% of the stations.

This is due to neural networks’ ability to learn complex and nonlinear relationships from big datasets. To maintain the momentum for developing weather forecast methods at the intersection of statistics and AI, the researchers plan to further collaborate with various weather services around the world.
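The basic postprocessing idea can be sketched with a toy example (the numbers and the linear model below are illustrative assumptions, not KIT's method): a correction fitted to the ensemble mean and spread removes a systematic bias carried by the raw ensemble forecasts.

```python
import numpy as np

rng = np.random.default_rng(1)
n, members = 500, 20
truth = rng.gamma(shape=2.0, scale=5.0, size=n)     # observed gusts (toy units)

# Raw ensemble: biased low and noisy, as raw ensembles often are.
ensemble = truth[:, None] * 0.8 - 1.0 + rng.normal(0, 1.5, size=(n, members))

mean = ensemble.mean(axis=1)
spread = ensemble.std(axis=1)
X = np.column_stack([np.ones(n), mean, spread])     # postprocessing features
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)    # fit linear correction
corrected = X @ coef

raw_mae = np.abs(mean - truth).mean()
post_mae = np.abs(corrected - truth).mean()
print(f"raw MAE {raw_mae:.2f}, postprocessed MAE {post_mae:.2f}")
```

The KIT work replaces this linear correction with neural networks fed with additional geographical and meteorological predictors, which is what lets it capture the nonlinear relationships the linear fit misses.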

Why These Stories Matter

Teaching robots how to quickly adapt to unpredictable terrain conditions via AI simulations unlocks countless applications and use cases where deploying a human is inadequate or dangerous, such as rescue missions. 

Though QML promises a speedup over classical methods through qubit entanglement, hybrid classical-quantum systems remain the more viable solution for near-term classification needs, given commercial launch uncertainties in the world of quantum computing.

Meanwhile, the development of hyper-realistic synthetic data produced by generative models will continue to improve the performance of complex ML algorithms. Whether AI is used to better forecast the weather or predict the quality of next season’s agricultural harvest, humans stand to greatly benefit for decades to come. 

Until next time, stay informed and get involved!

Learn More

Data Augmentation in Computer Vision: A Guide
