AI Research and Industry Trends
July 27, 2022

The Week in AI: A Robot Learns to Walk, an AI Sees Like Humans, Project AirSim Improves Drone Training, and More


AI teaches a robot dog to walk in one hour, better vision for robots, PrefixRL designs faster, more efficient chips, and Project AirSim makes drone training easier.

Greg Coquillo

The Week in AI is a roundup of high-impact AI/ML research and news to keep you up to date in the fast-moving world of enterprise machine learning. From a robot dog that uses neurons to quickly learn how to walk to an AI-based algorithm that designs faster, more power-efficient chips, here are this week’s highlights.  

Four-Legged Robot Learns How to Walk within an Hour

Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) designed Morti, a four-legged robot the size of a dog, in an effort to better understand how newborn animals learn to walk so quickly. The study, “Learning Plastic Matching of Robot Dynamics in Closed-loop Central Pattern Generators,” was recently published in the journal Nature Machine Intelligence.

In animals, the spinal cord hosts a network of neurons that coordinates the leg muscles and tendons, helping them walk while avoiding environmental hazards. At birth, newborns immediately rely on innate reflexes that keep them from falling and getting hurt, but they remain exposed to predators during the time it takes to gain full control of their motor functions. After some practice, the nervous system falls fully in sync with the leg muscles, and the newborns can keep up with their mother’s pace when walking. By emulating that process, the researchers were able to teach the robot to walk in an hour.

To teach Morti to walk, researchers equipped the robot with two components: 

  1. A virtual spinal cord hosting a central pattern generator (CPG) that encodes modeled walking patterns. The CPG plays the role of the network of neurons that, in humans and animals, drives muscle contractions; it sets the control parameters the robot needs to take appropriate steps while walking. 
  2. A Bayesian optimization algorithm that runs the CPG’s parameters on the robot, compares the sensory data from the leg movements with the motion the virtual spinal cord expects, and makes the necessary adjustments. 

The CPG parameters, the optimization algorithm, and the virtual spinal cord data are coordinated in an iterative loop that lets Morti learn from its stumbles and adjust its motion to the environment, as sketched below. When stumbles persist over time, a sign that the robot is struggling to adapt to an unexpected surface, the learning algorithm updates the walking parameters. 
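To make the loop concrete, here is a minimal Python sketch, not the MPI-IS code: a single-oscillator CPG stands in for the virtual spinal cord, a noisy function stands in for the leg sensors, and a simple random-perturbation search plays the role of the Bayesian optimizer, tuning the CPG's frequency and amplitude to shrink the gap between expected and sensed motion. All names and values are illustrative.

```python
import numpy as np

# Hypothetical sketch of a Morti-style learning loop: a central pattern
# generator (CPG) produces rhythmic leg targets, and a simple optimizer
# nudges its parameters to reduce the mismatch between expected and
# sensed leg motion. Names and values are illustrative, not from the
# MPI-IS codebase.

class CPG:
    """Phase oscillator producing a target trajectory for one leg."""
    def __init__(self, frequency=1.5, amplitude=0.3):
        self.frequency = frequency      # stride rate (Hz)
        self.amplitude = amplitude      # step size scale
        self.phase = 0.0

    def step(self, dt):
        self.phase = (self.phase + 2 * np.pi * self.frequency * dt) % (2 * np.pi)
        return self.amplitude * np.sin(self.phase)   # expected leg position

def sensed_leg_position(target):
    """Stand-in for the robot's leg sensors (attenuation plus noise)."""
    return 0.8 * target + np.random.normal(scale=0.02)

def rollout_error(params, steps=200, dt=0.01):
    """Mean mismatch between the CPG's expectation and the sensed motion."""
    cpg = CPG(frequency=params[0], amplitude=params[1])
    err = 0.0
    for _ in range(steps):
        expected = cpg.step(dt)
        err += abs(expected - sensed_leg_position(expected))
    return err / steps

# Iterative tuning loop: the paper uses Bayesian optimization; a simple
# random-perturbation search plays the same role here for illustration.
params = np.array([1.5, 0.3])
best_err = rollout_error(params)
for _ in range(50):
    candidate = params + np.random.normal(scale=0.05, size=2)
    err = rollout_error(candidate)
    if err < best_err:              # keep parameters that reduce "stumbling"
        params, best_err = candidate, err
print(f"tuned CPG params: {params}, residual error: {best_err:.4f}")
```

In the real system the optimizer tunes many more parameters per leg and the error signal comes from actual stumbles, but the feedback structure is the same: generate a rhythm, measure the mismatch, and adjust.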

To keep the robot running efficiently, the researchers designed it to consume just 5 watts of power (industrial robots can draw hundreds of watts in use). In addition, traditional controllers guide a robot’s walking using an explicit model of its mass and body geometry, which amounts to giving the robot an artificial shortcut to learning to walk. Like animals in nature, Morti knows nothing about its own mass or body geometry; it relies on its adaptive algorithm and the data collected from its leg sensors to discover how to walk successfully. 

This AI Sees Images Better Than the Human Eye

University of Central Florida researchers have developed an artificial intelligence system that can see like a human and immediately understand its environment to take further actions. According to the study, recently published in the journal ACS Nano, the system performs better than the human eye due to its ability to detect different ranges of wavelengths, from ultraviolet to visible to infrared. Robotics and self-driving cars should benefit greatly from this new technology, the researchers say. 

Today’s conventional imaging technologies decouple data processing, recognition, and sensing; this system combines all three on chips about one inch across. With this technology, a self-driving car traveling at night could capture imagery well beyond what lies directly ahead and build a 360-degree assessment of its surroundings, adapting better to road conditions and driving more safely. 

The secret of the new AI tool lies in nanoscale surfaces made of molybdenum disulfide and platinum ditelluride, which let it discern and recognize light across different wavelengths. To test its accuracy, the researchers had the device sense and recognize the digit 3 presented in ultraviolet light and the digit 8 in infrared. 

The device achieved between 70% and 80% accuracy at sensing and recognizing these digits. The researchers estimate that such devices could become commercially available within the next five to 10 years, and they plan to position the versatile wavelength-processing capability as a unique selling point. 
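As a rough illustration of the recognition step (not the UCF team's method), the sketch below treats the chip's ultraviolet and infrared responses as two channels of a single input and matches a noisy reading against stored two-band templates for the digits 3 and 8. The patterns, the noise level, and the nearest-template rule are all invented for this example.

```python
import numpy as np

# Illustrative sketch only: stack UV and IR responses as channels of one
# input and match against stored two-band templates, mirroring the
# UV "3" / IR "8" test described above. Everything here is invented.

rng = np.random.default_rng(0)

def make_pattern(seed, shape=(8, 8, 2)):       # (height, width, [uv, ir])
    return (np.random.default_rng(seed).random(shape) > 0.5).astype(float)

templates = {"3": make_pattern(3), "8": make_pattern(8)}

def classify(reading):
    """Return the digit whose two-band template best matches the reading."""
    scores = {d: np.abs(reading - t).mean() for d, t in templates.items()}
    return min(scores, key=scores.get)

# Simulate a noisy sensor reading of "3": flip ~25% of pixels, roughly in
# line with the 70-80% recognition accuracy reported for the prototype.
reading = np.abs(templates["3"] - (rng.random((8, 8, 2)) < 0.25))
print(classify(reading))    # usually "3"
```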

Microsoft’s Project AirSim Accelerates Drone Flight Training with AI

The Microsoft research team recently launched Project AirSim, an updated version of its open-source AirSim tool that leverages AI to design, build, train, and test autonomous aircraft such as drones in a simulated 3D environment. With the new version of the tool, users no longer need in-depth coding or ML knowledge to fine-tune AI models. 

The goal of the end-to-end platform is to allow users to save time and cost while developing models that can successfully acquire, interpret, select, and organize sensory information captured in the real world, using fault-tolerant, reusable simulations. According to Microsoft, the system collects, analyzes, and runs pre-trained AI models on data from “programmable simulations of sensors, physics, airframes, batteries, pre-defined, geo-specific dynamic environments.” 

Leveraging the Azure infrastructure to speed up AI training, Project AirSim will allow users to search, manage, and combine real-world, synthetic, and hybrid data from first- and third-party providers. Microsoft will make libraries of simulated 3D environments available to AI model trainers, and with minimal technical knowledge, those trainers will be able to teach drones which steps to take during the different phases of a flight, including takeoff, navigation, and landing. 
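Project AirSim's new low-code tooling is only in limited preview, but the flight phases mentioned above can be illustrated with the Python API of the original open-source AirSim, which the new platform updates. The sketch below assumes an AirSim simulation is already running locally; the waypoint coordinates and speed are arbitrary.

```python
import airsim  # Python client for the original open-source AirSim

# Hedged sketch: scripting the three flight phases the article mentions
# (takeoff, navigation, landing) against a locally running AirSim simulation.

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Takeoff phase
client.takeoffAsync().join()

# Navigation phase: fly to a waypoint 20 m ahead at 10 m altitude
# (NED coordinates, so negative z is up), at 5 m/s.
client.moveToPositionAsync(20, 0, -10, 5).join()

# Landing phase
client.landAsync().join()

client.armDisarm(False)
client.enableApiControl(False)
```

Project AirSim's promise is that this kind of scripting, and the model training built on top of it, will be wrapped in higher-level tooling that requires far less coding expertise.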

Project AirSim has many potential use cases, from package- and passenger-carrying drones to aerial inspections of manufacturing plants. For now, Microsoft has made the system available only as a limited preview, but it plans to release it for enterprise and consumer use soon. 

NVIDIA’s PrefixRL Builds Faster, Smaller, More Power-Efficient Circuits 

The NVIDIA AI Research team unveiled PrefixRL, a deep reinforcement learning-based approach to designing and building circuits that process information faster and run more efficiently. The resulting chips are smaller than conventionally designed ones and improve in performance with each design iteration. NVIDIA’s paper, “PrefixRL: Optimization of Parallel Prefix Circuits Using Deep Reinforcement Learning,” was published on arXiv.

The goal of PrefixRL, which focuses on arithmetic circuits called “parallel prefix circuits,” is to find the optimal balance between area, delay (a measure of processing speed), and power consumption. During design iterations, the reinforcement learning agent is rewarded as the circuit’s area and processing latency improve. 

Researchers trained the agent with the Q-learning algorithm, feeding a grid representation of the prefix circuit into a fully convolutional neural network that outputs Q-values for modifying the circuit. However, PrefixRL is computationally demanding: physical simulation required 256 CPUs for every GPU, and training the 64-bit adder case consumed more than 32,000 GPU hours. 
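The paper describes the network and reward at a high level; the sketch below is a minimal PyTorch illustration, not NVIDIA's implementation, of what a fully convolutional Q-network over a grid encoding of a prefix circuit might look like, together with a reward that trades off area against delay. The channel counts, the per-cell add/remove actions, and the tradeoff weight w are assumptions made for this example.

```python
import torch
import torch.nn as nn

# Minimal sketch of the ingredients described above: a fully convolutional
# Q-network that reads a grid encoding of a parallel prefix circuit and
# scores per-cell actions, plus a reward trading off area against delay.
# Layer sizes, the grid encoding, and the weight w are illustrative.

class PrefixQNet(nn.Module):
    def __init__(self, in_channels=4, actions_per_cell=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # One Q-value per (cell, action), e.g. add or remove a prefix node.
            nn.Conv2d(64, actions_per_cell, kernel_size=1),
        )

    def forward(self, grid):           # grid: (batch, channels, N, N)
        return self.net(grid)          # Q-values: (batch, actions, N, N)

def reward(area_before, delay_before, area_after, delay_after, w=0.5):
    """Reward improvements in area and delay, weighted by tradeoff w."""
    return w * (area_before - area_after) + (1 - w) * (delay_before - delay_after)

# Example: score actions for a single 32x32 circuit grid.
q_net = PrefixQNet()
q_values = q_net(torch.zeros(1, 4, 32, 32))
print(q_values.shape)                  # torch.Size([1, 2, 32, 32])
```

In the real system, each candidate circuit must be evaluated to measure its area and delay, which helps explain why the physical simulation step is so CPU-hungry.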

To address this challenge, the team developed Raptor, a distributed reinforcement learning platform that takes advantage of NVIDIA’s hardware infrastructure to make training more efficient. Specifically, PrefixRL can use Raptor to distribute workloads across CPUs, GPUs, and even spot instances when available. In testing, PrefixRL designed 64-bit adder circuits with 25% less area than comparable circuits produced by a state-of-the-art electronic design automation (EDA) tool, while maintaining the same delay. 
