Scale Events
AI Research and Industry Trends
June 1, 2022

The Week in AI: An Image Generator, Universe Mapper, Wildfire Predictor, and Marine Sound Translator

Imagen converts text to images, DeepSqueak decodes marine mammal sounds, SHEEP maps the universe, and a vegetation mapper helps prevent fires.

Greg Coquillo

The Week in AI is a roundup of high-impact AI/ML research and news to keep you up to date in the fast-moving world of enterprise machine learning. From a state-of-the-art, text-to-image generator to a marine mammal language decoder, here are this week’s highlights.  

Google’s Imagen Wows the Crowd

Google researchers recently announced the launch of Imagen, a text-to-image diffusion model that’s taking on OpenAI’s DALL·E 2.

It creates images based on written text or words that describe a scenario. Imagine a cute Corgi living in a house made out of sushi. A human artist can bring this description to life in artwork, but now several AI models can do this with no intervention other than the text description. 

Under the hood, Imagen builds on the power of large transformer language models to understand text. The most remarkable part of Imagen is not its accuracy, however; it’s the photorealism depicted in the pictures. Google researchers also made a key discovery with Imagen: Generic large language models are surprisingly effective at encoding text for image synthesis. 
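
To make that architecture concrete, here is a minimal sketch, assuming a toy setup rather than Imagen’s actual code: a frozen, pretrained language model encodes the prompt, and a diffusion model is conditioned on that embedding while it iteratively denoises an image. Every class, dimension, and update rule below is illustrative.

```python
# A minimal, illustrative sketch (not Imagen's code) of text-conditioned diffusion:
# a frozen text encoder produces embeddings, and a denoiser conditioned on them
# removes noise from an image step by step. All sizes here are hypothetical.
import torch
import torch.nn as nn

class TinyConditionalDenoiser(nn.Module):
    """Toy stand-in for a conditional diffusion U-Net."""
    def __init__(self, image_dim=3 * 64 * 64, text_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + text_dim + 1, 1024),  # +1 for the timestep
            nn.SiLU(),
            nn.Linear(1024, image_dim),
        )

    def forward(self, noisy_image, t, text_embedding):
        x = torch.cat([noisy_image, t, text_embedding], dim=-1)
        return self.net(x)  # predicts the noise to remove

# Frozen "language model" stand-in; in Imagen this role is played by a large
# pretrained text encoder (T5).
text_encoder = nn.Embedding(10000, 512)
for p in text_encoder.parameters():
    p.requires_grad = False

denoiser = TinyConditionalDenoiser()
tokens = torch.randint(0, 10000, (1, 8))       # "a corgi in a sushi house" as token ids
text_emb = text_encoder(tokens).mean(dim=1)    # pooled text embedding
x = torch.randn(1, 3 * 64 * 64)                # start from pure noise
for step in reversed(range(50)):               # iteratively denoise, conditioned on text
    t = torch.full((1, 1), step / 50.0)
    predicted_noise = denoiser(x, t, text_emb)
    x = x - 0.02 * predicted_noise             # schematic update, not a real sampler
```

A real sampler follows a learned noise schedule and the denoiser is a convolutional U-Net; the sketch only shows where the text embedding enters the denoising loop.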

However, the AI system comes with limitations, including the ethical challenges facing text-to-image research broadly. The data requirements of text-to-image models push researchers to rely heavily on large, mostly uncurated, web-scraped datasets. The problem with such datasets, according to the researchers, is that they often reflect oppressive viewpoints, social stereotypes, and derogatory associations that marginalize some groups of people.

Although a subset of Imagen’s training data was filtered to remove noise and undesirable content such as toxic language, completely eliminating harmful social stereotypes from LAION-400M, the large dataset used to train Imagen, has proven very difficult. For this reason, Google concluded that Imagen won’t be suitable for public use until researchers can develop better ways to benchmark social and cultural bias in image-generation models.

SHEEP Maps Our Universe

Researchers at the University of Porto’s Instituto de Astrofísica e Ciências do Espaço (IA) in Portugal announced the creation of SHEEP, a machine-learning algorithm that can distinguish celestial objects such as stars, galaxies, quasars, or supernovae. Classifying astronomical sources has been a long-standing challenge due to the sheer number of them, their distance from Earth and each other, and the complexity of the universe. 

Traditionally, time-consuming imaging and spectroscopic surveys have been among the main sources for understanding the visible content of the universe. SHEEP is a supervised ML pipeline that estimates photometric redshifts and uses this information to classify objects.

Researchers discovered that including the redshift and the coordinates of the objects helps the AI model understand them within a 3D map of the universe. The researchers combine these inputs with color information to make better estimates of source properties. SHEEP’s development is part of the researchers’ effort to exploit the expected deluge of data from the James Webb Space Telescope and other existing surveys by building AI systems that efficiently classify and characterize billions of sources.
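
As a rough illustration of that two-stage idea, here is a minimal sketch, not the SHEEP authors’ code, that first regresses a photometric redshift from photometry and then feeds the estimate, together with sky coordinates and colors, into a classifier. All feature names and data below are synthetic.

```python
# Two-stage pipeline in the spirit of SHEEP (illustrative only): estimate redshift
# first, then classify each source as star, galaxy, or quasar using that estimate.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
colors = rng.normal(size=(n, 4))              # e.g., u-g, g-r, r-i, i-z colors (synthetic)
coords = rng.uniform(0, 360, size=(n, 2))     # RA, Dec in degrees (synthetic)
true_z = np.abs(rng.normal(1.0, 0.5, size=n)) # synthetic redshifts
labels = rng.integers(0, 3, size=n)           # 0=star, 1=galaxy, 2=quasar (synthetic)

# Stage 1: estimate photometric redshift from photometry alone.
z_model = GradientBoostingRegressor().fit(colors, true_z)
z_photo = z_model.predict(colors)

# Stage 2: classify sources using colors + coordinates + the estimated redshift,
# which effectively places each object in a 3D map of the universe.
features = np.column_stack([colors, coords, z_photo])
clf = GradientBoostingClassifier().fit(features, labels)
print(clf.predict(features[:5]))
```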

The team hopes that SHEEP technology will be an integral part of the European Space Agency’s Euclid space mission, scheduled for launch in 2023, allowing the exploration of an unmatched set of photometric data for billions of galaxies.

Hitachi Vegetation Manager Prevents Wildfires

Hitachi Energy, formerly known as Hitachi ABB Power Grids, announced the AI-driven, first-of-its-kind Hitachi Vegetation Manager, part of the company’s Lumada Inspections Insights offering, which analyzes trees and other vegetation to help prevent wildfires. This is a closed-loop system that leverages AI and advanced analytics to improve the accuracy and effectiveness of an organization’s vegetation planning efforts.

This includes figuring out when to cut trees to protect power grids, for instance. 

The AI model, developed at research centers in Japan, takes images of trees and forests from a variety of visual sources, including photos, videos, and satellite imagery from Maxar. Hitachi Vegetation Manager combines the images with climate, ecosystem, and cut-plan data to provide utilities with gridwide visibility. 

Using AI to track and analyze vegetation is essential for utilities around the world, which are dealing with unprecedented climate-related challenges. In 2021, global wildfires generated about 6,450 megatons of CO2 equivalent—approximately 148% more than the European Union’s total fossil fuel emissions in 2020. Historically, as a highly regulated sector, the utility industry hasn’t been a leader in AI and other emerging technologies due to the lack of good-quality datasets. 

With Hitachi Vegetation Manager, arborists no longer need to walk along miles of transmission lines to identify every tree species or the risk each presents. Once species data, including location and details such as soil quality, is input into the model, the algorithm can combine it with precipitation data, analyze the species’ growth profile, and predict where growth will or won’t happen.
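
A highly simplified, hypothetical sketch of that kind of prediction (not Hitachi’s model) might combine a species growth profile, local precipitation, and soil quality to estimate when a tree could encroach on a line’s clearance. The growth rates, scaling factors, and thresholds below are illustrative only.

```python
# Toy vegetation-encroachment estimate: given a tree's species, clearance from a
# conductor, precipitation, and soil quality, guess when trimming may be needed.
from dataclasses import dataclass

# Hypothetical baseline growth rates in meters per year by species.
GROWTH_M_PER_YEAR = {"eucalyptus": 1.8, "oak": 0.5, "pine": 0.9}

@dataclass
class TreeRecord:
    species: str
    clearance_m: float          # current distance to the nearest conductor
    annual_precip_mm: float     # local precipitation from weather data
    soil_quality: float         # 0 (poor) to 1 (rich)

def years_until_encroachment(tree: TreeRecord, min_clearance_m: float = 1.0) -> float:
    """Estimate years until the tree grows within the minimum safe clearance."""
    base = GROWTH_M_PER_YEAR.get(tree.species, 0.7)
    # Scale growth by water availability and soil quality (toy linear model).
    rate = base * (0.5 + tree.annual_precip_mm / 2000.0) * (0.5 + tree.soil_quality)
    margin = tree.clearance_m - min_clearance_m
    return float("inf") if rate <= 0 else max(margin, 0.0) / rate

tree = TreeRecord("eucalyptus", clearance_m=3.5, annual_precip_mm=900, soil_quality=0.8)
if years_until_encroachment(tree) < 2.0:
    print("Schedule trimming within the next cycle")
```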

DeepSqueak Decodes Marine Mammal Languages

A researcher from the University of Washington School of Medicine collaborated with Ocean Science Analytics, Humans & Dolphins Talking LLC, and BioSci LLC to release an upgraded AI tool called DeepSqueak that sifts through underwater sounds to track and study marine mammals.

DeepSqueak takes its name from its original 2019 version, a deep learning algorithm that was first used to categorize the different ultrasonic squeals of mice. Now, in research presented at the 182nd meeting of the Acoustical Society of America, researchers are applying the technology to vast datasets of marine mammal sounds.

Recently, recordings of songs have helped identify an unknown population of blue whales in the Indian Ocean and a never-before-heard species of beaked whale. And since the majority of the ocean is out of humans’ physical reach, underwater sound could help researchers better understand marine life, including mammal swimming patterns, their density and abundance, and how they interact with one another. 

DeepSqueak effectively filters through sound data in the ocean and creates what look like heat maps based on where certain acoustic signals are heard and at what frequency, eliminating hours of data collection previously required by traditional methods. During testing, the fully automated tool consistently detected the calls of specific marine mammals such as humpback whales, delphinids, and fin whales, despite the background noise.
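
For a sense of what such acoustic heat maps involve, here is a minimal sketch, not DeepSqueak itself (which uses deep learning detectors): this stand-in simply computes a spectrogram of synthetic hydrophone audio and flags time windows with unusual energy in a hypothetical call band.

```python
# Illustrative spectrogram-based call detection on synthetic "hydrophone" audio.
# Band limits, threshold, and signal are hypothetical; real detectors are learned.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                                 # sample rate in Hz (synthetic recording)
t = np.arange(0, 10, 1 / fs)
noise = 0.5 * np.random.randn(t.size)
call = np.sin(2 * np.pi * 300 * t) * (t > 4) * (t < 5)   # a 300 Hz "call" at 4-5 s
audio = noise + call

# Spectrogram: power as a function of frequency and time (the heat map).
freqs, times, power = spectrogram(audio, fs=fs, nperseg=1024)

# Sum power in a hypothetical call band (200-400 Hz) and flag loud windows.
band = (freqs >= 200) & (freqs <= 400)
band_energy = power[band].sum(axis=0)
threshold = band_energy.mean() + 3 * band_energy.std()
detections = times[band_energy > threshold]
print("Candidate call times (s):", np.round(detections, 2))
```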

By connecting contextual information to vocal signals, DeepSqueak’s creators hope to allow scientists to study the nuances between animal vocalizations and behavior, a major step in better understanding marine life. 

Why These Stories Matter

Though Imagen has impressed some observers with its image generation when compared to the recently upgraded DALL·E 2, practitioners should remain cautious about interpreting the model as free of bias. Google’s decision to maintain and improve the system in-house means Imagen won’t be available to the public for some time.

Meanwhile, AI adoption continues to increase in areas such as the ocean and outer space, where underwater microphone arrays and new telescopes, respectively, have unlocked massive new datasets for researchers to explore. In the case of astronomical exploration, AI promises a detailed cartography of the universe under the Euclid space mission, which will shed light on the nature of the enigmatic dark matter and dark energy.

Until next time, stay informed and get involved! 
