AI Research and Industry Trends
April 20, 2022

The Week in AI: Earthquake Detection, ML Reasoning, Color Night Vision, Memristors


AI and quantum computing combine to create ‘smart’ computing, an algorithm measures how well ML reasoning matches human reasoning, ML converts infrared images to full color, and deep learning filters out noise for better earthquake detection.

Greg Coquillo

The Week in AI is a roundup of key AI/ML research and news to keep you informed in the age of high-tech velocity. From better earthquake detection to ML that converts infrared images to full color, here are this week’s highlights.

UrbanDenoiser’s Deep Learning Algorithm More Accurately Detects Earthquakes

Researchers from Stanford found a way to get clearer signals from approaching earthquakes despite all the usual human-generated vibrations in bustling cities. They created a deep learning algorithm that improves the detection capacity of earthquake-monitoring networks in cities and other built-up areas by filtering out background seismic noise. 

The system, called UrbanDenoiser, can boost overall signal quality, recovering signals that might otherwise be too weak to register. 
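
For intuition about the general approach, here is a minimal sketch of signal denoising with a small 1D convolutional autoencoder trained on pairs of noisy and clean waveforms. The layer sizes, window length, and training setup are illustrative assumptions, not the published UrbanDenoiser architecture.

```python
# Illustrative sketch only: a simple 1D convolutional denoising autoencoder
# for seismic waveforms. This is NOT the UrbanDenoiser architecture; layer
# sizes, window length, and training setup are assumptions for demonstration.
import torch
import torch.nn as nn

class SeismicDenoiser(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        # Encoder compresses the waveform; decoder reconstructs the clean signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2, padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, channels, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, noisy_waveform: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(noisy_waveform))

# Training-loop sketch: pairs of (noisy, clean) windows, e.g. earthquake
# signals with urban noise added synthetically.
model = SeismicDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(8, 1, 1024)   # placeholder batch of noisy windows
clean = torch.randn(8, 1, 1024)   # placeholder clean targets

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```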

Monitoring stations in earthquake-prone cities in South America, Mexico, the Mediterranean, Indonesia, and Japan could greatly benefit from this technique. Today, earthquakes are monitored by seismometers that continuously measure seismic waves from vibrations in the ground. UrbanDenoiser was trained on California datasets that comprised 80,000 samples of urban seismic noise from Long Beach and 33,751 samples of earthquake activity from San Jacinto. 

The model detected substantially more earthquakes from the Long Beach dataset and allowed users to easily work out how and where they started. In the “denoised” data from a 2014 earthquake in La Habra, California, the model also detected four times more seismic events than were officially recorded. 

Moreover, the application of AI isn’t restricted to hunting earthquakes that are already in progress. Researchers from Penn State have been developing and training deep learning models to predict how changes in measurements could indicate imminent earthquakes. And recently, another Stanford team trained models that measure the arrival times of seismic waves within an earthquake signal to estimate the quake’s location. 

In the past, seismologists had to pore over graphs of the high volume of data collected by sensors and identify patterns by sight. Deep learning removes this burden by accelerating the detection process while achieving higher accuracy. 

Shared Interest Analyzes How Well Machine Learning Reasons Like a Human

MIT and IBM researchers created a method called Shared Interest (SI), which enables users to aggregate, sort, and rank individual explanations of ML decisions to rapidly analyze a model’s behavior. Funded by the MIT-IBM Watson AI Lab, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator, SI provides quantifiable metrics that compare how well a model’s reasoning matches that of a human, helping uncover concerning trends in decision making. 

These insights allow users to quickly and quantitatively assess whether a model is trustworthy and ready for deployment in the real world. For example, engineers can determine whether a model frequently becomes confused by distracting, irrelevant features, such as background objects in photos. That determination allows them to take measures that prevent low performance once launched in production. More precisely, SI leverages saliency methods, popular techniques that reveal how an ML model made a specific decision. 

In the case of image classification, saliency methods can highlight the areas of an image that are important to the model when it makes a decision. For instance, in an image classified as a dog, the dog’s head would be highlighted, indicating that those pixels were important to the model at decision time. SI then compares the model-generated saliency data to human-generated ground-truth data, which would box in the entire dog, to measure how well the two align. 
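
As a rough sketch of how such a comparison could be scored, the snippet below computes simple overlap metrics between a thresholded saliency mask and a human-annotated region. The metric names and the threshold are illustrative assumptions, not the exact definitions from the Shared Interest paper.

```python
# Illustrative sketch: scoring how well a model's saliency mask overlaps a
# human-annotated ground-truth region. The metric names below are generic
# examples, not necessarily the exact metrics defined by Shared Interest.
import numpy as np

def overlap_scores(saliency_mask: np.ndarray, ground_truth_mask: np.ndarray) -> dict:
    """Both inputs are boolean arrays of the same shape (True = important pixel)."""
    saliency = saliency_mask.astype(bool)
    truth = ground_truth_mask.astype(bool)
    intersection = np.logical_and(saliency, truth).sum()
    union = np.logical_or(saliency, truth).sum()
    return {
        # Fraction of the human-annotated region the model attends to.
        "ground_truth_coverage": intersection / max(truth.sum(), 1),
        # Fraction of the model's salient pixels that fall inside the annotation.
        "saliency_precision": intersection / max(saliency.sum(), 1),
        # Overall agreement (intersection over union).
        "iou": intersection / max(union, 1),
    }

# Example: a saliency map thresholded at its 90th percentile vs. a bounding box.
saliency_map = np.random.rand(224, 224)
saliency_mask = saliency_map > np.percentile(saliency_map, 90)
ground_truth_mask = np.zeros((224, 224), dtype=bool)
ground_truth_mask[60:180, 50:170] = True  # human-drawn box around the dog
print(overlap_scores(saliency_mask, ground_truth_mask))
```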

The researchers used three case studies to showcase how useful SI could be both to non-experts and ML practitioners. In the first case study, SI was used in an attempt to help a dermatologist diagnose cancer from photos of skin lesions. The dermatologist ended up not trusting the model due to many predictions made based on image artifacts instead of actual lesions. 

In the second case study, SI helped an ML researcher evaluate a particular saliency method to reveal previously unknown pitfalls in a model. Researchers could then analyze thousands of correct and incorrect decisions much more quickly than with manual methods. 

In the third study, SI delivered insight in a specific image-classification example. The researchers manipulated the ground truth area of the image to perform a what-if analysis that determined which features were most important for each prediction. 

Though SI’s capabilities are impressive, its performance depends on the quality of the saliency methods that power it. If those methods contain bias or are inaccurate, SI will inherit those weaknesses. However, there’s no sign of progress slowing down. Next, the researchers plan to apply SI to tabular data such as medical records, as well as to current saliency techniques in an attempt to improve them. 

ML Converts Infrared Data to Full-Color Images

Infrared night-vision systems that see in color may soon become a reality. Researchers used ML to create color images of photographs illuminated with infrared light. They hope this technique can help create imaging systems that can operate where the use of visible light is impossible, such as in retinal surgery. 

Traditionally, night-vision systems work by illuminating an area with infrared radiation and detecting the reflection or by using ultrasensitive cameras to detect small amounts of light even at night. Since both techniques used by current night-vision systems produce monochromatic images, the researchers looked for ways to produce multicolor images of objects without having to expose them to visible light. 

To achieve this, computer scientist Pierre Baldi and ophthalmologist Andrew Browne, both of the University of California, Irvine, created a multi-wavelength infrared illumination system that captures the spectral reflectance of color palettes. They then printed 200 images of human faces from a public database, illuminated them with six selected wavelengths (three visible and three infrared), and measured the intensity of the reflected light. 

Next, they developed a deep learning algorithm that predicted the reflectance at the three visible wavelengths from the reflectance values at the three infrared wavelengths. The generated images were fed to a “discriminator” that evaluated the predicted images against reference images by trying to tell them apart. Successful distinctions between predicted and ground-truth data were fed back to the generator to help refine its predictions. 
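
A minimal sketch of that adversarial setup, assuming a simplified pairing of a small convolutional generator and discriminator (not the authors’ published model), might look like this:

```python
# Minimal sketch of the adversarial setup described above: a generator maps
# 3 infrared channels to 3 visible (RGB-like) channels, and a discriminator
# tries to tell predicted images from real ones. Architectures and losses are
# simplified assumptions, not the authors' published model.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid(),
)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

infrared = torch.rand(4, 3, 64, 64)   # placeholder infrared reflectance images
visible = torch.rand(4, 3, 64, 64)    # paired ground-truth visible images

# Discriminator step: real visible images -> 1, generated images -> 0.
fake = generator(infrared).detach()
d_loss = bce(discriminator(visible), torch.ones(4, 1)) + \
         bce(discriminator(fake), torch.zeros(4, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: fool the discriminator while staying close to ground truth.
fake = generator(infrared)
g_loss = bce(discriminator(fake), torch.ones(4, 1)) + \
         nn.functional.l1_loss(fake, visible)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```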

Post-training, the system successfully produced accurate color reconstructions of the visible image from infrared data. At the end of the experiment, the researchers sought human assistance to evaluate the visual quality of images generated by the deep learning model. The human subjects consistently rated images from the deep learning model as clearer and more accurate than those produced by a simple linear regression model, which predicted images from data collected at only one infrared wavelength. 

This work could have applications in security, military operations, and animal observation. However, for medical applications such as retinal surgery, the system will need to support video imaging through a higher data acquisition rate, as well as adapt to biologically relevant samples such as retinal tissue. 

AI-Based Memristor Allows Smart Quantum Computing

Scientists in Austria and Italy have developed a quantum version of the memristor that could lead to quantum neuromorphic computers. The details of their findings appeared in the journal Nature Photonics.

A memristor, or memory resistor, is a building block for electronic circuits, with switches that can remember whether they were toggled on or off after the power is turned off. These components resemble synapses, the links between neurons in the human brain. Therefore, memristors can act like artificial neurons capable of both computing and storage.

Recent research suggests that neuromorphic, or brainlike, computers equipped with memristors could perform well at running neural networks, ML systems that leverage synthetic versions of synapses and neurons to imitate the human brain’s learning process. In an IEEE Spectrum interview, University of Vienna scientist Michele Spagnolo stated that “the memristor, unlike any other quantum components, has memory.”

Spagnolo and a colleague developed a quantum memristor that relies on a stream of photons in superposition, where each individual photon can travel down two separate paths laser-written onto glass. The memristor uses one of those paths to measure the data flowing through the system, an operation that is hard to reconcile with quantum effects, which are notoriously sensitive to outside interference such as measurement. 

To overcome this contradiction, the researchers engineered a device in which interactions are strong enough to enable memristivity, but weak enough to preserve quantum behavior. Based on observations from computer simulations, the researchers believe quantum memristors could lead to an exponential performance growth in an ML approach called reservoir computing, which allows systems to learn quickly. 
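
For intuition about why reservoir computing learns quickly, here is a classical (non-quantum) echo state network sketch: the recurrent “reservoir” weights stay fixed and random, and only a linear readout is trained, here with ridge regression on a toy sine-prediction task. All sizes and the task itself are illustrative assumptions.

```python
# Classical (non-quantum) reservoir computing sketch for intuition: a fixed
# random recurrent "reservoir" projects the input into a high-dimensional
# state, and only a linear readout is trained, which is why learning is fast.
# Sizes and the toy task below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs: np.ndarray) -> np.ndarray:
    """Collect reservoir states for a 1D input sequence."""
    state = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        state = np.tanh(W_in @ np.atleast_1d(u) + W @ state)
        states.append(state.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Ridge-regression readout: the only trained part of the system.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                        states.T @ targets)
prediction = states @ W_out
print("train MSE:", np.mean((prediction - targets) ** 2))
```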

As next steps, they are looking to connect several memristors or to scale by increasing the number of photons in each memristor and the number of states in which the photons can exist within a device. 

Why These Stories Matter

Memristors are yet another innovation that points to AI being the key element in the advancement of quantum computing. The idea of a system capable of accelerating learning through a computational model built on the human brain is fascinating. 

AI continues to be integrated into people’s everyday lives, for example to help them receive the best medical services, perform surgical operations in ways that were not previously possible, or predict the next earthquake at higher accuracy levels. Based on this growth, it seems that techniques such as SI, which helps users better understand how AI reasons compared to humans at scale, will continue to gain adoption. 

Until next time, stay informed and get involved!
