The Week in AI is a roundup of key AI/ML research and news to keep you informed in the age of high-tech velocity. From creative solutions for reducing landfill waste to multimodal ML systems and more, here are this week’s highlights.
A team of researchers at the Cockrell School of Engineering and the College of Natural Sciences at The University of Texas at Austin created an ML model that generates new mutations of PETase, a natural enzyme that allows bacteria to degrade PET-based plastics. The model predicts which enzyme mutations work best at breaking down postconsumer polyethylene terephthalate (PET) plastic waste at low temperatures in a matter of hours or days.
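At a high level, the search the model performs can be sketched as enumerating candidate mutations and ranking them by a predicted fitness score. The sketch below is purely illustrative: the scoring function is a toy stand-in, whereas the team's actual model predicts mutation fitness from the enzyme's 3D structure.

```python
# Illustrative sketch (not the team's actual model): enumerate every
# single-point mutation of an enzyme sequence and rank candidates with a
# stand-in scoring function.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def enumerate_point_mutations(seq):
    """Yield (position, original_residue, mutant_residue) for each substitution."""
    for pos, original in enumerate(seq):
        for mutant in AMINO_ACIDS:
            if mutant != original:
                yield (pos, original, mutant)

def score_mutation(pos, mutant):
    # Placeholder score in [0, 1); a real model would estimate activity
    # and stability of the mutated enzyme at low temperatures.
    return ((pos * 31 + AMINO_ACIDS.index(mutant)) % 97) / 97.0

def top_candidates(seq, k=5):
    """Return the k highest-scoring candidate mutations."""
    mutations = list(enumerate_point_mutations(seq))
    mutations.sort(key=lambda m: score_mutation(m[0], m[2]), reverse=True)
    return mutations[:k]
```

The value of the ML step is exactly this ranking: rather than synthesizing and testing every possible mutant in the lab, researchers can focus wet-lab effort on the handful of candidates the model scores highest.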
Plastic, which makes up 12% of global waste, represents one of the world’s most pressing environmental problems, contributing to billions of tons of waste piling up in landfills and polluting natural lands and water. Globally, only 10% of all plastic is recycled, and plastic can take centuries to degrade.
To prove the effectiveness of the mutated enzyme, named FAST-PETase (functional, active, stable, and tolerant PETase), the researchers studied and tested 51 different postconsumer plastic containers, five different polyester fibers and fabrics, and several types of water bottles, all made from PET.
FAST-PETase has the potential to speed up recycling, allowing major industries to reduce their environmental impact by recovering and reusing plastics at the molecular level. It can complete a “circular process” by breaking down the plastic into smaller parts (depolymerization) and then chemically putting it back together (repolymerization) at less than 50 degrees Celsius, significantly reducing the energy consumption required by traditional plastic recycling methods.
The team filed a patent for the technology and plans to scale the new process for industrial and environmental applications. It is also exploring methods to get the mutated enzyme into the field to clean up polluted sites.
A team at Meta AI, in collaboration with researchers at the University of Illinois Urbana-Champaign, created an AI that can devise and refine formulas for increasingly high-strength, low-carbon concrete. Humans produce billions of tons of concrete per year, generating an estimated 8% of total annual global carbon dioxide emissions.
In addition, traditional production methods are far from ecologically friendly. Although advances in recent years have reduced concrete’s carbon footprint while making it more rugged and resilient and even allowing it to charge electric vehicles, concrete production remains among the most carbon-intensive processes in modern construction.
Concrete is made of four basic components: cement (the most carbon-intensive ingredient), aggregate, water, and admixture (which acts as a doping agent). Efforts to reduce the amount of cement required by replacing it with lower-carbon materials such as fly ash, slag, or ground glass have not worked well. Similarly, aggregates such as gravel and sand might be replaced with recycled concrete.
Dozens of potential ingredients could serve as alternatives, and their proportions all interact to influence the structural profile of the resulting concrete. So the Meta AI team trained an AI on the Concrete Compressive Strength dataset to speed up testing, selecting, and refining the best possible combinations identified by researchers. The dataset comprises more than 1,000 concrete formulas along with their structural attributes, including 7- and 28-day compressive strength data.
The resulting concrete mixture’s carbon footprint was determined using the Cement Sustainability Initiative’s Environmental Product Declaration (EPD) tool. Once researchers chose the five most promising options, they iteratively refined them until they met or exceeded their 7- and 28-day strength metrics, while reducing their carbon requirements by 40%.
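The selection step described above can be sketched as a simple filter-and-rank over candidate mixes: keep those predicted to meet the strength target, then prefer the lowest embodied carbon. The strength predictor and carbon factors below are toy stand-ins, not the Meta AI model or the EPD tool.

```python
# Illustrative sketch: pick candidate concrete mixes (kg per cubic meter)
# that meet a 28-day strength target while minimizing embodied carbon.
# Both the carbon factors and the linear strength surrogate are toy values.
CARBON_KG_PER_KG = {"cement": 0.90, "fly_ash": 0.01, "slag": 0.05,
                    "aggregate": 0.005, "water": 0.0}

def predicted_strength_mpa(mix):
    # Toy linear surrogate; a real model would be trained on the
    # Concrete Compressive Strength dataset.
    return (0.08 * mix["cement"] + 0.05 * mix["fly_ash"]
            + 0.06 * mix["slag"] - 0.03 * mix["water"])

def carbon_footprint(mix):
    """Total embodied carbon (kg CO2) of a mix."""
    return sum(CARBON_KG_PER_KG[k] * v for k, v in mix.items())

def best_low_carbon_mixes(candidates, strength_target, k=5):
    """Feasible mixes (meeting strength_target MPa), lowest-carbon first."""
    feasible = [m for m in candidates
                if predicted_strength_mpa(m) >= strength_target]
    return sorted(feasible, key=carbon_footprint)[:k]
```

The key design point the article describes survives even in this toy version: because cement dominates the carbon term, a mix that swaps half its cement for fly ash and slag wins the ranking as long as it still clears the strength bar.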
Overall, 50% of the required cement was replaced with fly ash and slag. To test real-world use cases, the Meta team partnered with Ozinga, a company that recently built Meta’s newest data center in Illinois.
As next steps, the researchers will look for ways to allow concrete to cure faster to speed up construction, while accounting for weather variables such as wind and humidity.
MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard used an optimization methodology developed for ML to significantly reduce the CPU time required to predict the temperature and density profiles of plasma, a form of matter used in fusion energy production. Today, not even brute-force computation on the most advanced supercomputers can solve this problem.
Fusion, which promises unlimited carbon-free energy, requires heating matter to temperatures above 100 million degrees to form plasma. Strong magnetic fields isolate the hot plasma from ordinary matter. Turbulence arises from the difference between the extremely high temperature of the plasma core and that of the plasma edge, which is a few million degrees cooler.
Thus, predicting the performance of self-heated fusion plasma requires a calculation of the power balance between the fusion power input and the losses caused by turbulence. To measure the performance of their method, the researchers applied it to SPARC, a compact, high-magnetic-field fusion experiment under construction by MIT spin-out company Commonwealth Fusion Systems and MIT’s Plasma Science and Fusion Center.
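The power-balance idea can be illustrated with a deliberately crude toy model: find the core temperature at which heating power exactly matches turbulent losses. In the actual work, each loss evaluation is an expensive turbulence simulation, which is why an ML-style optimization method was needed; the loss formula and numbers below are stand-ins.

```python
# Toy power-balance illustration: solve for the steady-state core
# temperature where heating equals turbulent transport losses.
# The loss model is a crude stand-in for expensive turbulence simulations.

def turbulent_loss_mw(t_core_kev, t_edge_kev=5.0, chi=0.4):
    # Losses grow nonlinearly with the core-edge temperature difference.
    return chi * (t_core_kev - t_edge_kev) ** 1.5

def solve_power_balance(p_heat_mw, lo=5.0, hi=200.0, tol=1e-6):
    """Bisection on f(T) = heating - losses to find the steady-state core T."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_heat_mw - turbulent_loss_mw(mid) > 0:
            lo = mid  # heating exceeds losses, so the temperature would rise
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In a real prediction, the scalar loss function above is replaced by a high-fidelity turbulence calculation at every candidate profile, so each "function evaluation" costs enormous compute; that cost is what the MIT team's optimization approach attacks.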
This new approach, recently published in the journal Nuclear Fusion, represents the highest-fidelity calculation ever made of the core of a fusion plasma. In addition to increasing confidence in the fusion performance of the SPARC experiment, it will provide a road map for checking and calibrating smaller physics models, which run on a fraction of the traditionally required computational power.
Researchers at DeepMind recently introduced Flamingo, a single 80B-parameter visual language model (VLM) that achieves state-of-the-art few-shot learning on a wide range of open-ended multimodal tasks. Flamingo’s simple interface takes a prompt consisting of interleaved images, videos, and text as input, then outputs associated language.
Given a few examples of visual inputs paired with expected text responses, the model can generate an answer when asked a question about a new image or video. Even though Flamingo was given as few as four examples per task across a total of 16 tasks during a study, the model outperformed all previous few-shot learning approaches. Even non-experts can expect to quickly and easily apply accurate VLMs such as Flamingo to new tasks.
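The interleaved few-shot interface can be sketched as simple prompt construction: example (image, answer) pairs followed by the query image, with the model asked to complete the final answer. The `ImageRef` type and prompt layout below are hypothetical placeholders for illustration, not DeepMind's actual API.

```python
# Sketch of a Flamingo-style few-shot prompt: interleave example images
# with their expected text outputs, then append the query image and let
# the model complete the final "Output:". ImageRef is a hypothetical
# stand-in for real image data.
from dataclasses import dataclass

@dataclass
class ImageRef:
    path: str  # placeholder for actual pixel data

def build_prompt(examples, query_image):
    """examples: list of (ImageRef, answer_text). Returns an interleaved prompt."""
    prompt = []
    for image, answer in examples:
        prompt.append(image)
        prompt.append(f"Output: {answer}")
    prompt.append(query_image)
    prompt.append("Output:")  # the model generates the answer from here
    return prompt
```

This is what makes the model adaptable "on the fly": switching tasks means swapping the example pairs in the prompt, with no retraining or weight modification.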
Multimodal capabilities are critical for AI applications such as helping the visually impaired with everyday challenges or improving the identification of hateful content on the web. Flamingo makes it possible to adapt to other tasks on the fly without having to modify the model.
In addition to image-to-text predictions, the model possesses multimodal dialogue capabilities, allowing it to achieve human-level performance on tasks such as the well-known Stroop test. Flamingo’s research advances pave the way toward richer interactions with VLMs in new applications, such as virtual assistants that help people in everyday life.
This week’s stories reflect an increase in multidisciplinary collaborations between experts in domains spanning synthetic biology, chemical engineering, AI, and more.
Large industries understand the urgent need to reduce their carbon footprint and have solicited AI practitioners to help solve this tough challenge.
Moreover, while advances in smaller visual language models that promise to improve people’s lives give reason for hope, the possibility of unlimited clean energy generation enabled by AI is even more exciting.
Until next time, stay informed and get involved!