The Week in AI is a roundup of high-impact AI/ML research and news to keep you up to date in the fast-moving world of enterprise machine learning. From an AI superchip accelerating the development of nuclear fusion reactors to a simulator that tests autonomous vehicle vulnerabilities, here are this week’s highlights.
At the 2022 International Supercomputing Conference (ISC) in Hamburg, Germany, Nvidia announced that the U.K. Atomic Energy Authority (UKAEA) will use its Omniverse simulation platform, running on GPUs that include the Grace Hopper architecture, to accelerate the design and development of a full-scale fusion reactor. Clean energy sources such as nuclear fusion reactors are seen as strategic in the fight against global warming.
Today’s nuclear reactors generate large amounts of radioactive waste due to fission, a reaction in which the nucleus of an atom splits into two or more smaller nuclei. Fusion technology promises to deliver large amounts of energy without the same waste. With the Nvidia Omniverse, researchers aim to build a fully functioning “digital twin” reactor, and the platform will help ensure that the most efficient designs are selected for construction.
The goal of using Omniverse for the digital twin is to have an AI-generated replica of a state-of-the-art fusion reactor. The digital version will aim to simulate in real time the entire power station, its robotic components, and even the behavior of the fusion plasma at its core.
The UKAEA also plans to reproduce the physics of fusion plasma containment with Nvidia Modulus, a framework for building physics-informed AI models that learn how the fusion reaction and its containment behave. With this digital-twin application, researchers hope to move a step closer to sustainable infrastructure and technology for clean energy.
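To make the idea of a physics-informed model concrete, here is a minimal toy sketch (this is not Modulus code, and the model, equation, and loss weighting are illustrative assumptions): the training loss combines a data-fit term with the residual of a governing equation, here the decay equation du/dt = -k·u for a model of the form u(t) = a·exp(b·t).

```python
import math

def physics_informed_loss(params, ts, u_obs, k=1.0):
    """Toy physics-informed loss: data misfit plus the squared
    residual of the governing equation du/dt = -k*u, evaluated
    for the assumed model u(t) = a * exp(b * t)."""
    a, b = params
    data = phys = 0.0
    for t, u in zip(ts, u_obs):
        u_pred = a * math.exp(b * t)        # model prediction
        du_dt = a * b * math.exp(b * t)     # analytic derivative of the model
        data += (u_pred - u) ** 2           # fit to observed data
        phys += (du_dt + k * u_pred) ** 2   # physics residual: du/dt + k*u = 0
    n = len(ts)
    return data / n + phys / n
```

Minimizing this loss pulls the parameters toward values that both fit the observations and satisfy the physics, which is why such models can learn from sparse data.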
At the recent Microsoft Build developer conference, company CTO Kevin Scott demonstrated an AI helper built for the game Minecraft. The nonplayer character within the game is powered by Codex, the OpenAI model backed by Microsoft, which the companies have been testing for auto-generating software code. The Minecraft agent responds appropriately to typed commands by converting them, behind the scenes, into code that calls the game’s API.
To perform these tasks, the Codex model controlling the bot was first trained on vast amounts of code and natural-language text, then trained on the API specification for Minecraft along with a few usage examples. When a player tells the bot to “come here,” for instance, the model generates the code needed to move the agent toward the player.
During the demo shown at Build, the bot was also able to perform more complex tasks, such as retrieving items and combining them into something new. And since the model was trained on natural language as well as code, it can respond to simple questions about how to build things.
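The mechanism described above can be sketched as a few-shot prompt: the game’s API specification and an example pairing a command with code are shown to the model, and the player’s typed command is appended for the model to complete. The API names below (`bot.move_to`, `player.position`) are invented for illustration; the real bot’s API and prompt format are not public.

```python
# Hypothetical sketch of the command-to-code pattern: a prompt that a
# code model such as Codex could complete with a game-API call.
PROMPT_TEMPLATE = """\
# Minecraft bot API (excerpt; names are invented for illustration):
#   bot.move_to(x, z)   -> walk toward coordinates (x, z)
#   player.position()   -> current (x, z) of the player
# Example:
#   command: "come here"
#   code:    bot.move_to(*player.position())
# command: "{command}"
# code:"""

def build_prompt(command: str) -> str:
    """Embed the player's typed command in the few-shot prompt;
    the model's completion would then be executed against the game API."""
    return PROMPT_TEMPLATE.format(command=command)
```

Because the model has seen both the API spec and worked examples, its completion for a new command tends to follow the same call pattern.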
Microsoft previously built an AI coding tool called GitHub Copilot on top of the Codex technology. Now used by 10,000 developers, Copilot automatically suggests code when developers start typing or when they add comments to their code.
Both Codex and Copilot have stirred up anxiety among some developers, who fear being automated out of a job. Redditors suggested the Minecraft demo could inspire similar concerns, but Microsoft is betting that AI agents will succeed by helping developers automate tedious coding tasks for the popular game, as well as in other applications.
During the International Society for Magnetic Resonance in Medicine (ISMRM) meeting in London, researchers from Stanford University revealed a quantitative technique called MR fingerprinting that could make a “one-minute clinical brain MRI scan” a reality. More precisely, MR fingerprinting is an acquisition and reconstruction framework for quantitative, multicontrast imaging that requires a scan time of approximately one minute and a reconstruction time of as little as five minutes.
The quantitative technique allows simultaneous measurement of multiple tissue properties in a single data acquisition, a huge win compared with the 30 to 90 minutes a traditional MRI scan requires. The new technique is powered by an ML algorithm specialized in image synthesis and can provide five high-quality images with common clinical contrasts at 1-mm isotropic resolution.
To train the algorithm, researchers used data from 14 healthy volunteers: 10 subjects for training, two for validation, and two for testing. Compared with images reconstructed using traditional techniques, which take hours, the MR fingerprinting images contained more undersampling artifacts, more blur, and more noise. However, the researchers believe the critical image information can still be recovered by the synthesis network. They plan to continue collecting clinical data, including patient data for the training sets, while using semi-supervised methods.
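The 10/2/2 split described above is done at the subject level, so that scans from the same person never appear in both training and evaluation sets. A minimal sketch of such a split (the function name and seed are illustrative, not from the study):

```python
import random

def split_subjects(subject_ids, n_train=10, n_val=2, n_test=2, seed=0):
    """Subject-level split matching the study's 14 -> 10/2/2 design.
    Splitting by subject rather than by image prevents data from the
    same volunteer leaking across train, validation, and test sets."""
    assert len(subject_ids) == n_train + n_val + n_test
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])
```

With only 14 subjects, a fixed seed also makes the split reproducible across experiments.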
Researchers at the University of California, Irvine, have identified a possible risk involving autonomous vehicles, which can be tricked into an abrupt halt or other undesired driving behavior by the placement of an ordinary object on the side of the road. The study, presented at the Network and Distributed System Security (NDSS) Symposium, revealed that autonomous vehicles can’t distinguish between objects present on the road by accident and those left intentionally as part of a physical denial-of-service (DoS) attack.
How does a DoS attack become physical? A box, bicycle, or traffic cone can “scare” a driverless vehicle into coming to a dangerous stop in the middle of the street or on a freeway off-ramp, creating a potential hazard for motorists and pedestrians.
To address this, researchers focused on security vulnerabilities specific to the planning module, the component of an autonomous driving system’s software that oversees the vehicle’s decision-making process and governs when to cruise, change lanes, slow down, or stop.
The team designed a testing tool called PlanFuzz to automatically detect vulnerabilities in widely used automated driving systems. In video demonstrations, researchers used PlanFuzz to evaluate three different behavioral planning implementations of the open-source, industry-grade autonomous driving systems Apollo and Autoware.
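The core idea behind such a fuzzer can be sketched as follows (this is a toy illustration, not PlanFuzz itself; the planner interface, lane geometry, and thresholds are all invented): randomly place a harmless object off the roadway, run the planning logic, and flag any run where an object that never enters the lane still makes the planner stop.

```python
import random

def fuzz_planner(plan, n_trials=1000, lane_half_width=1.5, seed=42):
    """Toy fuzzing loop in the spirit of PlanFuzz (names and numbers
    are invented): drop a benign object at a random roadside position,
    query the planner, and record every case where an object outside
    the lane still triggers a stop -- a physical denial-of-service bug."""
    rng = random.Random(seed)
    findings = []
    for _ in range(n_trials):
        obj = (rng.uniform(0.0, 100.0),                      # distance ahead (m)
               rng.choice([-1, 1]) * rng.uniform(2.0, 6.0))  # lateral offset (m)
        decision = plan(obj)  # planner under test returns "cruise" or "stop"
        if abs(obj[1]) > lane_half_width and decision == "stop":
            findings.append(obj)  # benign roadside object caused a halt
    return findings

# An over-cautious toy planner that stops for anything within 5 m laterally.
overcautious = lambda obj: "stop" if abs(obj[1]) < 5.0 else "cruise"
```

Running the fuzzer against the over-cautious planner surfaces many such cases, while a planner that only reacts to objects inside the lane produces none, which is the behavioral gap the study targets.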
In another test, the team found that autonomous vehicles, after perceiving a nonexistent threat, failed to change lanes as planned. With this study, the researchers hope these vulnerabilities will be closed in the near future, increasing trust in and mass adoption of automated vehicles.
In addition to providing access to integrated AI tools that help users optimize power-generation systems, Nvidia says Omniverse offers a real-time platform that researchers can use to develop first-of-a-kind power plant technology. The goal is to pave the way for tomorrow’s clean and sustainable energy sources.
Meanwhile, the application of Codex in games such as Minecraft hints at how recent advances in AI could change personal computing in years to come. AI could replace today’s tap, type, and click interfaces with ones you simply talk to. This could lead to a world in which we ask computers to retrieve on-device documents and send them to a group of people using text or voice commands.
Until then, stay informed and get involved!