Scale Events
AI Research and Industry Trends
May 25, 2022

The Week in AI: A Smooth Cruiser, Synthetic Healthcare, an AI Recruiter, a Sacred Book Translator


A control system that streamlines traffic flow; a deep generative model that produces artificial data to improve fraud detection in healthcare; AI's trust issues with hiring managers; and a language model that finds emotional and meaning patterns in religious text.

Greg Coquillo

The Week in AI is a roundup of key AI/ML research and news to keep you up to date in the fast-moving world of enterprise machine learning. From autonomous vehicle fleet control to a philosophical and religious text analyzer, here are this week’s highlights.

AI Helps Autonomous Vehicles Avoid Idling at Red Lights

In a new study supported by IBM Watson AI Lab, MIT researchers demonstrated an ML approach that can control a fleet of autonomous vehicles (AVs) as they approach and travel through a signalized intersection in a way that keeps traffic flowing smoothly.

Signalized intersections cause vehicles to consume more fuel and emit more greenhouse gases while they wait for the light to change. Intersections can differ in billions of ways, depending on the number of lanes, how the signal operates, the number and speeds of vehicles, and the presence of pedestrians and cyclists. 

In simulations, researchers were able to cut fuel use and emissions while increasing average vehicle speed. The technique gets the best results if all cars on the road are autonomous: if every vehicle were an AV, the system could reduce fuel consumption by 18% and carbon dioxide emissions by 25%, while boosting travel speeds by 20%.

However, substantial fuel and emissions benefits can be achieved when as few as 25% of the cars on the road use the control algorithm. 

To handle this complexity, the researchers used a model-free version of deep reinforcement learning. Like classic reinforcement learning, it trains and improves by trial and error; in addition, deep reinforcement learning leverages assumptions learned by a neural network to find shortcuts to rewarding sequences, even when there are billions of possibilities.

In preparation for the road, researchers want the system to learn a strategy that reduces fuel consumption while limiting the impact on travel time, two goals that conflict with each other.

To address these challenges, the researchers developed a workaround using reward shaping, a technique that helps the system acquire domain knowledge it is unable to learn on its own. In this case, the researchers penalized the system whenever the vehicle came to a complete stop, allowing it to learn to avoid that action.
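The stop penalty described above can be pictured with a minimal sketch. The reward terms, weights, and `stop_penalty` value below are hypothetical, invented for illustration; they are not the researchers' actual formulation.

```python
def shaped_reward(fuel_used, travel_time, speed, stop_penalty=1.0):
    """Combine the competing objectives into one scalar reward.

    Hypothetical weights: the agent is penalized for fuel use and
    travel time, and (the shaping term) for coming to a complete stop.
    """
    reward = -0.5 * fuel_used - 0.1 * travel_time
    if speed == 0.0:            # vehicle came to a complete stop
        reward -= stop_penalty  # shaping penalty discourages full stops
    return reward

# Rolling slowly through the intersection now outscores stopping:
rolling = shaped_reward(fuel_used=0.2, travel_time=1.0, speed=2.0)
stopped = shaped_reward(fuel_used=0.2, travel_time=1.0, speed=0.0)
assert rolling > stopped
```

Because the penalty fires only on a full stop, the agent learns to approach lights at a speed that lets it keep rolling, which is exactly the behavior the researchers wanted to encourage.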

During testing in a simulated environment, the system was able to push more cars through an intersection during a single green phase, outperforming a model that simulates human drivers. In addition, the technique resulted in reduced fuel consumption and lower emissions when compared to other optimization methods designed to avoid stop-and-go traffic. 

As next steps, the researchers plan to study interaction effects between multiple intersections, as well as explore how different intersection setups (such as the number of lanes) can influence travel time, emissions, and fuel consumption.

AI-Based Synthetic Data Advances Anthem’s Vision for Healthcare

Health insurance company Anthem is aiming to fuel AI efforts by using algorithms and statistical models to generate up to 2 petabytes of synthetic data. In collaboration with Google Cloud, the company plans to create a synthetic data platform to improve model training and validation for fraud detection, while delivering personalized services to members. 

Synthetic data consists of real-world data that has been stripped of personal information and anonymized, as well as artificial data created by deep generative models. Anthem will leverage the latter approach, generating datasets of medical histories, healthcare claims, and other key medical data. The goal is to reduce privacy issues surrounding personal medical information.
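The idea of generating artificial records can be illustrated with a toy sketch. A real system would use a deep generative model; here, simple random distributions stand in for one, and the field names, diagnosis codes, and distributions are invented for illustration, not Anthem's schema.

```python
import random

# Sample ICD-10 codes used purely for illustration.
DIAGNOSES = ["J06.9", "E11.9", "I10", "M54.5"]

def synthetic_claim(rng):
    """Generate one artificial claim record from simple distributions."""
    return {
        "age": rng.randint(18, 90),
        "diagnosis": rng.choice(DIAGNOSES),
        "claim_amount": round(rng.lognormvariate(6.0, 1.0), 2),
    }

rng = random.Random(42)  # fixed seed for reproducibility
dataset = [synthetic_claim(rng) for _ in range(1000)]
# No record corresponds to a real patient, so the dataset can be
# used for model training with fewer privacy concerns.
```

The key property is that the records mimic the statistical shape of real claims without tracing back to any individual, which is what makes sharing them for model training and validation less risky.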

Fraud and abuse in insurance claims are a significant source of monetary loss for insurers, and abnormalities in health records can represent a risk to patients’ lives. Anthem says models trained with synthetic data can better scale to tackle these use cases and reduce biases that exist in real-world datasets. 

Personalizing care for members and running AI algorithms that identify when they may require medical intervention are longer-term goals. Although wider variation compared to real-world datasets is considered one of the biggest advantages of synthetic approaches, researchers caution that, without proper validation, the process could also produce datasets that are worse than real-world ones.

AI Hiring Faces Adoption Challenges

Researchers from the London School of Economics and Political Science reviewed previous studies assessing the effectiveness of AI as a recruitment tool and found that AI is equal to or better than human recruiters at hiring people who go on to perform well at work. Although AI boosts the fill rate for open positions and is mostly better than humans at improving workplace diversity, the new study, published in Artificial Intelligence Review, finds that people react negatively to the use of AI in hiring.

As of 2019, some 37% of businesses had adopted AI to assist in workplace decision-making, including recruitment, a previous study found. AI can be used in recruitment in several ways, such as searching through hundreds of CVs for a certain combination of keywords to narrow applicants down to those with the most relevant experience. Another example is the use of chatbots to conduct a preliminary interview before a candidate meets a prospective hiring manager.
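The keyword-search approach can be sketched in a few lines. This is a deliberately minimal illustration; real recruitment systems use far richer parsing and ranking, and the keywords and CV snippets below are invented.

```python
# Hypothetical required keywords for an ML engineering role.
REQUIRED_KEYWORDS = {"python", "machine learning"}

def matches(cv_text, required=REQUIRED_KEYWORDS):
    """True if every required keyword appears in the CV text."""
    text = cv_text.lower()
    return all(keyword in text for keyword in required)

cvs = [
    "Five years of Python and machine learning experience",
    "Senior accountant, CPA certified",
    "Machine learning engineer; Python, PyTorch, SQL",
]
shortlist = [cv for cv in cvs if matches(cv)]
assert len(shortlist) == 2  # the two ML CVs pass the keyword filter
```

Even this toy version shows where bias can creep in: whoever chooses the keyword list and training data effectively decides who gets filtered out, which is why the diversity results in the study depend so heavily on "the algorithm and what data is fed into the model."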

For the study, the researchers reviewed 22 studies published between 2005 (when, they said, AI first emerged in the workplace) and 2021. In addition to being tested for efficiency, AI hiring was assessed on whether it promoted diversity and inclusion better than human hiring did. Depending on the algorithm and what data is fed into the model, AI can be much better than humans at selecting underrepresented groups for hire, such as people of color, people with disabilities, and LGBT people.

But there is a big disadvantage to using AI in this context. Despite AI’s better overall performance, the researchers found overwhelmingly negative responses from both candidates and recruiters to AI’s involvement in hiring.

Researchers found that people trust AI hiring less than they do human hiring due to privacy concerns and lack of empathy from AI. In addition, they said, people view organizations that deploy AI hiring as less attractive than those hiring through humans. These findings represent significant obstacles for the adoption of AI hiring methods. 

AI Assesses Translation Accuracy of Sacred Hindu Book

Researchers employed deep learning AI algorithms to analyze English versions of the Bhagavad Gita, an ancient Hindu scripture written initially in Sanskrit, to test whether AI can teach humans about philosophy and religion. This study is the first step in applying AI-based tools to compare translations and assess sentiments in a variety of texts. 

Although ML has achieved enormous success in scientific tasks such as determining how protein molecules form or identifying faces in a crowd, its application in the humanities has not been substantially explored until now.

The researchers investigated sentiment and semantics in three selected translations (from Sanskrit to English) using BERT, a pre-trained deep learning language model created by Google scientists. Despite significant differences in language and sentence construction in the three translations, the researchers found that emotional and meaning-related patterns were largely comparable. 

The Bhagavad Gita is a religious and philosophical text of Hinduism, written almost 2,000 years ago and translated into more than 100 languages. There have been many English translations of the book, but little evidence supports the quality of those translations.

To help assess translation quality, researchers fine-tuned the BERT algorithm using a human-labeled training dataset of tweets that captures 10 different attitudes. These sentiments—optimistic, thankful, empathetic, pessimistic, anxious, sad, annoyed, denial, surprised, and joking—were adapted from the researchers’ prior study of social media attitudes during the beginning of the COVID-19 pandemic.

According to their model, the most often-expressed emotions across all three translations are optimism, irritation, and surprise. One limitation of training with a Twitter dataset is that the model inappropriately identifies joking as a common sentiment throughout various passages of the Bhagavad Gita; helping the model better understand the nuance behind humor, however, was out of scope for the study.
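The cross-translation comparison step can be pictured with a small sketch: given per-verse sentiment labels predicted by a fine-tuned classifier for each translation, tally which sentiments dominate overall. The labels below are invented for illustration; they are not the study's actual outputs.

```python
from collections import Counter

# Hypothetical per-verse sentiment predictions for three translations.
translations = {
    "translation_a": ["optimistic", "surprised", "annoyed", "optimistic"],
    "translation_b": ["optimistic", "annoyed", "surprised", "optimistic"],
    "translation_c": ["surprised", "optimistic", "annoyed", "optimistic"],
}

counts = Counter()
for labels in translations.values():
    counts.update(labels)

# The three most frequent sentiments across all translations:
top_three = [label for label, _ in counts.most_common(3)]
assert top_three[0] == "optimistic"
```

Comparing per-translation tallies in the same way is what lets the researchers claim that, despite different wording, the three translations carry largely comparable emotional patterns.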

The researchers hope these sentiment analysis techniques might also be used to analyze emotions portrayed in entertainment materials or to assess films and songs to inform parents and officials about the appropriateness of content for younger audiences. 

Why These Stories Matter

Though still in its early stages, the use of deep reinforcement learning in connected autonomous vehicles to optimize traffic flow through signalized intersections promises to make vehicles more eco-friendly, reducing fuel consumption and carbon dioxide emissions while cutting average travel times. A side benefit is a potential increase in the useful life of brake pads, since cars controlled by the system will avoid braking at traffic lights.

These past weeks reveal promising advancements in AI performance when trained with synthetic datasets to improve healthcare, as well as an increase in AI applications for philosophical and religious interpretations. 

With synthetic data, patients should see faster and more accurate responses from caregivers, including insurance providers. And as AI ventures into cross-lingual humanities subjects, scientists will be eager to expand sentiment analysis techniques and more to better connect the world and build communities. Looking forward, expect more effort to be made on the adoption of AI for business hiring needs—if hiring managers can be convinced to trust in AI. 

Until then, stay informed and get involved! 
