March 30, 2022

The Week in AI: FastTreeSHAP, Nvidia's Hopper, AI Bias, and a Heart Attack Predictor

# AI Bias
# AI in Healthcare
# Explainable AI
# The Week in AI

LinkedIn open-sources FastTreeSHAP, Nvidia introduces Ampere successor, NIST framework addresses AI bias, and an AI tool predicts heart attacks. Greg Coquillo gets you up to speed.

By Greg Coquillo

The Week in AI is a roundup of key AI/ML research and news to keep you informed in the age of high-tech velocity. This week: LinkedIn’s FastTreeSHAP accelerates explainable AI, Nvidia’s Hopper Architecture heralds a new era of AI data centers, a NIST report aims to reduce human and systemic bias, and AI predicts patient heart attacks five years in advance.

LinkedIn Open-Sources FastTreeSHAP For Tree-Based ML Model Explainability

Understanding how inputs contribute to model output (i.e., feature reasoning) is one of the critical approaches to constructing transparent and explainable AI systems. To close gaps, such as high time and space complexity, that remain unaddressed by current solutions including SHAP, LinkedIn researcher Jilei Yang open-sourced FastTreeSHAP, a Python package that allows efficient interpretation of tree-based machine learning models.

The module, based on Yang’s paper, “Fast TreeSHAP: Accelerating SHAP Value Computation for Trees,” includes two new algorithms, FastTreeSHAP v1 and FastTreeSHAP v2, which take different approaches to improving the computational efficiency of their predecessor, TreeSHAP. Unlike some alternatives, the FastTreeSHAP package supports parallelism: it uses multiple CPU cores to speed up computation.
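
To give a sense of how the package slots into an existing workflow, here is a minimal sketch based on the interface shown in the project’s README, which mirrors shap.TreeExplainer. The toy data and model are illustrative, not from LinkedIn’s benchmarks.

```python
# Minimal FastTreeSHAP sketch, assuming the interface shown in the
# project's README (it mirrors shap.TreeExplainer).
import fasttreeshap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data and model; any tree ensemble supported by SHAP should work.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# algorithm selects the variant ("v0" = original TreeSHAP, "v1"/"v2" =
# the faster algorithms, "auto" picks one); n_jobs=-1 uses all CPU cores.
explainer = fasttreeshap.TreeExplainer(model, algorithm="v2", n_jobs=-1)
shap_values = explainer(X).values  # per-sample, per-feature contributions
print(shap_values.shape)
```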

Benchmarking tests reveal that FastTreeSHAP v1 is 1.5 times faster than TreeSHAP while keeping memory costs the same, and FastTreeSHAP v2 is 2.5 times faster while using slightly more memory. At LinkedIn, predictive ML models are applied in many such use cases with the goal of improving the user experience, including suggested connections (People You May Know), newsfeed ranking, search, and job suggestions.

When you’re leveraging complex models such as gradient-boosted trees, random forests, or deep neural networks, it is essential to figure out how these models function (i.e., model interpretation), which is challenging because of their opacity. For this reason, TreeSHAP has been a big contributor to creating explainable models, and the FastTreeSHAP package promises to improve on it by boosting its computational efficiency while providing a customizable and intuitive user interface.

Modelers can experiment with implementing FastTreeSHAP in Spark to leverage distributed computing capabilities, further scaling TreeSHAP computations and unlocking their untapped potential.
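
There is no official Spark integration to point to, so the sketch below is one hypothetical way to do it: broadcast a trained model and compute SHAP values partition by partition with Spark’s mapInPandas. The data, model, and column names are all illustrative placeholders, and fasttreeshap is assumed to be installed on the executors.

```python
# Hypothetical sketch: scaling FastTreeSHAP with Spark's mapInPandas.
# Assumes fasttreeshap is installed on the executors; the data and
# model are illustrative placeholders.
import pandas as pd
import fasttreeshap
from pyspark.sql import SparkSession
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

spark = SparkSession.builder.getOrCreate()

# Toy model; in practice you'd broadcast a model trained elsewhere.
X, y = make_regression(n_samples=10_000, n_features=5, random_state=0)
cols = [f"f{i}" for i in range(5)]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
bc_model = spark.sparkContext.broadcast(model)  # ship model to executors

def explain(batches):
    # One explainer per task; each executor explains its own partitions.
    explainer = fasttreeshap.TreeExplainer(bc_model.value, algorithm="v2")
    for pdf in batches:
        vals = explainer(pdf[cols].to_numpy()).values  # (rows, features)
        yield pd.DataFrame(vals, columns=cols)

sdf = spark.createDataFrame(pd.DataFrame(X, columns=cols))
schema = ", ".join(f"{c} double" for c in cols)
sdf.mapInPandas(explain, schema=schema).show(3)
```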

Nvidia Reveals Its Hopper Architecture and H100 AI Accelerator

To support the next wave of AI data centers, Nvidia recently announced the Hopper GPU Architecture, a new accelerated computing platform that it hopes will power the world’s AI infrastructure by making an order-of-magnitude performance leap. Named after Grace Hopper, the pioneering U.S. computer scientist, the new architecture arrives two years after its predecessor, Nvidia Ampere. 

According to the company, Hopper “securely scales diverse workloads in every data center, from small enterprise to exascale high-performance computing (HPC) and trillion-parameter AI, so that brilliant innovators can fulfill their life’s work at the fastest pace in human history.”

Built with 80 billion transistors, the Hopper-based H100 GPU sets a new standard for accelerating large-scale AI and HPC. According to Nvidia, it delivers six innovations:

  • The world’s most advanced chip: Twenty H100 GPUs can sustain the equivalent of the entire world’s Internet traffic, making it possible for customers to deliver advanced recommender systems and large language models running real-time inference on data. 
  • A new Transformer Engine: The H100 accelerator’s Transformer Engine speeds up network computations as much as six times versus the previous generation, without losing accuracy. 
  • Second-generation secure Multi-Instance GPU (MIG): The Hopper architecture extends MIG capabilities by up to seven times over the previous generation by offering secure multitenant configurations in cloud environments across each GPU instance.
  • Confidential computing: With H100, customers can encrypt data in use and apply confidential computing on shared cloud infrastructure to workloads such as federated learning, a benefit for privacy-sensitive industries such as healthcare and financial services.
  • Fourth-generation Nvidia NVLink: NVLink combines with the new NVLink Switch to connect up to 256 H100 GPUs at nine times higher bandwidth versus the previous generation using Nvidia HDR Quantum InfiniBand. 
  • DPX Instructions: New DPX instructions deliver up to 40 times the speed of CPUs and seven times the speed of previous-generation GPUs, accelerating dynamic-programming route-optimization algorithms such as Floyd-Warshall (sketched below), which finds optimal routes for autonomous robot fleets in dynamic warehouse environments.
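
For context on the kind of work DPX accelerates, here is a plain-Python sketch of Floyd-Warshall, the dynamic-programming all-pairs shortest-path algorithm Nvidia cites; DPX-class instructions speed up exactly this kind of inner min-plus update.

```python
# Plain-Python sketch of Floyd-Warshall, the dynamic-programming
# all-pairs shortest-path algorithm Nvidia cites for DPX.
INF = float("inf")

def floyd_warshall(dist):
    """dist[i][j] is the edge weight i->j (INF if absent); updated in
    place to the shortest-path distance between every pair of nodes."""
    n = len(dist)
    for k in range(n):           # allow paths through intermediate node k
        for i in range(n):
            for j in range(n):
                # The min-plus update that DPX-style instructions speed up.
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Tiny example: four nodes with directed, weighted edges.
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))
```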

H100-equipped systems are expected to be available in the third quarter of this year.

NIST Report’s New Approach for Addressing AI Bias

Bias in AI systems is often seen as a technical problem, but a National Institute of Standards and Technology (NIST) report finds that a great deal of AI bias stems from human biases as well as systemic and institutional biases.

To take a step toward improving our ability to identify and manage harmful effects of bias in AI systems, NIST researchers Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall recommend looking beyond the ML processes and data used to train AI software—more precisely, into the broader societal factors that influence how technology is developed. 

This recommendation is a core message extracted from the researchers’ revised publication, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270),” whose draft version received public comments last summer. The document was released as part of a larger effort supporting the development of trustworthy and responsible AI, while offering guidance connected to the AI Risk Management Framework (RMF) that NIST is developing. 

AI’s ability to make decisions that affect whether a person is admitted to a school, authorized for a bank loan, or accepted as a rental applicant demonstrates how bias in AI can harm humans. The researchers demonstrate that, while computational and statistical sources of bias remain highly important, they do not represent the full picture. To gain a complete understanding of bias, they looked into human and systemic biases. 

Human biases relate to how people use data to fill in missing information; for example, a person’s neighborhood of residence can influence how likely authorities are to consider the person a crime suspect. Systemic biases, on the other hand, result from institutions operating in ways that disadvantage certain social groups, such as discriminating based on race. When these biases combine and are ingested into AI systems, the resulting systems can harm people, negatively affecting their lives for years.

To address these issues, the NIST authors propose a “socio-technical” approach that aims to mitigate bias in AI. This approach recognizes that AI operates in a larger social context and that addressing bias through purely technical lenses will fall short of expected results. NIST’s Reva Schwartz says that socio-technical methods in AI are an emerging area that will require a broad set of disciplines and stakeholders. 

On that front, NIST isn't planning to stop. The institution will host public workshops over the next few months aimed at drafting a technical report for addressing AI bias, while connecting the report with the AI RMF.

AI Predicts Patients’ Risk of Heart Attack Within Five Years

Researchers from Cedars-Sinai Medical Center developed a new AI tool that can accurately measure plaque deposits in coronary arteries and predict a patient’s risk of suffering a heart attack within five years. Though the AI tool needs further validation before being deployed in clinics, it can do in seconds what has previously taken trained experts up to 30 minutes to deliver. It leverages images of plaque deposits from computed tomography angiography (CTA) to make inferences. 

To train the ML algorithm, researchers used a dataset of CTA images from 921 patients. They then used test set images from several hundred patients to validate the model, achieving performance levels nearly identical to human expert readers. Next, they trained the model to predict future heart attacks by setting a number of plaque volume thresholds, allowing the model to classify patients as at high risk or low risk of experiencing a heart attack within five years of CTA imaging. 
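
The study’s actual thresholds aren’t reproduced here, so the sketch below is purely illustrative of the approach: an AI-measured plaque volume compared against a cutoff to flag five-year risk. The cutoff value is made up.

```python
# Purely illustrative: threshold-based risk flagging over a measured
# plaque volume. The cutoff is made up; the study's actual thresholds
# are not reproduced here.
HIGH_RISK_PLAQUE_MM3 = 250.0  # hypothetical plaque-volume cutoff (mm^3)

def risk_category(plaque_volume_mm3: float) -> str:
    """Classify five-year heart-attack risk from AI-measured plaque volume."""
    return "high" if plaque_volume_mm3 >= HIGH_RISK_PLAQUE_MM3 else "low"

for volume in (120.0, 310.0):
    print(f"{volume} mm^3 -> {risk_category(volume)} risk")
```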

Given that these are the technology’s early days, further research and larger studies will be needed before doctors can provide diverse patient populations with AI-based health advice. And the high cost of CTA imaging clouds any commercialization timeline, even once the tool is optimized for clinical use. Nonetheless, the new study may foreshadow an exciting future for medicine.

Why These Stories Matter

The combination of approaches that make AI explainable and sensitive to bias will further increase the adoption of the technology by various industries, while promoting trust among the larger population. 

Moreover, unaddressed bias in AI can affect human lives for long periods of time. And though socio-technical methods can help mitigate bias, it will take a village of stakeholders to make a meaningful impact.

Meanwhile, AI is set to have a great year: researchers will leverage the world’s most powerful accelerators to make new discoveries with ever-larger models such as transformers. The benefits of these large models can already be seen in the medical field, where tools can swiftly analyze diagnostic imaging to deliver immediate risk reports to patients.

Until next time, stay informed and get involved!
