A group of health care organizations in the United States and Brazil has used federated learning to improve AI mammography models for recognizing tumors and assessing breast tissue density. An Nvidia spokesperson declined to share specific performance details, but said the combined effort produced a classification model that outperformed the models made by any of the individual health care organizations on its own. Breast cancer is the most frequently occurring cancer in women worldwide, affecting 2.1 million people a year, according to the World Health Organization.
Federated learning is a way to create AI models from data held in multiple locations without the need to gather all the data in one place. Models are trained where the data resides, and only the resulting model updates are merged into a single generalized model; the underlying data is never transferred. Apple and Google use federated learning for keyboard personalization today, but in health care, federated learning can combine learnings from anonymized patient data to improve machine intelligence geared toward saving human lives.
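The core idea can be sketched with federated averaging, the canonical federated learning algorithm: each site trains the shared model on its own data for a few steps, then a server averages the resulting parameters, weighted by how much data each site holds. The toy "hospital" datasets and function names below are hypothetical, and this plain-Python sketch stands in for what, in the collaboration described here, the Clara Federated Learning SDK handles at scale.

```python
# Minimal federated-averaging (FedAvg) sketch on a toy linear model.
# Each "site" trains locally on private (x, y) pairs; only the model
# parameters -- never the raw data -- are sent back and averaged.

def local_train(w, b, data, lr=0.01, epochs=20):
    """Run gradient descent on one site's private data, return new params."""
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y
            grad_w += 2 * err * x / len(data)
            grad_b += 2 * err / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def federated_round(w, b, sites):
    """One round: train at every site, then average weighted by data size."""
    total = sum(len(data) for data in sites)
    results = [(local_train(w, b, data), len(data)) for data in sites]
    w = sum(wi * n for (wi, _), n in results) / total
    b = sum(bi * n for (_, bi), n in results) / total
    return w, b

# Three hypothetical sites whose data all follows y = 2x + 1.
# The data never leaves its site; only (w, b) travels.
sites = [
    [(0, 1), (1, 3), (2, 5)],
    [(3, 7), (4, 9)],
    [(5, 11), (6, 13), (7, 15)],
]
w, b = 0.0, 0.0
for _ in range(200):
    w, b = federated_round(w, b, sites)
print(w, b)  # converges toward w = 2, b = 1
```

Each site here happens to draw from the same underlying relationship, so the averaged model converges to a fit no single site's data would contradict; in practice the sites' data distributions differ, which is exactly why pooling their updates can beat any one institution's local model.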
“Federated learning presents an opportunity for health care organizations worldwide to work together without compromising on the data security of patient records,” AI and health researcher Jayashree Kalpathy-Cramer said in an Nvidia blog post published today announcing the news. “With this methodology, we can collectively raise the bar for AI tools in medicine.”
The work is the result of a collaboration that took place between January and March and pools resources from the American College of Radiology, Brazilian imaging center Diagnosticos da America, Partners HealthCare, The Ohio State University, and Stanford Medicine. The federated learning model was trained using more than 130,000 images from 33,000 mammography studies, Nvidia VP of health care Kimberly Powell told VentureBeat. Hospitals worked together using the Clara Federated Learning SDK.
This is the second such combined demonstration of federated learning by health care…