Humans still have a role to play, but recent research shows that about half of insurers use AI to reduce fraud, waste, and abuse.
The era of big data has transformed medical institutions, giving providers the ability to leverage large amounts of real-world data to inform high-stakes clinical decisions. But when it comes to fraud, waste, and abuse (FWA), that ever-growing haystack of data also makes it easier to hide fraudulent billing needles.
But perhaps the "needle" metaphor does not do justice to the scale of FWA. According to the National Health Care Anti-Fraud Association, conservative estimates put fraudulent billing at about 3% of annual health care spending, while other estimates place it closer to 10% of total health care costs, the equivalent of more than $300 billion annually. Efforts to fight FWA can pay off: one company alone, Highmark Inc., has reported savings of $245 million. The Pittsburgh-based insurer attributed the savings to the work of its financial investigations and provider review department, aided by the department's artificial intelligence (AI) software.
"AI enables Highmark to detect and prevent suspicious activity more quickly, update insurance policies and guidelines, and stay ahead of new schemes and bad actors," said Melissa Anderson, Highmark's chief auditor and compliance officer.
It's not just Highmark. Today, many software vendors offer AI products designed to identify errors and anomalous activity that can indicate fraud. According to a report released in July 2021 by PYMNTS.com and Brighterion AI, a Mastercard company, 44% of the largest insurers surveyed used AI to detect FWA. The report was based on a survey of 100 healthcare executives with FWA responsibility or direct knowledge of it.
Jodi G. Daniel, a partner at Crowell & Moring LLP, says AI can be a powerful tool in data-intensive industries such as healthcare. "When you're talking about large amounts of data, technology can help detect patterns and flag things that are unusual or suspicious so humans can look at them," she said. Daniel leads the law firm's digital health practice and previously headed the Office of Policy at the Office of the National Coordinator for Health Information Technology.
Concerns about false positives
Health insurance companies are paying close attention as well. In the Brighterion AI survey, insurer executives pointed to cost savings, regulatory pressure, and adaptability as key factors in choosing an AI provider, but accuracy was the most widely cited concern (95%). That is because false positives, cases that appear fraudulent but are legitimate, are a major hurdle in managing FWA.
In fact, 66% of the large insurers surveyed said reducing false positives is "very important" when choosing an AI provider, while only 25% of executives at the largest insurers labeled increased FWA detection as very important. For smaller insurers, the priorities were reversed: just 30% called reducing false positives very important, while 53% said the same of increasing FWA detection.
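The tradeoff these executives describe can be illustrated with a toy example. The sketch below is not how any vendor's product actually works; the claim amounts, the baseline statistics, and the z-score rule are all invented for demonstration. It simply shows why tuning a detector to catch more suspicious claims tends to sweep in more legitimate ones:

```python
def flag_outliers(amounts, baseline_mean, baseline_std, threshold):
    """Flag claim amounts more than `threshold` standard deviations
    above a baseline built from historical legitimate claims."""
    return [a for a in amounts
            if (a - baseline_mean) / baseline_std > threshold]

# Invented baseline: typical legitimate claims average $125 with $10 spread.
BASELINE_MEAN, BASELINE_STD = 125.0, 10.0

# Incoming claims; $900 is the genuinely anomalous bill.
claims = [120, 135, 110, 148, 125, 900]

# A strict threshold flags only the extreme claim...
strict = flag_outliers(claims, BASELINE_MEAN, BASELINE_STD, 5.0)
print(strict)  # -> [900]

# ...while a looser threshold catches more, at the cost of a false
# positive: the legitimate (if pricey) $148 claim is now flagged too.
loose = flag_outliers(claims, BASELINE_MEAN, BASELINE_STD, 2.0)
print(loose)  # -> [148, 900]
```

Real FWA systems use far richer signals than a single dollar amount, but the underlying tension is the same: every move of the threshold trades detection rate against false-positive workload.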
Daniel says accuracy is important for AI tools, which is why regulators prefer to see human involvement alongside the software. She adds that full automation carries significant risks, especially when it comes to treatment decisions. "If you look at (FDA) oversight of clinical decision support tools, tools with a physician or clinician intermediary are treated completely differently from fully automated tools," Daniel says.
Jared Kaltwasser is a freelance writer in Iowa and is a regular contributor to Managed Healthcare Executive®.