The key to optimizing payers’ analytics investments? Data integrity and harmonization

by George Mathew, Rikin Patel and Paul Thompson

When the McKinsey Global Institute released its 2011 report, “Big Data: The next frontier for innovation, competition and productivity,” it predicted that if U.S. healthcare used big data analytics to drive efficiency, the sector could capture more than $300 billion per year in new value, with significant reductions in healthcare expenditures. The majority of those savings would accrue to government and payers.

Getting there would require big data analytics that support initiatives such as value-based contracting (whereby payers must gather data from multiple providers, often in different formats, at different times and of varying quality) and increased consumerization to improve customer engagement.

Yet despite massive investments in building or contracting for analytics expertise, the promised value has yet to be realized.

What’s preventing gains?

There are many reasons why the results have failed to materialize, including poor interoperability, data blocking, lack of change management and mismatched investments. On that last point, investment in front-end analytics and personnel has not been matched by comparable investment in the back-end work of retrieving and normalizing data.

But perhaps the biggest barrier is a lack of data integrity, or rather the presence of “dirty data.” Recent surveys estimate that as much as 60 percent of data engineers’ time is spent on data cleaning. Further exacerbating this problem is a skills shortage.

Data scientist, a role that combines statistics, math and programming skills, has been called the sexiest job of the 21st century, and demand has been rising. Various reports note that these professionals are difficult to recruit and retain, leaving some businesses struggling to fill the gap.

In addition, before data scientists can work their magic, they need data engineers to build the software around big data: pipelines that extract, transform and load (ETL) the data, normalize it and move it into a repository for analysis. If the data is not normalized properly, the downstream analytics will produce incorrect answers (“garbage in, garbage out”).
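
To make that concrete, here is a minimal sketch of such a pipeline in Python. The feed layout, field names, code mappings and file names are hypothetical, invented purely for illustration:

```python
import csv
import sqlite3
from datetime import datetime

# Hypothetical mapping: each submitter encodes member sex differently,
# so the transform step folds every variant into one canonical value.
SEX_CODES = {"M": "male", "MALE": "male", "1": "male",
             "F": "female", "FEMALE": "female", "2": "female"}

def normalize_row(row):
    """Transform one raw claim row into the repository's canonical shape."""
    return {
        "claim_id": row["claim_id"].strip(),
        "member_sex": SEX_CODES.get(row["member_sex"].strip().upper(), "unknown"),
        # Submitters send dates as MM/DD/YYYY; the repository stores ISO 8601.
        "service_date": datetime.strptime(row["service_date"], "%m/%d/%Y").date().isoformat(),
        "billed_amount": round(float(row["billed_amount"]), 2),
    }

def run_pipeline(csv_path, db_path):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS claims
                    (claim_id TEXT PRIMARY KEY, member_sex TEXT,
                     service_date TEXT, billed_amount REAL)""")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):      # extract
            clean = normalize_row(row)     # transform / normalize
            conn.execute(                  # load
                "INSERT OR REPLACE INTO claims VALUES "
                "(:claim_id, :member_sex, :service_date, :billed_amount)",
                clean)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    run_pipeline("claims_feed.csv", "payer_repository.db")
```

Even this toy version shows where normalization decisions live: every mapping table and date format in the transform step is a place where inconsistent submitter data can silently corrupt downstream analytics.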

Another reason for dirty data is a lack of data stewardship and governance to safeguard the quality of the data. Healthcare data continues to multiply, and payers increasingly depend on many different data sources for decision making: clinical data, claims, unstructured data, data from wearables and internet of things (IoT) devices. Managing these diverse sources demands proper governance, which in turn requires clear visibility into data and metadata lineage, master data management, data life-cycle management, and the downstream analytics or use cases each source feeds.
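
As a minimal illustration of the kind of lineage metadata that governance depends on (the field names below are hypothetical, not any standard), each record landing in the repository can carry its provenance alongside the payload:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernedRecord:
    """A payload plus the lineage metadata that governance processes rely on."""
    payload: dict            # the claim, lab result, wearable reading, etc.
    source_system: str       # which submitter or feed it came from
    received_at: str         # when the payer ingested it
    transform_version: str   # which pipeline release normalized it
    downstream_uses: list = field(default_factory=list)  # analytics that consume it

rec = GovernedRecord(
    payload={"claim_id": "C1001", "billed_amount": 182.50},
    source_system="provider_emr_feed_7",
    received_at=datetime.now(timezone.utc).isoformat(),
    transform_version="etl-2.4.1",
    downstream_uses=["risk_adjustment_model", "star_ratings_dashboard"],
)
print(rec.source_system, "->", rec.downstream_uses)
```

With provenance recorded this way, a steward can trace a suspect analytics result back to the feed and pipeline version that produced the underlying records.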

To help organizations address these data integrity issues, new data harmonization services are being offered that help with data wrangling and cleaning. For healthcare organizations, leveraging data governance and cleaning “as a service” provides access to expertise and insight that can be difficult or impractical to build internally.

These services can include: handling standard and atypical data formats and sources; running data quality processes that detect poor-quality data (whether missing or incorrect), changing volumes and connectivity issues; and providing education and change management that help data submitters (e.g., providers or vendors) send better-quality data over time, understand results and update standard operating procedures.
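
As an illustration of the quality-detection piece, a simple quality gate might flag missing or malformed values before a feed is accepted into the repository; the rules and field names below are hypothetical:

```python
from datetime import date

def quality_issues(record):
    """Return a list of data quality problems found in one claim record."""
    issues = []
    if not record.get("claim_id"):
        issues.append("missing claim_id")
    if record.get("member_sex") not in ("male", "female"):
        issues.append(f"unrecognized member_sex: {record.get('member_sex')!r}")
    try:
        if date.fromisoformat(record["service_date"]) > date.today():
            issues.append("service_date is in the future")
    except (KeyError, ValueError):
        issues.append("missing or malformed service_date")
    if not isinstance(record.get("billed_amount"), (int, float)) \
            or record["billed_amount"] <= 0:
        issues.append("billed_amount missing or non-positive")
    return issues

# A clean record and a dirty one, to show both outcomes.
feed = [
    {"claim_id": "C1001", "member_sex": "female",
     "service_date": "2019-03-02", "billed_amount": 182.50},
    {"claim_id": "", "member_sex": "U",
     "service_date": "03/02/2019", "billed_amount": -5},
]
for rec in feed:
    problems = quality_issues(rec)
    print(rec.get("claim_id") or "<no id>", "->", problems or "clean")
```

Reporting these findings back to submitters, rather than silently discarding bad records, is what lets data quality improve over time.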

Outsourcing data cleaning should help address the skills shortage, allowing skilled (and expensive) data science resources to spend more time on analytics problems and less on cleaning. It should also help organizations scale key initiatives such as value-based care and the consumerization of healthcare, and meet goals such as those in the Medicare Star Rating System more rapidly, for better profitability and improved outcomes.

Big data analytics can be the promised game changer for healthcare, but it will require careful implementation and a considered approach to how data is managed and used.


George Mathew, M.D., is the Chief Medical Officer for DXC's North American Healthcare organization. In this role, he serves as a clinical expert and healthcare thought leader, helping DXC's healthcare clients transform the healthcare marketplace. Dr. Mathew graduated from Boston University School of Medicine and completed his residency in Internal Medicine at Greenwich Hospital/Yale University in Connecticut.

Rikin Patel is a DXC Technologist with 25 years of diverse experience in information technology. He serves as the Chief Technologist for DXC's Americas Healthcare & Life Sciences and is a member of the Office of the CTO. Rikin is responsible for building key client relationships, advising senior leadership on technology trends, and providing thought leadership to grow both client and DXC business.

Paul Thompson is the general manager for DXC's Commercial Healthcare Payer segment. He is responsible for executing the DXC Payer segment's vision through strong P&L leadership, technology innovation and implementation, and strategy execution. Prior to assuming this role, Paul was the chief strategy officer for the Healthcare and Life Sciences Group.
