Evaluation of casemix adjustment methods for the Massachusetts Division of Medical Assistance

Published: June 6, 1996
Category: Bibliography > Reports
Countries: United States
Language: English
Types: Care Management
Settings: Government

Massachusetts Rate Setting Commission. Final report and recommendation. Boston, MA, USA: Massachusetts Rate Setting Commission.

Massachusetts Rate Setting Commission, Boston, MA, USA

We investigated methods to account for the health status of populations that the Division of Medical Assistance (DMA) could use to predict resource use and to make performance measure comparisons. In this report we refer to the health status of differing populations as “casemix”.
We first researched casemix adjustment methods that could be used to predict resource use and to make performance comparisons. Based on the information we obtained, we selected a claims grouper as the most appropriate methodology. We tested six of the seven claims groupers that include the full range of services used by Medicaid recipients for their ability to meet these needs; the seventh vendor, Medstat Q-Stage, declined to participate in this project. We relied on product vendors to group the claims data, and then evaluated the grouped data using multivariate regression and other techniques. We focused the evaluation on the ability of established methodologies and products to meet DMA’s specific goals.
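To illustrate the kind of multivariate-regression evaluation described above, the sketch below regresses per-enrollee cost on casemix-category indicators and reports how much cost variation the categories explain. The data, category labels, and dollar amounts are simulated assumptions, not output from any of the claims groupers the report evaluated.

```python
import numpy as np

# Hypothetical illustration: each enrollee has a casemix category (as a
# claims grouper might assign) and an observed annual cost. We regress
# cost on category indicators and compute R^2, one common way to compare
# groupers' predictive ability.

rng = np.random.default_rng(0)

n = 500
category = rng.integers(0, 4, size=n)          # simulated categories 0..3
true_mean = np.array([800.0, 1500.0, 3200.0, 7000.0])
cost = true_mean[category] + rng.normal(0.0, 400.0, size=n)

# Design matrix: intercept plus indicator columns for categories 1..3
# (category 0 is the reference group).
X = np.column_stack(
    [np.ones(n)] + [(category == k).astype(float) for k in (1, 2, 3)]
)

beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta

ss_res = np.sum((cost - pred) ** 2)
ss_tot = np.sum((cost - cost.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 of casemix-only cost model: {r2:.3f}")
```

In an actual evaluation, each grouper's categories would be fitted against the same claims data and the resulting explanatory power compared.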
Based on our evaluation, we recommend that the Division use the Clinical Complexity Index (CCI), a product of Equifax Healthcare Information Services Inc. (formerly Healthchex), to predict resource use and to make appropriate cross-plan/provider performance measure comparisons.
It is important to note that casemix adjustment provides a way to make relative comparisons among groups. If the measures for all of the groups being compared are adjusted using the same relative casemix index, then they can be considered more comparable. Casemix adjustment does not provide an indication of the medically correct or appropriate level for a particular outcome measure. We did not verify any methodology’s clinical accuracy or correctness in describing different populations or provider panels.
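The relative-comparison idea above can be sketched as an observed-to-expected cost ratio, where every group is scored against the same casemix index. The plan names, enrollee costs, and per-category expected costs below are illustrative assumptions, not figures from the report.

```python
# Hypothetical sketch of relative casemix adjustment via an
# observed/expected (O/E) ratio. The category weights below stand in for
# a common reference index applied identically to every group compared.

expected_cost = {"low": 900.0, "medium": 2400.0, "high": 6500.0}

def adjusted_ratio(enrollees):
    """Observed total cost divided by casemix-expected total cost.

    `enrollees` is a list of (casemix_category, observed_cost) pairs.
    A ratio near 1.0 means spending in line with the casemix
    expectation; the ratio is meaningful only relative to other groups
    adjusted with the same index, not as a clinical benchmark.
    """
    observed = sum(cost for _, cost in enrollees)
    expected = sum(expected_cost[cat] for cat, _ in enrollees)
    return observed / expected

plan_a = [("low", 1000.0), ("high", 6000.0), ("medium", 2600.0)]
plan_b = [("high", 9000.0), ("high", 7000.0), ("low", 800.0)]

print(round(adjusted_ratio(plan_a), 3))  # below 1: under expectation
print(round(adjusted_ratio(plan_b), 3))  # above 1: over expectation
```

Because both plans are scored against the same index, their ratios can be compared to each other even though their enrollee mixes differ, which is exactly the relative (not absolute) comparison the paragraph above describes.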
In addition, we made a number of data-related assumptions for the purpose of evaluation; these issues should be revisited prior to implementation. Finally, we recommend that DMA discuss internally and with providers the most effective way to employ this technology.

Keywords: Practice Patterns Comparison, Outcome Measures, Resource Use, United States

© The Johns Hopkins University, The Johns Hopkins Hospital, and Johns Hopkins Health System.
All rights reserved.
