
Research

Comparing VA and Non-VA Medical Centers: Informing Veteran Health Care Choice at the MISSION Act Watershed

Authors:

Jill M. Inderstrodt,

Richard L. Roudebush VA Medical Center, US

Shelley MacDermid Wadsworth,

Purdue University, US

Kayla Williams

RAND Corporation, US

Abstract

This study compared clinical quality and patient experience in US Department of Veterans Affairs (VA) and non-VA hospitals. One hospital was identified for each full-service VA hospital (n = 125 pairs). Hospitals were compared on four Agency for Healthcare Research and Quality (AHRQ) clinical indicators: Influenza immunization (IMM-2), Patient Safety Indicator 04 (PSI 04), Catheter-associated Urinary Tract Infection (CAUTI), Methicillin-resistant Staphylococcus aureus infection (MRSA); and three patient experience indicators: the three-item care transition measure (CTM-3) and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) #18 and #19. In aggregate, VA hospitals fared significantly better than non-VA hospitals for PSI 04 and CTM-3. Non-VA hospitals fared significantly better for IMM-2 and HCAHPS #18. No differences were found for CAUTI, MRSA, and HCAHPS #19. At the pairwise level, VA hospitals performed the same as or better than the grand mean for each measure except for IMM-2. This study reinforces previous findings in that the data do not mirror public perception of a universally troubled VA system. This study will be helpful as it is one of the last studies published with the latest pre-MISSION Act data.

How to Cite: Inderstrodt, J. M., Wadsworth, S. M., & Williams, K. (2022). Comparing VA and Non-VA Medical Centers: Informing Veteran Health Care Choice at the MISSION Act Watershed. Journal of Veterans Studies, 8(3), 93–101. DOI: http://doi.org/10.21061/jvs.v8i3.343
Submitted on 14 Mar 2022; accepted on 26 May 2022; published on 16 Sep 2022.

The June 2019 implementation of the Maintaining Internal Systems and Strengthening Integrated Outside Networks (MISSION) Act of 2018 precipitated a notable shift in the way veterans access health care in the United States. Conceptualized in an effort to improve veteran-provider relationships and ameliorate long wait times at facilities run by the US Department of Veterans Affairs (VA), this legislation built on previous VA attempts to streamline VA and non-VA services by broadening the qualifying criteria by which veterans could seek outside care (Kullgren et al., 2020). Unlike previous frameworks, the MISSION Act allowed qualifying veterans to seek primary care at a VA medical center while receiving sometimes difficult-to-obtain specialized care at a non-VA medical center.

Under the MISSION Act, eligibility for non-VA care is based on VA assessment that: (a) the veteran requires a medical service that is not available from the VA (e.g., labor and delivery); (b) the veteran lives in a state without a full-service VA (e.g., Alaska); (c) the veteran is already grandfathered into the previous Choice program, and does not wish to change health care providers; (d) the VA cannot provide health care services within the designated access standards (i.e., 30-minute drive time for primary care or 20 days’ appointment wait time); (e) it is in the veteran’s best medical interest; or (f) the service provided by the VA does not meet quality standards, such as those analyzed for this study.

Although opponents of the MISSION Act have argued that broader criteria for non-VA care will result in increased privatization and a consequent exodus of veterans from the VA system, VA Executive-in-Charge Richard Stone reported that the early (pre-COVID) months of its implementation have shown cause for optimism. According to Stone’s testimony to the US House Committee on Veterans’ Affairs Subcommittee on Health, as of September 25, 2019, the VA had facilitated 1 million community care consultations since the Act’s implementation in June 2019 (House Committee on Veterans’ Affairs, 2019). He testified that the surge was due in large part to 80,000 veterans accessing VA care for the first time as a result of MISSION Act changes. While exact data are not yet available as to the distribution of appointments between VA and non-VA providers, Stone reported that the VA scheduled 1.6 million more ambulatory appointments in fiscal 2019 than in 2018, possibly necessitating additional legislation to accommodate growth.

The veteran population presents unique challenges in health care delivery, as these individuals tend to be older and at greater risk for many conditions. These include mental health and substance use disorders, traumatic injuries, and homelessness (Boden & Hoggatt, 2018; Martindale et al., 2018). Because these medical conditions are not observed in the general population at the same rates, connecting veterans to appropriate care is often the first challenge in health care delivery (Meffert et al., 2019).

Female veterans exhibit a unique set of behaviors when it comes to seeking and choosing medical care, and much of the recent academic literature consequently focuses on them. For example, for female veterans who are poor, inconsistent access to food is associated with delaying access to health care and general poorer health outcomes (Narain et al., 2018). Delaying care for women veterans has been found to be fairly commonplace, with the primary barriers identified as logistics, negative treatment bias, and sexual harassment onsite (Chrystal et al., 2022; Meffert et al., 2019; Newins et al., 2019). Female veterans who exclusively use the VA for their health care have been shown to have worse overall physical and mental health than female veterans who only use non-VA sources of medical care; however, female veterans who exclusively use the VA for health care have the most positive perceptions of the VA when compared with non-VA and mixed (VA and non-VA) female users (Washington et al., 2015). And rural veterans experience barriers to access in the form of transportation, availability, and lack of social support (Kinney et al., 2021; Murray-Swank et al., 2018).

Multiple studies have attempted to compare the quality of VA and non-VA medical centers using myriad measures: wait times, clinical indicators (i.e., mortality rates, readmission rates), and quality indicators (i.e., patient satisfaction, patient recommendation). As the MISSION Act was recently implemented and government data often take up to a year to be released, clinical and quality results post-MISSION Act (and post-COVID) are not yet available for analysis, but some recent studies can be used to gauge how the systems compare.

In comparing hospital systems, the demographic variables used for comparison differ greatly. One of the most oft-cited studies comparing US hospitals in terms of quality and clinical measures compared 2,583 US hospitals using six demographic variables as predictors (bed size, urban/rural, Census division, ownership, teaching status, and proportion of Medicare patient days), with Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) quality and Hospital Quality Alliance (HQA) clinical measures as outcomes (Lehrman et al., 2010). Other VA-focused research used four of these variables (bed size, Census division, rural/urban location, and teaching status) in comparing VA and non-VA medical centers on numerous clinical and patient experience outcomes (Anhang Price et al., 2018).

Findings from these comparisons are inconclusive, presenting few clear generalizations about how VA medical centers compare with their non-VA counterparts. The Lehrman et al. (2010) study found that top performers in both clinical and patient experience outcomes were small, rural, in New England or West North Central Census division, and nonprofit. Top performers in patient experience measures alone were small, rural, in East and South Central divisions, and government-owned. Top performers in clinical care alone were medium-to-large and urban, West North Central, and non-government-owned. Anhang Price (2018) and colleagues found that VA medical centers performed similar to or better than non-VA medical centers for most of the analyzed measures but that high variation, especially among VA centers, indicated a need for targeted quality improvement. An additional VA comparison study found that the VA had better outcomes for six out of nine patient safety indicators (PSIs) but that non-VA hospitals performed better on patient experience and behavioral measures (Blay et al., 2017). And finally, one systematic review found that the VA often, but not always, performed better on quality measures (O’Hanlon et al., 2017).

It is clear from the little data available since the MISSION Act’s implementation that (a) the health system changes precipitated by the MISSION Act have increased demand for VA services and (b) veterans will likely seek more information about their health care options as a result of these changes. To provide comprehensive analysis comparing the two types of hospitals, this study addresses the following two research questions:

  1. In aggregate, how do the two populations of medical centers compare on clinical quality and patient experience measures?
  2. When making a one-to-one comparison, how do VA medical centers compare to the grand mean (the mean of the total sample) on clinical quality and patient experience measures?

Unlike other studies that have utilized expanded propensity score matching or aggregate hospital data, this study used 1-to-1 comparisons of VA medical centers and demographically similar, proximal non-VA medical centers. This is especially salient given that national clinical and quality data are released intermittently, putting the comparisons in constant flux. Data released in summer 2019 may provide a strong picture of hospital quality immediately prior to implementation of the MISSION Act. In providing a 1-to-1 comparison of VA and non-VA medical centers, this study can offer broader transparency to veterans and their families, as well as advocate to policymakers the need for continued Veterans Health Administration support and attention.

Methods

Hospital Demographics

Ethics approval was not required for this study. To identify a comparator for every full-service VA medical center, we first constructed a master hospital database using the Hospital Service Area File from the US Centers for Medicare & Medicaid Services (CMS, 2020) website and the VA medical centers database from the US Department of Homeland Security (DHS, 2017). To the best of our knowledge, the geographic and contact information for all US hospitals was included in these two files. Additional demographic data were obtained from a variety of websites and databases as described in the following section. Data sets obtained from the American Hospital Association (AHA) were later used to confirm demographic data. Medical centers were coded for the following variables.

Proximity

Possible comparators were identified using the American Enterprise Institute’s (AEI) VA Mission Act access map (Burgess, 2019; Yoon et al., 2018), an online tool veterans can use to determine drive time to the VA. Under the MISSION Act of 2018, veterans may access non-VA community care providers if they live more than 30 minutes of driving time from a VA facility that offers similar care; AEI created the tool so that veterans can determine whether they live inside or outside that 30-minute driving-time radius (see Burgess, 2019). The tool uses OpenStreetMap and the Open Source Routing Machine to plot polygons delineating these areas. Hospitals within the 30-minute driving-time radius were considered for comparison; if no suitable medical center could be identified within that radius, the medical centers closest to the boundary plotted by AEI’s program were coded instead.
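As an illustration of how such drive-time checks can be reproduced, the short sketch below queries a routing service of the same kind the AEI map is built on (OpenStreetMap data served through the Open Source Routing Machine). The server URL, coordinates, and helper function are illustrative assumptions, not part of the study’s actual workflow.

```python
# Minimal sketch (assumptions noted above): estimate driving time between a VA
# medical center and a candidate comparator using a local OSRM instance.
import requests

OSRM_URL = "http://localhost:5000"  # assumed local Open Source Routing Machine server

def drive_minutes(origin, destination):
    """Return estimated driving time in minutes between two (lon, lat) points."""
    coords = f"{origin[0]},{origin[1]};{destination[0]},{destination[1]}"
    resp = requests.get(f"{OSRM_URL}/route/v1/driving/{coords}",
                        params={"overview": "false"})
    resp.raise_for_status()
    return resp.json()["routes"][0]["duration"] / 60.0

# Hypothetical coordinates (lon, lat); flag whether the candidate falls inside
# the 30-minute driving-time radius used for comparator selection.
va_center = (-86.1753, 39.7797)
candidate_hospital = (-86.1580, 39.7684)
within_30_minutes = drive_minutes(va_center, candidate_hospital) <= 30
print(within_30_minutes)
```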

Rural-Urban Continuum

Hospitals were coded for their place on the Rural-Urban Continuum (RUCC), a classification that subdivides the Office of Management and Budget (OMB) metropolitan and nonmetropolitan categories into nine population-based codes. Codes were determined using the Measuring Communities (n.d.) mapping tool from the Military Family Research Institute and the Purdue Center for Regional Development, which identifies the Rural-Urban Continuum Code for every US county using the Department of Veterans Affairs Rural Veterans Health Care Atlas.

Ownership

Hospitals were coded for ownership, which was included in the Department of Homeland Security data and confirmed using AHA data. Because of the similarity in organizational structure to VA-run medical centers, we considered only hospitals that were run by governments or nonprofits. Government-run hospitals (operated by counties or municipalities) and nonprofit hospitals were expected to be more similar to VA medical centers than those run by for-profit companies or physician groups. Proprietary and private hospitals were ultimately excluded from the search for comparators as they were anticipated to be disproportionately out-of-network for community care per MISSION Act regulations.

Teaching Status

Hospitals were coded for whether or not they were a teaching hospital. This information was collected from an array of online sources and, where necessary, by contacting the hospitals directly, and it was verified using AHA databases. Hospitals were coded as nonteaching, minor teaching, or major teaching.

Bed Size

Bed size was coded per the AHA’s classification system: Hospitals with fewer than 100 acute care beds were coded as small, hospitals with between 100 and 399 beds were coded as medium, and hospitals with 400 or more beds were coded as large.

Comparator Selection

As proximity was the most important factor in identifying VA comparators, hospitals in the same ZIP code as their comparable VA were automatically considered as possible comparators. If a comparable hospital could not be located in the same ZIP code, hospitals within the 30-minute driving time radius as identified by the AEI mapping tool were all considered possible comparators. If a comparable hospital could not be located within the 30-minute driving time radius, hospitals closest to the radius were considered as comparators.

To narrow the remaining choices, matches across the four comparison categories of proximity, RUCC, teaching hospital status, and bed size were tallied, and the medical center with the most similarities was chosen as the comparator. In the case of a tie, categories were weighted in the following order: RUCC, teaching status, and bed size. Medical centers with the same RUCC were chosen first as comparators; non-VA medical centers were also considered if they matched their closest VA medical center on teaching status and/or bed size category. If more than one possible comparator differed from the VA medical center on all fronts, proximity was the tiebreaker, and the closest medical center was chosen as the comparator.
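A minimal sketch of this selection logic is shown below, assuming hypothetical dictionaries of hospital attributes and treating proximity as a same-ZIP match; it illustrates the tallying and tie-breaking rules described above and is not the authors’ code.

```python
# Minimal sketch (hypothetical data structures): tally demographic matches and
# break ties in the stated order (RUCC, teaching status, bed size), then by
# shortest drive time.
def match_count(va, candidate):
    """Number of comparison categories on which the candidate matches the VA."""
    return sum([
        candidate["zip"] == va["zip"],          # proximity, simplified to same ZIP
        candidate["rucc"] == va["rucc"],
        candidate["teaching"] == va["teaching"],
        candidate["bed_size"] == va["bed_size"],
    ])

def choose_comparator(va, candidates):
    """Pick the candidate with the most matches, applying the tie-break hierarchy."""
    return max(
        candidates,
        key=lambda c: (
            match_count(va, c),
            c["rucc"] == va["rucc"],
            c["teaching"] == va["teaching"],
            c["bed_size"] == va["bed_size"],
            -c["drive_minutes"],               # closest hospital wins remaining ties
        ),
    )

# Example usage with made-up records
va = {"zip": "46202", "rucc": 1, "teaching": "major", "bed_size": "medium"}
candidates = [
    {"name": "A", "zip": "46202", "rucc": 1, "teaching": "major", "bed_size": "large", "drive_minutes": 8},
    {"name": "B", "zip": "46219", "rucc": 1, "teaching": "major", "bed_size": "medium", "drive_minutes": 14},
]
print(choose_comparator(va, candidates)["name"])  # "B": tie on match count, B wins on bed size
```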

A total of 119 pairs of records were identified that were either in the same ZIP code or less than 30 minutes’ driving time from each other; each pair consisted of the VA and its comparator. The remaining 17 pairs were established from outside of the 30-minute driving time radius.

Quality Measures

Researchers met with relevant stakeholders for a roundtable discussion to decide which clinical and quality measures would be best used to compare hospitals; stakeholders included policy think tanks, VA administrators, and researchers. The goal was to select measures that would be important regardless of the characteristics of the patient population or care specialties. Based on recommendations from these field experts, the authors initially selected the following clinical quality measures: IMM-2, PSI 90, and PSI 04, and the following patient experience measures: HCAHPS #18, HCAHPS #19, and CTM-3.

IMM-2

IMM-2 is a measure of influenza immunization. It describes the percentage of patients screened for prior immunization upon admission and administered the flu vaccine if needed. The numerator for this measure is patients screened and administered the flu vaccine, and the denominator is patients 6 months and older who were discharged from the hospital. The authors were able to locate recent (2018) data for IMM-2 for both VA and non-VA medical centers through Hospital Compare on the Centers for Medicare and Medicaid Services website (n.d.b.).

MRSA/CAUTI

PSI 90, the original measure suggested by stakeholders, is a composite that includes rates of 10 indicators related to hospital safety, including hospital falls with hip fracture, peri-operative hemorrhages or hematomas, and postoperative sepsis. We were unable to locate current VA data for these indicators. Because PSI 90 is described as providing “an overview of hospital-level quality as it relates to a set of potentially preventable hospital-related events associated with harmful outcomes for patients,” we instead chose the readily available measures of MRSA and CAUTI found in Hospital Compare that also describe quality related to potentially preventable hospital-related events (US Centers for Medicare and Medicaid Services, n.d.a.). The MRSA measure describes the number of methicillin-resistant Staphylococcus aureus infections per 1,000 bed days. The CAUTI measure describes the number of catheter-associated urinary tract infections per 1,000 device days (days of catheter use).

PSI 04

PSI 04 describes deaths among surgical inpatients with serious treatable complications; in other words, if a patient goes into surgery, what is the chance that the patient dies of a serious treatable complication? We located both VA and non-VA data for PSI 04 from 2018. PSI 04 is expressed as the number of deaths per 1,000 patients who develop specific complications while hospitalized and is calculated by dividing the number of hospital deaths (numerator) by the number of discharges that fit certain inclusion and exclusion criteria (denominator).
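For illustration only, the per-1,000 arithmetic implied by this definition is sketched below with hypothetical counts (not study data).

```python
# Minimal sketch of the PSI 04 rate described above, using made-up counts.
deaths = 12                # hypothetical in-hospital deaths among eligible surgical discharges
eligible_discharges = 95   # hypothetical discharges meeting the inclusion/exclusion criteria
psi_04_rate = 1000 * deaths / eligible_discharges
print(round(psi_04_rate, 1))  # deaths per 1,000 eligible discharges (about 126.3 here)
```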

HCAHPS #18 and #19 and CTM-3

HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) is a US Centers for Medicare and Medicaid Services (2021a) survey that measures the quality of patients’ hospital experience. It is a national survey administered by individual hospitals and reported to CMS. Hospitals survey discharged patients between 48 hours and 6 weeks after discharge. Patients are asked 29 questions on a range of experience measures, including care from doctors and nurses, hospital experiences and environment, and understanding of care after returning home.

HCAHPS #18 measures the patient’s overall rating of the hospital and describes the percentage of patients who rate the hospital a 9 or 10 on a 10-point scale. HCAHPS #19 measures the percentage of patients who are “very likely” to recommend the hospital to friends or family. VA medical centers do not use HCAHPS, but the VA Survey of Healthcare Experiences of Patients (SHEP) used to evaluate VA medical center quality is also a national, standardized survey and contains the same items as the HCAHPS survey (US Centers for Medicare and Medicaid Services, 2021b). The CTM-3 is a subscale of HCAHPS: a care transition composite consisting of three items measuring a patient’s understanding of their post-discharge care, reported as the percentage of patients who responded “strongly agree” to the composite questions. HCAHPS #18 and #19 for non-VA medical centers were available for 2018, and the comparable SHEP data were available for VA medical centers for 2018 as well. CTM-3 data for both VA and non-VA medical centers from 2018 were used for this study.

After collecting quality data, 11 additional paired records were eliminated for a lack of reported VA data (i.e., no values for any clinical or patient experience measures). This left 125 pairs, or 250 medical centers, for analysis.

Results

Descriptive statistics for all variables can be found in Table 1 (below). Demographic and quality variables were assessed for normality, and heavily skewed variables (skewness statistic beyond ±1.0) were examined for influential outliers. Variables with extreme scores were Winsorized to limit their influence: scores below the fifth percentile of each quality variable were replaced with the value of the fifth percentile, and scores above the 95th percentile were replaced with the value of the 95th percentile.
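The sketch below illustrates this Winsorization step under the assumption that the hospital-level data sit in a pandas data frame with one column per quality measure; the column and data frame names are assumptions, and the snippet is not the authors’ code.

```python
# Minimal sketch (assumed column names): cap each quality measure at its 5th and
# 95th percentiles, as described above.
import pandas as pd

measures = ["IMM-2", "PSI 04", "CAUTI", "MRSA", "CTM-3", "HCAHPS #18", "HCAHPS #19"]

def winsorize_5_95(series: pd.Series) -> pd.Series:
    """Replace values below the 5th percentile and above the 95th percentile."""
    lower, upper = series.quantile(0.05), series.quantile(0.95)
    return series.clip(lower=lower, upper=upper)

# Assuming `hospitals` is a DataFrame with one row per medical center:
# hospitals[measures] = hospitals[measures].apply(winsorize_5_95)
```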

Table 1

Descriptive Statistics.


MEASURE       N     MIN     MAX      M        SD
IMM-2         243   68.2    99       89.67    9.31
PSI 04        163   60.92   196.82   148.13   38.10
CAUTI         148   0.22    2.33     1.06     0.58
MRSA          167   0       0.15     0.06     0.04
CTM-3         239   37      71       53.65    5.36
HCAHPS #18    242   41      89       71.55    8.10
HCAHPS #19    243   50      90       70.94    6.86

Correlation coefficients were calculated using Spearman’s rho and are presented in Table 2 (below). In the combined VA and non-VA sample, rurality was moderately negatively associated with size (rs = –0.450; p < 0.01) and teaching status (rs = –0.423; p < 0.01), meaning that the more rural the medical center, the more likely it was to be small or nonteaching. Rural code was also moderately positively associated with the three HCAHPS variables (HCAHPS #18, HCAHPS #19, CTM-3; rs = 0.429–0.515; p < 0.05) in the VA sample, meaning that the more rural the medical center, the higher the score on those three patient experience measures. Rurality was also moderately negatively associated with MRSA (rs = –0.317; p < 0.01), meaning that being more rural was associated with lower MRSA scores. Size had a moderate, positive association with teaching status (rs = 0.485; p < 0.01), meaning that larger hospitals were more likely to be teaching hospitals. Size also had small positive associations with IMM-2 (rs = 0.172), PSI 04 (rs = 0.209), and MRSA (rs = 0.268; all p < 0.01) and a small negative association with the CTM-3 (rs = –0.298; p < 0.01), meaning that being a larger hospital was associated with an increase in immunization, surgical deaths, and MRSA, and a decrease in patient-reported care transition. Finally, all three of the HCAHPS measures were strongly to very strongly intercorrelated (rs = 0.664–0.920; p < 0.01), as expected.
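For readers who want to reproduce this kind of analysis, the snippet below shows Spearman’s rho computed with standard Python tooling; the random data merely stand in for the hospital-level file and are a placeholder, not study data.

```python
# Minimal sketch (placeholder data): Spearman rank correlations of the kind
# reported in Table 2.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(250, 3)), columns=["RUCC", "size", "CTM-3"])

# Full matrix of correlation coefficients (rho only)
rho_matrix = df.corr(method="spearman")

# A single pair with its p-value, e.g., rurality versus bed size
rho, p = stats.spearmanr(df["RUCC"], df["size"], nan_policy="omit")
print(round(rho, 3), round(p, 3))
```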

Table 2

Correlation Coefficients.


VARIABLE          1.        2.        3.        4.        5.        6.         7.        8.        9.        10.       11.
1. RUCC           X
2. Owner          0.009     X
3. Size          –0.450**   0.317**   X
4. Teaching      –0.423**   0.007     0.485**   X
5. IMM-2         –0.011     0.444**   0.172**   0.017     X
6. PSI 04         0.091     0.450**   0.209**  –0.031     0.150     X
7. CAUTI         –0.014     0.036    –0.026     0.065    –0.142     0.164      X
8. MRSA          –0.317**  –0.076     0.268**   0.294**   0.017     0.157      0.237*    X
9. CTM-3          0.234**  –0.268**   0.298**  –0.038    –0.075    –0.294**    0.026    –0.036     X
10. HCAHPS #18    0.170**   0.162*   –0.039     0.023     0.154*   –0.057      0.035     0.027     0.664**   X
11. HCAHPS #19    0.231**   0.099    –0.106    –0.034     0.145*   –0.098     –0.002     0.013     0.707**   0.920**   X

** = Significant at the 0.01 level; * = Significant at the 0.05 level.

To compare medical centers at a granular level, z-scores were calculated from the raw scores on the seven quality measures for the combined sample, using the grand mean for each measure. Each VA medical center was then coded relative to its comparator as greater than, similar to (standardized scores within 0.5 of each other), or lower than its civilian comparator. For three measures (PSI 04, CAUTI, and MRSA), more than half of the pairwise data were missing because of one or more missing values per pair.
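One reading of this coding scheme is sketched below, assuming the raw scores for each measure are held in two pair-aligned series (one for VA hospitals, one for their civilian comparators) and treating a standardized difference within ±0.5 as “similar.” The layout and threshold interpretation are assumptions, not the authors’ code.

```python
# Minimal sketch (assumed pairwise layout): z-scores against the grand mean of
# the combined sample, then each VA hospital coded relative to its comparator.
import pandas as pd

def code_pairs(va_scores: pd.Series, civ_scores: pd.Series) -> pd.Series:
    """Raw scores for one measure, aligned so index i is the i-th matched pair."""
    combined = pd.concat([va_scores, civ_scores])
    grand_mean, sd = combined.mean(), combined.std()
    z_diff = (va_scores - grand_mean) / sd - (civ_scores - grand_mean) / sd
    # Labels refer to raw values; whether "greater" means "better" depends on
    # the measure (e.g., lower PSI 04 is better, higher CTM-3 is better).
    return pd.cut(
        z_diff,
        bins=[float("-inf"), -0.5, 0.5, float("inf")],
        labels=["VA < civilian", "VA = civilian", "VA > civilian"],
    )
```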

Results of independent samples t-tests are presented in Table 3 below. VA medical centers as a whole fared significantly better in measures of PSI 04/surgical deaths (t = –7.96, df = 66.7, p < 0.01, 95% CI for mean difference –60.80 to –36.43) and CTM-3/care transition (t = 4.73, df = 237, p < 0.01, 95% CI for mean difference 1.83 to 4.45). The VA fared worse in IMM-2/flu immunization (t = –6.91, df = 220.24, p < 0.01, 95% CI for mean difference –9.71 to –5.40) and HCAHPS #18/overall hospital rating (t = –2.16, df = 240, p < 0.05, 95% CI for mean difference –4.27 to –0.19). There was no significant difference between the two groups for CAUTI, MRSA, or HCAHPS #19/willingness to recommend.
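The comparison in Table 3 can be reproduced with standard statistical tooling; the sketch below uses placeholder data shaped like the IMM-2 summary statistics and shows the Welch (Satterthwaite) form of the t-test used when equal group variances cannot be assumed.

```python
# Minimal sketch (placeholder data, not study records): independent-samples
# t-tests, with and without the equal-variance assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
va = rng.normal(85.91, 9.77, size=122)       # placeholder shaped like VA IMM-2
non_va = rng.normal(93.46, 7.05, size=121)   # placeholder shaped like non-VA IMM-2

# Pooled-variance test (equal variances assumed)
t_pooled, p_pooled = stats.ttest_ind(va, non_va, equal_var=True)

# Welch/Satterthwaite test (unequal variances, as used for IMM-2 and PSI 04)
t_welch, p_welch = stats.ttest_ind(va, non_va, equal_var=False)
print(round(t_welch, 2), round(p_welch, 4))
```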

Table 3

T-test results comparing VA and non-VA hospital quality indicators.


MEASURE        VA: M (SD), n         non-VA: M (SD), n      95% CI FOR MEAN DIFFERENCE   t       df       P-VALUE
IMM-2†         85.91 (9.77), 122     93.46 (7.05), 121      –9.71, –5.40                 –6.91   220.24   <0.01**
PSI 04†        117.11 (45.24), 59    165.72 (16.34), 104    –60.80, –36.43               –7.96   66.7     <0.01**
CAUTI          1.02 (0.60), 74       1.10 (0.56), 74        –0.27, 0.11                  –0.82   146      0.41
MRSA           0.06 (0.04), 48       0.05 (0.04), 119       –0.01, 0.02                  1.10    165      0.27
CTM-3          55.26 (5.48), 117     52.12 (4.78), 122      1.83, 4.45                   4.73    237      <0.01**
HCAHPS #18     70.41 (8.27), 118     72.64 (7.82), 124      –4.27, –0.19                 –2.16   240      0.03*
HCAHPS #19     70.48 (7.38), 118     71.38 (6.33), 125      –2.64, 0.83                  –1.03   241      0.30

** = Significant at the 0.01 level; * = Significant at the 0.05 level; † Satterthwaite approximation employed due to unequal group variances.

Finally, Table 4 includes frequency results for standardization. For IMM-2, 46.6% of VA hospitals performed similar to or better than their non-VA comparator (n = 118 pairs with valid data). For PSI 04, 87.5% of VA hospitals performed similar to or better than their non-VA comparator (n = 56 pairs with valid data). For CAUTI, 71.3% of VA hospitals performed similar to or better than their non-VA comparator (n = 73 pairs with valid data). For MRSA, 72.4% of VA hospitals performed similar to or better than their non-VA comparator (n = 47 pairs with valid data). For CTM-3, 82.6% of VA hospitals performed similar to or better than their non-VA comparator (n = 115 pairs with valid data). For HCAHPS #18, 61.6% of VA hospitals performed similar to or better than their non-VA comparator (n = 117 pairs with valid data). And for HCAHPS #19, 61.1% of VA hospitals performed similar to or better than their non-VA comparator (n = 118 pairs with valid data).

Table 4

Frequencies for Pairwise Comparisons.


MEASURE        VA > CIVILIAN, n (%)   VA = CIVILIAN, n (%)   VA < CIVILIAN, n (%)   TOTAL, n (%)
IMM-2          0 (0)                  44 (37.3)              74 (62.7)              118 (100)
PSI 04         33 (58.8)              6 (10.7)               17 (30.4)              56 (100)
CAUTI          28 (38.4)              21 (28.8)              24 (32.9)              73 (100)
MRSA           24 (51.1)              10 (21.3)              13 (27.7)              47 (100)
CTM-3          62 (53.9)              34 (29.6)              19 (16.5)              115 (100)
HCAHPS #18     29 (24.8)              43 (36.8)              45 (38.5)              117 (100)
HCAHPS #19     42 (35.3)              31 (26.1)              46 (38.7)              119 (100)

Discussion

The purpose of this study was to compare US Department of Veterans Affairs and civilian medical centers on various quality indicators related to clinical and patient experience outcomes. Each VA medical center was matched with a similar comparator, and outcome data were analyzed both in aggregate and as standardized scores. Consistent with other studies, results were mixed, and thus generalizations about quality in the two populations cannot be made. The VA fared significantly better on measures of surgical deaths (PSI 04) and care transition (CTM-3), although the surgical death data had a high percentage of missing data (34%) and equal group variances could not be assumed. The VA fared significantly worse on measures of in-hospital influenza vaccination (IMM-2) and overall hospital rating (HCAHPS #18), with equal group variances not assumed for the IMM-2. Results of this study mirrored those of Anhang Price et al. (2018), who found that the VA performed the same or better on measures of inpatient safety and mortality, while non-VA hospitals performed better on measures of readmission and some effectiveness measures.

Pairwise analysis offers some insight into how the two categories of medical centers compare at a granular level: for all but one measure (IMM-2), a majority of VA medical centers performed the same as or better than their civilian comparator when comparing standardized scores. This suggests that low opinion of the VA system, particularly among nonusers, may be misplaced when VA centers are compared with closely matched civilian medical centers.

This study used 2018 data, collected pre-MISSION Act implementation. We recommend that additional analysis be conducted with 2019 data, the last collection period before implementation, so trends post-implementation can be analyzed with appropriate referents. Future analysis using this post-implementation data should control for factors related to the COVID-19 pandemic that could influence patient safety and/or patients’ experiences in receiving medical care.

Acknowledgements

The authors would like to thank the many individuals and organizations that have contributed to and inspired the development of this research. The authors express their sincere appreciation to Center for a New American Security colleagues Loren DeJonge Schulman, Melody Cook, and Maura McCarthy for their time and attention in supporting the work. Finally, the authors would like to acknowledge the VA Health Services Research and Development Advanced Medical Informatics Fellowship.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

References

  1. Anhang Price, R., Sloss, E. M., Cefalu, M., Farmer, C. M., & Hussey, P. S. (2018). Comparing quality of care in Veterans Affairs and Non-Veterans Affairs settings. Journal of General Internal Medicine, 33(10), 1631–1638. DOI: https://doi.org/10.1007/s11606-018-4433-7 

  2. Blay, E., DeLancey, J. O., Hewitt, D. B., Chung, J. W., & Bilimoria, K. Y. (2017). Initial public reporting of quality at Veterans Affairs vs Non-Veterans Affairs Hospitals. JAMA Internal Medicine, 177(6), 882–885. DOI: https://doi.org/10.1001/jamainternmed.2017.0605 

  3. Boden, M. T., & Hoggatt, K. J. (2018). Substance use disorders among veterans in a nationally representative sample: Prevalence and associated functioning and treatment utilization. Journal of Studies on Alcohol and Drugs, 79(6), 853–861. DOI: https://doi.org/10.15288/jsad.2018.79.853 

  4. Burgess, R. (2019, May 7). VA Mission Act access map. American Enterprise Institute. https://www.aei.org/multimedia/va-mission-act-access-map/ 

  5. Chrystal, J. G., Frayne, S., Dyer, K. E., Moreau, J. L., Gammage, C. E., Saechao, F., Berg, E., Washington, D. L., Yano, E. M., & Hamilton, A. B. (2022). Women veterans’ attrition from the VA Health Care System. Women’s Health Issues, 32(2), 182–193. DOI: https://doi.org/10.1016/j.whi.2021.11.011 

  6. House Committee on Veterans’ Affairs. (2019, September 25). Subcommittee on Health Hearing: MISSION critical: Care in the community update. https://veterans.house.gov/events/hearings/subcommittee-on-health-hearing-mission-critical-care-in-the-community-update 

  7. Kinney, R. L., Haskell, S., Relyea, M. R., DeRycke, E. C., Walker, L., Bastian, L. A., & Mattocks, K. M. (2021). Coordinating women’s preventive health care for rural veterans. The Journal of Rural Health, 38(3), 630–638. DOI: https://doi.org/10.1111/jrh.12609 

  8. Kullgren, J. T., Fagerlin, A., & Kerr, E. A. (2020). Completing the MISSION: A blueprint for helping veterans make the most of new choices. Journal of General Internal Medicine, 35, 1567–1570. DOI: https://doi.org/10.1007/s11606-019-05404-w 

  9. Lehrman, W. G., Elliott, M. N., Goldstein, E., Beckett, M. K., Klein, D. J., & Giordano, L. A. (2010). Characteristics of hospitals demonstrating superior performance in patient experience and clinical process measures of care. Medical Care Research and Review, 67(1), 38–55. DOI: https://doi.org/10.1177/1077558709341323 

  10. Martindale, S. L., Epstein, E. L., Taber, K. H., Brancu, M., Beckham, J. C., Calhoun, P. S., Dedert, E., Elbogen, E. B., Fairbank, J. A., Green, K. T., Hurley, R. A., Kilts, J. D., Kimbrel, N., Kirby, A., Marx, C. E., McCarthy, G., McDonald, S. D., Miller-Mumford, M., Moore, S. D., … Rowland, J. A. (2018). Behavioral and health outcomes associated with deployment and nondeployment acquisition of Traumatic Brain Injury in Iraq and Afghanistan Veterans. Archives of Physical Medicine and Rehabilitation, 99(12), 2485–2495. DOI: https://doi.org/10.1016/j.apmr.2018.04.029 

  11. Measuring Communities. (n.d.). Map. https://measuringcommunities.org/map 

  12. Meffert, B. N., Morabito, D. M., Sawicki, D. A., Hausman, C., Southwick, S. M., Pietrzak, R. H., & Heinz, A. J. (2019). US Veterans who do and do not utilize Veterans Affairs Health Care services: Demographic, military, medical, and psychosocial characteristics. The Primary Care Companion for CNS Disorders, 21(1), 26992. DOI: https://doi.org/10.4088/PCC.18m02350 

  13. Murray-Swank, N. A., Dausch, B. M., & Ehrnstrom, C. (2018). The mental health status and barriers to seeking care in rural women veterans. Journal of Rural Mental Health, 42(2), 102–115. DOI: https://doi.org/10.1037/rmh0000095 

  14. Narain, K., Bean-Mayberry, B., Washington, D. L., Canelo, I. A., Darling, J. E., & Yano, E. M. (2018). Access to care and health outcomes among women veterans using Veterans Administration Health Care: Association with food insufficiency. Women’s Health Issues, 28(3), 267–272. DOI: https://doi.org/10.1016/j.whi.2018.01.002 

  15. Newins, A. R., Wilson, S. M., Hopkins, T. A., Straits-Troster, K., Kudler, H., & Calhoun, P. S. (2019). Barriers to the use of Veterans Affairs health care services among female veterans who served in Iraq and Afghanistan. Psychological Services, 16(3), 484–490. DOI: https://doi.org/10.1037/ser0000230 

  16. O’Hanlon, C., Huang, C., Sloss, E., Anhang Price, R., Hussey, P., Farmer, C., & Gidengil, C. (2017). Comparing VA and Non-VA quality of care: A systematic review. Journal of General Internal Medicine, 32(1), 105–121. DOI: https://doi.org/10.1007/s11606-016-3775-2 

  17. US Centers for Medicare & Medicaid Services. (n.d.a.). CMS Medicare PSI-90 and component measures—Six-digit estimate dataset. https://data.cms.gov/provider-data/dataset/muwa-iene 

  18. US Centers for Medicare & Medicaid Services. (n.d.b.). Hospital compare. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalCompare 

  19. US Centers for Medicare & Medicaid Services. (2021a). HCAHPS: Patients’ Perspectives of Care Survey. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalHCAHPS 

  20. US Centers for Medicare & Medicaid Services. (2021b). Veterans Health Administration hospital performance data. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/VA-Data 

  21. US Centers for Medicare & Medicaid Services Data. (2020). Hospital service area. https://data.cms.gov/provider-summary-by-type-of-service/medicare-inpatient-hospitals/hospital-service-area 

  22. US Department of Homeland Security. (2017, September 27). Veterans Health Administration medical facilities. Homeland Infrastructure Foundation-Level Data (HIFLD). https://hifld-geoplatform.opendata.arcgis.com/datasets/f11d7d153bfb408f85bd029b2dac9298_0/explore?location=33.934749%2C-107.008404%2C3.19 

  23. Washington, D. L., Farmer, M. M., Mor, S. S., Canning, M., & Yano, E. M. (2015). Assessment of the healthcare needs and barriers to VA use experienced by women veterans: Findings from the National Survey of Women Veterans. Medical Care, 53, S23. DOI: https://doi.org/10.1097/MLR.0000000000000312 

  24. Yoon, K., Shatzkes, M. M., & Rai, K. (2018, June 12). The VA MISSION Act: Enhancing healthcare for veterans. The National Law Review, VIII(163). https://www.natlawreview.com/article/va-mission-act-enhancing-healthcare-veterans 
