Customer Satisfaction Ratings of PSHB Plans
Our Guide shows plan-by-plan customer satisfaction ratings reported by OPM. These ratings come from a 2024 survey in which a standardized questionnaire was sent to a sample of each plan’s members.
The ratings tell you how plans compare in several categories of service, based on respondents' answers to the survey questions. OPM reports scores for:
- Overall quality of the plan
- Overall rating of personal doctors
- Getting needed care
- Getting care quickly
- Coordination of care
- Claims processing
- Plan's customer service
For each of these aspects of care, we report how OPM scored the survey results for each plan. OPM used the following scale:
4 = Outstanding (plan's score was at or above the 90th percentile)
3 = Excellent (plan's score was in the 75th to 89th percentile)
2 = Good (plan's score was in the 50th to 74th percentile)
1 = Fair (plan's score was in the 25th to 49th percentile)
0 = Poor (plan's score was below the 25th percentile)
N/A or a blank indicates that no data was reported for the measure
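For readers who want to see how the scale works mechanically, the following is a minimal sketch (in Python) of how a plan's survey-score percentile could be mapped to the 0-4 ratings described above. The function name and structure are our own illustration based on the thresholds listed; OPM publishes only the resulting ratings, not its code.

```python
# Illustrative sketch only (not OPM code): maps a plan's survey-score
# percentile (0-100) to the 0-4 rating scale described above.
from typing import Optional

def percentile_to_rating(percentile: Optional[float]) -> str:
    """Convert a survey-score percentile to the corresponding rating label."""
    if percentile is None:
        return "N/A"              # no data reported for the measure
    if percentile >= 90:
        return "4 (Outstanding)"
    if percentile >= 75:
        return "3 (Excellent)"
    if percentile >= 50:
        return "2 (Good)"
    if percentile >= 25:
        return "1 (Fair)"
    return "0 (Poor)"

# Example: a plan scoring at the 78th percentile on a measure
print(percentile_to_rating(78))   # -> "3 (Excellent)"
```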
We advise that you keep several points in mind when using the customer ratings:
- Some of the ratings are based on opinions. Your opinions might not be the same as those of survey respondents.
- The way enrollees rate their plans can be affected by their age, education level, state of health, and other characteristics. For example, older individuals tend to give their plans relatively high ratings. If one plan has attracted a large proportion of members over age 65, its ratings might be high for that reason. The scores reported here have not been adjusted for member characteristics. Within the group of HMO and POS plans, such differences in member characteristics do not appear to have had much effect on scores; very few plans' overall scores would change by more than two percentage points if the scores were adjusted for member differences. But the effects might be greater among fee-for-service plans.
- Since the survey included only a sample of plan members, it is possible that a plan's ratings were affected by the luck of the draw: a disproportionately large number of satisfied or dissatisfied members happened to respond.
- Some enrollees did not return the questionnaire. It is possible that those who responded are more satisfied or less satisfied than those who did not. Our analysis of these and similar survey data has indicated that younger members and men are less likely to respond than women and older members. Young male members also tend to give somewhat lower ratings than older members of either gender. Fortunately, with roughly 40 percent of surveyed members responding for most plans, the respondents do at least represent a substantial portion of members. And we have some evidence from follow-up survey tests we have done that scores would not have changed much even if an additional 10 or 15 percent of surveyed members had responded.
In interpreting member ratings, also keep in mind that comparing across different types of plans is at best imperfect. First, it is not possible to compare regional plans (primarily HMO and POS plans) directly to national plans. For example, high or low ratings of "personal doctors" by enrollees in a national plan don't tell you how that plan's members in a particular region rate their doctors; yet it is the national plan's doctors in that region who should be compared with the doctors of members of regional plans serving only that region. Second, for the national plans (and the ratings we report for Blue Cross and Blue Shield by state), the ratings are only from FEHB & PSHB enrollees, while the ratings for the regional HMO and POS plans include ratings from non-FEHB & PSHB enrollees.
Even among HMO and POS plans, you should be aware that the ratings from non-Federal members may have come from members enrolled in a different variant of the plan than the variant offered to Federal employees and retirees. We have found, for example, that among members of the same plan, POS users are about two to three percentage points less likely than basic HMO users to give high ratings to the plan.
It should also be noted that differences in the way the survey was administered might explain small differences in plan scores. All the plans are required to use an independent firm to conduct their surveys under the supervision of the nonprofit National Committee for Quality Assurance (NCQA), using standardized survey procedures. But there is some room for variation in procedures. For example, some plans allow members to respond online as well as by mail or phone, and online responders tend to give somewhat lower ratings. Also, some plans get a relatively high percentage of their responses by phone (as opposed to mail), and phone responders tend to give higher ratings than mail responders.