Context: The majority of studies linking health to political behavior capture an individual's health with an ordinal survey question, called self-rated health status (SRHS), that asks respondents to rate their health along a five-point scale (e.g., excellent to poor). While studies generally have found associations between SRHS and political behavior, differences in respondent understanding and interpretation of the SRHS question and response categories may lead to biased inferences and invalid analyses.
Methods: The author used anchoring vignettes to evaluate previous inferences regarding SRHS and political behavior, including voter turnout, political participation, and party identification.
Findings: Individuals who participate in politics are more health optimistic than those who rarely participate. Liberals tend to be less health optimistic compared to moderates. Once the SRHS measure is adjusted for interpersonal incomparability, it is no longer associated with voter turnout or party identification.
Conclusions: Researchers should note that adjusting for interpersonal incomparability in the SRHS measure influences our conclusions about health and political behavior. Scholars of political behavior must continue to think conceptually about what we mean by health, as well as critically about how to measure this concept accurately.
The power of health as a determinant of political behavior has gained increasing attention from scholars. Health is associated with voter turnout (e.g., Denny and Doyle 2007), campaign participation (Soderlund and Rapeli 2015), party identification (Pacheco and Fletcher 2015; Schur and Adya 2013), political trust (Mattila and Rapeli 2017), political efficacy (Schur, Shields, and Schriner 2003), and various policy preferences, including opinions about the determinants of health (Robert and Booske 2011) and health care reform (Richardson and Konisky 2013). There appears to be building evidence that “health and illness shape who we are politically” (Carpenter 2012: 303), suggesting that population health inequalities may have significant political consequences for representation and policy outcomes.
As scholars continue to disentangle the effect of health on political behavior, it is imperative that we have a full understanding about what we mean by health. Health is a multidimensional concept that encompasses physical and mental aspects of well-being with various origins, trajectories, and consequences (Palloni 2005), suggesting different associations with political behavior. For instance, recent research has found that, while poor health is associated with decreased turnout (Burden et al. 2017; Ojeda and Pacheco 2017), other health ailments, such as cancer (Gollust and Rahn 2015), may spur political activism. As another example of the complex link between health and political behavior, there is evidence that poor health during adolescence has different effects on political participation than does poor health later in life (Ojeda and Pacheco 2017). How and why health is related to political behavior depend on the particular health condition being studied and, ultimately, our theoretical understanding of health.
The challenges in conceptualizing health are amplified by limitations in research design. By far, the most common way to study health and political behavior is through survey research methods where respondents are asked directly about their health (but see Sund et al., who used register-based data). To this end, the majority of political science research captures an individual's health with an ordinal survey question, called self-rated health status (SRHS), that asks respondents to rate their health along a five-point scale (e.g., excellent to poor). Of the 50 articles published on health and political behavior, for instance, 66% used SRHS to measure health and almost all found a positive association between health and political behavior, especially when looking at voter turnout.1
The dominance of SRHS as a measure of objective health extends well beyond political science research; SRHS is widely used across multiple fields, including the medical, social, and behavioral sciences (Garbarski 2016). The popularity of SRHS is due to its availability on surveys, as well as its correlation with medically determined health conditions (Bjorner, Fayers, and Idler 2005), health service use (Angel and Gronfein 1988), and mortality (Jylha 2009). Many claim “an individual's health status cannot be assessed without” SRHS and that this single item captures “an irreplaceable dimension of health status” (Idler and Benyamini 1997: 34).
While SRHS is the most popular measure of health, researchers have little control over how respondents understand and interpret the SRHS question and response categories, which may lead to methodological problems. Chief among these problems is interpersonal incomparability, which occurs when individuals with the same underlying quantity of interest (in this case, health) have unequal probabilities of providing the same answer (Hopkins and King 2010). Analyses based on unadjusted SRHS may produce misleading results if different groups have systematically different ways of using the item's response categories or if the factors that lead to measurement error in SRHS are related to the outcome of interest.
A popular method for ameliorating this problem, which I explain below, is anchoring vignettes (King et al. 2004; King and Wand 2007). After providing their own health assessment, respondents rate hypothetical individuals (e.g., a person who is described as energetic) on the same response scale. This allows researchers to rescale the respondents' self-assessments on a common scale, enabling comparability across groups and a correction of measurement error. While successful in improving the measurement of SRHS, anchoring vignettes are underutilized by political scientists. Is the association between SRHS and political behavior sensitive to these methodological issues? How do our inferences change once we correct for the measurement error in SRHS?
Using the 2014 Cooperative Congressional Election Survey (CCES), I used anchoring vignettes to evaluate previous inferences regarding SRHS and political behavior. In this article I first explore reporting differences in SRHS across demographic characteristics (e.g., gender, race, and socioeconomic status) as well as participation and political predispositions; the latter is a significant contribution to the literature on SRHS and interpersonal comparability. I then estimate unadjusted and adjusted models associating health with political behavior, including voter turnout, other forms of political participation, and party identification. I end this article with suggestions for improving the measurement of health for scholars of political behavior.
SRHS and the Problem of Interpersonal Incomparability
Answering a survey question validly requires respondents to complete a demanding set of cognitive processes, including comprehension of the question, retrieval of information, judgment and estimation, and reporting of an answer (e.g., Groves 2004; Krosnick 1999). My focus here is on differences in how respondents evaluate and comprehend the SRHS question and response scales. Garbarski (2016) describes this phenomenon as “differences in evaluative frameworks” (see also Jylha 2009). Others refer to this issue as “reporting differences” (Burgard and Chen 2014), “reporting heterogeneity” (e.g., Dowd 2012), “differential item functioning” (e.g., Grol-Prokopczyk, Freese, and Hauser 2011; King et al. 2004), and “interpersonal incomparability” (Hopkins and King 2010; King and Wand 2007). Regardless of the description, methodological problems arise when respondents with the same underlying quantity of interest understand the survey question and response categories differently. Inferences and conclusions are significantly threatened if scholars attribute differences in question interpretation to important intergroup differences in survey responses (Hopkins and King 2010). There are also inferential problems if unadjusted SRHS is used as an independent variable and the factors that lead to measurement error are correlated with the outcome variable. Matters are more complicated in a multiple regression context, since measurement error in an independent variable can bias the estimates of the effects of other independent variables in complicated and often unpredictable ways.
Interpersonal incomparability is particularly problematic for abstract concepts like health. Indeed, the majority of research on the validity of SRHS has found significant differences in intergroup interpretations of the response categories. I briefly summarize these results below.2
The problem of interpersonal incomparability is particularly pronounced for comparisons across cultural groups or cross-nationally if subpopulations have different norms and expectations. A comparison of American and English men's health, for instance, shows that, even though American men have objectively worse health, when asked the SRHS question they report better health (Banks et al. 2006). A similar study found that Aboriginal Australians report better SRHS than the general Australian population, while objective measures suggest otherwise (Murray et al. 2002). Evidence of interpersonal incomparability is found across Asian countries (Salomon, Tandon, and Murray 2004) and European countries (e.g., Murray et al. 2002), as well as others (Shmueli 2003). This has led to a general conclusion by the World Health Organization that “self-report ordinal responses are not comparable across or even within populations primarily because of response category cut-point shifts” (Sadana et al. 2002). As a result, the World Health Organization's World Health Survey includes various anchoring vignettes to validly measure global health outcomes, including self-rated health status, mobility, vision, cognition, energy, pain, self-care, and sleep, among others.
Previous studies have also found significant differences in SRHS comparability across men and women. Using the Wisconsin Longitudinal Study, for instance, Grol-Prokopczyk, Freese, and Hauser (2011) found that women tend to give higher ratings of health than do men. After reading a vignette about a person with severe heart disease, 48% of women rated the character's health as excellent, compared to 34% of men. Similarly, after reading a vignette about a person with severe diabetes, 17% of women reported the character in poor health, compared to 33% of men. These examples suggest that women are more health optimistic than men. These findings are also consistent with the pattern that women give slightly higher self-ratings than men, even when reporting significantly more health problems (Hauser and Roan 2006), particularly in old age (Ferraro 1980). What's more, Grol-Prokopczyk, Freese, and Hauser (2011) found that differences in SRHS between men and women disappear once models adjust for women's greater health optimism.
Differences in interpersonal incomparability also exist across socioeconomic status. Using the National Health Interview Survey linked to Multiple Cause of Death files, Dowd and Zajacova (2007) found that the effect of SRHS on mortality risk differs by education and income, such that lower health ratings are more strongly associated with mortality among individuals with high socioeconomic status compared to those with low socioeconomic status. Scholars speculate that these differences may have to do with health expectations; for instance, those with high socioeconomic status may believe they should be healthy and systematically upgrade their health (Iburg et al. 2001; Dowd and Todd 2011). It may also be that individuals with higher levels of education have greater confidence to handle a given level of health impairment (Grol-Prokopczyk, Freese, and Hauser 2011). Either way, Dowd and Todd (2011) conclude that educational inequalities in health may be substantially underestimated when not adjusted for reporting differences.
Evidence suggests that reporting health differences also occur across race/ethnicity (Assari, Lankarani, and Burgard 2016) due to cultural or linguistic differences (Finch, Nicolson, and Fawcett 2002) in the interpretation of health status. As an example, Germans consider the use of excellent an exaggeration in their language and are thus less likely to use this category to rate their health (Jurges 2007). Hispanics may incorporate both social and mental health aspects into their self-ratings, while whites use only the former (Finch et al. 2002), which could cause differences in health reporting. For black/white differences, the majority of research has found that blacks are more health pessimistic and report worse SRHS compared to whites with similar objective health conditions (Boardman 2004; Ferraro 1993; Spencer et al. 2009). Yet, there is also evidence that minority respondents are relatively health optimistic compared to whites on specific health conditions (Dowd and Todd 2011).
Anchoring Vignettes as a Solution
Anchoring vignettes are promising solutions for interpersonal incomparability (e.g., Murray et al. 2002; King et al. 2004; King and Wand 2007; Hopkins and King 2010). Vignettes have been widely used on US surveys (e.g., the Wisconsin Longitudinal Survey) and international surveys (e.g., World Health Survey) to measure a variety of concepts, such as political efficacy, affect, school community strength, and subjective economic welfare, as well as health (for more examples, see gking.harvard.edu/vign/eg). The logic of anchoring vignettes is quite simple (for more detailed explanations, see Wand 2013; King and Wand 2007; King et al. 2004). Respondents are presented with hypothetical individuals through brief vignettes and asked to place those individuals on the same response scale as the self-assessment question (the appendix lists vignettes 1–4 used in the 2014 CCES). For example, after being asked to rate their own health, respondents might be asked to read vignette 4 about David/Nancy and subsequently rate his/her health:
[David/Nancy] is energetic and has little trouble with bending, lifting, and climbing stairs. [He/she] rarely experiences pain, except for minor headaches. In the past year [David/Nancy] spent one day in bed due to illness.
Since the vignette has the same response categories as the self-assessment, it provides a common metric for scholars to rescale the original response, thus correcting for interpersonal incomparability. Because David/Nancy has the same level of health no matter what, variation in responses to the anchoring vignette question can only be due to interpersonal incomparability. General practice is to include multiple vignettes, depending on the concept and diversity of the sample (Hopkins and King 2010).
Two assumptions must be met for anchoring vignettes to be valid. First, there needs to be response consistency, meaning that respondents must use the response categories in similar ways when rating themselves and the hypothetical individuals; respondents cannot hold themselves to higher or lower standards (Grol-Prokopczyk, Freese, and Hause 2011). As long as response consistency exists, scholars can subtract off the error in reporting differences from the self-assessment to estimate an adjusted level of SRHS. In the simplest method of analysis (explained in notation below), the correction is made by recoding the self-assessment in relation to the vignette responses as less than, equal to, or greater than the vignette responses (King and Wand 2007).
Second, there must be vignette equivalence, which means that “the level of the variable represented in the vignette is understood by all respondents in the same way” (King and Wand 2007: 49). This assumption holds if, on average, respondents perceive each domain level or threshold represented by a vignette in the same way. For example, regardless of the level of assessment of David/Nancy, respondents must understand that “being energetic and having little trouble with bending” represents the same point on the underlying continuum and that David/Nancy is healthier than other hypothetical individuals with less favorable descriptions. Research shows that both of these assumptions hold when applying anchoring vignettes to the SRHS question (Salomon, Tandon, and Murray 2004; Au and Lorgelly 2014; Shmueli 2003; Grol-Prokopczyk, Freese, and Hauser 2011).
The 2014 CCES and Health Vignettes
I relied on a special module of the 2014 CCES, which includes the SRHS question, anchoring vignettes, and various measures of political behavior (n = 1,000), to explore reporting differences and adjust for interpersonal incomparability. The CCES is a web-based survey, but with proper survey weights it produces nationally representative estimates of social and political indicators that are as accurate as those from telephone surveys (Ansolabehere and Schaffner 2011).
Respondents are asked to rate their health using five response categories (excellent to poor) prior to being asked to rate four hypothetical individuals from separate vignettes (see the appendix). Gender is matched with the respondent to encourage self-reference (see gking.harvard.edu/anchoring-vignettes-faqs). The order of the vignettes and response categories is randomized to mitigate problems of question ordering and priming. Table 1 presents descriptive statistics of the SRHS self-assessment and SRHS of the four hypothetical individuals; on average, respondents rate David/Nancy (vignette 4) in better health than Paul/Mary (vignette 3), who is rated in better health than Andrew/Jennifer (vignette 2), who is rated in better health compared to Michael/Connie (vignette 1). This rank ordering is consistent with the descriptors in the vignettes and provides evidence of vignette equivalence.
Differences in Health Ratings
To identify factors predicting differences in vignette ratings, I estimated an ordinary least squares (OLS) regression model including basic demographics and political variables. The dependent variable is a standardized scale that combines all responses to the four vignettes, with higher values indicating higher perceptions of health (alpha reliability = 0.68). Basic demographics include unadjusted SRHS with higher values indicating better health, age, gender (1 = female), race (black = 1, else = 0), education (no high school = 1, high school graduate = 2, some college = 3, 2-year = 4, 4-year = 5, postgraduate = 6), and family income (ranging from <$10,000 to ≥$500,000). Political variables include political interest, party identification (independent is the omitted category), ideology (moderate is the omitted category), a voter turnout scale, and a political participation scale. The political interest variable asks respondents how often they follow what's going on in government, with values ranging from “hardly at all” to “most of the time.” The voter turnout scale was created by combining self-reported voting in 2012, validated voting in 2014, and self-reported registration (alpha reliability = 0.71). Finally, the political participation scale was created by combining responses to whether the respondent did the following over the past year: attended local political meetings, put up a political yard sign, worked for a candidate or a campaign, donated money to a candidate, campaign, or political organization, or did none of these activities (alpha reliability = 0.75). Both the voter turnout scale and the political participation scale were rescaled from 0 to 1 to ease statistical interpretation. Estimates were weighted using the survey-provided weights.
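The scale construction described here (summing items, checking alpha reliability, and rescaling to the 0 to 1 interval) can be sketched in a few lines of Python. The data and function names below are hypothetical illustrations, not the CCES responses.

```python
from statistics import pvariance


def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response columns."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]            # per-respondent sums
    item_var = sum(pvariance(col) for col in items)         # sum of item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))


def rescale01(values):
    """Min-max rescale a list of scores to the 0-1 interval."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]


# Hypothetical participation items (1 = did the activity, 0 = did not)
meeting  = [1, 0, 1, 1, 0, 0, 1, 0]
yardsign = [1, 0, 1, 0, 0, 0, 1, 0]
donated  = [1, 0, 0, 1, 0, 0, 1, 1]

scale = rescale01([m + y + d for m, y, d in zip(meeting, yardsign, donated)])
alpha = cronbach_alpha([meeting, yardsign, donated])
```

With these made-up items the scale runs from 0 to 1 and alpha is in the range typically reported for short participation batteries.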
As shown in table 2, individuals who rated themselves in better health were more likely to be health optimistic; this provides evidence of response consistency, since vignette ratings are positively and significantly associated with self-ratings. Women gave higher ratings than men, a difference that is statistically significant, not trivial in size, and consistent with previous research. The magnitude of this effect can be seen in some simple descriptive statistics: 5% of women rated Paul/Mary in vignette 3 as in poor health, compared to 9% of men. Meanwhile, 66% of women rated Michael/Connie in vignette 1 as in poor health, compared to 73% of men. Inferences regarding other demographic variables are largely in line with previous research. Education and ratings have a positive association, while income and age are negatively associated with ratings (see Grol-Prokopczyk, Freese, and Hauser 2011).
Results in table 2 also suggest that political ideology and political participation may be associated with health ratings. The model suggests that liberals are more health pessimistic and give lower ratings than do moderates. The magnitude of this effect rivals others and can be displayed via descriptive statistics. Eighty-seven percent of liberals rated Andrew/Jennifer in vignette 2 as being in poor or fair health, compared to 82% of moderates. Likewise, 77% of liberals rated Michael/Connie in vignette 1 as in poor health, compared to 67% of moderates. The model also suggests that people who are high on the political participation scale are more health optimistic compared to people who rarely participate in political acts beyond voting, although this effect is statistically significant only at the 0.10 level with a two-tailed test. The model predicts that, all else equal, going from 0 to 1 (the min to the max) on the political participation score increases a person's health rating by 0.24. These results suggest that respondents who report being participatory also have greater health optimism.3
Reevaluating the Link between SRHS and Political Behavior
The group differences in health ratings imply the presence of those same group differences in self-rating (assuming response consistency). How does adjusting the SRHS measure to account for these differences influence analyses that associate health and political behavior? To answer this, I compared two models for three outcomes of interest, including the two participatory scales described above and party identification, which is a 7-point scale ranging from “strong Democrat” to “strong Republican.”4 One model includes unadjusted measures of SRHS, and the other model includes an adjusted measure of SRHS. A comparison of models reveals how accounting for interpersonal incomparability influences our inferences regarding health and political behavior.
I used a simple, nonparametric approach to analyze the relative ranks of self-assessment responses compared to the vignette responses (King and Wand 2007; Wand and King 2016). Define Ci as the self-assessment relative to the corresponding set of vignettes. Let yi be the self-assessment response, and let zi1, . . . , ziJ be the J vignette responses for the ith respondent. For respondents with consistently ordered rankings of all vignettes (zi,j−1 < zij for j = 2, . . . , J), I created the adjusted self-assessment Ci:
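The adjusted self-assessment, reconstructed here from King and Wand's (2007) definition and the worked description that follows, takes one of 2J + 1 ordered values:

$$
C_i = \begin{cases}
1 & \text{if } y_i < z_{i1} \\
2 & \text{if } y_i = z_{i1} \\
3 & \text{if } z_{i1} < y_i < z_{i2} \\
\;\vdots & \\
2J & \text{if } y_i = z_{iJ} \\
2J + 1 & \text{if } y_i > z_{iJ}
\end{cases}
$$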
This equation is generic and can be used for any number of vignettes. Consider an example with two vignettes, shown in table 3, which gives all possible combinations that can result from two vignettes (z1 and z2) and a self-assessment (y). To ease explanation, let the z1 vignette be the Michael/Connie vignette 1, and let the z2 vignette be the David/Nancy vignette 4 (question wording is in the appendix). Based on the reading of these two vignettes, it is clear that the correct ordering is z1 < z2 since Michael/Connie's health is described as worse than David/Nancy's health. If the respondent ranks the vignettes in the correct order (z1 < z2), we can create an adjusted variable by recoding the self-assessment in relation to the vignettes. The adjusted variable can fall into one of five categories: (1) less than Michael/Connie, (2) equal to Michael/Connie, (3) between Michael/Connie and David/Nancy, (4) equal to David/Nancy, and (5) greater than David/Nancy. These five categories are shown on the right-hand side of the equation above and in the five columns in table 3. Again, this assumes that the respondent correctly orders the vignettes.
As shown in example 1 in table 3, individuals who rate themselves below both z1 and z2, tying neither, get a score of 1 on the corrected self-assessment score (C). As a concrete example, individuals who rate themselves as a 2 (fair) on the SRHS question, rank Michael/Connie as 3 (good), and rank David/Nancy as 5 (excellent) would receive an adjusted score of 1. Individuals who rate Michael/Connie and David/Nancy in an identical manner but rank themselves as 1 (poor) on the SRHS question will also receive an adjusted score of 1. The differences in the unadjusted scores of these two sets of individuals would be due to differences in response interpretations; the first group exhibits a level of health optimism evidenced by giving Michael/Connie a rating of good and an SRHS of fair. The first group is more willing to give positive responses compared to the second group. Examples 1–5 in table 3 show additional cases where both vignettes are ordered correctly and not tied; in these cases, C is scalar, meaning it can take on only a single value.
The remaining challenge is what score to give respondents who give tied or inconsistently ordered rankings. This is done by first checking which of the conditions on the right-hand side of the equation shown at the top of table 3 are true and then summarizing C with the vector of responses that range from the minimum to maximum values among all the conditions that hold true (King and Wand 2007). Examples 6 and 8 show combinations where there are ties but the self-assessment is not equal to either vignette response. In these cases, we can obtain a scalar value of C. Consider example 6 with our concrete example. Here, an individual ranked himself or herself as 3 (good) and ranked both Michael/Connie and David/Nancy as 4 (very good). In this case, the individual would receive an adjusted SRHS value of 1 because the only condition that is met is y < z1.
Examples 7 and 10–13 show instances where the respondent ordered the vignettes incorrectly, resulting in nonscalar values of C. For instance, in example 10, C could range from 1 to 4, given the order of the vignettes in relation to the self-assessment. Again, using the concrete example, this would occur if an individual ranked himself or herself as 4 (very good), ranked Michael/Connie as 5 (excellent), and ranked David/Nancy as 4 (very good). In this example two conditions are met: y < z1 and y = z2. That means that the adjusted SRHS could range anywhere from 1 (poor) to 4 (very good) once we control for differences in response interpretations.
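The recoding just described can be sketched as a short function. This is a minimal illustration of the King and Wand (2007) nonparametric approach; the function name and the (min, max) return convention for interval values are my own.

```python
def adjusted_srhs(y, z):
    """Nonparametric anchoring-vignette recode (after King and Wand 2007).

    y: self-assessment on the shared scale (e.g., 1 = poor ... 5 = excellent).
    z: vignette ratings in their intended order, least to most healthy.
    Checks each of the 2J+1 ordered conditions and returns (min, max) of the
    categories that hold: equal endpoints mean C is scalar; unequal endpoints
    mean C is an interval (ties or misordered vignettes).
    """
    J = len(z)
    held = []
    if y < z[0]:                          # category 1: below the first vignette
        held.append(1)
    for j in range(J):                    # categories 2j: tied with vignette j
        if y == z[j]:
            held.append(2 * (j + 1))
    for j in range(J - 1):                # categories 2j+1: strictly between
        if z[j] < y < z[j + 1]:           #   vignettes j and j+1
            held.append(2 * (j + 1) + 1)
    if y > z[-1]:                         # category 2J+1: above the last vignette
        held.append(2 * J + 1)
    return min(held), max(held)


# The worked cases in the text, with z = (Michael/Connie, David/Nancy):
adjusted_srhs(2, (3, 5))  # self fair, vignettes ordered correctly -> (1, 1)
adjusted_srhs(3, (4, 4))  # example 6: tied vignettes, still scalar -> (1, 1)
adjusted_srhs(4, (5, 4))  # example 10: misordered vignettes -> interval (1, 4)
```

The interval case simply reports the range of all conditions that hold, matching the summary rule in table 3.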
As King and Wand (2007; see also Wand and King 2016) explain, when C is an interval rather than scalar, we cannot distinguish the value of C without further assumption. To reduce the number of respondents with interval values, researchers can use diagnostics to determine the relative ordering of vignettes, as well as to choose the number of vignettes.
Through various diagnostics on the CCES vignettes, analyses revealed that respondents often had identical ranks of vignette 1 and vignette 2: 34% of respondents ranked both vignette 1 and vignette 2 as being in poor health; an additional 14% ranked vignette 1 and vignette 2 as being in fair health. Recall that this results in interval values only if respondents also ranked their own health as equal to the vignettes. As King and Wand (2007) explain, one way to reduce the number of interval values is to omit a vignette that respondents often tie. As a result, vignette 2 is omitted from the analyses. The number of respondents with interval values decreases from 136 to 109 when I omit vignette 2. In the end, a total of 823 respondents have scalar values, while 109 have interval values on the adjusted SRHS measure using vignettes 1, 3, and 4.5 Figure 1 shows the unadjusted SRHS measure, and figure 2 shows the adjusted SRHS measure. Comparing figures 1 and 2 shows that the adjusted SRHS measure is more left skewed but also has more variation than the unadjusted SRHS measure. Recall that the unadjusted SRHS measure ranges from 1 (poor) to 5 (excellent), while the adjusted SRHS measure ranges from 1 to 7 (2J + 1, where J = 3 after omitting vignette 2).
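One simple version of the tie diagnostic described above is to count, for each pair of vignettes, how often respondents give them identical ratings; the pair tied most often flags the vignette that is a natural candidate to omit. The data below are hypothetical, not the CCES responses.

```python
from collections import Counter
from itertools import combinations


def tie_counts(ratings):
    """Count identical ratings for each pair of vignette columns.

    ratings: list of per-respondent tuples of vignette responses.
    Returns a Counter mapping (i, j) vignette index pairs to tie counts.
    """
    ties = Counter()
    for resp in ratings:
        for i, j in combinations(range(len(resp)), 2):
            if resp[i] == resp[j]:
                ties[(i, j)] += 1
    return ties


# Hypothetical ratings for vignettes 1-4 (columns) by five respondents
ratings = [(1, 1, 2, 4), (1, 1, 3, 5), (2, 2, 3, 4), (1, 2, 2, 5), (1, 1, 4, 5)]
ties = tie_counts(ratings)
# Here vignettes 1 and 2 (indices 0 and 1) are tied most often,
# so one of the two would be the candidate to drop.
```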
In the analyses that follow, I included standard predictors of political behavior, including many of the same demographic variables described above, such as gender, education, age, race, and family income. For the turnout and participation models, I also included age squared to account for the curvilinear relationship between age and participation (Plutzer 2002), religious attendance ranging in responses from “never” to “more than once a week,” a binary measure of partisan strength, a binary measure of ideological extremity, and whether the respondent was contacted by a political organization during the 2014 midterm election. All of these variables are expected to have a positive association with turnout and political participation (see, e.g., Rosenstone and Hansen 1993). Binary measures for party identification and ideology were also included. For the party identification model, I included religious attendance and the ideology measures described above. Because the dependent variables are continuous, I used standard OLS regressions; estimates were weighted using the survey-provided weights.6
As shown in the first column of table 4, unadjusted SRHS is positively associated with the voter turnout scale, which is consistent with previously accumulated research. The model predicts that people who report being in excellent health have a voter turnout score about 5 points higher than those who report being in poor health.7 The association between SRHS and voter turnout is larger (2.17; p < 0.01, n = 799) when the family income variable is omitted from the analyses. There is an argument that income, as a measure of socioeconomic status, needs to be included in the model since it is highly related to SRHS (Lantz et al. 1998). At the same time, it may be defensible to omit income since missing values are commonplace and problematic (e.g., Schenker et al. 2006) and since it is mostly education and not income that is related to voter turnout (Rosenstone and Hansen 1993). I opted to include income in the models and note when inferences change with its omission. Education, family income, age, partisan strength, political interest, and being contacted are all positively associated with the turnout scale, which is of little surprise to scholars of political behavior.
How do inferences change once SRHS is adjusted for interpersonal incomparability? Results in table 4 suggest that health is unrelated to voter turnout using an adjusted measure of SRHS.5 The coefficient on the adjusted SRHS measure is 0.31 and not statistically significant using conventional levels. All other inferences are unchanged. Interestingly, when family income is omitted from the models, the coefficient on the adjusted SRHS is 1.06 and is statistically significant at the 0.05 level (n = 799).
Depending on the model specification, there are two inferences regarding the association between SRHS and voter turnout. If income is included in the models, the results indicate that once one adjusts for interpersonal incomparability, SRHS is no longer associated with turnout. This suggests that previous research erred in finding a positive association between self-reported health and turnout; instead, the factors that influence response inconsistencies in the self-assessment of health are related to participation, not objective health as measured by the SRHS question. If income is not included in the models, results indicate that SRHS is positively associated with turnout, but with a lower substantive association once adjusted for interpersonal incomparability. Either way, researchers should take note that adjusting for interpersonal incomparability in the SRHS measure influences our previous conclusions about health and turnout.
Results in table 4 show that SRHS is unrelated to political acts beyond voting, regardless of whether adjustments are made for interpersonal incomparability and regardless of whether income is included in the model specification. Unsurprisingly, partisan strength, political interest, ideological extremity, and being contacted by a political campaign have the most robust associations with the political participation scale, which conforms to previous research (e.g., Rosenstone and Hansen 1993).
Finally, table 4 shows that unadjusted SRHS has a positive and statistically significant association with identifying strongly with the Republican Party, which is consistent with previous analyses (Pacheco and Fletcher 2015). More specifically, the model suggests that a respondent who reports being in excellent health scores a 5.14 on the partisanship scale, compared to a person who reports being in poor health, who has a predicted score of 0.14, all else equal. This association is roughly equal in size to the effects of religious attendance and political interest on identifying strongly with the Republican Party. However, the coefficient on SRHS fails to reach statistical significance once it is adjusted for interpersonal incomparability: the coefficient on adjusted SRHS is a mere 0.04 and no longer statistically significant at traditional levels. Similar to the analyses on voter turnout, the results suggest that objective health measured by adjusted SRHS is unrelated to identifying strongly with the Republican Party.
The goal of this study was twofold. First, I set out to identify political variables associated with reporting differences in SRHS. I found suggestive evidence that individuals who participate in politics are more health optimistic than those who rarely participate. Moreover, liberals tend to be less health optimistic compared to moderates. Second, I reassessed the association between health and political behavior once accounting for variations in health ratings and, subsequently, self-assessments in health. I found that SRHS may not be as uniformly related to political behaviors as suggested by previous research. Once the SRHS measure is adjusted for interpersonal incomparability, it is no longer associated with voter turnout or party identification. What do these results mean for the burgeoning field of health and political behavior?
It is imperative to point out the limitations of the current research design, of which there are plenty. The most critical is the context of the survey, which occurred during the 2014 midterm elections. Scholars of political behavior are quick to point out that factors influencing political behavior in midterm elections are typically different from those in presidential elections (e.g., Rosenstone and Hansen 1993). I assuaged these concerns by using a turnout scale that includes 2012 as well as 2014 turnout. Even still, the 2012 turnout measure is self-reported and likely suffers from biases in overreporting, which is a large and growing problem for retrospective questions about turnout in presidential elections (Burden 2000). Thus, SRHS may be unrelated to turnout or participation in midterm elections but still associated with political behavior in other electoral contexts. This underscores the need for scholars to replicate and extend these analyses and to explore conditional relationships across various political contexts.
Likewise, there is evidence that both turnout and party identification have developmental and contemporary components, which necessitates panel analyses. Cross-sectional analyses, as used here, do little to differentiate factors that influence the latent long-term probabilities of turning out or identifying with a particular party from the short-term responses to recent personal or political conditions (e.g., Plutzer 2002). I cannot, for instance, distinguish between how growing up with a chronic health condition is associated with voter turnout in 2014, compared to a recent bout of illness in adulthood. This is important since there is evidence that health exhibits both developmental and contemporary effects (Pacheco and Fletcher 2015; Ojeda and Pacheco 2017) on political behavior. In short, while I found that adjusted SRHS is unrelated to turnout and party identification, it may still be relevant as a long-term factor in political development.
It would be a mistake to conclude that health is not related to political behavior based on these analyses alone. Instead, the message is that scholars of political behavior must continue to think conceptually about what we mean by health, as well as critically about how to accurately measure this concept. SRHS is but one measure of health. If used on surveys, SRHS should be adjusted for interpersonal incomparability using anchoring vignettes as presented in this article. Yet, I would argue that this quick fix does not go far enough in truly understanding how health influences political behavior. Instead, we are tasked with implementing innovative research designs that both more accurately measure health and improve the identification of causal mechanisms. These designs may include panel analyses with various self-reported measures of physical, mental, and overall health across the life span; the use of register-based or hospital medical records in conjunction with political behavior measures; the use of experimental drug trials to understand how improving health may impact political behavior; the addition of clinical health readings (e.g., blood pressure, weight, and height) to survey questionnaires; an instrumental variables approach that identifies community-level health shocks (e.g., poor water quality or lead exposure) and political outcomes; or a quasi-experimental approach that takes advantage of Medicaid expansion or retrenchment. It is my hope that scholars continue to study health and political behavior—and to do so creatively—as understanding this topic is critical to our understanding of a successful and healthy democracy.
I thank the participants of the “Health and Political Behavior” conference hosted by the University of Missouri, as well as the anonymous reviewers for their feedback. Thank you to Scott LaCombe for his research assistance. A special thank you to Sarah Gollust and Jake Haselswerdt for their thoughtful comments and patience.
A graduate student and I started our search using Burden et al. (2017), looking at their works cited and identifying more recent articles that cited this work. We also did an article search on JSTOR using the search terms "health" and "political behavior" to identify other sources. We then read each article to determine if and how health was included in the analyses, as well as in the theoretical discussions.
There is evidence that other demographic variables, such as age (Idler 1993; Schnittker 2005) and marital status (Zheng and Thomas 2013), are also related to reporting differences. Due to space limitations, I do not include a discussion about these.
It is interesting that respondents who reported voting in the 2014 election but who actually did not are also more health optimistic (β = 0.11, p = 0.06).
To ease statistical interpretation, the turnout and participation scales are multiplied by 100.
There are multiple options for dealing with interval values (Wand and King 2016). I opted to omit respondents with interval values from the analyses. This solution is simplistic and, admittedly, wastes information, and it may introduce selection bias. More complex parametric solutions, however, are beyond the scope of this article. The analytical samples across models in table 4 are not identical; discrepancies exist due to list-wise deletion as the dependent variables change. The analytical samples differ between analyses in tables 2 and 4 but should not cause alarm. The analysis in table 2 is concerned with an individual's rankings and not with correcting the SRHS measure. Individuals who offer incorrect rankings can be included in these analyses. Discrepancies between tables 2 and 4 result from list-wise deletion from the change in dependent variables and addition of independent variables, as well as the deletion of respondents who have interval values on adjusted SRHS.
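To make the adjustment concrete, the standard nonparametric recoding from the anchoring-vignette literature (e.g., Wand and King 2016) can be sketched as follows. This is a minimal illustration, not the estimation code used in this article: it assumes four vignettes rated on the same scale as the self-rating, with the vignette ratings supplied in the vignettes' true health order (worst first) and higher codes indicating better health. Respondents whose vignette ratings are tied or out of order receive interval values; mirroring the choice described above, the sketch simply returns None for them.

```python
def recode_srhs(self_rating, vignette_ratings):
    """Recode a self-rating relative to a respondent's own vignette ratings.

    vignette_ratings: the respondent's ratings of the J vignettes, listed in
    the vignettes' true health order (worst health first); higher = better.
    Returns a value on the 1..(2J + 1) adjusted scale, or None when the
    vignette ratings are tied or inverted (an "interval value").
    """
    z = list(vignette_ratings)
    # Ties or inversions violate the known vignette ordering -> interval value.
    if any(a >= b for a, b in zip(z, z[1:])):
        return None
    c = 1
    for zj in z:
        if self_rating > zj:
            c += 2          # self-rating strictly above this vignette
        elif self_rating == zj:
            c += 1          # self-rating exactly at this vignette
            break
        else:
            break           # self-rating strictly below this vignette
    return c

# A respondent who rates their own health 5 but rates even the healthiest
# vignette below 5 lands at the top of the adjusted 1..9 scale.
print(recode_srhs(5, [1, 2, 3, 4]))  # 9
print(recode_srhs(3, [1, 2, 4, 5]))  # 5 (between the second and third vignettes)
print(recode_srhs(3, [2, 2, 4, 5]))  # None (tied vignette ratings)
```

The adjusted scale runs from 1 (rates own health below the sickest vignette) to 2J + 1 (rates own health above the healthiest vignette), so two respondents who give the same raw SRHS answer can receive different adjusted values depending on how optimistically they rated the vignettes.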
Inferences are nearly identical if using an ordered logistic regression with the party identification model.
This coefficient is significant at the 0.10 level using a one-tailed test. I discuss limitations of the research design, which may have influenced these results, in the conclusion.
Appendix: Vignettes from the 2014 CCES Survey Questionnaire
Earlier, we asked you to rate your own health overall. We are interested in how you would use the same categories to rate the health of other people your age. Now I am going to describe the health of some people your age; then I am going to ask you to rate their health using the same categories you used to rate your own health.
Vignette 1: [Michael/Connie] feels exhausted several days a week. [He/she] has trouble bending, lifting, and climbing stairs, and every day experiences pain that limits many of [his/her] daily activities. In the past year, [Michael/Connie] spent a few nights in a hospital, and over a week in bed due to illness.
Vignette 2: About once a week, [Andrew/Jennifer] has no energy. [He/she] has some trouble bending, lifting, and climbing stairs, and each week experiences pain that limits some of [his/her] daily activities. In the past year, [Andrew/Jennifer] spent a week in bed due to illness.
Vignette 3: [Paul/Mary] is usually energetic, but occasionally feels fatigued. [He/she] has some trouble bending, lifting, and climbing stairs. [His/her] occasional pain does not affect [his/her] daily activities. In the past year, [Paul/Mary] spent a few days in bed due to illness.
Vignette 4: [David/Nancy] is energetic, and has little trouble with bending, lifting, and climbing stairs. [He/she] rarely experiences pain, except for minor headaches. In the past year [David/Nancy] spent one day in bed due to illness.
Self-Rated Health Status (randomly reverse ordered)
In general, would you say YOUR health is
5 Excellent
4 Very good
3 Good
2 Fair
1 Poor
9 Don't know (volunteered)