The Median Voter: Fact or Fiction?
The History of a Theoretical Concept
Prepared for Presentation at the Annual Meeting of the Western Political Science Association
March 25-27, 1999
Robert G. Boatright
Department of Political Science
The University of Chicago
5828 S. University Avenue
Chicago, IL 60637
robb@polisci.spc.uchicago.edu
To an extent that many political scientists are only dimly aware, the median voter theorem has infiltrated much of American political science. Even among those who do not work in the area of formal modelling, the predictions of candidate convergence and proximity voting govern much of both theoretical and empirical literature on electoral competition. This is not to say that we always find what we predict; instead, it is to say that we frequently look for these two occurrences, even if only to take note of our failure to find them.
Bernard Grofman notes of Anthony Downs’s An Economic Theory of Democracy, the first political science text to explicate the logic of spatial candidate competition, that
As a seminal work, An Economic Theory of Democracy suffers from the triple dangers of (1) being forever cited but rarely read, with its ideas so simplified as to be almost unrecognizable, (2) being regarded as outmoded or irrelevant, (3) having its central ideas so elaborated by ostensible refinements that what was good and sensible about the original gets lost amidst the subsequent encrustations (Grofman 1993: 3).
I certainly do not dispute Grofman’s claims. His words appear in the introduction to an edited volume designed to reread Downs with an eye towards correcting wayward interpretations of his theory. In this essay, however, I seek to assess the effects upon the study of political parties of the very “calamities” of which Grofman speaks. Furthermore, I seek to clarify means by which the lack of empirical support for Downs’s candidate convergence prediction can be used not to dismiss his claims but to second them.
In pursuing this exercise, it is necessary to treat the median voter theorem not as a mathematical proof but as a theory – as a theory which, despite the mathematical rigor that has been applied to explication of its various facets, should be considered on level ground with its predecessors. The median voter model should be read as a response to the “responsible parties” theory propounded by the 1950 American Political Science Association report and other normative theories of political party behavior dating back into the early years of the twentieth century. Downs’s work effectively put an end to such normative theorizing about what political parties should do; if it could be demonstrated that political parties would never take political scientists’ advice seriously, what was the point in offering advice at all?
Few have considered, however, means by which this debate might be re-addressed by the very tenets of the Downsian model. Downs and many of his successors have argued that disputation of the empirical predictions of his theory does not undermine the theory itself. They have claimed that to find that any of the theory’s predictions are not borne out brings into question the empirical support for one or more of the theory’s assumptions, but such a finding has no effect upon the internal validity of the theorem itself (Downs 1959). This seems a fair claim, but adherence to this claim has not stopped formal theorists from tinkering with various components of the model in order to prescribe variants or close relatives of the model which have greater empirical support than does the “pure” median voter model itself.
This type of activity, however, runs the risk of making the median voter model unfalsifiable. If we limit its application only to those elections in which convergence occurs, we have effectively established a theory with no empirical import at all. As Martin Diamond points out in his early review of Downs, a weakened median voter hypothesis is no model at all:
The revised “fundamental hypothesis” would have to read: Some politicians formulate policy only for the rewards of office and some do not, and which behavior is decisive is a matter for study each time, all of which would leave political science in the difficult but fascinating position it was in before economic models were offered in succor (Diamond 1959: 210).
Diamond’s claim might be read in two ways. The quantitative political scientist may read it as a statement that “the outliers are what is of most interest,” that Diamond’s claim is that if we cannot explain nonconvergence in a systematic way, the outliers – the candidates who do not adopt “rational” positions – will be the candidates who are of the most interest and have the most effect upon politics. A student of 1950s political and sociological theory – a student of Leo Strauss, for instance – might read Diamond’s claim as a broader statement that the scientific study of politics cannot explain political change or innovation. It is a claim that “rational” political behavior is uninteresting, and political “action” cannot be subsumed under theories of rationality (See Arendt 1958: 41-42).
Diamond’s argument also poses a tremendous obstacle to those who would seek to adapt Downs for the sake of empirical inquiry. We cannot merely say that some candidates behave in accordance with Downs’s precepts and some do not, nor can we say that Downs’s theory holds when the tenets of his theory can be shown to exist and it does not when such tenets do not hold. Instead, a theory of candidate convergence must demonstrate that there is a systematic logic to nonconvergence as well as to convergence – that we can predict when convergence will occur and when it will not occur without resorting to ex post facto analysis.
I recognize that such a task is a formidable one, and in this paper I do not purport to have discovered such a theory. Instead, I argue that the roots for such a theory may be located in one of the least explored of Downs’s assumptions – that of simultaneity in candidate positioning. Where candidates adopt positions sequentially, the logic of candidate competition and convergence is altered, but it is altered in ways that can be systematically identified and explained, and it can be amended in ways that can lead to accurate predictions of candidate divergence.
In order to arrive at this argument, I proceed in this paper first to restate the historical context of Downs’s theory, with particular attention to debates about responsible political parties and to debates about pluralism and the definition of political power. Second, I briefly note the fundamental assumptions of the median voter theory, the level of empirical support for these assumptions, and the refinements or revisions which formal modelers have undertaken in response to empirical findings in order to make economic models more amenable to testing. Third, I discuss the lack of attention which has been paid to the simultaneity assumption and ways in which discarding or limiting this assumption re-opens many of the theoretical and normative debates which Downs’s theory closed. I do not seek to provide a formal theory myself because I believe that the results of a sequentiality assumption should and can be stated, at least for the purposes of this essay, without the “encrustations” of which Grofman speaks.
The Historical Context: Closing a Debate
The study of political parties is at least as old as the discipline of political science in America. In the late nineteenth and early twentieth century, Woodrow Wilson, A. Lawrence Lowell, Henry Jones Ford, and others debated how best to conceive of political parties’ function and membership. Ford (1914: 295-296) argued that parties were somewhat democratic organizations, oligarchically controlled but with the tacit support of the voters. In this period, only the Russian political scientist Moisei Ostrogorski (1902) confined party membership to those actually employed by the party. Ostrogorski’s work appears to have been relegated largely to the fringes of this debate at the time, although it was rediscovered in the 1950s and is now frequently cited.
This discussion of parties was, as was much of contemporaneous political science, highly normative. It revolved around the question of how political parties should behave, and it was taken – especially in the case of Wilson – as a prescription for party conduct and control. It raised, however, a somewhat more empirical question which has persisted – is democracy best served when parties strive to appear identical, or is the practice of democracy restricted by party similarities, insofar as voters are given no real choice between platforms?
By the 1950s, several leading political scientists had concluded that Ostrogorski was correct – that voters and parties were best conceived of as two distinct entities. V. O. Key (1958: 378-380) conceived of parties in three parts – voters who supported and identified with the party, the party organization, and those members of the party who held governmental office. E. E. Schattschneider (1942: 35-64) argued that democracy existed between parties, but not within parties; party “membership” was a facade. Parties nonetheless had a duty to “frame political questions” for consumption, and were thus driven by forces of the political “market” to create a product that reflects public opinion, even without the direct input of the public in framing the issues.
Oddly, Schattschneider’s introduction of the market metaphor did not stop him from chairing the American Political Science Association working group which produced Toward a More Responsible Two-Party System, one of the few direct political statements published under the imprimatur of the American Political Science Association. This report, published in 1950, called for the parties to present coherent, yet divergent, packages of policy proposals to the public. The public could then make an informed choice about the direction in which it wished American public policy to go. Furthermore, it called upon parties to design long-range plans that would “cope with the great problems of modern government.” In a 1992 retrospective on Schattschneider’s work, John Kenneth White cites several leading political scientists of later decades who attested to the report’s status as the most significant work in the area of political parties of its time. The report also played a role in reviving interest in earlier debates on political parties. Austin Ranney’s summary of the views of early twentieth century theorists of political parties appeared soon afterwards (Ranney 1954).
To a large extent, Downs’s An Economic Theory of Democracy, published only seven years later, put an end to this normative debate. If the APSA report was formulated in response to a perceived crisis in party government, Downs’s work seems to have arisen from no such concern. Downs seems blissfully unaware of, or uninterested in, the “responsible parties” debate. His bibliography does include Key, but he makes no reference to Schattschneider, the APSA report, or any of the report’s antecedents. If we are to trust his recollection of the development of his project (Downs 1993), An Economic Theory of Democracy was written very rapidly, and it was inspired more by his own personal political experiences and his encounters as an economics graduate student with Schumpeter’s analysis of party competition than it was by current trends in political science.
Downs’s work exposes, however, the inconsistency of pairing a market theory of political parties with normative calls for the parties to espouse contrasting viewpoints and to design long-range plans for government. Employing Hotelling’s theory of economic competition, Downs demonstrated that a rational political party would, in two-party competition, seek out an ideological position in the middle of the electorate’s preference distribution. The two parties would then, under something approximating full information conditions, mimic each other, thus encouraging voters to make decisions not about policy, but about non-issue traits. The parties would, among other things, be ambiguous about their positions on controversial issues or avoid addressing such issues entirely; incorporate seemingly incompatible positions into their platforms; and seek to avoid long-run solutions to problems in order to maximize their present electoral fortunes. In such a scenario, there is complete separation of the voter and the party. The party operates as the producer of policy, and insofar as the two-party system functions in an oligarchical manner, the voter, or consumer, would have to take what was offered by the parties. Normative arguments such as those contained in the APSA report were rendered somewhat moot by this line of reasoning; the fault, if there was one, lay with the median voter himself, and no amount of exhortation by an elite cadre of political scientists would sway the parties from their vote-maximizing strategies.
The Downsian disputation of the APSA report’s tenets need not stop there, however. Riker, in recounting the differences between Downs and the APSA report, notes that “political science and political events have passed the adherents of ‘responsible parties’ by.” (Riker 1982: 63) For Riker, the report was not only wrong on empirical and logical grounds; it was wrong on normative or moral grounds:
Its implicit purpose was to sharpen the partisan division as it then existed and thus to ensure that the winners kept on winning. As the status quo was then in favor of the Democrats, the report should be regarded as a plan for a political system in which Democrats would always win and Republicans always lose. . . Although some people saw that the report was bad description, almost no one saw that it was profoundly immoral – a sad commentary on the state of the profession (Riker 1997: 191-192).
These are, perhaps, words only a political scientist could write; the call for political parties to differentiate themselves has largely disappeared from political science, but it is still common on newspaper editorial pages. A brief perusal I undertook shows editorialists as diverse as George Will, Barbara Ehrenreich, and E. J. Dionne lamenting the lack of difference between party platforms.
Responsible party theorists are conspicuously absent from the response which greeted Downs’s work. The most glowing review of An Economic Theory of Democracy was penned by Charles Lindblom, who had also been instrumental in securing a publisher for the book. Lindblom writes that
While economists have made the most of a seriously defective system, political scientists have permitted a kind of perfectionism to inhibit serious, explicit system-building. In talking with political scientists, I am often struck by their dissatisfaction with theoretical proposals that do not promise a rough fit to the phenomena to be explained, while economists have happily elaborated, to take an example, a theory of the firm that is still a caricature of the phenomena described (Lindblom 1958: 241).
While Lindblom hailed Downs for bringing into political science a model that was largely free of concern for empirical support, most reviews predictably dwelt upon the model’s fit with empirical data. Almond (1993) summarizes several of these reviews; with the exception of the above-quoted Diamond review, most voiced rather qualified support for Downs but expressed doubt that his theory would find much support in political phenomena. In a debate with W. Hayward Rogers, Downs responds to several questions Rogers raises about empirically testing his predictions by noting that lack of empirical support does not invalidate his model as a deductive proposal; instead, it indicates that one or more of the assumptions is not borne out in the population upon which the test is being conducted (Downs 1959; Rogers 1959). Johnson (19xx) reiterates this claim, disputing the notion that lack of empirical support dooms the model. After all, few of the tenets of responsible parties theory are even conducive to empirical tests.
The fact that Downs’s theory purports to be positive rather than normative did at least shift the debate over political parties to his own turf. As Rabinowitz and MacDonald (1989) note, the most evident example of this is the introduction of scaling questions about political candidates on the National Election Survey.
Downs’s work bears an uneasy relationship, however, to one dominant strain of contemporaneous political science. He adopts numerous tenets of pluralism. Most notably, he directly cites two statements of Dahl and Lindblom regarding both descriptive and normative issues. In setting out definitions early in the book, he explicitly borrows Dahl and Lindblom’s definition of “governments” as
organizations that have a sufficient monopoly of control to enforce an orderly settlement of disputes with other organizations in the area. . . Whoever controls government usually has the “last word” on a question (Downs 1957: 22, citing Dahl and Lindblom 1953: 42).
Later, Downs notes that democratic control over government, a normative precept, can be tested in his model. He approvingly cites Dahl and Lindblom’s further definition of “political equality” as a circumstance in which
Control over governmental decisions is shared so that the preferences of no one citizen are weighted more heavily than the preferences of any other one citizen (Downs 1957: 32, citing Dahl and Lindblom 1953: 41).
At the time Downs was writing, however, the task of pluralists, to identify and define political power, was also being brought into question. In the economic model, the relationship between the parties is relatively simple – one party has power, the other wants it. Bachrach and Baratz (1962) propose a somewhat more complicated version of power. In a representative government, the exertion of power is manifested in the establishment of an agenda. In the pluralist approach, all popular grievances are recognized and acted upon, and all may thus participate to some degree in decision-making. According to Bachrach and Baratz, and as conceptualized later by Gaventa (1980), power may be exercised by the exclusion of some ideas from the political agenda entirely, and also by “influencing, shaping, or determining [one’s] very wants.” (Gaventa 1980: 12) By extension, the convergence of policy options presented to the voters has profound normative implications, insofar as the very preferences of voters are shaped by it. If this holds true, party convergence may not even be a result of parties catering to voters, but of tacit collusion among the parties over which policies will be offered to voters.
Power theorists such as Bachrach and Baratz did not take on the normative implications of the median voter theorem directly. In taking issue with the pluralist definition of power, however, they were implicitly taking issue with the ability to draw any inferences about the comparative normative status of party convergence or divergence. They were also creating a significant measurement problem for pluralist theory. Baumgartner and Leech (1998: 60) note that in the wake of this debate,
the concept of power was not banished from political science, but scholars for the most part reacted by abandoning their interest in those questions. . . Scholars moved on to other fields that did not have at their core such a difficult concept.
Perhaps because the median voter theorem has so infrequently been the subject of normative debate, or because its conception of power is rarely considered by those who explore the ramifications of the model, this particular aspect of the model and the questions it raises have rarely been considered.
These three strains of political science, then – the developing field of formal models, responsible party theories, and pluralism – and the conflict between them created a context for Downs of which Downs himself may have been unaware. To a significant extent, debate about the median voter theorem has been about empirical accuracy; the other debates that preceded Downs have largely been left behind by political science. Those who have sought to develop Downs’s ideas further, or to present alterations of his model, may have sought to defend themselves against charges of being uninterested in empirical accuracy, but the major refinements of Downs have all taken as their starting point propositions which have greater empirical support than do those of Downs. Because of these efforts, however, it can be shown that altering any of Downs’s assumptions brings his entire model into question, and in doing so re-opens many of the debates which his work appears to have closed off. In the next section, I examine the empirical roots of work that has tinkered with his model, and I illustrate ways in which these adjustments collectively work to re-open questions of party responsibility and of the exercise of power.
The Median Voter Model and its Refinements
As articulated by Downs (1957: 114-141), the median voter model is a model of party, not candidate, competition. Party convergence is predicated upon seven claims about party and voter behavior:
1) A political party is a “team of men seeking to control the governing apparatus by gaining office in a duly constituted election.” (Downs 1957: 25) Each member within the party thus shares the same goals, and each member takes policy positions as a means towards gaining office.
2) Voters judge parties based upon the proximity of the parties on policy issues to the voters’ own preferred position. Voter preferences can be reduced to a unidimensional policy space. They are single-peaked and monotonically declining from the voter’s ideal point. Voters prefer the party closest to them, the party that maximizes their utility (or minimizes their disutility) in this function. Voter preferences are exogenous to the actions of parties.
3) All potential voters vote; there are no abstentions.
4) Parties are free to position themselves at any point along the preference distribution.(1)
5) Parties have full information regarding the distribution of voter preferences.
6) Parties choose positions simultaneously. One party cannot know ex ante where the other party will position itself, although following Assumption One, each party should presume the other to take positions rationally.
7) Party utilities are defined by the number of votes they receive; parties are vote maximizers.
Given these seven assumptions, the result in a two-party election will be convergence at the median of the distribution of voter preferences.
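The logic of this result can be illustrated with a minimal computational sketch. The sketch below is my own illustration rather than part of any model discussed here; it assumes a hypothetical, normally distributed electorate and simply verifies, by brute force, that under proximity voting with full turnout a party standing at the sample median cannot be beaten by a rival standing anywhere else on the dimension.

```python
# A minimal sketch, not part of Downs's own presentation: with a hypothetical
# unidimensional electorate, proximity voting, and no abstention, the party
# located at the median defeats (or at worst ties) a rival at any other point.

import numpy as np

rng = np.random.default_rng(0)
voters = rng.normal(loc=0.0, scale=1.0, size=10001)   # hypothetical ideal points
median = np.median(voters)

def votes_for(a, b):
    """Votes won by a party at position a against a rival at position b."""
    closer = np.abs(voters - a) < np.abs(voters - b)   # proximity voting
    ties = np.abs(voters - a) == np.abs(voters - b)    # indifferent voters split
    return closer.sum() + 0.5 * ties.sum()

for rival in np.linspace(-2.5, 2.5, 101):              # sweep rival positions
    assert votes_for(median, rival) >= len(voters) / 2
print("No rival position defeats the party at the median,", round(median, 3))
```

Nothing in this exercise depends on the particular distribution chosen; it is the single dimension, proximity voting, and full turnout that do the work.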
Throughout both these assumptions and those refinements or alterations that follow, only three basic variables are in play: information about voter preferences or other candidates’ strategies; expected or potential outcomes of a given pairing of party positions; and the location of candidate issue positions themselves. These definitions themselves have been relatively uncontroversial in work that has followed Downs, but the assumptions outlined above have been disputed and altered. Empirical questions about each of the above assumptions have preceded theoretical work on the effects of each alternate assumption.
Assumption One: The Composition and Function of Parties
Assumption One was among the first tenets of the median voter model to be questioned. Most studies have shown that, at least in the American case, parties are not unified teams (see, for instance, Mayhew 1986). In addition, geographical representation and the heterogeneity of the American electorate would give the lie to the notion that a unified party platform would be in the interest of vote-maximizing politicians. It thus seems inconsistent for Downs to describe parties as unified “teams” yet also to posit that their members are election-oriented.
At first glance, this might seem to be merely a small terminological problem. If we substitute candidate competition for party competition and if we then use the median voter model to study only individual elections we can proceed through the remainder of the model. Downs himself notes that the presumption of a unitary actor is necessary to avoid messy discussions of intra-party conflict; that is, he does not deny that intra-party dissension over policy exists, but it is not a concern of his model. Spatial models that have followed Downs’ assumptions rather faithfully have either referred solely to candidates rather than parties (see Shepsle 1972) or have discussed both without inconsistency of results (Page 1978).
The candidate/party distinction has not been easily finessed by others, however. As Schlesinger (1975, 1994) points out, the Downsian party is composed solely of office-holders and office-seekers. It is only one wing of Key’s (1958) tripartite division of the party in office, the party organization, and the party in the electorate. Downs’s parties emphatically do not include the electorate. This exclusion is necessary to maintain the relationship of parties as producers to voters as consumers. Voters exert a discipline upon parties by making their preferences known and choosing among two products, but they are unable to act in concert to allow themselves differentiated products.
In addition, voters are not presumed by Downs to be motivated by the same concerns as are politicians. Downs assumes that all voters vote sincerely; that is, they vote for the party whose policies they most prefer, and their benefit derives from seeing these policies enacted, not from the spoils of holding office. Voters have far less to gain from having their preferred party hold office than does the party itself.
Both prominent critics of Downs and proponents of alternate models have questioned the empirical applicability of this distinction between the preferences of voters and those of the Downsian party. Riker (1963) and Riker and Ordeshook (1968) have proposed models in which parties divide the benefits of office amongst themselves – in which the positions taken by parties are not positions of ideology, but positions regarding the optimal division of benefits amongst those within the party. Similarly, Aldrich (1995) and Aldrich and Rohde (1997) propose a “conditional party government” model in which party members collude in order to divide all benefits amongst themselves at the expense of the opposing party. Neither of these theories explicitly includes voters within the party, but they can, as Schlesinger notes, be read as attempts to include voters within the party. They are, he claims, “shareholder” models in which the voters have a stake in the party’s fortunes.
This framework, in which individual benefits – slices of a distributional pie – are the goal of voters rather than satisfaction of ideological preferences, does not necessarily yield different results than does the median voter model. An optimal strategy for parties is still to take the position which spreads benefits to a bare majority of voters. That is, if voters are arrayed unidimensionally in terms of their specific demands, the voter in the middle of this distribution holds the most leverage over both parties, and both parties will cater to this voter. Such a conception has implications for Assumptions Six and Seven, however. First, if Assumption Six is relaxed, if the parties move sequentially and if the first party does not take its position rationally, the second party would, in the Downsian conception, take a position right next to that of the first party in order to maximize votes. In the Riker and Ordeshook conception, however, the second party still would seek out the median voter; allocating benefits among a bare majority would maximize the benefits to each member. Thus, the Riker and Ordeshook model predicts a median position for the victorious party (and thus a median outcome) regardless of whether the strategy of the opposing party is known or unknown. Second, considering voters as party shareholders means that parties are not, as Assumption Seven states, vote-maximizers; instead, they seek to maximize benefits, which they do by maximizing their probability of winning.
Aldrich (1995) and Aldrich and Rohde (1997) utilize a similar allocation-of-benefits model to illustrate reasons for party divergence. Although again their model considers parties in government – more specifically, parties in the legislature – they argue that a model in which log-rolling exists will produce divergence in that it is the party median rather than the general median which governs the policy positions offered. This model relies upon relatively strong parties and a two-step process in which positions are first generated within the party through a median voter process, and then are offered to the general legislature. The voter at the legislative median still votes for that position closest to him, but he is choosing between two policies which are somewhat far from his ideal point. Such a model may also be used to explain the production of party platforms and the process by which party primaries or caucuses produce candidates. It does not, however, allow updating of strategies between stage one and stage two. Aldrich (1995: 20-21) notes that such a model must include at least some voters in the conception of party – it is the party activists, who are motivated by policy benefits rather than by pure office-seeking, who will be most active in developing the positions between which the median voter must choose.
Both of these models rely in part upon analyzing the intra-party conflict which Downs so studiously sought to avoid as a precedent to investigating the positions offered to voters. While the Riker and Ordeshook model makes sharp breaks with Downs in that it does not require the presumption of simultaneous movement, neither model makes explicit claims about simultaneity. Both, however, can be read as models which derive from empirical criticisms of the strict market relationship of parties and voters specified by Downs, and both introduce dynamics which alter Downs’s assumptions about the composition and goals of political parties.
Assumption Two: Proximity Voting, Unidimensionality, and Single-Peakedness
Another early line of empirical criticism of Downs was raised by adherents of the Michigan school of voting behavior study. In one of the most trenchant critiques of Downs, Stokes (1963) took issue with the assumption of proximity voting. In The American Voter, Campbell, Converse, Miller, and Stokes (1960) had found that voters had relatively ill-defined policy preferences; that they had scant information about candidates’ policy positions; that they frequently voted for candidates based upon party identification, personal attributes of the candidates, and other heuristics that were not necessarily related to ideological proximity; and that they rarely considered policy alternatives in a unidimensional liberal-conservative framework. Although these findings have been debated by public opinion scholars, they raise questions about whether single-peaked preferences, unidimensionality, and proximity voting are realistic assumptions for a model of voting behavior.
Of these three empirical issues, the argument against proximity voting is by far the most significant for reconsidering the model. Single-peakedness is, as Hinich and Munger (1996: 35) note, a necessary condition for proposing unidimensional equilibrium. One could certainly propose “all or nothing” situations in which preferences are not single-peaked. A voter might, for instance, prefer to allocate a large amount of resources to solve a particular policy problem, but this voter’s second most-preferred position might be to allocate no resources at all to this problem rather than to allocate an amount which is not large enough to solve the problem. Such situations may well exist, but if policy positions are to be averaged by the voter and placed upon a single liberal-conservative dimension, it seems far-fetched to propose that single-peakedness does not occur.
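The “all or nothing” case can be made concrete with a toy utility function of my own devising; the numbers are entirely hypothetical and serve only to show a preference that does not decline monotonically from a single ideal point.

```python
# A toy, hypothetical "all or nothing" preference over a spending dimension:
# full funding is most preferred, zero funding second, partial funding last.

def all_or_nothing_utility(spending):
    """Utility over a share of full funding in [0, 1]."""
    if spending >= 0.9:      # enough money to actually solve the problem
        return 1.0
    elif spending <= 0.1:    # nothing spent, nothing wasted
        return 0.6
    else:                    # money spent, problem still unsolved
        return 0.2

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(s, all_or_nothing_utility(s))
# Utility falls and then rises as spending increases, so the preference is not
# single-peaked and does not decline monotonically from a single ideal point.
```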
The specific claim above is only relevant if the policy space is unidimensional. Again, this is an empirical issue which has little import for the internal coherence of the unidimensional model. Much of the work in spatial modeling since Downs has been devoted to the quest for equilibrium in multi-dimensional models. Enelow and Hinich (1984) have published the most comprehensive investigation of multidimensional models. Where there are two or more dimensions, convergence does not occur, as one position can always be defeated by another (McKelvey and Ordeshook 1976). This cycling problem would, if the other conditions of the Downsian model held, ensure that incumbents are always defeated. It has brought about numerous studies of the process of agenda-setting, especially in small groups such as legislative committees. At heart, however, the dimensionality of the policy space is an empirical issue. As Iverson (1994) and Klingemann, Hofferbert, and Budge (1994) argue in comparative studies of politics in several countries, the actual number of policy dimensions in mass elections appears to be quite small. There may be more than one dimension, but there are rarely more than two.
Ferejohn (1993) argues that there is compelling theoretical reason for unidimensionality in mass elections as well. Positing a multidimensional space seems inconsistent with Downs’s work on voters’ information costs. Voters may be psychologically unable or unwilling to process multidimensional information, and they may prefer to seek to place candidates’ positions into a unidimensional space even if candidates do not seek to frame their positions in such a manner. Because of their own limited resources, candidates must economize on the transmission of information to voters, and will thus seek to transmit unidimensional information. Ferejohn notes, however, that this is a somewhat ad hoc argument. He finds more compelling the notion that unidimensionality is the only way for voters to enforce discipline upon candidates, to hold them responsible for their policy commitments. It is the only way that candidates can be accountable to voters, and as such, unidimensional ideologies may be created not by candidates but by the public as a means of framing policies. This is also not an airtight defense of the unidimensional model – it reads as a rather normative defense – but it is a compelling argument for remaining open to its viability in mass elections.
Concomitant with the debates over unidimensionality and single-peakedness is concern over the assumption of proximity voting. If there truly is a single dimension, then single-peakedness seems relevant, or at least empirically testable. If the policy space is multidimensional and an empirical study does not account for this, preferences which are truly single-peaked over each individual dimension but are taking multiple dimensions into account may appear not to be single-peaked in the unidimensional model. Questions also exist about the identification of these dimensions. The unidimensional liberalism/conservatism dimension may, for instance, be broken down into an economic liberalism/conservatism and a social liberalism/conservatism dimension; voters may prefer government regulation of economic matters yet be against government regulation on social issues (Enelow and Hinich 1984). Dimensions which are not strictly ideological may also exist; for instance, voters may evaluate candidates on a liberalism/conservatism dimension but then also evaluate them on a “leadership” or “charisma” dimension. In such a case, the second dimension ought not to exhibit anything approaching a normal distribution – voters may differ in their evaluation of a candidate’s charisma or the importance they place upon it, but it seems problematic to assume that voters would not prefer more charisma to less charisma, for instance.
The question in such models of how voters weight different dimensions has also been held to be of importance in unidimensional models. In a series of articles over the past decade, Rabinowitz and colleagues (Rabinowitz and MacDonald 1989; MacDonald and Rabinowitz 1993a, 1993b, 1997, 1998; Rabinowitz and Listhaug 1997; Morris and Rabinowitz 1997) have proposed a “directional theory of issue voting” which dispenses entirely with the proximity voting assumption. Instead of voting based on proximity, they argue, voters have only a diffuse “for or against” sentiment over ideological alternatives (although they make some allowance for proposals that are too extreme) and a particular degree of intensity about their preferences on these issues. Rabinowitz and MacDonald review developments in National Election Survey questions and conclude that there is not strong evidence that voters do array issues spatially. If voters only take a directional pro/con position on policy proposals, candidates have a “realm of acceptability” within which they may take issue positions. Voters may be more attracted to a candidate far from their “true” ideal point but on the same side as the voter than to a candidate who is closer to their ideal point but on the opposite side on an issue. There is little middle ground here; issues are framed in a yes or no manner, and voters will evaluate candidates’ positions based on which side they are on and weight these positions according to how intensely they feel about the particular issue. Thus, parties will converge on an issue where there is consensus but will diverge where the electorate is polarized.
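The contrast with proximity voting can be made concrete. The sketch below is a stylization of my own: it uses hypothetical positions and formalizes directional utility, as is common, as the product of the voter’s and the candidate’s positions measured from a neutral point. A voter slightly to the right of the neutral point then prefers a distant candidate on her own side to a nearby candidate on the opposite side under the directional rule, while the proximity rule ranks the two the other way.

```python
# Stylized comparison of proximity and directional evaluation; positions are
# hypothetical, and directional utility is formalized (as is common) as the
# product of voter and candidate positions measured from a neutral point.

NEUTRAL = 0.0

def proximity_utility(voter, candidate):
    return -abs(voter - candidate)                     # closer is better

def directional_utility(voter, candidate):
    return (voter - NEUTRAL) * (candidate - NEUTRAL)   # same side, scaled by intensity

voter = 0.5                  # mildly to the right of the neutral point
same_side_distant = 2.0      # far from the voter, but on her side
other_side_nearby = -0.5     # closer in distance, but across the neutral point

for label, cand in (("same side, distant", same_side_distant),
                    ("other side, nearby", other_side_nearby)):
    print(label,
          "proximity:", proximity_utility(voter, cand),
          "directional:", directional_utility(voter, cand))
# Proximity favors the nearby opposite-side candidate (-1.0 beats -1.5);
# the directional rule favors the distant same-side candidate (1.0 beats -0.25).
```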
This model also raises empirical problems. Gilljam (1997) disputes Rabinowitz et al.’s empirical support for their argument, and Merrill and Grofman (1997) join Gilljam in arguing that the directional model mixes voters’ subjective evaluations of parties with an attempt to place parties objectively on a policy dimension. The fact that voters may make errors in evaluating candidates does not discredit the proximity voting model, nor does the introduction of a preference intensity dimension. Merrill and Grofman also contend, in an argument which may bring Assumption Six into question, that tests for directionality in voting actually measure attempts voters and candidates make to confront uncertainty or lack of information.
In sum, these debates about voters’ behavior seem compelling in evaluating voting and election outcomes, but they have limited import for studying candidate strategies if candidates do not share these models’ quarrels with unidimensionality and proximity voting. That is, if candidates believe that their ideological statements will be evaluated solely on the liberalism/conservatism dimension, they will take positions that accord with a unidimensional model whether or not voters truly do evaluate them along these lines.
Assumption Three: Abstentions
The Downsian claim that there are no abstentions may be relaxed without affecting the model if either (a) the position of abstainers can be known, or (b) abstentions are not systematic – i.e. if candidates converge at the median in a single dimension then those voters on the extreme left and right have the same probability of abstention and will cancel each other out. Research on differences between voters and nonvoters has generally supported the second of these conditions. Wolfinger and Rosenstone (1980) have found, for instance, that if all Americans voted in presidential elections the outcome would be little different than it is in practice, where a large minority of eligible voters choose not to vote. The possibility exists, however, that candidates may mobilize disenchanted voters by taking noncentrist positions, and this phenomenon may indeed occur in some elections. Mobilization of potential supporters is certainly a goal of most candidates’ campaigns for office.
Downs himself does devote attention to the effects of abstention upon electoral outcomes (Downs 1957: 260-276). It is significant, however, that the Hotelling model upon which he draws in the median voter model is generally viewed as a model of competition between producers of goods with an inelastic demand function – for instance, of grocery stores or gasoline stations. Given equivalence of product, consumers will prefer the business located closest to them, but they cannot do without food, for instance, if the grocery store is farther away from their home than they would like. Likewise, one might argue, all voters are subject to their government’s laws; they cannot opt out of citizenship if their government does not enact policies they prefer. To extend a Hotelling model with barriers to entry to an unnecessary good – ice cream, for instance – would not alter its results unless consumers on one side of town were able to punish the ice cream stand for moving far away from them by declining to purchase ice cream while consumers on the other side of town were not.
This possibility is explored by Hirschman (1970) in his description of the problems of exit and voice in politics. If some consumers exit – or if some voters abstain – from supporting a firm or a party, the firm or party may not notice if it attracts as many new customers or voters as it loses by shifting its position. If, however, we have a two-stage process in which these individuals can make threats to exit without actually doing so, they may force the firm or party to take a position closer to their ideal point. This is the exercise of voice – an attempt by customers to change the practices of a firm rather than to escape from it. This can only occur where consumers have some sort of bargaining power. To return solely to the political context, such bargaining power may entail the threat of abstention or the threat of supporting an alternate candidate en bloc. It also may involve inspiring activists and mobilizing voters to pressure the party into taking a particular position. Because, somewhat paradoxically, the individuals most likely to exercise voice are those most loyal to the party and least likely to exit without warning, their threats may well be taken seriously by the party. These threats to punish the party in the short run in order to exact benefits in the long run spell trouble for office-seekers, whose time horizon is shorter than is the time horizon of activists. Hirschman’s conception still utilizes differences in motivations for office-seekers and other party members, but it certainly includes these activists within the party in the initial stage where voice occurs.
We know from empirical research on party conventions and caucuses that the most extreme members of the American parties are those most likely to attempt to exercise voice prior to the election or the selection of candidates (see, for instance, Bartels 1988 on primary voters and Sullivan, Pressman, Page, and Lyons 1974 on convention delegates). The Hirschman model seems somewhat inapplicable to a one-shot game, but if there is a multi-stage process occurring, where voice can be exercised prior to the adoption of issue positions, his model does produce a “curvilinear disparity” (May 1973) in which members attempt to exact benefits from leaders prior to the establishment of positions, and in which divergent positions may result. As Stokes (1998) points out, the leaders themselves must come from somewhere, and they are more likely than not to come from the activist ranks within the parties and to share some of these individuals’ ideological preferences.
Because these members have a longer time horizon than do office-seekers, they may remain loyal to the party even in a losing effort. Indeed, they may prefer a losing effort to a winning effort if it enhances the long-run prospects of having their preferences satisfied. Again, where the simultaneity assumption is discarded and where voters or candidates are able to gauge their ex ante probability of victory in an election at Time A, candidates who gauge their probability of winning to be equivalent across a number of different positions may be expected to take, from among those positions, the one which maximizes their proximity to the party members who are exercising voice. This may be the case with a candidate certain of victory or a candidate certain of defeat. Election at Time A would certainly be presumed to be the most important goal for a candidate, but election (or re-election) at Time B may also carry some weight in the candidate’s calculus.
Assumption Four: Freedom of Party Movement
The threat of abstention imposes some limitations on party movement, but these are limitations of a particular type – they hamper movement toward the median because of strictly ideological preferences of party members. A somewhat different concern that has been raised by students of political mandates and political credibility is that candidates may not appear credible in the adoption of particular ideological positions. Voters may not believe that a candidate will actually pursue the policies claimed (that is, will remain at the issue positions taken prior to the election) if that candidate is elected. This may preclude a candidate from taking a median position.
This may occur in two ways, both of which are dependent upon a multi-stage game. First, an incumbent may be evaluated based upon her record. If voters vote retrospectively – that is, based upon what a candidate has done in the past and how well her past record compares with her campaign pronouncements – they may punish or fail to believe a candidate who advocates positions which differ from her past record. Comparative studies such as those of Klingemann, Hofferbert, and Budge (1994) and Przeworski and Stokes (1995) have evaluated the mechanisms by which voters may enforce accountability upon parties or candidates to ensure that once candidates are elected they actually seek to enact the policies they propose in their campaigns. An incumbent may be constrained by her past record from taking some positions.
This is not a major concern for the Downsian model; after all, even if one candidate is an incumbent, she presumably was elected in the first place because she took issue positions which satisfied the median voter. A candidate may be judged by voters to be inept or dishonest, but this ought not to alter the nature of issue competition. Another concern, however, is that if candidate emergence is itself considered a multi-stage process, a candidate may already have established a record as an advocate of a particular ideological position. Candidates may not actually be able to move towards the median; doing so may damage their credibility. This line of reasoning is frequently used to explain the failures of presidential candidates – it is said that candidates cannot shed the positions they have taken to win nomination once they proceed to the general election (Aldrich 1980). It is also used to explain the problems of office-holders who seek an office with a different constituency and thus a different preference distribution – for instance, members of the House of Representatives seeking election to the Senate. These candidates may seek to move towards the positions preferred by their prospective new constituency (see Rohde 1979), but they run the risk of losing credibility through “flip-flopping,” through taking contradictory positions at different points in time.
Finally, the movement of parties or candidates may be limited by party reputation; this seems to accord with Downs’s prohibition of “leap-frogging.” I noted above that leap-frogging poses no problems for a simultaneous movement model with two parties. If we again look at elections in a multi-stage process, however, party reputation may impose limitations upon movement. A Democrat may not, for instance, be able to take a position to the right of a Republican opponent because he would lose credibility. Consider a relatively liberal Republican incumbent who has established a position to the left of the electorate’s median. Were there no restraints on movement, the Democrat should win by establishing a position slightly to the right of the Republican, thereby conceding normally Democratic votes to the Republican and garnering Republican votes in return. If credibility is an issue, however, Republicans might not believe this Democrat to truly be more conservative than her opponent and might discount her issue positions.
Yet again, these criticisms suggest the problems of a simultaneous movement, one-stage issue competition model. They do no damage to the internal consistency of the median voter model, but they raise empirical questions about its ability to describe mass elections.
Assumption Five: Full Information
Perhaps the strongest assumption of the median voter model is its dual command regarding information – that voters know where candidates stand and that candidates know where voters stand. Downs devotes much of his book to arguments about why voters have little incentive to gather information about candidates. It does seem likely that voters will not be particularly well-informed, but this should have little import for the basic structure of competition unless voters are systematically uninformed – that is, voters who would prefer one candidate have little information while those who would prefer another do have information about the candidates. Low voter information might be another reason for candidates to systematically mobilize or inform particular groups, but in the absence of knowledge of the opposing candidate’s positions, it does not lead to alteration of the convergence prediction. Probabilistic voting theory, as exemplified by the work of Hinich, Ledyard, and Ordeshook (1972), Coughlin (1975), and Hinich and Munger (1995: 168) has made advances in modeling the behavior of voters given beliefs about candidate positions, but it does not affect candidate convergence unless it means that voters use non-ideological heuristics such as candidates’ personal attributes as means of reducing their uncertainty about candidate positions (Hinich 1977).
Of greater import, however, is the assumption of complete information on the part of the candidates about voter preferences. Downs’s model is deterministic – that is, it assumes that candidates know the expected outcome given any particular preference distribution. If candidates cannot know the distribution of voter preferences with certainty, however, they may take suboptimal positions based upon their subjective assessment of voter preferences. Erroneous assessments of voter preferences make a convenient scapegoat for candidates who take non-centrist positions.
For candidate divergence to occur, however, candidates must be completely uninformed, must have different amounts of information, or must have different types of information. The first of these conditions would, if true, make any sort of formal theory of candidate strategies futile – it would have candidates behaving with no observable election-oriented incentive whatsoever. The second and third, however, seem quite plausible. Ferejohn and Noll (1978) present a theory of information asymmetries in which information about voter preferences is available to each candidate, but is costly. Such would be the case, for instance, for privately held, proprietary public opinion polls. In such situations, the wealthier candidate would obviously have an advantage. Candidates who lack such information might, however, prefer to avoid policy issues and ideological appeals altogether in their campaigns, so as to entice voters to evaluate them on other grounds.
Such an explanation may again account for divergence on issues, but again, it explains such divergence as a function of errors made by the candidates. Were the candidates in possession of information, they would still follow a median voter strategy. Even if candidates prefer to steer their campaign away from ideology, they must still take some issue positions, and there is no logic to adopting these positions without respect to beliefs about the median voter and the distribution of voter preferences.
Low information might also lead to rhetorical or heresthetical(2) appeals on the part of candidates – that is, if candidates are uncertain what voters’ preferences are, they may seek to influence voters’ preferences in order to bring them more in line with their own. Appeals to social norms, for instance, might influence voters’ beliefs about what their preferences are. Riker (1990) argues that, in fact, this is the function of campaigns. In addition, Kingdon (1993), Stoker (1992), and Hardin (1995) all make an argument that voters’ or citizens’ beliefs are not strictly self-interested or outcome oriented, and as such rhetorical appeals may be effective. Certainly voters are not omniscient. However, evidence is lacking that candidates have the resources to actually persuade voters to alter their preferences. If candidates can know voters’ preferences, it certainly seems more cost effective for them to follow voters’ preferences rather than to try to change them.
In electoral competition, however, the full information requirement for candidates is not as demanding as it may seem. First, candidates do have means available for gauging public opinion. Some, such as opinion polls, are costly. Others, such as gathering knowledge of the past behavior of the electorate, are not. In addition, if candidates move sequentially, the second mover has the additional advantage of observing the first mover’s positions. The second mover may thus either copy the positions of the first candidate, or if she thinks the first candidate has made an incorrect assessment of voter preferences she can take a different position. If we do assume that the liberalism/conservatism dimension is the appropriate dimension along which voters’ preferences are distributed, taking a position on this continuum which roughly approximates the electorate’s median does not require superhuman information-gathering efforts.
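The second mover’s informational advantage is easy to depict. The sketch below is again a hypothetical stylization of my own, not a rendering of any published model: given an observed, off-median first mover and a known distribution of voter preferences, a simple search over admissible positions finds the reply that maximizes the second mover’s vote share under proximity voting, and that reply lies immediately to the median side of the first mover, echoing the point made above in the discussion of Riker and Ordeshook.

```python
# A hypothetical sketch of the second mover's problem: observe the first
# mover's position, then search a grid of admissible positions for the reply
# that maximizes vote share under proximity voting.

import numpy as np

rng = np.random.default_rng(1)
voters = rng.normal(size=5001)                 # hypothetical ideal points
grid = np.linspace(-2.0, 2.0, 401)             # admissible positions, step 0.01

def vote_share(own, other):
    closer = np.abs(voters - own) < np.abs(voters - other)
    ties = np.abs(voters - own) == np.abs(voters - other)
    return (closer.sum() + 0.5 * ties.sum()) / len(voters)

first_mover = 0.785                            # observed, off-median position
best_reply = max(grid, key=lambda x: vote_share(x, first_mover))

print("median:", round(float(np.median(voters)), 3),
      "best reply:", round(float(best_reply), 2),
      "share:", round(vote_share(best_reply, first_mover), 3))
# The vote-maximizing reply sits just to the median side of the first mover,
# not at the median itself, and captures every voter on that side of the
# midpoint between the two positions.
```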
Assumption Six: Simultaneity
One relatively unexplored tenet of the Downsian model is the assumption that candidates choose positions simultaneously. In one sense, the simultaneity assumption can be relaxed without altering the outcome of the model. Simultaneous positioning necessarily implies a lack of information about one’s opponent’s position; hence, there is a presumption of rationality for each candidate. This ensures that each candidate will seek a median position regardless of what the other candidate’s position is. As the above discussion of Riker and Ordeshook shows, however, simultaneity is a necessary assumption for a median voter outcome where candidates are vote maximizers. It is not a necessary assumption where candidates seek to build a minimum winning coalition. In the latter circumstance, each candidate should seek a median position even if that candidate has knowledge that her opponent has failed to take such a position.
This seems a rather rare defect in the model, however; it arises only in cases where one candidate is irrational or misinformed and the other candidate knows the first to be irrational or misinformed. Furthermore, the simultaneity assumption may be discarded if the campaign is seen as a repeated give-and-take. If candidates have frequent opportunities to update their strategies, to assess their opponent’s positions, and to revise their own positions, a gradual movement toward the median on the part of both candidates results.
This circumstance can only happen, however, where there is freedom of movement and where movement is not particularly costly. This presumption seems ill-suited to most campaigns. Changing positions may be costly in terms of candidate credibility, and if one candidate has a pre-existing advantage, as in the case of incumbency, positions taken over a long period of time – over a term in office, for instance – may be difficult to alter. Thus, while simultaneity may seem a rather restrictive assumption, assuming unlimited updating may also be difficult to support.
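Setting such costs aside for a moment, the frictionless give-and-take is straightforward to sketch. In the hypothetical illustration below (again my own stylization rather than a published model), two candidates begin at divergent platforms and alternate in adopting the vote-maximizing reply to the opponent’s current position; each reply lands just to the median side of the opponent, and the pair ratchets toward the electorate’s median until neither can do better than splitting the vote there.

```python
# A frictionless, hypothetical sketch of repeated updating: candidates take
# turns adopting the vote-maximizing reply to the opponent's current platform.

import numpy as np

rng = np.random.default_rng(3)
voters = rng.normal(size=2001)                 # hypothetical ideal points
grid = np.linspace(-2.0, 2.0, 201)             # admissible platforms, step 0.02

def vote_share(own, other):
    closer = np.abs(voters - own) < np.abs(voters - other)
    ties = np.abs(voters - own) == np.abs(voters - other)
    return (closer.sum() + 0.5 * ties.sum()) / len(voters)

positions = {"A": -1.5, "B": 1.5}              # initially divergent platforms
for turn in range(150):
    mover, rival = ("A", "B") if turn % 2 == 0 else ("B", "A")
    positions[mover] = max(grid, key=lambda x: vote_share(x, positions[rival]))

print("sample median:", round(float(np.median(voters)), 3),
      "A:", round(float(positions["A"]), 2),
      "B:", round(float(positions["B"]), 2))
# With unlimited, costless revision both platforms end within half a grid step
# of the median: the convergence result, recovered through sequential updating.
```

The sketch also makes plain how much work the assumption of costless movement does; the credibility and reputation costs discussed above are precisely what would interrupt this ratchet.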
The tendency documented by Fiorina (1981) and noted by Downs (1957: 41) for voters to vote retrospectively suggests a two-stage game in which the candidate who moves first – generally the incumbent – can “capture” a particular position on the dimension. Other models have sought to account for incumbents’ advantage, but they have not done so in the explicit context of a sequential movement framework. Feld and Grofman (1991; also Grofman 1993, Merrill and Grofman 1997) have developed a theory of “incumbent hegemony” (see Stokes 1998) in which incumbents have a “benefit of the doubt” zone, a zone of invulnerability around their spatial position. Here, voters give the incumbent the benefit of the doubt if the incumbent’s positions seem relatively close to their own, because of nonpolicy attributes of the incumbent. If this zone includes the electorate’s median, the incumbent cannot be defeated. They extend this model beyond the unidimensional framework to argue that where such a zone exists, the two-dimensional instability described by McKelvey and Ordeshook does not arise. In this scenario, the incumbent need not be precisely at the electorate’s median, only somewhat close to it. Thus, an incumbent might also be able to maximize utility in regard to secondary, non-vote-maximizing goals.
The Feld and Grofman model assumes simultaneity, but it hints at a process of two or more stages. They demonstrate that, where this benefit of the doubt accrues to incumbents, “certain centrally located points will defeat any challenger by a substantial margin” (Feld and Grofman 1991: 117). Should a potential challenger suspect that this will be the case, competition and candidate entry will be deterred. Thus, a sort of two-stage process emerges in which an incumbent establishes a central position and a potential challenger then decides whether or not to run.
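The logic of this zone can be made concrete with a small numerical check. The electorate, the width of the zone, and the incumbent positions below are illustrative assumptions of my own rather than Feld and Grofman’s parameters; the sketch simply searches a grid of possible challenger entry positions and asks whether any of them attracts a majority once voters discount the challenger by the width of the zone.

    import random
    import statistics

    # A sketch of a "benefit of the doubt" zone. Electorate, zone width, and
    # incumbent positions are illustrative assumptions only.
    random.seed(2)
    voters = [random.gauss(0, 1) for _ in range(1001)]
    median = statistics.median(voters)

    def challenger_wins(incumbent, challenger, doubt):
        # A voter defects only if the challenger is more than `doubt` closer.
        defectors = sum(1 for v in voters
                        if abs(v - challenger) < abs(v - incumbent) - doubt)
        return defectors > len(voters) / 2

    def beatable(incumbent, doubt):
        # Grid search over possible challenger entry positions.
        grid = [x * 0.05 for x in range(-80, 81)]
        return any(challenger_wins(incumbent, c, doubt) for c in grid)

    doubt = 0.5
    for x in (median, median + 0.3, median + 0.8):
        print("offset from median:", round(x - median, 2),
              "beatable:", beatable(x, doubt))
    # Incumbents whose zone of width 0.5 covers the median (offsets 0.0 and 0.3)
    # cannot be beaten at any challenger position; the incumbent at offset 0.8
    # can be, so entry is no longer deterred.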
Groseclose (1997) does not make direct reference to Feld and Grofman, but his model of two-candidate competition in which one candidate has a personal advantage is quite reconcilable with theirs. Groseclose notes that any personal advantage, no matter how small, causes the Downsian equilibrium to disappear. Again, candidates choose positions simultaneously, but the advantage held by one candidate is exogenous and known. In this case, candidates know that if they do converge, the candidate with the personal advantage will win unanimously. Groseclose assumes “non-policy triviality” – that is, that the personal advantage is not so large that there is no pair of positions at which the disadvantaged candidate wins. Given this, the disadvantaged candidate will gain votes by moving away from the center if the advantaged candidate is at the center, and by moving toward the center if the advantaged candidate moves away from it. There is thus substantial allowance for candidate divergence. Groseclose closes by arguing that as the personal advantage of one candidate grows, the disadvantaged candidate adopts a more and more extreme position. This scenario is equivalent to Feld and Grofman’s benefit-of-the-doubt scenario.
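The unraveling of the convergent equilibrium can be illustrated with a deterministic caricature; Groseclose’s own model is richer than this, so the valence term and the electorate below should be read only as assumptions made for the sake of illustration.

    import random
    import statistics

    # A deterministic caricature of competition against a small personal
    # advantage. The valence term `delta` and the electorate are assumed
    # values, not Groseclose's specification.
    random.seed(3)
    voters = [random.gauss(0, 1) for _ in range(1001)]
    median = statistics.median(voters)

    def share_disadvantaged(x_d, x_a, delta):
        # Voters stick with the advantaged candidate at x_a unless the
        # disadvantaged candidate at x_d is more than `delta` closer to them.
        return sum(1 for v in voters
                   if abs(v - x_d) + delta < abs(v - x_a)) / len(voters)

    delta = 0.2                                              # assumed advantage
    print(share_disadvantaged(median, median, delta))        # convergence: share is 0.0
    print(share_disadvantaged(median - 0.5, median, delta))  # deviating gains votes

At convergence the disadvantaged candidate receives no votes at all, so a vote-maximizing candidate in her position profits from moving away from the center; convergence therefore cannot be sustained.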
Each of these models, as well as the incumbent hegemony model of Snyder (1994), assumes that a non-ideological advantage has been established but that positions are established simultaneously. Retrospective voting, however, a factor which has been acknowledged as rational behavior by spatial theorists at least as far back as Downs (1957: 41), must be considered at least in part a retrospective evaluation of the ideological pronouncements of a party or candidate. As such, it is difficult to imagine an incumbent establishing a personal advantage that is completely devoid of issue positioning. The incumbent must take positions while in office, before the true extent of her “benefit of the doubt” or personal advantage is known; a vote-maximizing incumbent thus has an incentive to adopt a median position as early as possible – before competition arises.
Assumption Seven: Vote Maximization
By this point, it should be evident to the reader that rejection of one of the assumptions stated above has implications for the feasibility of entertaining the subsequent assumptions. For instance, if one disputes the Downsian definition of parties, it is difficult to assume that parties are solely vote maximizers. If parties do not have freedom of movement or if they do not take positions simultaneously, it is difficult to support the idea of parties as vote-maximizers because vote maximization in a losing cause may have scant utility to a party. If voter preferences are entirely known by parties, then the result of any election is virtually assured given a set of policy positions, and a party which cannot adopt a centrist position is a certain loser.
These problems bring into play two objections to the assumption that parties are vote maximizers: first, an objection to the claim that parties have vote maximization as a primary goal, as opposed to maximizing benefits or the probability of winning election; and second, an objection to the claim that, even if parties are vote maximizers, they are solely vote maximizers, to the exclusion of any secondary goals.
A common early line of criticism of Downs is that the market analogy has limited utility in describing politics precisely because parties gain little from winning by overwhelming majorities or from losing elections narrowly. Barry (1970) and Przeworski and Sprague (1971) point out that in market competition a firm always benefits from greater sales or market share, while a party does not necessarily benefit from votes beyond a narrow majority or plurality. This argument is systematized by Riker and Ordeshook, who substitute benefit maximization for vote maximization; in such a scenario, a party seeks to ensure victory, and thus might prefer to seek as many votes as possible where the preferences of voters are somewhat uncertain. In a probabilistic, simultaneous-mover model, maximizing votes and maximizing the probability of winning may be coterminous (Coughlin 1975). Where a party’s probability of winning at any particular position can be known, however, that party may have a variety of positions with an equivalent probability of winning.
If simultaneity is not assumed, and where there are exogenous factors such as an incumbency advantage, this condition may occur in two different circumstances. First, an advantaged party may have a range of winning positions. Second, a disadvantaged party may have no winning position. In the first circumstance, a party with a benefit-of-the-doubt zone and full information about voter preferences can take any position within that zone. In the second, a party with knowledge that its opponent has taken a winning position has a choice of many positions, all of whose probability of winning is zero. Where parties position themselves sequentially, the party which chooses a position second may have a range of winning positions if the first mover has taken a suboptimal position, or it may be able to adopt any position without affecting its probability of winning (because it has no chance of winning) if the first mover has an advantage and has taken a position rationally.
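Returning to the benefit-of-the-doubt setup sketched above, these two circumstances can be illustrated directly; the zone width, the electorate, and the grid of positions are again assumptions of my own.

    import random
    import statistics

    # A sketch of the two circumstances just described, under assumed parameters:
    # a benefit-of-the-doubt advantage of width 0.5 for the first mover and a
    # normally distributed electorate.
    random.seed(4)
    voters = [random.gauss(0, 1) for _ in range(1001)]
    median = statistics.median(voters)
    positions = [x * 0.05 for x in range(-40, 41)]
    doubt = 0.5

    def second_mover_wins(challenger, first_mover):
        # A voter abandons the first mover only if the challenger is more than
        # `doubt` closer; the challenger needs a strict majority of defectors.
        defectors = sum(1 for v in voters
                        if abs(v - challenger) < abs(v - first_mover) - doubt)
        return defectors > len(voters) / 2

    # First circumstance: the advantaged first mover has a whole range of
    # positions at which no second-mover position on the grid defeats her.
    safe = [x for x in positions
            if not any(second_mover_wins(c, x) for c in positions)]
    print("unbeatable offsets from median:",
          round(min(safe) - median, 2), "to", round(max(safe) - median, 2))

    # Second circumstance: against a first mover at the median, every
    # second-mover position loses; the probability of winning is flat at zero.
    print(any(second_mover_wins(c, median) for c in positions))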
These may seem to be relatively extreme circumstances, but they do necessitate the introduction of secondary goals for the parties in order to make any claims at all about rational position-taking. Even if their extremity is reduced somewhat – where the probability of winning is not one or zero but is highly constrained – and the parties have secondary concerns, a party’s decision-making calculus may be affected. This raises the question of what these secondary concerns might be.
Relaxing the first assumption to include activists or voters within the party, and considering the threat of voice or exit which results from relaxing the third assumption, introduces noninstrumental policy preferences for the party. That is, in addition to preferring to maximize votes or to maximize the probability of winning, candidates or parties may prefer to maximize proximity to their “true,” or ex ante preferred, positions. Where vote maximization is posited, there is always a trade-off between votes and noninstrumental policy concerns; even where one candidate has a significant advantage, there are votes to be gained or lost through movement within the ideological space. Several formal theorists (Groseclose 1997; Wittman 1977, 1983a, 1983b; Chappell and Keech 1986) have sought to model the trade-off between the two, assigning a weight to each concern and constructing a utility measure which accounts for both. If the probability of winning is posited as the dominant concern, however, a deterministic, sequential, full-information model throws such secondary incentives into sharp relief – there is nothing else to guide candidate or party position-taking across a range of positions where the probability of winning is equivalent.
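Schematically, the weighted-utility models just cited take a form along the following lines; the notation here is my own shorthand for the general approach rather than any particular author’s specification:

    U_c(x) = \lambda \Pr[\text{win} \mid x] - (1 - \lambda)(x - \hat{x}_c)^2

where \hat{x}_c is candidate c’s ex ante preferred position and \lambda \in [0, 1] weights office-seeking against policy-seeking motives. The point made above follows immediately: where \Pr[\text{win} \mid x] is constant across a range of positions, as in the deterministic, sequential, full-information case, only the policy term varies within that range, and position-taking there is governed entirely by the candidate’s noninstrumental preferences.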
Reaching such a point involves disputing Assumptions One, Three, Six, and Seven. The only crucial dispute, however, is with Assumption Six; the other assumptions must necessarily be discarded when simultaneity is not assumed.
Secondary utility concerns have been inserted into hypothetical models, most notably in the work of Wittman and of Chappell and Keech. These concerns are not directly measurable because they are idiosyncratic characteristics of each candidate. We cannot measure the actual preferences of candidates; even if we were to ask them what they “truly” believe about policy issues, it seems unlikely that they would admit to advocating policies which deviate from their ex ante beliefs for the sake of being elected or gaining votes. As Canon (1990: 27-30) notes, however, a candidate who truly believes he has little chance of winning has less incentive to compromise his position; the very fact that he has chosen to run indicates that he is guided by devotion to a cause, by a desire to bring greater attention to his own ex ante preferences, or by a desire to induce his opponent to address these issues. He will only make himself – and his fellow partisans – unhappy by deviating from such positions. Should this candidate find himself in a position to win, however, he may reason that even if he compromises his positions he will still be no worse off in regard to these issues than his opponent. Where candidates position themselves sequentially, such a candidate seems particularly likely to emerge.
Implications of Altering the Simultaneity Assumption
The assumptions of the median voter model are thus a set of dominoes – if one is knocked down, the rest follow. To say this is not to argue that the median voter model contains internal contradictions, nor is it to say that the model should be knocked down. The basic intuition of the model – that, given the assumptions enumerated above, candidates will adopt similar ideological positions – has been used to great effect in the analysis of committee behavior and other smaller-scale phenomena. It has also been a useful tool for the study of many elections, though not of enough of them to pass empirical muster. It may well be that this is because one or more of the assumptions rarely holds, but a model can be neither proven nor disproven if judging its failures requires us to test its assumptions.
The purpose of this paper has been to argue that while the introduction of positive models of political behavior ended much of the normative debate in the discipline about appropriate actions of political parties, these questions have not entirely vanished. The introduction of a sequential component into the median voter framework brings about several alterations in the other assumptions:
– Sequential movement implies that political parties must, at times, produce candidates whose primary goal is not to win office, because attaining office may not be feasible where the first mover holds an advantage.
– Parties in a sequential movement model may, then, share some of the preferences attributed to voters.
– In a sequential movement model, parties who choose positions second have information about the strategies of those candidates who move first. In such circumstances, holding full information about voter preferences is not entirely necessary – there is a threshold beyond which information about voter preferences serves no purpose.
– Vote maximizing strategies yield no benefits for candidates who choose positions second and are at a disadvantage; gaining votes does not alter a candidate’s probability of victory.
These alterations pose several theoretical issues for debate, issues which parallel the normative concerns debated prior to the introduction of the median voter model. First, can party preferences be isolated in instances where parties have multiple optimal strategies, each of which maximizes their probability of winning? In the case of candidates certain of defeat, can the positions taken be said to reflect the noninstrumental preferences of their party? If so, do these positions represent clear and divergent policy prescriptions? It may be somewhat paradoxical to look for the voice of the party in the campaigns of losing candidates, but such positions are not exclusively the province of losing candidates. Rather, it is only in losing campaigns that we can be certain the positions are taken for noninstrumental reasons; similar positions taken by victorious candidates cannot securely be distinguished from simple vote maximization. A victorious liberal candidate who represents an overwhelmingly liberal district may take the same positions as a defeated liberal candidate running in an overwhelmingly conservative district against a conservative incumbent. In the first case, the victorious candidate may be following either his or her true beliefs or merely catering to voter preferences; in the second, the defeated candidate certainly cannot be said to be seeking to gain support through such positions. The relevant question, then, is whether this defeated candidate speaks for his party or whether his views are idiosyncratic personal beliefs.
This scenario poses a somewhat paradoxical agenda for advocates of responsible parties. It might lead to a call for increased attention to disadvantaged candidates – to calls for campaign finance reform or public financing of campaigns, for instance. Such a call would not, according to the logic of the adjusted median voter scenario I have proposed, yield significantly different incumbents. First, divergent races occur because one candidate has no chance of victory. If that candidate’s probability of winning is increased, there is every reason to expect that candidate to abandon his or her positions and adopt a more centrist strategy. In attempting to reward the provision of clear choices, we would have eliminated them. Second, even if this were not to happen, the candidate who is not following a strategy designed to capture the median voter would still be defeated, for the simple reason that her positions do not match those of a majority of voters. We would end up, as Riker’s criticism of the responsible parties model, cited above, suggests, merely perpetuating the dominance of the party in power. In the end, we are left with the depressing conclusion that divergent party agendas exist right beneath our noses, but the more we seek to reward such strategies, the more they recede from our grasp.
Second, a focus on the similarity between the platforms of disadvantaged candidates – an emphasis, for example, on commonalities between challengers to incumbents which cuts across ideological or partisan lines – may help us to identify which issues are kept off the agenda. Given the plethora of issues which confront the average member of a large legislative body, it may seem difficult to argue that any particular issues are kept off the agenda. To a large extent, however, one would expect a competitive challenger to be essentially reactive – to address issues on which the incumbent appears to be vulnerable. Campaigns of less competitive candidates are free of this constraint. Candidates who run against popular opponents and who have no chance of victory have the luxury of being able to speak about anything they choose, to adopt any issue stance they choose. Are issues introduced in such campaigns which are not introduced by more competitive candidates? If so, what is the merit of such issues? Do they represent valid or innovative policy proposals, or are they merely the idiosyncratic causes of these candidates?
Altering the assumptions of the median voter model thus does reintroduce valid normative questions, albeit in a different form than they took before its emergence. It does seem rather beside the point to argue about whether parties as a whole should present the voters with divergent yet responsible agendas. The questions may be, instead: do they have “true” agendas which do diverge? And what is the import of this for policymaking, if indeed such divergence has any import at all? The place to look for responsible parties and electoral choice is not, in an age when overwhelming majorities of incumbents are re-elected, in the campaigns of incumbent office holders. It may, however, be found when we consider the campaigns of nonincumbents.