In this chapter, I tell the story of how cognitive social psychology came to take the form it did after the Second World War. This “cognitive” social psychology emerged in the 1930s in opposition to other approaches to social psychology, notably those grounded in behaviorism, psychoanalysis, and sociology. It is also distinct from the “social cognition” movement that began in the 1970s, drawing on concepts and methods developed in the “cognitive revolution” that overtook psychology in the 1950s (see North and Fiske, this volume). My focus will be on the underlying scientific paradigms (Kuhn, 1970) that conditioned this earlier cognitive approach to social psychology. I will note their institutional support, and illustrate their key features by descriptions of prominent practitioners and key publications. In the first part I examine the disciplinary evolutions that led to a particular definition of the social mind. In the second part I describe the emergence of a Gestalt approach to social psychology before the Second World War. In the third part I show how Gestalt theory and other nonbehaviorist approaches addressed social perception and impression formation. In the fourth part, I describe how the institutions and dominant methodological practices of social psychology had changed by the end of the war. In the fifth part I analyze the effect of these changes on the rise and fall of cognitive consistency theory from the 1950s to the 1970s. In the sixth part I further show how the assumptions of systematic psychology influenced the development of attribution theory, which rose to prominence in the 1960s and 1970s, and which has since remained active (with revisions). In the final part I address the question of what these research programmes have contributed, and (with the benefit of hindsight) what errors there may have been in the way the original research has been represented to later generations of social psychologists.
The intellectual landscape discovered by graduate students in social psychology returning from the Second World War must seem hardly recognizable to today’s graduate students. The contemporary eye finds itself looking on a lost world when thumbing through Lindzey’s (1954) Handbook of Social Psychology. After a first part comprising Allport’s chapter on the “long past and short history” of social psychology, the second part presents five “contemporary systematic positions” which reflected more general “systems” in psychology (e.g., Chaplin and Krawiec, 1968). These macrotheoretical perspectives may be thought of as “paradigms” in the sense of Kuhn (1970). I give the list as presented by Deutsch and Krauss (1965), with names of illustrative practitioners in parentheses:
Historians of psychology have chronicled “the disappearance of the social in American social psychology” (Greenwood, 2003), and a clear manifestation of this trend is the decline of “sociological” social psychology from the late 1960s onwards. While psychologists and sociologists wrote roughly equal numbers of social psychology textbooks from 1949 to 1964, psychologists wrote three times as many textbooks as sociologists in the period 1973–80 (Jones, 1985), and the overall volume of production increased threefold. Role theory was extensively presented in social psychology textbooks primarily written by sociologists (e.g., Newcomb, Turner and Converse, 1965) but has largely disappeared from textbooks written by psychologists. It is now rare to see psychologists taking advantage of sociological thinking in formulating their theories, in the manner of French and Raven’s (1959) use of Weber in their theory of social power.
While there were attempts in the postwar years to build interdisciplinary programs in psychology, sociology, and anthropology at major universities such as Harvard and Michigan, these had disappeared under the strain of disciplinary rivalry by the 1970s (Jackson, 1988). Of course psychology alone was not responsible for this, as some sociologists, following Durkheim’s lead, deliberately sought to evacuate psychological explanations for behavior. For example, “institutionalists” believed that the facts of group structure (rules, roles, norms, culture, etc.) were independent of individuals, who were inessential and replaceable. The fracturing of an integrated social science approach to social psychology can also be seen through the increasing specialization of journals along disciplinary lines. Social psychologists used to publish their work alongside sociologists in Human Relations (e.g., Festinger, 1954), a practice that has since lapsed.
In addition, clinical and social psychologists went their separate ways: the Journal of Abnormal and Social Psychology divided into the Journal of Abnormal Psychology and the Journal of Personality and Social Psychology in 1965. Although psychoanalytic ideas played an important role in the gestation of well-known theories in social psychology (e.g., Janis’ work on fear appeals in persuasion and groupthink), few vestiges of psychoanalytic thinking now remain in mainstream experimental social psychology. What influence psychoanalytic thinking retains in contemporary social psychology has mostly come in through the back door, such as Bowlby’s attachment theory of relationship style, which has been brought in via developmental psychology.
With the gradual evacuation of sociology, anthropology, and clinical perspectives from social psychology came the increasing adoption of the privileged methods of mainstream psychology, in particular experimentation. The belief that human behavior could be studied experimentally had taken widespread root in the scientific community by the early twentieth century, and the acceptance of the experimental technique brought a means of resolution to the question of whether the “group mind” could be studied scientifically. The solution adopted by experimental social psychologists to this problem was the “methodological individualism” pioneered by Floyd Allport in America and Walther Moede in Germany (Danziger, 2000). Psychologists came to adopt Floyd Allport’s (1924) view that groups and institutions are “sets of ideals, thoughts and habits repeated in each individual mind and existing only in those minds.” As examples, Allport (1924) was able to demonstrate experimentally that the mere presence of others influenced both performance (social facilitation effects) and judgment of odors and the heaviness of weights (normalization effects). These and similar studies (Sherif, 1936) provided a means of subjecting “group mind” to experimental analysis, and as Lewin (1951) noted, “the taboo against believing in the existence of a social entity is probably most effectively broken by handling this entity experimentally.”
Nevertheless, not everyone accepted that the laboratory experiment is the only scientific way to study the social mind. For many philosophers such as Comte and Mill this problem presented itself in terms of how to develop a “second psychology” that goes beyond the study of individual experience to include the social in the science of mental life (Cahan and White, 1992). For a time, anthropology and psychology worked hand in hand. The groundbreaking Cambridge University expedition to the Torres Strait in 1898 included three members whose primary professional identity was as psychologists (McDougall, Myers, and Rivers; the others were the ethnographers Haddon and Seligmann along with the oceanic linguist Ray). Back in continental Europe, Wundt’s solution was two-pronged: He founded the first laboratory of experimental psychology in Leipzig (1879) to study mental processes, but he also privileged hermeneutic methods in his version of Völkerpsychologie (inherited from Humboldt and Herder), which might roughly be translated in modern terms as “cultural psychology.” He devoted 20 years of his long life to this nonexperimental project, which in parts resembled modern linguistics, anthropology and cultural studies (Farr, 1996). Sitting in the audience for Wundt’s lectures in 1908–9 was the young Bronislaw Malinowski, who never completed his projected dissertation on Völkerpsychologie but instead went on to revolutionize ethnology (Young, 2004). Malinowski nevertheless retained his interest in psychology, reading the treatise written by Wundt’s student, Hugo Münsterberg, while staying with his friends in Melbourne between his field trips to Papua New Guinea from 1914 to 1918.
Münsterberg had taken a chair of psychology at Harvard in 1892. However, the tension between the “first” and “second” psychologies led the psychology department at Harvard to split in 1946. The hard-line experimentalists stayed in the reduced psychology department and studied low-level perceptual and learning processes, while the others founded the Department of Social Relations, which included social psychology, clinical psychology, sociology, and anthropology.
A look at the short-lived Department of Social Relations gives an idea of what a less exclusively experimental approach to cognitive social psychology might have looked like. It included luminaries such as Jerome Bruner and George Miller, who went on to found Harvard’s Center for Cognitive Studies in 1960. The leading social psychologist was Roger Brown (1925–1997), whose 1965 textbook Social Psychology has a magisterial breadth of sweep. An impression of Brown’s style can be gauged from the second chapter, which followed an introductory chapter reviewing social behavior in animals. In this chapter on language and social structure, Brown reviewed his own work with Albert Gilman (a professor of English at neighboring Boston University, and Brown’s “partner” in domestic life). This work focused on pronouns of address, tracing the use of tu (T-form, 2nd person singular) and vous (V-form, 2nd person plural) in French, and their equivalents in languages such as German (Du and Sie), Italian (tu and Lei) and Shakespearian English (thou and ye). Brown also analyzed the analogous use of the first name (e.g., Denis) rather than the title and last name (e.g., Dr Hilton) to mark similar distinctions of social status and distance between speakers in modern English. Giving vous to one’s superiors and tu to one’s inferiors is a general pattern in continental Europe, which may be due to the common Roman heritage of French, German, Italian, and Spanish speakers. These groups may have taken over use of the T and V forms from their former Roman rulers, who used the V form (vos) to Diocletian in the third century when he was one of two simultaneous Roman Emperors (one in the East and the other in the West). This usage emerged in order to signify that members of the court were addressing both emperors. However, the emergence of T/V distinctions in non-Indo-European languages suggests that this may also be due to a universal metaphor (plurality = power). In any case, the very universality of this structure allows Brown to pinpoint periods of historical disturbance and change in social relations. For example, the use of T/V to mark social status briefly disappeared during the overthrow of the monarchy and nobility during the French Revolution only to reassert itself in the nineteenth century. In the more egalitarian modern period in France, it is now frowned upon as disrespectful to use tu to a waiter, whereas previously it would have been acceptable for a master to use tu to and receive vous from his servant at table.
Brown and Gilman’s (1960) work was a major inspiration for politeness theory (P. Brown and Levinson, 1987), now one of the most widely studied theories of sociolinguistics. The other chapters of the book similarly contained riches that profited a wide range of social sciences. For example, a chapter reviewed Brown’s seminal—for developmental psycholinguists—work on child language. Other chapters reviewed cultural influences on perception (including Brown and Lenneberg’s seminal—for anthropologists and cognitive psychologists—work on the Whorf hypothesis), psychoanalysis, achievement motivation across cultures and time periods, group dynamics (which included a classic—for social psychologists—discussion of risky shift), crowd behavior and social panics (interpreted using economic game theory). For those who admire Renaissance men (especially those with high citation impacts), it is difficult to gainsay the stature of Brown’s achievement. We may thus ask why many experimental social psychologists have adopted a somewhat ambivalent tone towards Brown, suggesting that he had “not done a very good job” at Harvard. 1
The answer to this question tells us how “social psychology” has been defined by those who have come to dominate the field. Indeed, it is the very breadth of Brown’s achievement that gives the clue to the ambivalence felt towards him. This can be illustrated by the story of how Henry Roediger was lost to social psychology. Later to become a distinguished cognitive psychologist, the young Roediger had actually been inspired by Brown’s book to study social psychology at graduate school. However, he went to Yale rather than Harvard, as his mentors had told him it possessed the leading social psychology program. When he arrived there in 1969, Roediger learned much from the eminent social psychologists with whom he took classes, but was shocked to find that they did not share Brown’s view of social psychology as an intersection of psychology, sociology, and anthropology. As he recounts the story: one of the biggest lessons came even before classes started. David Mettee asked me why I had decided to apply to graduate school in social psychology. I told him about reading Roger Brown’s great book. He looked at me a bit askance and said, “Oh, that is a great book but it is not really much about social psychology. Only a few chapters are really relevant to the research interests of the people at Yale”. I eventually discovered that he meant chapters 11 and 12, which were on attitude change and person perception.
It is instructive to note that these two topics, along with attribution theory (which Roediger also discovered at Yale through taking a class with Richard Nisbett), constitute the three main pillars of early (pre-1980) work in cognitive social psychology that will be covered in this chapter (see Figure 3.1). But it is salutary to note that some of Brown’s chapters prefigure work on “social cognition” in other disciplines. For example, his review of developmental psycholinguistics and moral development presages modern work in developmental psychology and neuroscience on “theory of mind,” just as his review of economic and sociological approaches to crowd behavior anticipates economic approaches to social cognition, understood in terms of socially distributed beliefs.
Figure 3.1 Publications in PsycInfo from 1940 to 2009 including the keywords: “impression” and “person perception”; “dissonance” and “cognitive consistency”; and “attribution” and “causal explanation.”
In the place of sociological and psychoanalytic approaches to the social mind, a cognitive approach emerged whose roots lie in Gestalt psychology and theories of perception developed in pre-war Europe. Gestalt ideas were brought to America by the three main leaders of this movement in the 1930s, Kurt Koffka, Wolfgang Köhler, and Max Wertheimer, who had all moved from Germany by the time the Second World War started. Push factors were important for these three well-established psychologists, notably the Nazi regime’s anti-Semitism, as they were Jewish or married to Jewish women (Geuter, 1987). However, Gestalt psychology was not directly persecuted by the Nazis, unlike the “degenerate” Bauhaus school of architecture. Less well-established researchers were also drawn by pull factors such as better career opportunities (Lewin, Brunswik), not to mention the graces of their American research assistants (Heider). These newcomers did not gain positions in the elite Ivy League Universities of Harvard, Yale, Cornell, Princeton, and Stanford, but were accommodated in less prestigious institutions such as Swarthmore College, Smith College, and the New School for Social Research. Only Lewin was able to create a “dissertation factory” at Iowa and then MIT, which he bequeathed to his followers on his death in 1947. Nevertheless, from these positions a flourishing cognitive social psychology sprang forth that has left an indelible mark on the discipline.
The success of the émigré psychologists and their followers such as Solomon Asch can be attributed to several factors. The first is the quality of their ideas, as Gestalt theory was “Europe’s gift to American psychology” (Taft’s words, cited in van der Geer & Jaspars, 1966). The second reason was the vigor of their networking in America—they stayed in touch with each other and with luminaries from the major universities through visiting talks and symposia (e.g., Tagiuri & Petrullo, 1958). The third reason was the intellectual bankruptcy of their main rival, behaviorism, whose rejection of the study of unobservable mental life had led American psychology into a theoretical impasse from which it emerged only in the 1950s. However, social psychologists had in the meantime developed distinctive cognitive theories before modern cognitive psychology came into being. Below we examine the historical roots these theories drew on, and how they evolved when transplanted to the hothouse of American experimental psychology.
The cognitive approach to social psychology had its origins in the first part of the twentieth century in Europe, as part of an
intellectual revolt against the application of certain natural science theories and methods to the understanding of human experience. Much attention had been devoted in the nineteenth century to “elementarism,” a search for the “atoms” of sensation following the associationist approach developed in British empiricist philosophy. This resulted in the analysis of the subjective properties of percepts by psychophysicists such as Fechner (Mandler, 2007; Miller, 1962; see also Eiser, this volume), whose work in turn inspired Wundt. At the turn of the century, work continued in German universities such as Würzburg which sought to identify the fundamental units of experience. However, from 1910 onwards a number of researchers based in continental Europe began performing empirical studies of visual and auditory perception designed to contradict these views. The new Gestalt psychology made three essential claims (Ash, 1991): (1) In perception, people see patterns (or gestalten) that constitute functional units rather than punctiform sensations, and these constitute the primary units of experience; (2) behavior, as well as cognition, is meaningfully structured, and these structures are not impressed from the environment but result from dynamic interactions between organisms and their environment mediated by perceptual structures; (3) these gestalten correspond to the laws of physics, which also analyze behavior in terms of structured fields.
An example of this new approach that is of especial relevance to social psychology can be found in the work of the Belgian researcher Albert Michotte (1881–1965). Michotte began his book (1963; original French language edition published in 1946) by giving a historical account of theories of causal perception. Notably he challenged the claims of the influential eighteenth-century Scottish philosopher David Hume that we: (a) perceive events as isolated sense-data; and (b) through perception of “constant conjunction” come to see certain events as causally associated through force of habit. To illustrate this position, Michotte (pp. 7–8) cites Hume (1739) on this point: It appears that, in single instances of the operation of bodies we never can, by our utmost scrutiny, discover anything but one event following another…So that, upon the whole, there appears not, throughout all nature, any one instance of connexion, which is conceivable by us. One event follows another, but we never can observe any tie between them. They seem conjoined but never connected …After a repetition of similar instances, the mind is carried by habit, upon the appearance of one event, to expect its usual attendant, and to believe that it will exist. This connexion, therefore, which we feel in the mind, this customary transition of the imagination from one object to its usual attendant, is the sentiment or impression, from which we form the idea of power or necessary connexion.
Hume had been very much influenced by the work and methods of the seventeenth-century English physicist Isaac Newton, and saw it as his mission to use these scientific methods to develop an “experimental philosophy” (Mossner, 1980).
In contrast, Michotte argued for a characterization of perception that emphasizes the inherent relations between units. He wrote (pp. 3–4, emphasis in original): We need to know that things can be moved, e.g., by pushing them, causing them to slide, lifting them, or turning them over, by hurling, breaking, bending or folding them, by leaning on them and so on. We need to know too that certain gestures, certain looks, or certain words can attract or repel other men and animals, or modify their conduct in some other way…Although these events all have a spatial and kinematic aspect, the most important thing about them is that they imply functional relations between objects…do we not see the wine come out of the bottle and run into the glass? That is something quite different to a simple change of position in space.
Michotte’s position is representative of that of the Gestalt psychologists, who argued that the perceptual process itself automatically analyzes visual sensations into patterns, or gestalten. For example, a musical melody is not perceived as a simple string of notes, but the relation between those notes is perceived—a fact that allows us to recognize a melody even when it is played in a different key.
The first experimental demonstration of Gestalt effects in visual perception is often considered to be the Phi effect, analyzed in an article published by Wertheimer in 1912 (Mandler, 2007). The Phi effect emerges when two adjacent lights flash rapidly in succession, as at a railway level crossing. Although from the physical point of view each light flashing is a separate event, from the phenomenological point of view, people tend to see these lights “as” a single light moving back and forth, and cannot prevent this impression from forming. This illusion demonstrates that people do not “see” a succession of percepts, but have a single integrated experience of a single light moving. In a similar vein, Michotte (1946/1963) conducted a classic series of experiments presenting observers with displays in which an object A approaches an object B, which subsequently moves. Under certain conditions of spatial and temporal contiguity, people no longer have the impression of two separate, unconnected events but of a unitary causal sequence, in which A “causes” B to move, through “launching” it (lancement) or “carrying it along” (entraînement), etc.
Gestalt theory in psychology evolved during a period that saw a flowering in the visual arts, which themselves exploited the illusions that so interested the psychologists of perception. Cinema spread widely after the First World War, exploiting the Lumière brothers’ successful demonstration in Paris in 1895 that the illusion of continuous movement could be obtained from successive images projected at 16 frames per second. Some psychologists used film to illustrate their ideas; in 1929 Kurt Lewin had great success in America with his film of a boy experiencing an approach–avoidance conflict. In turn, Gestalt psychology was to influence the visual arts, as the artists and architects of the Bauhaus School in Dessau became interested in the ideas developed by the Gestalt school of psychology in Berlin. They received visits and lectures from 1927 to 1931 from Gestalt psychologists such as Rudolf Arnheim and Karl Duncker (whose audience included Paul Klee). Wassily Kandinsky wrote a manual on visual form, Punkt und Linie zur Fläche (Point and Line to Plane), which described the basic elements of visual forms and colors, such as the circle, triangle, and square. Kandinsky was of course famous for using these fundamental forms and colors in paintings that expressed movement. Fritz Heider, an Austrian psychologist who frequently visited the Berlin group in the 1920s, was later to use these same “pure” geometric forms and primary colors in a celebrated film designed to elicit anthropomorphic impressions using very minimal cues of movement (Heider & Simmel, 1944). It is as if Heider, who had been very interested in painting before becoming a psychologist, animated the colored geometric forms to make a living, moving Kandinsky painting (see Figure 3.2).
Figure 3.2 Top left: Basic Visual Forms of the Bauhaus group, from Nina Kandinsky’s private collection; top right: as presented in their 1923 exhibition. Bottom left: Kandinsky’s “Komposition VIII” (1923). Solomon R. Guggenheim Museum, New York, Solomon R. Guggenheim Founding Collection, by gift. Bottom right: Heider and Simmel’s (1944) Figure 1. Copyright © 1944 University of Illinois Press. Reproduced with permission.
Once in America, the émigré psychologists continued to proselytize Gestalt theory and its variants (Lewin’s field theory and Brunswik’s lens model). But now the main local opponent changed from elementarism to behaviorism. The debate was part of a larger argument about human nature, which the aristocratic English philosopher Bertrand Russell (1960, pp. 32–33) caricatured in the following way.
The manner in which animals learn has been much studied in recent years, with a great deal of patient observation and experiment…One may say broadly that all the animals that have been carefully observed have…displayed the national characteristics of the observers. Animals studied by Americans rush about frantically, with an incredible display of hustle and pep, and at last achieve the desired result by chance. Animals observed by Germans sit still and think, and at last evolve the solution out of their inner consciousness. To the plain man, such as the current observer, this situation is discouraging.
Like Gestalt psychology, American behaviorism had emerged as a reaction to the earlier introspectionist tradition in German psychology, whose “outpost” in America had been founded at Cornell by an English student of Wundt’s, E. B. Titchener (Mandler, 2007). But by the 1930s, Titchener’s influence had waned. Behaviorism held sway in psychology departments in leading American universities such as Harvard, Yale, Princeton, and Chicago, and its leaders, such as Watson, Thorndike, Skinner and Hull, were legendary names in the field. Behaviorism was also dominant in other disciplines such as linguistics, economics, and management, and reflected a general position in American social science. Behaviorist approaches shared a number of common features, such as: experimental observation and quantification of behavior; simple associative structures for forming links between stimulus and response; the importance of learning and environmental influences on motivation; and an assumption of the fundamental similarity between humans and other animals (Lyons, 1977). Different disciplines emphasized different aspects of the methods of the natural sciences. Economics emphasized mathematical formalisms and adopted “revealed preference” theory whereby preferences were inferred from observed choices, as measured by quantitative indicators such as product market share or gross national product. In contrast, psychology privileged the use of experiment. 3
Gestalt psychology and elementarism in Europe shared a common phenomenological approach, both relying on introspection and phenomenal experience for data, but differing in the emphasis laid on innate principles of cognitive organization and structure. However, when contrasted to the tenets of American behaviorism, Gestalt psychology took on a distinctively cognitive aspect. It is for this reason that it has been recognized as the precursor of the “cognitive revolution” in mainstream experimental psychology that took place in the 1950s and 1960s (Mandler, 2007). The Gestalt approach inspired theorizing in social psychology from the 1930s onwards, following the lead of Kurt Lewin, Fritz Heider, and Solomon Asch. It accompanied a more general attempt to challenge behaviorism on its own terms through experimental demonstrations that higher-level “social” factors influenced lower-level “sensory” judgments that were easily measurable and “objective.” This perhaps explains why there was relatively little work on social processes in remembering of the kind presaged in Bartlett (1932). Although his seminal book was cited approvingly as an early source of cognitive insight (Jones, 1985), after Allport and Postman’s (1945) studies it yielded little systematic research on social processes in remembering (Roediger, 2010).
Scientific interest in social perception had begun with Darwin’s (1872) work on the expression of emotion in man and animals. Following Darwin’s procedure, judges were presented with emotional expressions in the form of photographs, drawings, or recordings of a person and asked to classify the emotion being expressed. Researchers were interested in “empathic ability” or skill in judging others. This focus on judgmental accuracy was given further impetus with the advent of intelligence and aptitude testing in the First World War, which revealed the ubiquity of the “halo” effect (Thorndike, 1920). This effect, first identified in 1907, results from a bias in the ratings made by the perceiver, who tends to ascribe either consistently positive or consistently negative characteristics to a target. In like vein, work on judgment of personality focused on whether people are “good” judges of personality. In the conclusions to their review, Bruner and Tagiuri (1954, p. 650) noted an “excess of empirical enthusiasm and a deficit of theoretical surmise” in the field. They did not however foresee the extent to which the field was to be transformed by the methodological rigor brought to the analysis of accuracy by Cronbach (1955).
Early studies assumed that people would have insight into their own emotional states and traits, and these were used as a criterion to evaluate the accuracy of others’ judgments. However, studies accumulated that found a systematic bias in self-evaluation, as targets often rated themselves favorably on valued characteristics such as intelligence and honesty. This complicated the use of self-ratings as criteria for objective accuracy in social judgment, as a judge with rose-colored spectacles who systematically judged targets to have high scores on valued characteristics would be considered “accurate” simply because her ratings correlated strongly with the targets’ own (positive) self-ratings. Rather than demonstrating “empathic ability,” such correlations would suggest that these judges and targets simply share a “leniency effect” (Bruner and Tagiuri, 1954) in judgment, or that judges simply endorse their targets’ “positive illusions” (cf. Taylor & Brown, 1988) about themselves.
One consequence of this realization was the development of methods for controlling for such statistical artefacts (Cronbach, 1955). Studies were performed (e.g., Bronfenbrenner, Harding, & Gallwey, 1958; Cline & Richards, 1960; Crow & Hammond, 1957) that distinguished stereotype accuracy (or sensitivity to the generalized other, i.e., an awareness of the average rating that the target group will give itself on a characteristic) from differential accuracy (sensitivity to individual differences or interpersonal sensitivity, i.e., awareness of variation in how group members will rate themselves). For example, if Americans tend to rate themselves highly on intellectual-ability characteristics but Japanese rate themselves highly on social–interpersonal characteristics, a judge who is high on stereotype accuracy will take these group tendencies into account when evaluating Americans and Japanese “personalities” (as expressed by the self-ratings). Cronbach thus dealt early accuracy research a series of well-aimed blows from which it has never fully recovered.
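The distinction can be made concrete with a small numerical sketch. The Python fragment below is illustrative only: the data are invented, the function is hypothetical, and Cronbach’s full analysis also separates out elevation and differential elevation components. It simply shows how a judge who applies the group’s average self-profile to every target can score highly on stereotype accuracy while showing essentially no differential accuracy.

```python
import numpy as np

def accuracy_components(judge, selfr):
    """Rough split of judgmental accuracy, in the spirit of Cronbach (1955).

    judge, selfr: arrays of shape (n_targets, n_traits) holding the judge's
    ratings of each target and the targets' self-ratings. Illustrative only;
    Cronbach's analysis also isolates elevation and differential elevation.
    """
    judge, selfr = np.asarray(judge, float), np.asarray(selfr, float)

    # Stereotype accuracy: does the judge's *average* profile across targets
    # match the group's average self-rating profile?
    stereotype = np.corrcoef(judge.mean(axis=0), selfr.mean(axis=0))[0, 1]

    # Differential accuracy: after removing those trait means, does the judge
    # track how individual targets differ from one another?
    dev_j = judge - judge.mean(axis=0)
    dev_s = selfr - selfr.mean(axis=0)
    differential = np.corrcoef(dev_j.ravel(), dev_s.ravel())[0, 1]
    return stereotype, differential

# Hypothetical data: 5 targets rated on 4 traits.
rng = np.random.default_rng(0)
self_ratings = rng.normal(5.0, 1.0, size=(5, 4))

# A "pure stereotype" judge rates every target at the group-average
# self-profile (plus tiny noise): high stereotype accuracy, but near-zero
# differential accuracy, since individual differences are ignored.
stereotype_judge = np.tile(self_ratings.mean(axis=0), (5, 1)) + rng.normal(0, 0.05, (5, 4))
print(accuracy_components(stereotype_judge, self_ratings))
```

Run on such data, the first component comes out close to 1 and the second close to 0, which is precisely the dissociation that Cronbach argued global accuracy scores conceal.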
Solomon Asch (1907–1996) was born in Warsaw and moved with his family to America in 1920. He encountered Gestalt theory while a graduate student at Columbia University. Indeed, he actively sought it out, recalling “When I read in the New York Times one day that Wertheimer was coming to the New School for Social Research (later also known as the University in Exile) as a refugee, I said to myself that I must see him” (Ceraso, Gruber and Rock, 1990). Asch developed a close personal relationship with Wertheimer and during the early 1940s edited Wertheimer’s unpublished manuscript on productive thinking, which he used for his own course on the topic at Brooklyn College. Although Wertheimer died in 1943, both his thought and elegant style of argument remained a model for Asch.
Asch bore the standard for Gestalt theory in his early publications on social perception. The first set of studies proposed an alternative interpretation for the “prestige suggestion” effects described by Sherif (1936). Behaviorists saw the tendency to form positive attitudes to opinions expressed by high-status sources as evidence for a habit in judgment that “experts are right,” which would have formed due to learning that high-status sources tend to be correct. However, Asch, Block, and Hertzman (1938) showed that the interpretation of political statements changed as the attributed source changed. Asch (1940) produced another example of these “change of meaning” effects when he showed that when a sample of college students was told that a “congenial” group (500 college students) had ranked politics positively among a list of ten professions, they did the same. When told that the comparison group had ranked politics negatively, college students likewise ranked politics negatively. However, follow-up work showed that the two groups of participants had quite different kinds of politician in mind when making their judgments. The first group thought of statesmanlike examples such as Roosevelt, Hull, and Stimson, whereas the second group thought of lower status examples such as “Tammany Hall” (the executive committee of the Democratic party in New York City), caucus leaders and local neighborhood politicians. In Asch’s memorable phrase (1940, p. 458), this entailed “a change in the object of judgment rather than in the judgment of the object,” and thus invalidated explanations of this effect in terms of imitation and reward.
In his next paper, Asch (1946) sought to actively illustrate the operation of Gestalt principles in the formation of impressions of personality. The three topics illustrated in this paper (the centrality of traits, the coloration of traits by others, and the effects of order of presentation of traits on impressions) have become justly celebrated lessons in social perception. It is nevertheless instructive to follow how Asch himself formulates the questions addressed by these studies. Rather than present the results in classic S-R fashion as illustrating the “effect” of an independent variable on a dependent variable, Asch presents his studies as illustrating Gestalt principles of organization. He begins by arguing that central traits such as “warm” and “cold” have “organizing functions” on the formation of impressions. Their effect is not to give a positive or negative direction to all the other given traits, but to reverse “the choice of fitting characteristics” such as generosity, shrewdness, happiness, and irritability, while other traits such as reliability, importance, physical attractiveness, and persistence were little affected. This phenomenon is nevertheless illustrated by an experimental manipulation, by contrasting the effects of central pairs of traits on impressions with those of peripheral pairs, such as “polite–blunt”. Coloration effects, whereby a trait such as “calm” takes on different overtones in the context of “kind, wise and honest” as opposed to “cruel, shrewd and unscrupulous,” are presented as yet another example of change of meaning. Last, but not least, Asch uses his spectacular demonstration of order effects on impression formation to rule out a simple additive model of impression formation (addition being an operation that is indifferent to order; 2 + 3 yielding the same sum as 3 + 2).
Throughout Asch’s experimental reports, there is a continuing insistence on understanding how participants interpret the material that is presented to them. Unlike most contemporary experimental reports in social psychology, there is no separate “experimental section” describing the method and results, and the reader “hears” the participants speaking through Asch’s carefully chosen and extensive quotations of their responses. A similar insistence on understanding how people structure their perceptions of the experimentally presented material manifests itself in later work where he re-examines the doctrine of association in learning and memory research from a Gestalt perspective. Asch (1969) criticized contemporary theories for subscribing implicitly to the old doctrine of elementarism, and not addressing the psychological reality of perceptions of relation between stimuli, such as “between,” “greater than,” “father of,” which refer to two or more objects rather than one (e.g., “Clint Eastwood” and “Kyle Eastwood”). He gently admonished researchers on learning and memory for focusing too much on the property of contiguity, which he considered to be an objective condition of stimuli and not an experienced relation between them. As an illustration of his ideas, he showed that people had much better memory for shapes and colors that were presented in a constitutive relation (e.g., a blue contour) rather than in a paired relation (the contour printed in black next to a patch of blue color). This technique of dissociating color and form in constructing stimulus events was later exploited in ground-breaking experiments on rats illustrating a “cognitive” approach to animal learning theory (see Rescorla, 1990 for a review and acknowledgment of Asch’s influence).
Despite Asch’s insistence on processes of organization in impression formation, subsequent researchers analyzed his ideas in ways that are more compatible with other theoretical perspectives. Thus Kelley (1950) took a straightforward stimulus–effect approach (the effect of an independent variable on a dependent variable) and showed that being told that the teacher was “warm” (rather than “cold”) influenced students’ perceptions of a real teacher, and led them to ask more questions. This greater responsivity of the students to the teacher when they had been given a positive impression of him was an early demonstration of expectancy effects in social behavior (Rosenthal & Jacobson, 1966; Snyder, Tanke, & Berscheid, 1977). Others attempted to reinterpret the effect of order of presentation of impression formation in terms of models of information integration and functional measurement that were being developed at the time (Anderson and Hubert, 1963).
Finally, the advent of high-powered computers and advanced statistical techniques for examining intercorrelations was an impetus to work on “implicit personality theory” (e.g., Rosenberg & Sedlak, 1972) which showed that everyday language of trait description was organized along underlying social and intellectual dimensions, which themselves bore some resemblance to Osgood, Suci, and Tannenbaum’s (1957) dimensions of evaluation and potency. While some researchers have concluded that “as this particular line of research has evolved under the influence of the dominant models in psychology (gestalt, behaviorist and cognitive) the plausibility of Asch’s original gestalt view of impression formation has been eroded” (Hampson, 1988), others contend that the Gestalt perspective has been vindicated (Peabody, 1990). In any case, the similarity of structure obtained between the “implicit” personality of the layperson and the models produced by personality theorists in the 1970s and 1980s provided corroborative support to the “lay scientist” perspective that dominated attribution theory during the same period.
In the postwar years, a number of studies accumulated which indicated that judgments of the size, length, and distance of objects could be affected by value and need (Bruner and Goodman, 1947), conformity pressure (Asch, 1952), and higher-level grouping (Tajfel & Wilkes, 1963). These studies were inspired by an anti-behaviorist stance and provided support for the “new look” in perception, which sought to demonstrate that top-down processes could influence low-level perception. Later work was to qualify some of the empirical bases of the evidence for the role of higher-level factors in perception. Subsequent researchers found it difficult to replicate studies by Bruner and Goodman (Osgood, 1952; Tajfel, 1969) and by Tajfel and Wilkes (Corneille, Klein, Lambert, & Judd, 2002), leading to reflection on the supplementary conditions necessary for the effects to occur. The Asch (1952) conformity studies fared better, the effect proving to be stronger in collectivist than in individualist societies, although becoming weaker over time in the United States (Bond and Smith, 1996).
One clear implication was that if “top-down” factors could influence perception of physical reality, they could also affect intergroup perception. Studies appeared to support this line of thinking (e.g., Hastorf and Cantril, 1954). For example, Duncan’s (1976) study, showing that contextual activation of a stereotype about blacks made white participants more likely to interpret an ambiguous behavior as violent, was interpreted in terms of Bruner’s (1957) notion of perceptual readiness. Such approaches served to illustrate the view that stereotypes were irrational forms of prejudice, serving to justify discrimination against outgroups.
Nevertheless, it was Asch (1952), acting like a dissenter in one of his own conformity studies, who argued for the rationality of stereotypes. Asch criticized the well-known research by Katz and Braly (1933) for not asking what the participants meant when they said “Germans are scientific.” Did they mean to make the patently absurd proposition that “All Germans are scientific”? The empirical response was provided by McCauley and Stitt (1978), whose analyses showed that what participants meant by this statement is that “Germans are more scientific than other nations on the average.” Even if one assumes that only a minority of Germans are scientists, “being scientific” may nevertheless be a useful diagnostic cue (in the Brunswikian sense) for inferring that someone is German. Asch’s attention to understanding what the experimental participants meant, coupled with a Brunswikian use of the notion of cue validity, once again showed people to be more rational than psychologists had presumed.
The anti-behaviorist stance taken in social psychology succeeded in banishing serious consideration of learning theory from social psychology, despite some dramatic demonstrations of the continuing relevance of learning approaches. For example, Christensen-Szalanski & Beach (1982) showed that trial-by-trial learning of cue–outcome relations eliminated under-use of base-rate information in a celebrated social categorization task (the engineers-and-lawyers task) devised to show human irrationality in judgment (Tversky and Kahneman, 1974). This raises the concern that the voluminous literature on the (mis)use of covariation information has focused too much on “word” problems where covariation information was presented in verbal form or in summary contingency tables.
In fact, social psychology did use learning paradigms in a classic study designed to challenge motivational accounts of stereotyping. Hamilton and Gifford (1976) used a trial-by-trial learning paradigm to study the formation of stereotypes, in which 36 behaviors of a majority group and 18 behaviors of a minority group were presented with the same proportion of positive to negative behaviors (26:10 and 13:5). Despite the identical proportions, respondents formed “illusory correlations” between the minority group and the negative behaviors, overestimating the frequency of negative behaviors in the minority and forming a negative evaluation of the minority group. Hamilton and Gifford’s research had been inspired by work on the perception of illusory correlations in clinical diagnosis, but came to be discussed in terms of various “cognitive” models such as stimulus salience, memory decay, and information loss (e.g., Fiedler, 1991; Smith, 1991), which did not draw on accounts based in associative learning theory.
In principle, Hamilton and Gifford’s effect could have been predicted from basic assumptions about learning curves that were well known since Hull’s days. These predictions about learning curves were incorporated in Rescorla and Wagner’s (1972) associative learning model, which consummated learning theory’s own “cognitive” revolution. This model predicts that at intermediate stages of learning, organisms will experience an “illusory correlation” by overestimating the size of cue–outcome correlations. However, the theory was not applied to analyze the formation of “illusory correlations” in stereotype learning. With some exceptions (e.g., an unpublished paper by Slugoski, Sarson, & Krank, 1991), there was little discussion of Hamilton and Gifford’s results in terms of contemporary associative learning theory before the advent of connectionist models (e.g., van Rooy, Van Overwalle, Vanhoomissen, Labiouse, & French, 2003). This was perhaps because the first published application of Rescorla and Wagner’s (1972) model of animal learning to human causal judgment (Dickinson, Shanks & Evenden, 1984) came some years after Hamilton and Gifford’s publication. However, when Rescorla and Wagner’s model was applied to the Hamilton and Gifford paradigm (Murphy, Schmeer, Vallée-Tourangeau, Mondragón, & Hilton, 2011), it was shown that the illusory correlation effect disappeared (as predicted) with further learning (90 trials rather than the 54 used by Hamilton and Gifford). In an illustration of fixation on existing experimental paradigms, the failure to notice that the illusory correlation effect is a result of incomplete learning was in part due to the fact that all the following studies only used between 36 and 54 trials (Mullen and Johnson, 1990), sufficient for the illusory correlation effect to appear but insufficient for it to disappear.
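The logic of the Rescorla and Wagner account can be illustrated with a short simulation. The sketch below is not the procedure used by Murphy et al. (2011); the learning rate, the coding of valence as lambda = 1 or 0, and the use of a single group cue per trial are simplifying assumptions made for illustration. It nevertheless shows the predicted pattern: with the 54-trial frequencies described above, the less frequently encountered minority cue lags behind the majority cue on the way to the same asymptote, producing a spurious evaluative difference that shrinks toward zero with extended training.

```python
import random
from statistics import mean

def run_participant(n_blocks=1, alpha=0.15, rng=None):
    """One simulated participant in a Hamilton-and-Gifford-style task.

    Each block contains 26 positive and 10 negative majority-group behaviors
    and 13 positive and 5 negative minority-group behaviors (the same 26:10
    proportion for both groups). A simple Rescorla-Wagner / delta-rule update
    tracks the association V between each group cue and positive valence
    (lambda = 1 for positive, 0 for negative behaviors). Returns
    V_majority - V_minority: positive values mean the minority group is
    judged relatively less favourably (the illusory correlation).
    """
    rng = rng or random.Random(0)
    trials = ([("majority", 1)] * 26 + [("majority", 0)] * 10 +
              [("minority", 1)] * 13 + [("minority", 0)] * 5) * n_blocks
    rng.shuffle(trials)
    V = {"majority": 0.0, "minority": 0.0}
    for group, lam in trials:
        V[group] += alpha * (lam - V[group])   # delta-rule update
    return V["majority"] - V["minority"]

rng = random.Random(42)
for blocks in (1, 5):   # 54 trials vs 270 trials
    bias = mean(run_participant(blocks, rng=rng) for _ in range(500))
    print(f"{54 * blocks} trials: mean evaluative bias = {bias:.3f}")
```

Averaged over simulated participants, the evaluative bias is clearly positive after 54 trials but essentially vanishes with extended training, since both cues eventually converge on the same asymptotic association with positive behavior.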
A final word must be said about one of the émigré psychologists, Egon Brunswik (1903–55). Brunswik worked at the Viennese Psychological Institute under the direction of Karl Bühler, before accepting a position at Berkeley to work with Edward Tolman. Brunswik became recognized as the intellectual conscience of the Berkeley department, yet due to his reputation for extreme intellectual rigor, only four students did their doctorate with him. One of those who had not been so deterred was Ken Hammond, later to become Brunswik’s chief apologist after his suicide in 1955. While in Vienna Brunswik was influenced by Heider’s views about using perceptual cues to infer qualities of an underlying object, and also developed a distinctive approach to “probabilistic functionalism” which focused on the validity of cues for making these inferences. Brunswik’s ideas influenced work on cue utilization in Asch’s impression formation task (Bruner, Shapiro, & Tagiuri, 1958). In addition, Wishner (1960) used the technique of correlated cues to explain centrality effects, as certain traits (e.g., “sociable”) can be inferred from others (e.g., “warm”). Brunswik’s ideas were also to give general inspiration to the field of judgment under uncertainty. Nevertheless, considerable debate remains to this day as to whether his ideas were fully assimilated (see the collection in Hammond & Stewart, 2001). Suffice it to say that his ideas on “cue–outcome” correlations, “ecological validity,” and “representative design” sit uneasily with the use of systematic experimental designs and analysis of variance (Woodworth, 1938) that had come to dominate American psychology by the time he died (Gigerenzer, 2001). Later we describe how this Zeitgeist affected the study of cognitive processes in American social psychology.
Of course, the advent of cognitive theorizing described above did not cause the behaviorists in social psychology to simply pack up shop and go home. Language became an important domain of debate, as this offered a potential means of explaining how cognitive processes could be brought under the control of factors external to the individual. One origin of this line of thinking came in Pavlov’s conception of language as a “second signal” system, which had in turn influenced Vygotsky and Luria. Luria was invited over to America during the 1960s, and the increasing influence of the Russian school was reflected in the decisions of Jerome Bruner and Roger Brown to investigate the role of language in thinking, and to become interested in developmental studies of language learning.
In social psychology, there was much interest in the role of “linguistic mediation,” whereby a society could influence the way its members think about the world by socializing them into a category system that was mediated by language and then “internalized” in the individual in the form of “linguistic habits.” A prominent behaviorist advocate was Charles Osgood (1916–1991), who completed his doctorate at Yale in 1945 and became professor at Illinois. Osgood (1952) criticized the mentalistic “sign” theory of meaning, which was to become dominant during the cognitive revolution, as too disembodied. In its stead he proposed a view of language as a “representational mediation process,” presenting it as an extension of Hullian learning theory. Signs (e.g., words) would become associated with objects, and consequently “When stimuli other than the stimulus-object, but previously associated with it, are later presented without its support, they tend to elicit some reduced portion of the total behavior elicited by the stimulus-object” (p. 203). Accordingly, one can expect associated words to elicit some portion of the behavior they signify. In support of this proposition, Osgood (1952, p. 208) described the following experiment by Razran (1936). Another pioneer investigation into the organic correlates of meaning was that by Razran, serving as his own subject. Meaningfulness of a series of signs was the independent variable, the stimuli being words for “saliva” in languages with which Razran had varying familiarity. Amount of salivary secretion was the dependent variable—following presentation of each stimulus, a dental cotton roll was placed in his mouth for two minutes and its weight determined immediately afterward. As “meaningless” controls he used the Gaelic word for saliva, the nonsense syllables QER SUH, and periods of “blank consciousness.” Salivation was greatest in his childhood tongue (Russian), next in his most proficient one (English), and less in three slightly known languages (French, Spanish, and Polish). The control conditions showed no differences among themselves, despite the fact that Razran “knew” the Gaelic word stood for saliva. 4
Osgood’s paper included an insightful discussion of the weaknesses in some of the experiments on the “new look” in perception, evoking principles of verbal priming (e.g., frequency effects) that were later to be incorporated in semantic priming models in psycholinguistics (Morton, 1969). Osgood concluded his paper by presenting the semantic differential technique, which can be understood as a method for understanding an individual’s category system and the kinds of generalizations that she is disposed to make. He also discussed cross-modal generalization (synaesthesia) and metaphor, a topic that was relevant to person perception (Asch, 1955) and music criticism (Brown, Leiter, & Hildum, 1957). To a contemporary reader, Osgood’s analysis seems remarkably prescient. For example, at the time of writing, a rebellion against the mentalistic sign-theory of meaning is creeping back into social psychology under the banner of “embodiment.” Thus carefully controlled experiments have shown the same kind of cross-modal priming effects that Osgood identified as support for his representational mediation hypothesis (e.g., Foroni & Semin, 2009).
From the 1950s through to the 1970s, a major topic of research interest was the question of whether language shapes thought. The anthropologist Benjamin Lee Whorf had noted that Western languages (which he lumped together as SAE—“Standard Average European”) marked fundamental concepts such as time and number through grammatical mechanisms such as verb tense and plurality in a way quite different to a native American Indian language such as Hopi (Whorf, 1941). Observations such as these motivated the Sapir-Whorf hypothesis of linguistic relativity, which took a strong and a weak form. In the strong form, culturally acquired linguistic habits fully determined an individual’s world view, and in the weak form such habits simply predisposed individuals to categorize reality in certain ways, without “imprisoning” them in a specific world view. Brown and Lenneberg (1958) led the way in putting the Sapir-Whorf hypothesis to the test. In one study, Brown and Horowitz drew on the fact that although the short a and long a: are phonetically distinguishable (e.g., bat vs. bad), this distinction does not function as a phoneme in English. This is because it is never used to distinguish between words, in the way that a change of phoneme from a to i does (e.g., to differentiate bit or bid from bat or bad). However, the distinction between short a and long a: does function as a phoneme that is used to distinguish words in Navaho. Brown and Lenneberg therefore presented eight color chips from the reddish-violet region of the spectrum to speakers of English or Navaho, and named each chip with a monosyllable that is not a color term in either English or Navaho (see Figure 3.3). Whereas the change from a to o is a phoneme change in both English and Navaho, the changes from short a to long a: and from short o to long o: are a phoneme change in Navaho but not English. Whereas the majority of English speakers divided the chips into two classes of colors (reflecting the a vs. o phoneme distinction in English), the majority of Navaho speakers divided the chips into four categories, reflecting their further division of the vowels into the four phonemes a, a:, o and o: in Navaho. Post-experimental interviews indicated that many of the English speakers had noticed the difference between short and long a’s and o’s, but assumed that these differences did not carry any significance.
Figure 3.3 Phoneme segmentations in English and Navaho of sounds used to name colors (from Brown and Lenneberg, 1958). Copyright © 1958 by Holt, Rinehart and Winston. All rights reserved. Reproduced with permission.
Brown and Lenneberg then went further, building on Zipf’s law that frequent words in a language tend to be shorter and easier to pronounce than infrequent ones. For example, red, yellow and orange are shorter, more frequent and easier to pronounce than vermilion. In addition, some languages make distinctions, such as yellow and orange in English, that are not made in others such as Zuni (which conversely makes distinctions in the color spectrum that English does not). Brown and Lenneberg accordingly collected codability scores for 24 colors, based on the number of syllables in the words used to name them, the speed of naming, and interpersonal and intrapersonal agreement in naming. Brown and Lenneberg assumed that colors that had names that were shorter and verbalized faster would be more “codable” in a verbal representation intervening between the stimulus and a recognition response. They then presented four of these colors to experimental participants, and asked each participant 30 s later to point to the four colors he had just seen on a complete chart of 120 colors. The results showed that highly codable colors were better recognized. The correlation between codability and recognition appeared to increase when there was a longer delay (3 min) between exposure and recognition, which Brown and Lenneberg interpreted as supporting their hypothesis that recognition was mediated by superior codability in memory. 5
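The logic of this codability measure can be made concrete with a small sketch (my own illustration rather than Brown and Lenneberg’s analysis; the component measures, their weighting, and the data below are all assumptions made for expository purposes): standardized component scores are combined into a composite codability index, which is then correlated with recognition accuracy.

```python
# A hedged sketch of a codability-and-recognition analysis in the spirit of the
# studies described above; all names and numbers are hypothetical.
from statistics import mean

def standardize(xs):
    """Convert raw scores to z-scores (population standard deviation)."""
    m = mean(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

def codability(syllables, naming_time, agreement):
    """Composite codability: shorter names, faster naming, and higher
    interpersonal agreement in naming all count toward higher codability."""
    components = zip(standardize([-s for s in syllables]),
                     standardize([-t for t in naming_time]),
                     standardize(agreement))
    return [mean(c) for c in components]

def pearson(xs, ys):
    """Pearson correlation, computed as the mean product of z-scores."""
    return mean(x * y for x, y in zip(standardize(xs), standardize(ys)))

# Hypothetical scores for four colors (e.g., "red" ... "vermilion").
syllables   = [1, 2, 2, 4]              # length of the dominant name
naming_time = [0.6, 0.9, 1.1, 1.8]      # seconds taken to produce a name
agreement   = [0.95, 0.80, 0.70, 0.40]  # interpersonal naming agreement
recognition = [0.90, 0.75, 0.60, 0.35]  # proportion recognized after a delay

print(pearson(codability(syllables, naming_time, agreement), recognition))
```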
These studies represent the historical high-water mark of the linguistic relativity hypothesis. Subsequent research on color naming supported the view that the human categorization of the color spectrum is determined by innate properties of the human visual system. Thus Berlin and Kay (1969) showed that basic color terms are added across cultures in an almost invariable sequence, beginning with the distinction between dark and light, followed by red, then green and yellow, before the remaining colors are marked. Heider and Olivier (1972) used Brown and Lenneberg’s procedure with a New Guinean tribe, the Dugum Dani, whose language has only two basic color terms, but found superior recognition for best (or “prototypic”) examples of primary colors. They interpreted this result as indicating that superior memory was due to prototypicality, not codability. These studies dealt a body blow to work on the linguistic relativity hypothesis, as they suggested that color categories were due to the structure of the human visual system, chiming with the view that language learning was innate (Chomsky, 1968). The codability hypothesis was consigned to oblivion, and Brown (1986) did not even mention his earlier work in the second edition of his textbook, instead presenting the later contributions of Berlin and Kay. This time round, he highlighted the methodological pitfalls awaiting researchers on linguistic relativity, describing how his student Terry Kit-Fong Au (1983) used improved translations of stimulus stories to refute Bloom’s (1981) claim that Chinese speakers could not reason counterfactually. Social psychological interest in the relationship between language and cognition then deflated and did not fully revive until many years later.
But there is a final twist to this story. Much later, Roberson, Davies, and Davidoff (2000) suggested that the results that Heider and Olivier used to refute the codability hypothesis were due to experimental artefacts induced by participants’ guessing strategies. Consistent with the codability hypothesis that this effect is mediated by “inner speech,” the memory advantage for primary colors disappears when there is articulatory suppression (Roberson & Davidoff, 2000). In a recent review, Chiu, Leung & Kwan (2007) suggest that members of a culture are not prisoners of “habits of thought,” but rather that their languages provide tools that allow interactants to co-operatively mark differences (Grice, 1975) that may not be available in other languages. This represents a move from a simple “code” to a “code + conversational” version of the weak Sapir-Whorf hypothesis, in turn reflecting a general evolution in the intervening period to attend to functional aspects of language use based on speech act theory and linguistic pragmatics (Krauss and Fussell, 1996).
Asch (1987) noted that social psychology had changed from a “corner grocery” into an “international combine” during his lifetime. The transformation had already begun in prewar America, for two reasons (Cartwright & Zander, 1968). First, American society was supportive of research, with public and private budgets increasing from $160 million in 1930 to $320 million in 1940. Some of that money found its way into psychology, which had established its usefulness in the public mind through its contribution to military selection processes in the First World War. Second, many professions were adopting a group-based perspective during the interwar years, such as social work, psychotherapy, education, and management. The Hawthorne studies conducted by Mayo (a friend of Malinowski’s) underscored the inadequacy of a purely economic model of man in understanding work motivation, and the importance of understanding how human relations at work influenced individual motivation. Factors such as these provided a favorable context for research “entrepreneurs” such as Kurt Lewin, who was able to find substantial funding for his research at the Iowa Child Welfare Research Station in 1935.
Social psychology then had a good war. The Committee on Survey and Planning for Psychology set up during the Second World War reported that “It seems probable that the present conflict will do for social psychology, in the broadest sense of that term, what the First World War did for intelligence testing” (Boring et al., 1942, p. 620). The committee produced a book in 1943 for the ordinary soldier, Psychology for the Fighting Man, which included substantial material on topics of social psychological interest such as leadership, national and group differences, crowd behavior, rumor, public opinion, propaganda, motivation, morale, and psychological warfare (Boring, 1943). Hadley Cantril (1906–69) of Princeton provided research on American public opinion, especially concerning the European War, to the Roosevelt administration from 1940 onwards. His group proved its usefulness in 1942 by identifying the extent of anti-British sentiment in Vichy French troops in North Africa, thus guiding troop dispositions for the Anglo-American landings of Operation Torch. Cantril’s group, which Jerome Bruner joined from Harvard, also analyzed enemy propaganda.
Social psychology was therefore well placed after the Second World War to claim its place in major research universities, as well as to attract and justify government funding. The organization of social psychology after the war was clearly conditioned by the economic and moral dominance of the United States while Europe was in the process of self-destruction and reconstruction. The success of American organization in helping win the Second World War no doubt contributed to a feeling that all problems could be solved if sufficient resources were thrown at them. On the pattern of the Manhattan Project, the postwar years saw the advent of “Big Science” in the form of large coordinated projects (Redner, 1987), and social psychology was no exception. The Yale Communication Project was the most notable example, as its leader, Carl Hovland, was able to attract continued support from the Rockefeller Foundation, the National Science Foundation, and Bell Laboratories that enabled him to attract leading researchers and equip extensive laboratories.
Apart from Yale, Festinger (1989) identified three other distinct influences on postwar social psychology in America. The first was the development of rigorous survey techniques during the war that enabled scientific sampling of opinion on important questions such as voting patterns or real or imagined responses to crises. The second was the group at Berkeley whose nucleus was formed of exiles from Nazi Germany, and whose best known legacy is The Authoritarian Personality (Adorno, Frenkel-Brunswik, Levinson, & Sanford, 1950), which represented a fusion of psychoanalytic ideas and psychometrics to explain social behavior, fostered in large part by Egon Brunswik’s wife, Else Frenkel-Brunswik. The third major influence was the Research Center for Group Dynamics founded by Kurt Lewin at the Massachusetts Institute of Technology in 1944, which moved to Michigan in 1947. This group had close relations with the “more traditional” Department of Social Relations at Harvard, whose blend of social psychology, sociology, and anthropology gave a perspective that Festinger found enriching. Nevertheless, despite the frequent interactions, Festinger considered that there was relatively little fusion of theoretical viewpoints between the main schools of thought.
The immense pent-up demand for places at graduate school from young men who had been in the armed services permitted rigorous selection, giving these research groups the pick of an applicant pool that was older and had personal experience of the worldwide catastrophe. The atrocities of the Second World War had focused the minds of a generation of “hard-headed idealists” (Deutsch, 1999) who wished to understand the causes of social conflict with a view to proposing remedies. There were frequent interactions and exchanges of members; for example, students in the Lewin group such as Morton Deutsch and Harold Kelley took positions at Yale, and new groups sprang up, such as the one at Minnesota, which attracted alumni from the Lewin group (e.g., Festinger, Schachter) and Yale (Brehm, Kelley, McGuire). Festinger (1989, p. 556) was to remark that in the postwar years “the field had not yet come to the point where almost every department of psychology had to have a program in social psychology. There were still relatively few active researchers in the area, and we all knew each other.” Easy access to funding in the postwar years allowed members of the Lewin group to recruit their doctoral students as research assistants and train them on the job. Finally, the postwar expansion of social psychology in American universities meant jobs for the graduates of these programs, which further contributed to a sense of optimism and purpose.
The Yale Social Psychology group, directed by Carl Hovland (1912–61), sat at the center of postwar American social psychology. Hovland’s early work, conducted under the influence of the eminent Yale learning theorist Clark Hull (1884–1952), dealt with efficient methods of rote learning. However, his extended stint at the War Department during the Second World War led Hovland to address issues of wartime morale and motivation to fight. For example, his research group showed that (contrary to the doctrines of totalitarian propaganda) two-sided communications could be more effective than one-sided communications in inciting American soldiers to continue fighting against Japan after the victory against Germany. Much of his work focused on the effects of source credibility, and is couched in the language of behaviorism. For example, the effects of source credibility on short-term acceptance of arguments are attributed to learning “that following the suggestions of certain persons is highly rewarding whereas accepting what others say is less so” (Hovland, Janis, & Kelley, 1953, p. 20). Hovland and Weiss (1951) showed that such positive source credibility effects wore off with time, disappearing after several weeks. In contrast, the “sleeper effect” (whereby messages presented by negative sources became slightly more persuasive over time), another effect discovered in war work using the US Army propaganda film on the Battle of Britain, was attributed to the increasing “dissociation” between the source and the message. The sleeper effect was eliminated by “reinstating” the original unconditioned stimulus (i.e., the source of the message) at the later testing period.
Hovland took the nucleus of his War Department team with him to Yale after the war, and recruited many more talented social psychologists to turn Yale into a “candy store” for enthusiastic young graduate students (see Figure 3.4). Although he himself remained wedded to his behaviorist roots, Hovland was wide-ranging in the approaches he fostered. For example, psychodynamics led to theories that were tested experimentally, as in Janis’ work on fear-arousing appeals in persuasion and his analysis of groupthink, which both have their origins in the Freudian concept of repression. Other examples of the continuing influence of psychoanalytic concepts at Yale were the use of hypnosis to study hypotheses about cognitive reorganization (Brock & Grant, 1963; Rosenberg, 1960) and Miller and Dollard’s (1941) work on frustration and aggression.
Figure 3.4 The Yale “Candy Store” as seen by Philip Zimbardo (student 1954–60), with additions from Henry Roediger (student 1969–1973). Adapted from Zimbardo (1999) and Roediger (2010).
Yale’s extensive investment in laboratories allowed it to host many of the resource-intensive experiments that were needed to decide between the dissonance theory approach to attitude change originated by Festinger at Stanford and its own neo-behaviorist “incentive theory.” Before telling this story, we address the development of another leading movement in experimental social psychology associated with Kurt Lewin. We then show how this led in Festinger’s hands to a style of theorizing and experiment that produced a particular approach to cognitive consistency theory, that of cognitive dissonance.
At his death in 1947, Kurt Lewin bequeathed both a theoretical framework and a group of researchers who went on to shape experimental social psychology. As Mandler (2007, p. 157) writes, “To list his students and associates at the Research Center in Group Dynamics at the Massachusetts Institute of Technology is to list an honor roll of social psychology.” This list included Cartwright, Zander, Thibaut, Kelley, Back, Pepitone, Deutsch, and Festinger, and their impact can be gauged by the fact that by the 1980s eight of the 10 most cited social psychologists were associates of Lewin or students of these associates (Perlman, 1984). How then can the creation of this group and its remarkable influence be explained?
Some individuals make a difference by being the right person in the right place at the right time. This was the case with Kurt Lewin, whose career positively flourished in the United States after he moved there permanently from Berlin in 1935. Although Lewin originally came to the United States as a child psychologist, his interests increasingly turned to group dynamics. Wherever he went, Lewin assembled a team of talented collaborators, whether in Berlin (1921–33), Iowa (1935–44) or MIT (1944–47). During the Second World War, Lewin and his associates did work for the government; these associates included Dorwin Cartwright, who moved to Washington in 1942 to work with Rensis Likert on inflation control and the sale of war bonds. These connections proved useful, for after Lewin’s death in 1947 the Center for Group Dynamics moved to join Likert’s Survey Research Center at the University of Michigan. Together they created the Institute for Social Research, which remains an outstanding center for social psychological research to this day.
Lewin attracted talented collaborators for a number of reasons. Some joined him because they had been disappointed by the kind of psychology done elsewhere, such as Dorwin Cartwright, who went to Iowa after an unexciting stint at Harvard, on the suggestion of his undergraduate teacher at Swarthmore, Wolfgang Köhler (Patnoe, 1988). Another reason was Lewin’s charm and enthusiasm, recounted by Morton Deutsch (1999, p. 7), who tells how he was convinced not to go to graduate school at Yale: “The appointment in midtown New York in August 1945 had to move to a nearby coffee-shop, as the great man, who was half an hour late, had come down for breakfast in his hotel’s dining room without a tie. I do not remember much about the conversation other than that I described my education, experience and interests, and he described his plans for the new center. I was being treated as an equal; I felt somewhat courted; I was experiencing a trancelike sensation of intellectual illumination with new insights constantly bubbling forth from this brilliant, enthusiastic, effervescent, youthful, middle-aged man. He spoke a colloquial American, often with malapropisms, and he was both endearing and charming. I left the interview with no doubt that I wanted to study with Lewin. I also left with a dazed sense of enlightenment, but I could not specifically identify what I was enlightened about when I later tried to pin it down for myself.”
In addition, Lewin adopted a creative scientific style that seemed to bring out the best in those around him (Patnoe, 1988). He was not someone who sought to do research on his own in the style of Rodin’s Thinker, but rather actively sought out discussions with his students and associates, whose job it sometimes was to write down and formalize the master’s thoughts (Cartwright, in Patnoe, 1988). Lewin’s groups in Berlin, Iowa, and MIT were all characterized by frequent “chat sessions” known as Quasselstrippe, where researchers were encouraged to brainstorm and present work in progress rather than finished research (Ash, 1992; Patnoe, 1988). Deutsch recounts that Lewin’s genial personality was able to resolve tensions in the group, such as when Lippitt (who favored socially relevant research) clashed with Festinger (who favored scientific rigor). Lewin’s form of interaction with members of his group very much resembled the democratic leadership style that he identified with Lippitt and White. And just as team members in a democratically led group continued working when its leader left the room (Lewin, Lippitt, & White, 1939), so work in the Lewinian tradition carried on when its leader died in 1947, modified and transformed by his former assistant in Iowa, Leon Festinger.
Before Lewin, social psychology relied on minimalist experiments using the “mere presence” of others (Allport, 1924; Sherif, 1936) as an independent variable. However, Lewin et al. (1939) hit on a style of experimentation where they manipulated several factors simultaneously in order to produce the kind of “social climates” they were interested in (e.g., democratic vs. authoritarian atmospheres in groups of children). Danziger (2000) suggests that this “molar” approach to manipulating experimental situations had its roots in the philosophy of science developed by Ernst Cassirer, one of Lewin’s teachers as a young man. In any case, the simultaneous manipulation of factors that co-occur in a systemic way in the real world (e.g., having a democratic leader, being consulted in group discussion, having to vote by a show of hands in democratic functioning) lends external validity to the manipulation. Furthermore, from the point of view of action research, the simultaneous manipulation of variables to attain a desired end can be thought of as the use of the sound engineering principle of system redundancy.
However, this approach had its critics. The next phase of experimental social psychology research, led by Leon Festinger (1919–1989), retained high-impact experimentation but argued that independent variables should be systematically manipulated in a way more in keeping with the commandments of the bible of American experimental psychology (Woodworth, 1938). As Festinger’s student, Stanley Schachter, put it (Patnoe, 1988, p. 193): “Social psychology experiments before he came along were essentially morasses. In the democracy–autocracy study, for example, they were attempting to create the experimental parallel to what is democracy and compare it with the autocratic. They were manipulating a million things.” An example of the new Zeitgeist is found in Pelz’s (1958) studies that attempt to unpick, in controlled fashion, four variables that were simultaneously manipulated in Lewin’s (1947) classic study of group decision-making, namely: the obligation to reach a decision; the occurrence of a group decision; public commitment to a decision; and the degree of consensus attained by the group in reaching a decision.
Although strongly influenced by Gestalt theorists from his early days in Berlin, Lewin developed a distinctive approach that placed an emphasis on motivation, personality, and psychodynamics. Whereas Gestalt theorists were interested in perception of the external world, Lewin was interested in the person’s action and “locomotion” through her life space. The imagery of physical energy systems can be sensed in the core constructs of his “field theory,” such as need, tension, valence, vector, barrier, and equilibrium. Lewin worked out a highly influential typology of goal conflict (approach–approach, avoidance–avoidance, approach–avoidance) that had considerable impact on clinical psychology through Miller and Dollard’s (1941) adaptation of his analysis.
Consequently, when Festinger received an invitation in 1951 to review work on social communication, this provided him with an opportunity to develop and systematize the Lewinian approach in his own way. Festinger drew on “personal knowledge” (Polanyi, 1958) based on his experience of the Lewin research group’s repository of experimental procedures and findings, and this yielded the distinct approach to psychological consistency he took in his own theory of cognitive dissonance. For example, the initial set of experiments reviewed in Festinger’s (1957) book concerned the “freezing” of decisions identified by Lewin, whereby a decision-maker finds it hard to revise a preference once he has made a commitment to a decision (e.g., Brehm, 1956). Here, the Lewinian idea of locomotion through life-space is explicitly evoked, and tested with an ego-involving experiment designed to make participants believe that they would receive highly attractive (and expensive) gifts at the end of the experiment. Lewin’s influence also continued to be felt through the way that experimental paradigms he had developed (e.g., the “forbidden toy” experiment of Barker, Dembo, & Lewin, 1941) were recycled for later research on cognitive dissonance (Aronson and Carlsmith, 1963). Although Festinger noted (1957, pp. 7–8) the formal similarity between dissonance and concepts of cognitive imbalance or incongruity in the sense of Heider (1958) or Osgood and Tannenbaum (1955), his Lewinian heritage showed through in the way he also conceptualized dissonance as a motivational state like hunger (p. 3).
Festinger’s book treated each of four topics systematically, with a theoretical chapter followed by an experimental chapter reporting empirical studies of the factor in question. The factors dealt with were: post-decision rationalization; forced compliance; exposure to information; and social support. The theoretical chapters often included field studies (e.g., on the formation of rumor) as well as his own field study on disconfirmed prophecies to illustrate the importance of social support mechanisms. Of these, the chapter on forced compliance led to a major controversy in the 1950s and 1960s, contrasting the classical “incentive” approaches to attitude change with the rival dissonance theory approach. This controversy threw light on human nature but also generated much heat and scientific debate. It resulted in a crisis for social psychology, but also yielded important experimental findings that now form part of the discipline’s heritage. Below, I draw out its main themes to show how they illustrate implicit assumptions about the role of cognitive processes in social psychological explanation.
Experiments on forced compliance emerged out of a real-world phenomenon that had intrigued psychologists and sociologists in the postwar years, namely the effects of institutional roles on attitudes and behavior. Such processes were demonstrated in the field by researchers such as Lieberman (1956), who found that factory workers who were selected as foremen adopted pro-management attitudes whereas those who were elected as union stewards adopted anti-management attitudes. These changes appeared to last only as long as they remained in their roles, as once the foremen had been demoted they reverted to their original attitudes. In addition, there had been observations of dramatic conversion phenomena in prisons and organizational settings. For example, Bettelheim (1943) had observed how some inmates in German concentration camps themselves became “little Nazis,” adopting the ideology and behaviors of their Nazi gaolers. Another example came some years later during the Korean War, with the successful “brainwashing” of American prisoners of war by Chinese Communists (Schein, 1957).
Unsurprisingly, experimental procedures that explored the effects of role-playing and forced compliance on the “internalization” of role-congruent attitudes and behavior attracted the interest of social psychologists. Such procedures had been used before the war by Lewin’s colleague Ronald Lippitt, who had been very interested in the use of role-playing as an educational tool in helping people learn new behavioral repertoires. After the war, Kelman (1953) reported the intriguing finding that asking experimental participants to write essays that argued against their own privately held attitudes sometimes led them to modify their positions in line with their arguments. One commonly advanced explanation for the effect of counter-attitudinal essay-writing was that this led subjects to “biased scanning” of the arguments in favor of the position being advocated, such that by the end of the role-playing session the subject had convinced himself of the position he had been told to argue for (Janis & King, 1954).
Festinger (1957, pp. 272–5) was well aware of the way people adopt attitudes and beliefs congruent with their roles, but did not accept that attitude change occurred in Kelman’s (1953) experiment because of self-persuasion through “biased scanning” of arguments. Rather, Festinger focused on the mechanism of insufficient justification, suggesting that in Kelman’s participants “There would have been dissonance between their private opinion and the knowledge of what they were doing” (p. 106). This would be especially strong in the low-incentive conditions as the modest reward would provide insufficient justification for compliance. In order to substantiate this position, Festinger and Carlsmith (1959) developed the “forced compliance” experiment in which they induced participants to produce counter-attitudinal behavior without requiring them to engage in counter-attitudinal argumentation. This was achieved by requiring participants to tell a new experimental participant (actually a stooge) the lie that the experiment they had just done was interesting. When the lie was told for a low reward ($1), participants were, as predicted, more likely to declare a favorable attitude to the experiment afterwards than when they had been promised a substantial reward ($20). Low-incentive participants were also more likely to declare themselves available for similar experiments in the future. These findings seemed impossible to reconcile with the tenets of classic reinforcement theory, which was at that time still the dominant framework for attitude change research (Hovland et al., 1953).
It took some time for the Yale group to respond to this head-on challenge to their theoretical framework, but once battle was joined, the controversy between “dissonance” and “incentive” approaches was intense. The first salvo was fired by Rosenberg (1965), who noted that Festinger and Carlsmith (1959) provided no information on how participants perceived the high or low rewards, and questioned whether the high incentive ($20) was really rewarding. He suggested that since the reward was so abnormally high, participants would become suspicious and feel that they were being manipulated. Such induced aversion in the case of high reward would then explain why participants would not adopt a positive attitude to the experiment. A second salvo was fired by Janis and colleagues, who also focused on the perceived legitimacy of the experimenter’s request (e.g., Elms and Janis, 1965). They did this by requiring US students to engage in a counter-normative role-playing task (arguing for an exchange program with the Soviet Union where American students would spend four years studying the Soviet system of government and the history of communism) for a private research firm that had either been hired by a favorably perceived sponsor (the US government) or an unfavorably perceived one (the Soviet Embassy to the US). The experimenters pretested what would be considered a plausible large ($10) and a plausible small ($0.50) reward. When the sponsor was positively regarded (the US government), larger opinion changes in favor of the program were associated with the larger reward in line with incentive theory. When the sponsor was negatively regarded (the Soviet Embassy), the reverse pattern was obtained: Larger opinion changes were observed when low rather than high incentives were offered. The explanation offered by incentive theorists was that suspicion of the motives of the sponsor “tainted” the high reward offered, such that it became aversive rather than attractive.
As Zajonc (1968) noted, experiments comparing dissonance and incentive theories failed to use fully comparable procedures. A crucial clarification of this issue was provided by Carlsmith, Collins, and Helmreich (1966), who gave participants the same levels of reward ($0.50, $1.00, $5.00) for participation in both a counter-attitudinal essay-writing experiment (à la Kelman) and a forced-compliance lie-telling experiment (à la Festinger and Carlsmith). The results very clearly showed that increased incentives led to increased attitude change in the essay-writing task (confirming incentive theory) but led to decreased attitude change in the forced compliance task (confirming dissonance theory). Given the accumulation of results showing that each theory held under some conditions but not others, attention turned to a “post-imperialistic” demarcation of the conditions under which dissonance and incentive theories would each hold.
On the dissonance theory side, theorists turned to the self-concept as a crucial moderating variable. In his original statement, Festinger had been remarkably broad in how he defined cognition: as “any knowledge, opinion or belief about the environment, about oneself or one’s behavior” (1957, p. 3). A weakness of Festinger’s original formulation was its failure to distinguish clearly between cognitive and motivational interpretations of dissonance (cf. Beauvois & Joule, 1996). In addition, not all logical inconsistencies aroused cognitive dissonance, which creates problems for generating theoretical predictions. For example, Wicklund and Brehm (1976) suggested that the experience of choice was necessary for the arousal of dissonance, presumably because of the sense of personal responsibility that it entailed. Support for this view comes from experiments such as that of Linder, Cooper, and Jones (1967). They found that when participants were given no choice in participating in an experiment where they had to make a counter-attitudinal argument (in favor of a North Carolina State law banning communists and Fifth-Amendment pleaders from speaking at state-supported institutions, such as universities), their attitudes changed more in response to the high reward ($2.50) than to the low reward ($0.50), in line with incentive theory. However, when they were given a free choice about whether to make this counter-attitudinal argument, the reverse pattern was obtained, in line with dissonance theory predictions. The dissonance aroused by the sense of personal responsibility for the opinion expressed was presumably reduced by rationalization.
Greenwald and Ronis (1978) suggested that these “self-reformulations” effectively transform dissonance theory into a theory of ego-defense. This interpretation of dissonance theory is hardly surprising, as Festinger often used examples of self-threatening thoughts to illustrate cognitive dissonance, as in the case of the smoker who knows that his habit may give him lung cancer. Despite the fact that Festinger did not accept that the “firm beliefs” challenged by dissonant cognitions necessarily involved the self-concept (Aronson, 1999), this view now seems to have prevailed (e.g., Fischer, Frey, Peus, & Kastenmueller, 2008). Moreover, subsequent developments have vindicated Greenwald and Ronis’ prediction that research on the self would explode in social psychology from 1980 onwards. Accordingly, Festinger is cited more than Heider in research on the self (e.g., in Baumeister’s 1999 collection of readings), whereas Heider is cited more in research on social cognition (e.g., see Hamilton’s 2005 collection).
Cognitive dissonance theory certainly raised social psychology’s profile in the public eye. Festinger was even named one of Fortune Magazine’s Ten Scientists of the Year during the 1950s (Schachter, 1994), and the phrase “cognitive dissonance” has entered the common vocabulary along with “groupthink” and “brainstorming.” It inspired a series of studies that helped sound the death knell for the behaviorist hegemony in psychology, and the plethora of unexpected and contradictory experimental results provided the impetus for the construction of cognitive theories that could reconcile them. However, the dissonance vs. incentive theory controversy also contributed to a sense of crisis. Many were frustrated at the “will o’ the wisp” nature of experimental effects that were so easily changed by seemingly minor changes in experimental procedure (Elms, 1975). The focus on the “power of the experimental situation” was such that individual differences came to be ignored, leading some to lament the way that dissonance research had used up resources that might have been better spent on other questions, such as longitudinal personality research (Elms, 1975). Dissonance research was thus the high-water mark for a certain style of high-impact experimentation in social psychology, coinciding as it did with Milgram’s (1974) obedience studies and the Stanford prison experiment, which can no longer be performed in American universities due to ethical concerns. In addition, Zimbardo (1999) suggests that this period of research reflected a macho male attitude that waned with the increased prominence of female researchers in the discipline. 6
Nevertheless, the use of an overly physicalistic “model of man” inherited from field theory appears to have contributed to some of the experimental contradictions of dissonance theory. In particular, increasing awareness of the importance of the “social psychology of the psychology experiment” (Orne, 1962) prompted more attention to be paid to the way participants interpreted the tasks they were given and the rewards they received (Zajonc, 1968). Despite the physicalistic language of field theory (“forces”, “equilibrium,” etc.) and the carefully controlled experimentation, one increasingly senses the participants of the experiments expressing themselves as moral agents. Thus when reading the experimental report provided by Festinger and Carlsmith (1959) and related experiments performed by Festinger’s students (e.g., Aronson and Mills, 1959), it is striking how much space is spent describing the elaborate experimental procedure, and how little attention is paid to assessing the viewpoint of the participants. We are some way from Asch’s (1952) cautiously optimistic attitude to the rationality of his participants, expressed in his post-experimental probes of participants’ interpretations of the material that was presented. Both Asch and Heider seem to be closer to the notion of an experimental subject in Wundt’s research, where the scientist was interested in the subject’s phenomenological experience (and often changed roles with him). In contrast, dissonance theorists did seem to view the experimental participant as an object to be manipulated and as a tabula rasa on which instructions were to be imprinted. A dialogue with the experimental “subjects” might have revealed rather sooner their norms and expectancies about what constitute acceptable levels of payment for their time (cf. Hovland, Harvey, & Sherif, 1957), as opposed to suspiciously high overpayments that had the air of bribes. All in all, one can readily understand protests that the human being’s status as an autonomous moral agent was insufficiently recognized during this phase of research in social psychology, and more attention needed to be paid to participants’ own accounts of their behavior (Harré, 1979).
Dissonance theory yielded a rich haul of experimental results which paved the way for subsequent approaches. The question of direct self-insight was raised by Nisbett and Wilson’s (1977) extensive review of the effects of dissonance manipulations, which concluded that “People often cannot report accurately on the effects of particular stimuli on higher-order, inference-based responses.” Interest was also stimulated in how people came to explain their own behavior: For example, Bem revised his earlier “neo-behaviorist” reinterpretation in terms of self-perception (Bem and McConnell, 1970). Although Schachter and Singer’s (1962) study of the effect of labeling on subjective experience of emotions proved difficult to replicate (Reisenzein, 1983), it influenced Valins and Nisbett’s (1972) use of a self-perception approach to explain placebo effects. This was to contribute to the development of attribution theory, which Festinger labeled as “one terrible mistake” (Patnoe, 1988). In any case, the heyday of dissonance theory was over and attribution theory was to become the dominant paradigm in social psychology.
Unlike his friend Kurt Lewin, Fritz Heider (1896–1988) did not exert his influence through training students or creating research groups. He was, however, frequently invited to present his ideas at seminars in major research universities, for example at Harvard in 1946, though his ideas did not always evoke an immediate response (Harvey, Ickes, & Kidd, 1976). He was offered the safety of a tenured university position by Roger Barker at Kansas in 1947, which allowed him to complete work on his book The Psychology of Interpersonal Relations, begun in 1943 and circulated in unpublished form from the late 1940s (Malle, 2008). When published in 1958, the book was to have a transformative effect on the field, as it summarized his earlier thinking on balance theory, and, importantly, presented new and extended analyses of causal attribution processes. Heider’s book is balanced and thoughtful, rich in reading and reflection, and in many ways resembles a philosophical treatise such as Spinoza’s (1677) Ethics or Smith’s (1759) Theory of Moral Sentiments (both of which are frequently and approvingly cited) more than a contemporary monograph in social psychology. Attribution theory soon became the dominant research area in social psychology, with over 900 publications by the time of Kelley and Michela’s (1980) review. Unfortunately, as we shall see, some of Heider’s philosophical sophistication was lost in the ensuing empirical stampede.
In contrast to Festinger’s ego-involved experimental participant struggling with cognitive dissonance, Heider’s view of the social perceiver may best be characterized as that of a “contemplative observer” (Beauvois and Joule, 1996). Heider’s interest in causal attribution grew out of the problem of perceptual constancy (Malle & Ickes, 2000; Reisenzein & Rudolph, 2008). A whitewashed house may look brilliantly white at noon yet take on warm colors during a glorious African sunset. To understand that the house has remained the same, we have to attribute the variation in apparent color to a distal factor, namely the change in the light coming from the sun, due to its moving position in the sky. Heider first addressed these issues in his dissertation work in Graz under Meinong. The causal theory of perception holds that our perceptions are the result of external sources, which impact on our consciousness through the medium of our senses. In this view, the goal of perception can be characterized as the cognitive reconstruction of the stable, underlying sources of variations in perceptual sensations. The causal theory of perception is thus different from the Gestalt approach, which addressed the question of how the percepts were organized, not what objects they mediated. Heider in turn inspired Egon Brunswik, who noted that the use of intermediate perceptual cues to make inferences about the properties of an underlying object (Ding an sich, or “thing in itself ”) was as central to the Viennese school of thought as conditioning was to American psychology.
Heider had also long known of Lewin’s attempt to formalize his field theory with a mixture of topological psychology and vector mathematics to represent goal-directed, planned behavior. He was later to recall an inspirational conversation in Berlin in 1926 in which Lewin depicted life-spaces in the snow, which left him with the idea that it was important to formalize these relations correctly (Harvey et al., 1976). It fell to Heider and his wife, Grace, to translate Lewin’s book from the German into English in 1936 as Principles of Topological Psychology, and later Heider corrected en passant what he considered as certain errors in Lewin’s approach in his own paper on distal and proximal determinants of perception (Heider, 1939). Some years later he developed his own more successful notation to represent interpersonal relations (Heider, 1946), which he elaborated as an appendix to his 1958 book. Heider introduced the simple mathematical logic of unit formation and balance, differentiating unit and sentiment relations and showing how causal relations produce unit formation between the cause and the effect. This rigorous approach to questions of cognitive organization was to culminate in the work of Cartwright and Harary (1955). Furthermore, through its influence on the work of Robert Abelson (1925–2005) on “psycho-logic” (e.g., Rosenberg & Abelson, 1960), this work was also to have a substantial impact on artificial intelligence attempts to represent “commonsense understanding” through the collaboration of Schank and Abelson (1977).
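The formal core of this approach can be conveyed with a brief sketch (my own illustration; the coding of relations as +1 and -1 is a standard simplification rather than Heider’s or Cartwright and Harary’s own notation): sentiment and unit relations are given signs, and a configuration counts as balanced when the product of the signs around each cycle is positive.

```python
# A minimal sketch of signed-graph balance in the spirit of Heider (1946) and
# Cartwright and Harary (1955); the encoding below is an illustrative assumption.
from itertools import combinations

def triad_balanced(sign_po, sign_ox, sign_px):
    """A p-o-x triad is balanced when the product of its relation signs is positive."""
    return sign_po * sign_ox * sign_px > 0

# p likes o (+1), o likes x (+1), but p dislikes x (-1): an imbalanced triad that,
# on Heider's account, creates pressure to change one of the relations.
print(triad_balanced(+1, +1, -1))  # False

def graph_balanced(signs):
    """signs maps frozenset({a, b}) to +1 or -1 for a complete signed graph;
    the graph is balanced when every 3-cycle in it is balanced."""
    nodes = sorted(set().union(*signs))
    return all(
        signs[frozenset((a, b))] * signs[frozenset((b, c))] * signs[frozenset((a, c))] > 0
        for a, b, c in combinations(nodes, 3)
    )

# Two mutually liking pairs that dislike each other across pairs: balanced overall,
# as Cartwright and Harary's structure theorem would lead one to expect.
signs = {
    frozenset(("p", "o")): +1, frozenset(("x", "y")): +1,
    frozenset(("p", "x")): -1, frozenset(("p", "y")): -1,
    frozenset(("o", "x")): -1, frozenset(("o", "y")): -1,
}
print(graph_balanced(signs))  # True
```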
Heider clearly saw causal attribution as being important in “restructuring the field” in order to attain cognitive balance, as shown by the following quotation, published as Germany’s second defeat in a World War was imminent: “One of the devices used to lift morale is to restructure the field in such a way that a defeat is not attributed to one’s own inferiority. Defeat undermines the morale of a nation or a person only if the cause of the defeat is attributed to its own weakness. When, on the other hand, the blame is laid on a ‘stab in the back’ or some other factor which is not connected with the relationship between the own power and that of the enemy, the self-evaluation which is basic for morale is not affected. These examples show that causal attribution is of great importance in cognitive structures which give rise to tensions in the person since many vital equilibria concern relationships of the own person to other persons; relationships of power, of value, of benevolence or hostility. Most of the ‘sensory’ experiences of changes having a positive or negative value for the person become relevant to these equilibria only when they are, by attribution, related to the stable social environment consisting of other persons.”
Contemporary commentators clearly saw the intimate connection between balance and attribution theory. For example, Deutsch (1954, p. 211) observed that “The nature of attribution which occurs in any particular instance will be influenced by the need to prevent cognitive imbalance.” Yet balance and attribution were to be treated quite independently by experimental social psychologists, and it was over 30 years before the relation between the two was studied experimentally by Brown and van Kleeck (1989; see also Crandall, Silvia, N’Gbala, Tsang, & Dawson, 2007, for further discussion of the relations between balance and attribution theory).
Heider’s balance model led to a number of demonstration studies (reviewed by Zajonc, 1968), but never generated a substantial programme of empirical research in the way that Festinger’s formulation of cognitive dissonance did. Heider’s work on attribution only began to exert an influence after the publication of his book in 1958 as, somewhat ironically, it was able to provide concepts that allowed understanding of how experimental participants responded to manipulations designed to induce cognitive dissonance. For example, Heider’s ideas enabled psychologists to better conceptualize the way participants restructured the field in terms of personal responsibility and internal–external attributions (Beauvois & Joule, 1996; Wicklund & Brehm, 1976).
Edward E. (“Ned”) Jones (1927–93) was well placed to make this crossover between dissonance research and attribution theory. Jones and Davis (1965) stayed close to the Viennese roots of Heider’s position, in their conceptualization of dispositional inference as a process of referring social perceptions (e.g., of a target person’s introverted or extraverted behavior) to a stable underlying source (her personality). Of course, the target person’s observable behavior may change as a function of situational demands, as she responds to self-presentational requirements of the kind analyzed by the sociologist Erving Goffman. Thus she may be by nature an extravert but be required to present herself as an introvert when interviewing for a job that requires this quality, such as working on a submarine (Jones, Davis, & Gergen, 1961).
Experimental social psychology also became self-referential in this period, using its own experiments as bases for generating new research problems. Whereas the essay-writing paradigm had originally been used to explore how role-playing influences attitude change, Jones turned it into a vehicle for attribution research. In the paradigm developed by Jones and Harris (1967), participants were placed in the position of an observer who had to make inferences about the underlying attitudes of experimental participants who had been required to write essays supporting counter-normative positions. Jones labeled the finding that observers apparently fail to discount the effect of situational pressure on the target’s behavior the “correspondence bias,” and considered it to be “a candidate for the most robust and repeatable finding in social psychology” (Jones, 1990). It led to a wealth of research on what came to be known as the “fundamental attribution error” (Ross, 1977). This bias involves a failure of discounting, a principle to which we return below.
As well as reiterating his earlier views on the perception of action and attribution to distal causes, in his book Heider introduced an analogy between commonsense psychology and scientific inference. Heider suggested that people often make inferences about the causes of events using the same logic of controlled inference that was recommended to experimental psychologists themselves (Woodworth, 1938, pp. 2–3). This is Mill’s (1872/1973) method of difference, which stipulates “that condition will be held responsible for an effect which is present when the effect is present and absent when the effect is absent” (Heider, 1958, p. 152). In making this suggestion, Heider was emphasizing the rational nature of the layman’s processes of causal inference, suggesting that it was deserving of careful study by the scientist at a time when behaviorism still reigned supreme.
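To see what this rule amounts to in practice, consider the following toy sketch (my own construction rather than Heider’s or Mill’s formalism; the observations and condition names are hypothetical): the condition held responsible for the effect is the one whose presence covaries perfectly with the presence of the effect.

```python
# A minimal sketch of Mill's method of difference as stated by Heider (1958, p. 152).
# Observations and condition names are hypothetical illustrations.

def method_of_difference(observations, conditions):
    """observations: list of dicts mapping each condition name to True/False,
    plus an 'effect' key. A condition is 'held responsible' when its presence
    covaries perfectly with the presence of the effect."""
    return [
        c for c in conditions
        if all(obs[c] == obs["effect"] for obs in observations)
    ]

# John laughs (the effect) when the comedian is present and not when the
# comedian is absent, regardless of whether other people are present.
observations = [
    {"comedian": True,  "others_present": True,  "effect": True},
    {"comedian": True,  "others_present": False, "effect": True},
    {"comedian": False, "others_present": True,  "effect": False},
]
print(method_of_difference(observations, ["comedian", "others_present"]))  # ['comedian']
```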
At about the same time, Brunswik coined the term “intuitive statistician,” an analogy that, like Heider’s, was meant to draw attention to the intelligent adaptations that an organism made to its unstable, changing, and uncertain environment. However, Heider eschewed probabilistic functionalism, and also differed from Brunswik in his level of attention to method. Brunswik gave detailed prescriptions and examples of how his lens model (derived from Heider’s ideas) should be operationalized. Yet, as Leary (1987) suggests, the effort demanded by these “representative” designs probably deterred others from adopting them at a time when the systematic manipulation of independent variables in an orthogonal experimental design represented the scientific ideal for psychologists. As we shall see below, the failure to pay careful attention to the operationalization of key concepts in experimental design was to have disastrous consequences for attribution theory, leading lay persons to be unfairly browbeaten for being “poor scientists” (Nisbett and Ross, 1980).
The first part of this story is a cautionary tale of how scientific rhetoric (in the form of ill-understood analogies, visual aids, etc.) has been allowed to obscure and confuse the nature of the object under study. In the 1960s, Harold Kelley (1921–2003) was already a prestigious researcher in the discipline, having originated from the Lewin group and spent time in the Yale Communication Project before going on to make significant contributions to social exchange theory (see van Lange, this volume). Following his admiring but somewhat selective review of Heider’s book (Kelley, 1960), Kelley (1967) picked up on Heider’s comparison of lay causal inference to scientific inference and recast it in terms of the statistical analysis of variance, which at this time was the preferred statistical technique of experimental psychologists (Gigerenzer, 1991). Kelley adapted the ANOVA analogy, as it came to be called, to the concerns of experimental social psychologists by explicating four highly recognizable sources of variance in observed behavior: the person, the situation, the circumstances, and the modality of observation. Inferences about the causal role of these variables would be based on the covariation of the effect with each of these sources.
Thibaut and Kelley (1959) had used visual aids to great effect in presenting economic game-theory matrices to represent joint payoffs of two individuals. Kelley hesitated before including analogous diagrams in his presentation, but once he had decided to throw one in, the “cube” never stopped rolling. It represented covariation of the target effect across three factors, yielding three dimensions of information: persons (consensus), stimuli or situations (distinctiveness), and circumstances or times (consistency). Modality was omitted from this highly visualizable cube (see Figure 3.5), and subsequently disappeared from further consideration in attribution research (to Kelley’s regret). As Kelley (1999, p. 40) was to put it: “If I had relied entirely on words, as I had originally intended, people wouldn’t have had the Kelley cube to play hacky-sack with all these years.”
Figure 3.5 Example of cubes corresponding to person (a), entity (b), and time (c) attributions. Reproduced from Kelley (1973) by Försterling (1989). Copyright © 1973 American Psychological Association. Reproduced with permission.
The first published test of the ANOVA model similarly had a hesitant start. Leslie McArthur, a graduate student of Robert Abelson’s at Yale, had intended to use another experiment for her doctoral dissertation. However, when that experiment failed, she obtained her advisor’s agreement to substitute her experiment on Kelley’s model (Roseman and Read, 2007). In this study McArthur used analysis of variance (of course) to analyze the relative effects of her manipulations of consensus, distinctiveness, and consistency on participants’ causal attributions, as measured by three forced-choice options (person, stimulus, circumstances), along with a fourth option in which participants could write in an interactional explanation. An example of a high-consensus, low-distinctiveness, and high-consistency (HLH) information configuration with response options is given below:
Participants who chose option (d) were asked to specify the particular combination of factors that they thought caused the event. Attribution to one interactional explanation (person by stimulus) was quite frequent, especially in response to LHH configurations, but other interactional responses were quite rare. The choice of response language had some unfortunate consequences. First, the omission of explicit interactional response options biased performance: when a full range of response options was provided, the proportion of interactional responses rose from 33% in McArthur’s experiment to 61% (Jaspars, 1983) and 47% (Hilton and Jaspars, 1987). In addition, the choice of the word “circumstances” to signify attributions to the time factor was ambiguous, as it could also imply interactions of other factors. In McArthur’s study “the particular circumstances” was used for 24% of responses, whereas this rate fell to 11% in a study that used “this occasion” as a response option (van Overwalle, 1997).
The bias in responses produced by the choice of response options enables us to understand why unexpectedly uneven effects of the experimental manipulations emerged. Thus consensus accounted for only 6% of the variance in person attributions, compared to 22% for distinctiveness and 16% for consistency. For stimulus attributions a similar pattern emerged, with consensus accounting for 5%, distinctiveness 12%, and consistency 6% of the variance. For circumstance attributions, consensus accounted for 0.30%, distinctiveness 8%, and consistency 41% of the variance. This pattern was replicated in a later experiment using a similar response methodology (McArthur, 1976). These results prompted the assertion that people “underuse” consensus information, an assertion that continued to be made (Fiske & Taylor, 1991; Försterling, 2001; Gilbert, 1998) long after repeated demonstrations that McArthur’s original findings were artefactual and that consensus information has a very strong effect on attributions when a full set of response options is presented, as we will see below.
Despite its somewhat haphazard début, McArthur’s experiment went on to attract considerable attention. Drawing on McArthur’s results, Orvis, Cunningham, and Kelley (1975) proposed a “template-matching” model which posited that distinctiveness information drives person attributions, consensus information drives stimulus attributions, and consistency information drives circumstance attributions. The result was a model that contradicted the implications of Kelley’s (1967) original proposition, a fact that neither Orvis et al. nor subsequent theorists (Anderson, 1978; Medcof, 1990) remarked upon. Attribution theory was then rescued from its conceptual and methodological slumbers by Joseph (Jos) Jaspars (1934–85), a Dutch psychologist working at Oxford who possessed a strong mathematical background. Jaspars returned to first principles and, in contrast to Orvis et al.’s bottom-up experiment-driven theory, proposed a top-down theory-driven experiment. Jaspars, Hewstone, and Fincham (1983) first proposed a formal “inductive logic” model of how Mill’s method of difference should be applied to patterns of consensus, distinctiveness, and consistency information. As they pointed out, the pattern of covariation information provided to participants in McArthur’s experiments did not respect basic principles of experimental design, as participants received only four of the eight cells of information that would be required in a fully crossed 2 × 2 × 2 experimental design that tests for the effect of three factors (person by stimulus by occasion). The celebrated ANOVA cube actually has cells of information missing in McArthur’s experiment (see Figure 3.6).
Figure 3.6 Missing dimensions of information in the Kelley cube. The shaded parts correspond to the cells of information that are actually given to participants in McArthur’s (1972) experiment: Target event (Cell 0), Consensus (Cell 1), Distinctiveness (Cell 2), and Consistency (Cell 3). Information corresponding to Cells 4, 5, 6, and 7 was not given to participants in McArthur’s experiment. From Cheng and Novick (1990). Copyright © 1990 American Psychological Association. Reproduced with permission.
Although a full-blown analysis of variance is not possible on what is technically a “fractionated block design,” properly conducted experiments that eliminated experimental artefacts from the response options showed that participants do use consensus information to make personal causal explanations in the manner that would be predicted by Mill’s method of difference (Jaspars, 1983; see also Cheng & Novick, 1990; Försterling, 1989; Hilton & Jaspars, 1987). A final clarification was brought to the ANOVA analogy when it became clear that the method of difference model (predicting consensus → person and distinctiveness → stimulus inference patterns) describes the logic of causal explanation of events, whereas a process involving Mill’s method of agreement (predicting the inverse pattern of distinctiveness → person and consensus → stimulus inferences) describes the logic of dispositional attribution to involved entities (van Overwalle, 1997). In sum, once the kinds of causal judgment have been properly distinguished (e.g., causal explanation vs. dispositional attribution), appropriate rules of inference specified (e.g., method of difference vs. method of agreement), and these models are tested with a properly designed response methodology, the layperson shows herself to be highly rational in experiments using McArthur’s (1972) paradigm.
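The underlying logic can be conveyed by a small illustrative sketch in Python. It is a deliberately naive rendering of Mill’s method of difference applied to a covariation configuration, not the formal inductive logic model of Jaspars, Hewstone, and Fincham (1983); the function name and response labels are introduced here purely for exposition.

def method_of_difference(consensus, distinctiveness, consistency):
    """A naive reading of Mill's method of difference for a covariation pattern.
    Each argument is 'high' or 'low'. The target event (target person, target
    stimulus, this occasion) is taken as given; a factor is implicated as a cause
    when changing that factor alone makes the effect disappear."""
    causes = []
    if consensus == 'low':          # other persons do not show the effect
        causes.append('person')
    if distinctiveness == 'high':   # the person does not show it toward other stimuli
        causes.append('stimulus')
    if consistency == 'low':        # the person did not show it on other occasions
        causes.append('occasion')
    return causes if causes else ['nothing in particular: the effect is general']

print(method_of_difference('low', 'low', 'high'))    # ['person']
print(method_of_difference('high', 'high', 'high'))  # ['stimulus']
print(method_of_difference('low', 'high', 'high'))   # ['person', 'stimulus'], the
                                                      # conjunction elicited by LHH

Note that the four unspecified cells of the cube, for example how other persons behave toward other stimuli, play no role in this computation; as the next paragraphs show, participants appear to fill them in from world knowledge.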
Gilbert (1998) suggests that Kelley acted like a “wise anthropologist” in translating Heider’s ideas into the language of experimental design and analysis of variance that psychologists had been trained in. Unfortunately, this same research community failed to understand the underlying logic of the analogy of lay causal inference with its favoured method of statistical analysis, the analysis of variance, and thus failed to test Kelley’s analogy properly. The result was confusion, with the layperson being roundly accused of underuse of base-rate information (e.g., Nisbett and Ross, 1980). In fact in some cases, the layperson was using base-rate information, in the form of implicit world knowledge, to flesh out the missing dimensions of covariation in a way that was consistent with the logic of ANOVA. In the example used above (Hilton & Slugoski, 1986), the target event is abnormal, as in general few people trip up over few other people dancing. When this low base-rate is taken into account to complete the data matrix (see the simplified square omitting the time dimension in Figure 3.7), it is logical to assume that there are two “main effects” in an informal analysis of variance, i.e., Ralph is clumsy and Joan is clumsy (Hilton, 1990). However, if the target event is normal (e.g., Sally buys something on her visit to the supermarket), then the implicit assumption is that there is a high base-rate (most people buy something on their visits to the supermarket). In this case, there is no effect at all to be explained, and participants indeed chose this null option (nothing special about Sally, the supermarket, or the occasion caused her to buy something on her visit to the supermarket).
Figure 3.7 Data matrices from high-consensus, low-distinctiveness information configurations as a function of high and low presupposed norms. From Hilton (1990). Copyright © 1990 American Psychological Association. Reproduced with permission.
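The role of presupposed norms in completing the matrix can be illustrated with a toy computation in the same vein (the function and its labels are of my own devising, a sketch of the reasoning rather than the formal treatment in Hilton, 1990).

def informal_anova(consensus_cell, distinctiveness_cell, norm_cell):
    """Arguments are cell values of the simplified person x stimulus square:
    True means the effect occurs in that cell.
    consensus_cell       -- other persons with the target stimulus (from consensus information)
    distinctiveness_cell -- the target person with other stimuli (from distinctiveness information)
    norm_cell            -- other persons with other stimuli (filled in from world knowledge)
    The target cell (target person, target stimulus) is True by definition."""
    cells = {
        ('target person', 'target stimulus'): True,
        ('other persons', 'target stimulus'): consensus_cell,
        ('target person', 'other stimuli'):   distinctiveness_cell,
        ('other persons', 'other stimuli'):   norm_cell,
    }
    effects = []
    # "Main effect" of the person: the target person shows the effect where other persons do not.
    if any(cells[('target person', s)] and not cells[('other persons', s)]
           for s in ('target stimulus', 'other stimuli')):
        effects.append('something about the person')
    # "Main effect" of the stimulus: the target stimulus elicits the effect where other stimuli do not.
    if any(cells[(p, 'target stimulus')] and not cells[(p, 'other stimuli')]
           for p in ('target person', 'other persons')):
        effects.append('something about the stimulus')
    return effects or ['nothing special to explain']

# Ralph trips over Joan's feet: high consensus, low distinctiveness, but tripping
# is rare in general, so the unspecified cell is filled in as False.
print(informal_anova(consensus_cell=True, distinctiveness_cell=True, norm_cell=False))
# -> ['something about the person', 'something about the stimulus']
#    i.e., two "main effects": Ralph is clumsy and Joan is clumsy

# Sally buys something at the supermarket: buying is common in general, so the
# unspecified cell is True and no contrast remains to be explained.
print(informal_anova(consensus_cell=True, distinctiveness_cell=True, norm_cell=True))
# -> ['nothing special to explain']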
In the above case, it was the “naive scientist” that proved to be statistically rational, by combining information provided by the experimenter with plausible assumptions based on experience of the real world. It is possible that the recognition of this process of integrating old and new information may have been hindered by the predilection of experimental psychologists to exclude subjective expectancies as “noise” that would contaminate the study of the “pure” effect of the experimentally manipulated information variables. A voluminous research literature has similarly concentrated on the effect of the information given in contingency tables on covariation detection and causal induction without addressing the question of how this information might be integrated with prior world knowledge. However, McKenzie and Mikkelsen (2007) have recently presented a “knowledge-based” Bayesian account, which argues convincingly that the seemingly “irrational” focus on cause-present effect-present information may be due to the assumption that contingency tables are usually labeled in such a way that these cases are rare. Causal hypotheses such as “Do patients with symptom x have disease y?” are normally formulated to test questions about rare cases (having symptoms and having diseases) rather than common cases (not having symptoms and not having diseases).
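The flavor of this rarity argument can be conveyed with a toy likelihood-ratio calculation (the probabilities below are invented for illustration, and the calculation is in the spirit of, not taken from, McKenzie and Mikkelsen’s account).

p_cause, p_effect = 0.1, 0.1   # both the cause and the effect are assumed to be rare

def cell_probs(p_effect_given_cause):
    """Joint probabilities of the four contingency-table cells, holding the
    marginal rate of the effect fixed at p_effect."""
    p_effect_given_no_cause = (p_effect - p_cause * p_effect_given_cause) / (1 - p_cause)
    return {
        ('cause', 'effect'):       p_cause * p_effect_given_cause,
        ('cause', 'no effect'):    p_cause * (1 - p_effect_given_cause),
        ('no cause', 'effect'):    (1 - p_cause) * p_effect_given_no_cause,
        ('no cause', 'no effect'): (1 - p_cause) * (1 - p_effect_given_no_cause),
    }

independent = cell_probs(p_effect_given_cause=0.1)   # no contingency
contingent  = cell_probs(p_effect_given_cause=0.5)   # the cause raises the effect

for cell in [('cause', 'effect'), ('no cause', 'no effect')]:
    likelihood_ratio = contingent[cell] / independent[cell]
    print(cell, round(likelihood_ratio, 2))
# ('cause', 'effect') 5.0        -- a joint-presence case is highly diagnostic
# ('no cause', 'no effect') 1.05 -- a joint-absence case is nearly uninformative

Under the same toy assumptions, making the cause and effect common rather than rare reverses the asymmetry, which is the sense in which attention to cause-present, effect-present cases can be rational rather than a bias.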
Causal discounting refers to the process whereby we believe less in a focal cause once we know that an alternative cause may cause the effect in question, as when we disbelieve that a favorite celebrity really likes the product she is endorsing, because we know that she has been paid to participate in an advertisement (Kelley, 1973). Work on discounting tended to assume that the competing causes were “internal” (e.g., the celebrity’s honesty) or “external” (e.g., the fact the celebrity had been paid), and that these factors were related in a mutually exclusive fashion—thus the more one attributes the effect to internal causes, the less one attributes it to external causes, and vice versa. This “hydraulic assumption” about causality had slipped early into attribution theory and measurement methodology through Thibaut and Riecken’s (1955) experiment, in which participants had to attempt to persuade another participant (in fact a confederate) to do something (e.g., give blood). Drawing on analogies with studies by Heider (1944) and Michotte (1946) on visual perception of physical power and causation, they reasoned that acquiescence to high-power confederates would be attributed to internal forces, whereas acquiescence to low-power sources would be attributed to external forces. They gave participants forced-choice options, and their hypotheses were confirmed. Researchers then spent considerable time and energy trying, unsuccessfully, to prove the hydraulic assumption in causal attribution (e.g., Miller, Smith, & Uleman, 1981).
The fundamental attribution error (the overestimation of the importance of person causes and the underestimation of the importance of situational causes) became the cornerstone of an attack on lay rationality (Nisbett & Ross, 1980; Ross, 1977), as it represents a special case of under-discounting. In some cases, it was clear that people under-discount, as when people’s attributions and predictions for obedience in Milgram’s experiment were compared to the obedience rates actually observed. However, cases such as Milgram’s experiment, where “actuarial” data was in fact present, allowing conclusions to be drawn as to whether people under-discount or not, were few and far between (Hilton, 2007). Indeed, research proved to be inconsistent and inconclusive as to whether people discount appropriately or not. Fiske and Taylor (1991, p. 38) reported that “Research conducted on the discounting principle suggests that sometimes it is strong (Jones, Davis, & Gergen, 1961), sometimes weak (Jones & Harris, 1967), and sometimes virtually absent (Messick & Reeder, 1974; Napolitan & Goethals, 1979).” Despite the centrality of the topic to the evaluation of the layperson’s rationality in causal inference, analysis of the discounting problem was plagued by poor definition of the judgments under study, and a failure to translate the rhetoric of the “man-the-scientist” model into a clear model of normative inference against which performance could be evaluated.
As with the ANOVA analogy, it took time for clarity to be brought to the problem. Some order was brought into the area by McClure (1994), who identified a number of problems in the extant literature. One was the failure to distinguish causal discounting (reducing one’s belief that an explanation is true) from causal backgrounding (omitting to mention an explanation because it is no longer relevant to mention in a conversationally given explanation, even if it is still considered to be true). Another was the unicausal assumption, which had become enshrined in response methodology through the use of bipolar scales. This assumption was in fact inconsistent with the ANOVA analogy, which presupposed that the lay scientist analyzed person and situation information along dimensions of covariation that were crossed in an orthogonal design, and thus allowed for events to have both internal and external causes (Hilton, 2007).
Morris and Larrick (1995) proposed a theoretical model of discounting that resolved many of the problems identified above. Theirs was a Bayesian model which included many Brunswikian features, such as assumptions about the diagnosticity and intercorrelation of cues. For example, rather than assume a hydraulic relation between causes, they included a parameter that modeled the perceived covariation between the two hypothesized causes (are they uncorrelated as in the ANOVA design, or do they correlate positively or negatively?). They also included elements that corresponded to the assumed base-rate probability of an outcome as well as the perceived sufficiency of these causes for the effect in question. Extending the knowledge-based approach introduced by Hilton and Slugoski (1986), they replicated Jones and Harris’s (1967) experiment, but in addition collected information about participants’ assumptions concerning subjective probabilities and cue diagnosticity. They were thus able to reproduce the patterns of attribution obtained by Jones and Harris, but they also showed that these causal judgments were highly rational given participants’ assumptions, as they corresponded with what would be expected from a Bayesian model of inference that used the participants’ subjective probabilities as inputs (see Figure 3.8). Morris and Larrick’s study makes it clear that participants showed near-perfect Bayesian rationality in their use of their own assumptions. If there is an error in their judgment, its origin is to be found in their assumptions, not in their reasoning.
Figure 3.8 Subjective probabilities and observed dispositional attributions (with rational benchmarks given Bayesian inference from subjective probabilities in parentheses). Adapted from Morris and Larrick (1995).
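The spirit of their analysis can be conveyed by a minimal Bayesian sketch (the subjective probability values below are invented for illustration, and the sketch omits Morris and Larrick’s parameters for the covariation of causes and the base rate of the outcome).

def posterior_disposition(prior_attitude, p_essay_given_attitude, p_essay_given_no_attitude):
    """Bayes' rule for the probability that the essay writer really holds the
    expressed attitude, given that a pro-attitudinal essay was observed.
    All inputs are the perceiver's subjective probabilities."""
    numerator = prior_attitude * p_essay_given_attitude
    denominator = numerator + (1 - prior_attitude) * p_essay_given_no_attitude
    return numerator / denominator

# No-choice condition: the situational cause (being assigned the position) is seen
# as nearly, but not completely, sufficient, so a pro essay is only slightly more
# likely from a believer than from a non-believer.
print(round(posterior_disposition(0.20, 0.95, 0.80), 2))   # 0.23: modest dispositional inference

# Free-choice condition: a non-believer is thought unlikely to volunteer a pro essay,
# so the same evidence is highly diagnostic of the writer's disposition.
print(round(posterior_disposition(0.20, 0.95, 0.05), 2))   # 0.83: strong dispositional inference

On assumptions like these, drawing some dispositional inference even in the no-choice condition is not a failure of reasoning; it follows from the belief that the situational pressure is less than fully sufficient to produce the essay.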
As we have seen, Heider’s ideas were operationalized by a generation of scientists who had been brought up with the “bible” of systematic experimental design and analysis of variance. However, their understanding of this analogy was more “ritualistic than rational” (Gigerenzer, 1991). The result was that rather than highlighting the rationality of lay inference (as Heider originally intended), the “man-the-scientist” analogy would prove to be a double-edged sword, whereby the layman came to be berated for his lack of rationality. But the normative models were often ill-conceived and poorly operationalized, and lay inference turned out to be much more rational when methodologically sound experiments were used to test properly interpreted models of the “naïve scientist.”
Unlike impression formation or cognitive dissonance theory, attribution theory has remained an active field of research (see Figure 3.1). While new perspectives have emerged, such as “mindlessness” (Langer, 1978) or the knowledge-structure approach (Abelson & Lalljee, 1988; Read, 1987), it is striking how much of the subsequent research has been devoted to correcting the errors of the early “classic” studies (see Gawronski, 2004; Hilton, 2007; Malle, 2004 for reviews). Nevertheless, these proposed corrections have not yet been assimilated by the gatekeepers of the discipline. Despite the historical importance of the ANOVA analogy for social psychology, major handbook chapters have not given a detailed treatment of the issues raised above (e.g., Anderson, Krull, & Weiner, 1996; Gilbert, 1998). Likewise, the same errors of prediction about interactional attributions from the configurations of covariation information in Kelley’s cube continued to turn up like bad pennies in social cognition textbooks (Delhomme et al., 2005, p. 87; Fiske & Taylor, 1991, p. 35; Yzerbyt & Schadron, 1996, p. 68). There may be a number of reasons for this. Malle (2008) suggests that social psychologists considered Heider’s project of explicating commonsense psychology to risk the banal fate of becoming “bubba psychology,” and hence focused more on errors and biases in judgment, which carried more news value.
In addition, the attribution revisionists have not come from the heart of American social psychology. For example, Morris and Larrick’s (1995) groundbreaking analysis has attracted relatively little research attention, in part because the authors went on to concentrate on topics such as cross-cultural research and negotiation, which were more relevant to the business schools that recruited them. Cheng and Novick were cognitive psychologists, and Jaspars died shortly after his reanalysis of the ANOVA analogy. Finally, most of the later conceptual revisions of attribution theory were made by Europeans, whose more analytic temper may not chime with the more empiricist orientation of American social psychology.7

Attribution theory has contributed to a significant revival in research on the relationship between language and social cognition. An oasis in this desert after the decline of linguistic relativity research was to be found at Yale, where Robert Abelson and his students performed elegant studies relating verb types to subjective generalizations (Gilson and Abelson, 1965; Kanouse and Abelson, 1967) and causal attributions (McArthur, 1972). The “verb effect” on causal explanation (demonstrated in the lesser-known part of McArthur’s classic paper) was rediscovered some years later by Brown and Fish (1983), whose masterly analysis in turn was to inspire the Linguistic Category Model (LCM) of Semin and Fiedler (1991). The LCM did not introduce any major new distinctions in kinds of verbs beyond those of Abelson and his students (Rudolph and Försterling, 1997), but it refocused attention on the ways that language could be used to influence judgments about other persons and other groups. For example, Maass, Salvi, Arcuri, and Semin (1989) used the LCM to show that abstract (e.g., dispositional) terms were more frequently used to characterize negative actions when an outgroup member performed them than when an ingroup member did so. This linguistic intergroup bias (LIB) showed that coding of free language descriptions could reproduce the same pattern of intergroup bias in attribution revealed by more conventional rating procedures (Taylor & Jaggi, 1974).
The insights of attribution theory have had considerable impact on other areas. Within social psychology, examples include minority influence research (Maass and Clark, 1984), intrinsic motivation (Kruglanski, 1975), achievement motivation (Weiner, 1985), and aggression (Dodge, 1980) in an application already anticipated by Heider and Simmel (1944). Attribution theory has also had an influence outside of social psychology, for example in clinical psychology through the attributional reformulation of learned helplessness theory (Abramson, Seligman, & Teasdale, 1978) and on medicine through contributing to the understanding of placebo effects (Valins and Nisbett, 1972). This applied success has been a major motivating factor for theoretical revisions that help understand how attributions affect behavior. For example, Friedrich Försterling (1953–2007) was moved to better specify the normative inference rules of causal attribution in order to be better placed to evaluate the efficacy of attribution-based interventions in clinical psychology and education. Försterling and Morgenstern (2002) showed that accuracy in self-assessment led to subsequent success, thus contesting the view that positive illusions about self are the key to success (Taylor & Brown, 1988).
Given attribution theory’s continuing importance in generating applications, it becomes appropriate to ask how its history should be represented to succeeding generations. We may reasonably expect gatekeepers (textbook and manual writers) to distinguish “signal” from “noise” by writing about major contributions (“hits”) and passing over less important contributions (“correct rejections”). But in looking at the representation of attribution theory it seems that some important distinctions were not made (“misses”), such as the failure to properly understand the normative nature of the ANOVA analogy, with its accompanying failure to recognize that people reliably (and rationally) use world-knowledge in causal attribution, discounting, and induction. Conversely, many “findings” have with time appeared to be chimerical, and may be characterized as “false alarms.” Here we may cite the longstanding claim that people “underuse” consensus information as a false alarm. Other examples of false alarms may include Gilbert, Pelham, and Krull’s (1988) well-known study, which continues to be cited as supporting the existence of the correspondence bias (e.g., Gilbert, 1998), despite the accumulation of evidence that the “overattribution” effect may be due to a procedural artefact (only person dispositional attribution questions are posed to participants). Indeed, when only situational attribution questions are posed, the effect is reversed (Krull, 1993; Krull & Erickson, 1995; for related arguments and data, see also Trope & Gaunt, 2000; Webster, 1993).
Attribution theory seems to offer many examples of over-concentration on some questions that has led to neglect of others. For example, Malle (2008) argues that the field’s obsession with the internal–external distinction obscured an equally important distinction that Heider made between intentional and non-intentional action, which he referred to as personal and impersonal causality. While the distinction received early attention in social psychology (e.g., Locke and Pennington, 1982), it has proved of great importance in research in developmental psychology and neuroscience on “theory of mind” (Hilton, 2007; Malle, Moses, & Baldwin, 2001). Malle (2006) argues that there is no evidence for an actor–observer asymmetry when analyzed through the lens of internal–external differences, thus implying that the actor–observer effect as described by Jones and Nisbett (1972) is a “false alarm.” However, Malle, Knobe, and Nelson (2007) suggest that researchers have missed another actor–observer difference in causal explanations that can be reliably described and measured in terms of the theory of mind approach implicit in Heider’s original notion of personal causality.
A common theme of the biographical accounts of the career choices of those who were to distinguish themselves in social psychological work in the postwar “golden age” (Pepitone, 1999) was how many of them had not heard of social psychology when they began their studies. The discipline grew and rapidly changed form during the postwar period, with the result that early sources of inspiration such as Solomon Asch and Fritz Heider seem to belong to a different period of history. Both born in central Europe and good friends, they seem to fit an older tradition of gentleman scholars or philosophes. One can easily imagine them passing a pleasant evening with Hume and Smith round a warm fire in 18th-century Edinburgh, conducting a civilized disquisition on associationist theories of causation and equilibrium theories of social processes. After the Second World War the model of science changed to one of “Big Science,” with the dominance of well-funded research universities that became “dissertation factories” where graduate students performed tight, controlled experiments that were analyzed with statistical packages contained in high-speed computers.
Although Asch and Heider had immense success in getting their ideas accepted in the major research universities, their use of the experimental method was not only sparing, it was different in style from that of many of their contemporaries. For example, through his extensive use of post-experimental interviews, Asch was able to show that source effects on judgment represented a rational “change of meaning” in the object of judgment, not an irrational change in the evaluation of the object as proposed by reinforcement theorists. Likewise he presciently argued that social psychologists’ failure to understand what people meant when they made a generalization about a group had led them to an unwarrantedly pessimistic conclusion about the rationality of social stereotypes. It has been instructive for the present writer to realize how much Asch’s arguments prefigured his own use of the logic of conversation (Grice, 1975) to reanalyze conclusions about the rationality of human judgment (Hilton, 1995). It is also striking how little researchers in social psychology and judgment and decision-making research heeded Asch’s example in the intervening years. Conscious attempts to integrate the insights of the past, such as Read, Vanman, and Miller’s (1997) use of connectionist principles to model Gestalt phenomena, are rarer than some would like.
The theory of cognitive dissonance also represented a full-scale onslaught on the rational picture of man. This time reinforcement theory, by now reinvented as “incentive theory,” was enlisted to argue for the rationality of attitude change in cognitive dissonance experiments. Once again we see the importance of attention to the experimental participant’s interpretation of his task. “Evaluation apprehension” was an issue for an experimental participant who wanted to protect his image as a moral person resistant to experimental bribes to do something dubious, such as tell a lie or work for the enemy. Attention was paid to how participants restructured the field through perceptions of free choice, responsibility, and engagement, and these concepts set the stage for an explosion of research on the self and attribution theory. An attention to the phenomenology of the subject came in through the back door of reinterpretations of the meaning of the experiment for the participants. But one wonders whether much effort might have been saved from the start if experimenters had listened as intently to their subjects as did researchers in the Wundtian tradition familiar to Asch and Heider.
As we have seen, problems in the assumptions that guided work in attribution theory during the 1960s and 1970s have prompted significant revisions to be attempted from the 1980s onwards. Science should of course be self-correcting, and hypotheses should be formulated in ways that allow experimental disconfirmation (Popper, 1972). But even when the experiments have been done, the history of social psychology suggests that it can take a long time for corrected findings to be transmitted through gatekeepers to the rest of the field. In the case of attribution theory, it seems as if the field has a certain “image” of what it has learnt from this source in the 1960s through to 1980, which it is reluctant to correct.
A pervasive feeling among some of the leading figures in postwar social psychology, such as Deutsch (1999) and Gerard (1999), is regret for the disappearance of psychodynamic approaches from contemporary social psychology. It is easy to see that the risk of a loss of collective wisdom is real, as psychodynamics disappeared from Lindzey and Aronson’s Handbook between 1968 and 1985. Although it is possible to give well-known concepts such as groupthink a shiny new cognitive gloss (by referring to collective overconfidence, confirmation bias, insufficient search for information and alternatives, etc.), a researcher who fails to recognize its intellectual origins in the Freudian concept of repression has surely overlooked something important. And if psychoanalytic techniques for achieving self-insight have proved useful in reducing prejudice (e.g., Sarnoff, Katz, and McClintock, 1954), then one may expect them to be used, regardless of their intellectual heritage.
Social psychology does seem to have a disquieting tendency to forget theories rather than to disprove them. For example, careful analysis suggests that many behaviorist theories were not falsified by critical experiments. In their review of papers published in the Journal of Abnormal and Social Psychology in 1960, Berger and Lambert (1968) show that behaviorist approaches were primarily used to explain actions and behavioral responses, whereas cognitive and psychoanalytic approaches were primarily used to explain attitudes and judgments. It seems then that behaviorism was less falsified than simply forgotten. While the legacy of the behaviorist approach continued to be felt in specific theories such as exchange theory (Thibaut and Kelley, 1959) and general topics such as the problem of the discrepancy between verbal attitudes and behavioral measures, social psychology entered a cognitive phase in which behavioral measures were discarded and verbally or numerically expressed judgments became the primary dependent measure used in research. Tellingly, behavioral and experimental economists retained the commitment to methodological behaviorism. Later, this led them to criticize much social psychological research for its failure to validate theories with real monetary incentives and measures of real behaviors (Hilton, 2008).
As well as general approaches, particular research topics seem to have waxed and waned in social psychology before suddenly sinking without trace, unremarked and unmourned, into the depths of oblivion. Even if not much progress is currently being made on a given topic, there would seem to be cogent arguments for preserving signposts (e.g., in handbook chapters) to help researchers find their way back to important sources. To make this point, I return to the domain of language, and give two examples that seem plausibly to illustrate the role of fashion. One is a phoenix that has risen from the ashes of neglect (cognitive tuning), and the other a bird that has quite simply migrated into the obscurity of other disciplines (linguistic codes and thinking).
Robert Zajonc (1923–2008) was a mercurial social psychologist who worked in a variety of fields involving cognition and emotion in social psychology, often with a strong emphasis on man’s biological heritage. In 1960 he published two papers that have had quite different citation careers (see Figure 3.9). His concise and perceptive review paper on cognitive consistency theories was immediately well received, being extensively cited in the 20 years after its appearance and then declining (with a slight regain of interest in the past few years, perhaps reflecting a revival in consistency theory). This pattern is not unusual, reflecting a classic pattern of citation half-life as well as the heyday of consistency theories. The second paper, on cognitive tuning, shows a quite different pattern of citation, starting slowly, then increasing its citations after 1980 and reaching a peak in the period 2000–4, fully 40 years after its publication. How can this be explained? How is it that two papers by the same prestigious author can have such different citation careers?
Figure 3.9 Number of citations per 5-year period of Zajonc’s (1960a, 1960b) articles on cognitive consistency (blue) and cognitive tuning (red). Source: Google Scholar.
The explanation would seem to lie in the Zeitgeist of the period. Zajonc’s work on cognitive tuning was an early demonstration of a functional approach to communication, whereby a message sender adapted (“tuned”) her message so as to maintain interpersonal harmony with its recipient. The article appeared just one year after Chomsky (1959) published his devastating attack on Skinner’s (1957) book Verbal Behavior, which led to a period of ascendancy for the structuralist approach to language, marked by attention to the acquisition of syntax and semantics (e.g., Brown, 1973). Those who have actually read Skinner’s book will have realized that Chomsky’s critique was in many ways beside the point, as Skinner was not interested in syntactic issues. Rather, he focused on the functional aspect of language through his treatment of the instrumental use of mands (e.g., commands and demands) to obtain what the organism wants. For these reasons, Skinner (no doubt to his cost) disdained even to reply to Chomsky. It was only when philosophers reintroduced the functional analysis through speech act theory (Austin, 1962) that linguists and psychologists once again adopted a functional approach, albeit with a new cognitive focus on “shared intentionality” (Bruner, 1975; Clark, 1985). Unsurprisingly, it was around this time that Zajonc’s work on cognitive tuning began to be recognized as the groundbreaking study that it was, and increasing attention began to be paid to it. It seems that the cognitive revolution in general and Chomsky’s gift for polemic in particular may have delayed attention to the communicative functions of human language.
In contrast to research on cognitive tuning, other topics seem to drop off the map without evident explanation. This seems to have been the case with the work by the British sociologist Basil Bernstein on the relation between social class and the use of “restricted” and “elaborated” speech codes. Bernstein’s work attracted substantial attention in the 1968–69 Handbook of Social Psychology, being mentioned by Tajfel (1969) and discussed at length by Zigler and Child (1969) in their chapter on socialization, and by Miller and McNeill (1969) in their chapter on psycholinguistics. At the time there was a lively controversy over whether the exclusive use of restricted codes by working-class mothers played a role in the subsequent failure of their children at school. One implication of Bernstein’s work was that the access to an elaborated, formal code (use of more complex sentences, greater use of logical function words such as if, because, before, after, etc.) would provide an advantage to middle-class children when they went to school. Bernstein’s ideas were debated across the Atlantic, because of their implications for controversy over race, schooling, and intelligence (e.g., Jensen, 1969). His ideas were notably contested by the American linguist William Labov (1970), who argued that the speech style of Black American English in no way prevented its speakers from formulating logically complex propositions. Nevertheless, at the height of this controversy, a thoughtful review by Robinson (1972) concluded that the debate had not been settled. If anything, later work has confirmed the intergenerational stability of maternal tutoring styles. Thus Robinson (2003) reviewed work which suggests that women use the same tutoring style with their children that they experienced with their own mothers. Given the social significance of the topic and the continuing social inequality in advanced societies, one could have hoped for a better resolution of this issue. Yet one searches for answers in vain in later handbook chapters on language in social psychology. Tellingly, the only mentions to be found of Bernstein’s name are in footnotes to chapters by Krauss where he distinguishes his (cognitivist) use of the term “code” from that of Bernstein (Krauss & Chiu, 1998; Krauss & Fussell, 1996). Nor is any mention made in a 2001 handbook of language and social psychology (Robinson & Giles, 2001). It seems that the topic has quite simply migrated out of social psychology, as interest in Bernstein’s work remained strong in sociology (Sadovnik, 2001). There are several reasons that one may adduce for this. One is that there has been a decline in interest in the related topic of the effect of schooling and literacy on thinking (Cole and Scribner, 1974). But another is surely the internal evolution of social psychology, notably the disappearance of socialization as a major topic in social psychology. Whatever the reason, there must be concern that an important societal question has not been fully addressed. For while sociolinguists are well placed to document the existence of restricted and elaborated codes in different social groups, it is surely psychologists that are best placed to evaluate whether and how these codes have an effect on thinking styles and scholastic performance.
History often serves an identity function for a group, in explaining who they are, how they came to get that way, and what their mission should be (Hilton and Liu, 2008). Events are selectively “remembered” that serve as “charters” that justify the current constitution of the group (cf. Malinowski, 1926). Collective memories of ingroups tend to focus on positive deeds to the exclusion of negative ones (Sahdra & Ross, 2007). The risk is evident that scientific histories that are written by practitioners will be self-serving in this way, and some historians of psychology have criticized just such a “presentist” or “Whig” approach to the history of social psychology (Farr, 1996). For example, Danziger (2000) suggests that the choice of social facilitation as the “first” area of experiment in social psychology is less a matter of historical fact than an attempt at an “origin myth” reflecting the field’s current interests at the time the history in question was written (Allport, 1954).
In this respect, it is striking to note the distinct lack of trium-phalism in the retrospective evaluations of social psychology given by those commonly regarded as founders of the discipline. Festinger led the way by voting with his feet, leaving social psychology in 1964 and psychology altogether in 1968 to work on what has come to be called “cognitive archeology.” When explaining his move into a new field he wrote, “Forty years in my own life seems like a long time to me, and while some things have been learned about human beings and human behavior during this time, progress has not been rapid enough; nor has the new knowledge been impressive enough. And even worse, from a broader point of view, we do not seem to have been working on many of the important problems” (Festinger, 1983). Festinger’s complaints about fragmentation of knowledge were echoed by Kelley (1983), who compared the list of topics in social psychology to a “Sears and Roebuck catalogue.”
Nor did the new “person memory” (Hastie et al., 1980) and “social cognition” approaches (North and Fiske, this volume) meet with unreserved acclaim from the old guard. The old focus on social judgment processes changed, as new techniques were introduced to track knowledge-activation and memory processes. These included word-priming techniques, adapted from psycholinguistics to study social perception (Higgins, Rholes, & Jones, 1977), and signal detection theory (Tanner & Swets, 1954), which was drawn upon to show how stereotyping could interfere with memory in the “Who said what” paradigm of Taylor, Fiske, Etcoff, and Ruderman (1978). But the introduction of new theories and techniques from the revitalized cognitive psychology did not prevent Asch (1987) from lamenting the new “cognitive behaviorism” and the “expansion of surface rather than depth” since the first edition of his book. He noted that “Busyness is no substitute for serious analysis,” and used this as an argument for the reissue of his own book. Furthermore, he recommended Heider (1958) and Brown (1986) for further reading!
Asch’s regrets about the “outcropping of piecemeal ways of thinking during a supposedly Gestalt revolution” had been anticipated by Heider in his 1976 address to the Society of Experimental Social Psychology on receiving their Distinguished Senior Scientist Award. As Isen and Hastorf (1982) recounted the scene, “Spreading the fingers of his hand in illustration, he described how, years before, he and his contemporaries had introduced ‘peninsulas’ of thought and study that they hoped subsequent generations would extend and weld into a broad, solid foundation for the field. Instead of the hope for development, broadening, and integration of these ideas, however, he said softly, ‘it seems that the peninsulas have become’—and he paused, shrugged his shoulders, and smiled gently—‘insulas’.”
Bruner and Tagiuri’s (1954) conclusion about work on person perception that there has been an “excess of empirical enthusiasm and a deficit of theoretical surmise” could easily be extended to all the research fields reviewed in this chapter. There has been a tendency for model construction to be bottom-up and inductive (driven by experimental results) rather than top-down and hypothetico-deductive (driven by theory). This runs the risk of inappropriate models being adopted if there are artefacts or omissions in the experimental procedure, as seems to have been the case in “classic” experiments in causal attribution and illusory correlation. In addition, a recurring phenomenon in the field has been the need for a reinterpretation of experimental results in a way that takes the human participant’s interpretation of her task better into account. While social psychology always seems ready to adopt the latest “scientific” technique (priming, fMRI, etc.), it is still not clear that this fundamental lesson has been fully learnt. Finally, social psychology has not succeeded in identifying simple and elegant cognitive models that explain a wide range of data with a minimum of assumptions in the manner of Rescorla and Wagner’s (1972) model in learning theory. Such models are common in other fields, such as the use of Grice’s (1975) logic of conversation in linguistics and philosophy, and the theory of expected utility in economics. While the attempt to construct such all-encompassing theories has been made in social psychology (e.g., Festinger, 1957), it has met with failure. This contributed to the growing sense of crisis in the discipline in the 1970s and 1980s (Jackson, 1988).
The kinds of theories studied in social psychology during this period have changed from the larger frameworks described at the beginning of this chapter to “mid-range” and even “mini-range” theories (Kruglanski, 2001). Insiders complain of the lack of theoretical ambition of social psychology, and sympathetic outsiders who wish to learn something useful from social psychology often complain of the proliferation of mini-theories that “explain (next to) nothing.” They are often economists, who are busy raiding social psychology’s rich store of empirical knowledge for material to use in their revisions of theories of economic rationality. Much of this (e.g., Brocas and Carrillo, 2003) draws on ideas that were generated by the research reviewed in this chapter. However, the different tastes in theory construction between economists and psychologists can be represented by an indifference curve (Rabin, 2002; see Figure 3.10). Economists are prepared to sacrifice psychological realism in their theories of human behavior if it brings them the greater formal tractability that allows them to make usable predictions about economic behavior. Psychologists on the other hand are more willing to sacrifice theoretical ambition for empirical adequacy. While one can admire the elegance of economists’ theories, we should also remember that if there is a social science that one could readily criticize for an “excess of theoretical surmise and a deficit of empirical enthusiasm,” it would have to be modern economics. A challenge for future generations of social psychologists will therefore be to find the right balance between theoretical elegance and predictive accuracy.
Figure 3.10 Indifference curves for theory preferences in economists and psychologists. From Rabin, 2002. Copyright © 2002 Elsevier. Reproduced with permission.
I thank Laetitia Charalambides and Véronique Turbet-Delof for their help in the preparation of this chapter, and Jerome Bruner, Gerd Gigerenzer, Craig McKenzie, Richard Nisbett, Robin Murphy, Peter Robinson, and especially Bertram Malle for their helpful feedback on an earlier draft.
I heard remarks to this effect on more than one occasion from American social psychologists in the 1980s and 1990s, including one of Brown’s successors at Harvard.
The upshot was that Roediger eventually changed back to cognitive psychology, having read Neisser’s (1967) classic text on the subject, where he found material on language, memory and other topics that Brown had treated under the rubric of social psychology.
Behaviorism was a product of the more general “physics envy” of the social sciences, which sought to adopt the techniques that had proved so successful in the physical sciences. Each discipline would seek to adopt best practices from physics. Sociology and anthropology thus adopted functional perspectives following Parsons and Malinowski. Gestalt psychology was also subject to this “physics envy,” with its now discredited postulate that perceptual Gestalts were the result of equilibria in electrical states of brain activation.
Razran might have a problem getting an experiment with one subject published today, especially as that subject was Razran himself. The only journal today that systematically publishes data from one (animal) subject is the Journal of the Experimental Analysis of Behavior, founded by B. F. Skinner. However, Razran published his experiment at a time when psychology was still much closer to the Wundtian ideal of a single, “expert” subject, often the experimenter himself. It was during the 1920s and 1930s that psychology adopted the practice (which originated in educational psychology) of experimenting on groups of subjects and then using statistical inference to analyze mean tendencies. In the period 1934–36, 29% of publications in the Journal of Experimental Psychology were still based on analysis of individual data (Danziger, 1987).
They did not, however, statistically test the differences in correlations between codability and recognition.
Festinger (1989, p. 562) was nevertheless critical of high-impact experiments that in his view did not serve a clear theoretical point, such as the Stanford prison experiment.
It is striking how many of those who have sought to revise the theoretical bases of attribution theory are of European origin (Kruglanski, Jaspars, Hilton, Försterling, van Overwalle, Malle, Gawronski). Heider was once heard to remark about his American colleagues: “Ah these Americans! They work so hard, but they never stop to think!” (Recounted to the author, with a Norwegian impression of an Austrian accent, by Ragnar Rommetveit in 1980.)