For updates on the Museum’s work towards the repatriation and burial of the Morton Collection, please refer to this page.
Craniology is the study of the skull. In the last few hundred years, craniological methods, like measuring the angle of the face, the size of the braincase, or the ratio of the length to the breadth of the head, have been used to classify people into racial groupings, to make claims about alleged differences in intelligence, and to study human variation. Medicine, anatomy, and art were all important to the development of craniology. Furthermore, craniology’s association with “race science” gave it widespread influence through the early 20th century and closely connected craniology to the development of physical anthropology.
Differences in the physical and behavioral characteristics of peoples have been noted since antiquity. Records from ancient Greek, Roman, Egyptian, Hebrew, Chinese, and other civilizations remarked upon the distinctions among the languages, customs, and appearance of peoples. However, it was not until the rise of European exploration, conquest, and colonialism that something like the modern concept of “race” began to take hold. This idea stipulated innate, inherited, unchanging differences, behavioral as well as physical, in human groups.1 One of the first Europeans to classify humans based on race was the French physician and traveler François Bernier (1620-1688). In 1684, Bernier classified humans into distinct races based on geography.2 Bernier’s classification was typical of those that followed through the 18th century, which were based on observations and reports from travelers, missionaries, and colonial officials. These reports often supplied highly ethnocentric descriptions of skin color, temperament, and other features which were used to classify human races. Some of these authors spoke from the authority of their own experience in Africa, Asia, or the Americas, as did Bernier. Other authors relied on their scholarly expertise as naturalists, as did, for example, the Swedish taxonomist Carolus Linnaeus (1707-1778).3 All of these early racial classifications were largely the product of written field reports and observations until the late 18th century and the rise of comparative anatomy.
In the waning decades of the 18th century, the collection, trade, dissection, and study of bodies and bones provided an anatomical basis for racial divisions. Anatomical analysis allowed for repeated observation and precise measurement, and was, therefore, considered more consistent than conflicting and hard-to-verify field reports. Since Hippocrates (c. 460-370 BCE) in ancient Greece, physicians had studied the skull. But it was not until anatomist-artists such as Leonardo da Vinci (1452-1519), Albrecht Dürer (1471-1528), and, especially, Andreas Vesalius (1514-1564) conducted systematic, empirical studies that variations in the shapes of skulls were finally recognized.4 The direct application of craniology to racial divisions traces to about 1770, when anatomist and artist Pieter Camper (1722-1789) devised the “facial angle” to distinguish among different human races and apes.5 This angle is formed between a line running from the forehead to the most projecting point of the jaw and a line running from the ear opening to the base of the nose. A larger angle indicates a more vertical forehead and less projecting jaw, while a smaller angle indicates a more sloping forehead and more projecting jaw. For Camper, the higher European forehead and less pronounced jaw were closest to the (anatomically impossible) Greco-Roman ideal of beauty, while the African was furthest from this ideal and closest to the ape.6 Differentiating human races from one another, and humans from animals, would remain the primary aims of craniology.
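Camper’s construction can be illustrated with modern coordinate geometry. The sketch below is purely a present-day illustration, not a historical procedure: the landmark coordinates, function names, and profile orientation are all hypothetical.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def facial_angle(forehead, jaw, ear, nose_base):
    """Camper's facial angle: the angle between the facial line
    (most projecting point of the jaw up to the forehead) and the
    horizontal line (base of the nose back toward the ear opening)."""
    facial_line = (forehead[0] - jaw[0], forehead[1] - jaw[1])
    horizontal = (ear[0] - nose_base[0], ear[1] - nose_base[1])
    return angle_between(facial_line, horizontal)

# Hypothetical profile coordinates (x toward the face, y upward).
# A perfectly vertical facial line gives an angle of 90 degrees:
print(round(facial_angle(forehead=(10, 12), jaw=(10, 0),
                         ear=(0, 2), nose_base=(10, 2))))  # 90
# A sloping forehead (pulled back from the jaw) gives a smaller angle:
print(round(facial_angle(forehead=(6, 12), jaw=(10, 0),
                         ear=(0, 2), nose_base=(10, 2))))  # 72
```

As the text describes, the more the forehead slopes back relative to the projecting jaw, the smaller the computed angle becomes.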
Beginning in 1775, Johann Friedrich Blumenbach (1752-1840) suggested that the four races in the taxonomy of Linnaeus (African, American, Asian, and European) could be expanded to five, with somewhat different terminology: Europeans as Caucasians, Africans as Aethiopians, Asians as Mongolians, as well as Americans and Malays (Polynesians and other South Pacific peoples). These five groups corresponded to what Blumenbach called the major “varieties” of humans, which could be divided based on skull form.7 Blumenbach published woodcut images and descriptions of some of his personal collection of hundreds of skulls from across the world sent to him by travelers and students.8 Blumenbach, who was widely celebrated in his time, may be largely responsible for coalescing craniology into a distinct field of study. However, other naturalists like Samuel Thomas von Sömmering (1755-1830) and Johann Baptist von Spix (1781-1826) contributed to studies of the comparative anatomy of the skull and nervous system, which would be foundational for later research.9
Although Blumenbach defined different races, he suggested that physical differences were the effects of the environment on the body, a theory he called “degeneration.”10 He argued that races blended into each other, and that all humans on the planet shared common ancestry. Blumenbach was, at least relative to many scholars of his time, an egalitarian in his racial worldview, and was opposed to slavery. Even so, his five-fold division of humanity into races and his focus on the skull was an enduring feature of craniology, adopted by many who did not share his belief in racial equality. For example, in 1817, influential French naturalist Georges Cuvier (1769-1832), who dissected the “Hottentot Venus” Sarah Baartman in Paris, claimed that she had a small brain and a resemblance to a monkey. For him and many of his contemporaries, the examination of her body, and the bodies of other Africans, proved their inferiority to Europeans, showing “no exception to this cruel law which seems to have condemned to eternal inferiority the races with cramped and compressed skulls.”11
Around the time of Cuvier’s report, the claim of a link between skull size and intelligence was becoming commonplace. This supposition, increasingly common among 19th century naturalists, drew on a long association in Western thought traceable to at least Aristotle (384-322 BCE), and was bolstered by phrenology.12 Phrenology was a pseudo-science founded by German physician Franz Joseph Gall (1758-1828) in the late 18th century, and then continued and popularized by Johann Spurzheim (1776-1832) and George Combe (1788-1858). Phrenology aimed to determine character and intelligence from the shape and size of the brain, as reflected through the exterior cranial surface.13 The principles of phrenology were given an air of obviousness by cases of small-brained “idiots” reported from hospitals and asylums, and large-headed “geniuses,” documented by plaster casts, paintings, and sustained phrenological observation.14 Phrenology was used both to advance claims of racial equality as well as racial hierarchy. Nonetheless, phrenology’s popularity through the mid-19th century in Europe and the United States helped to entrench the notion that there were stark racial differences.15
During the early 19th century, there was also increasing doubt that all humans shared common ancestry. For centuries, scholars and laypeople alike had explained racial differences by referencing the Biblical story of Noah’s three sons (Genesis 9:18-27). This view, that all humans had one origin, is called “monogenism.” Monogenism tended to attribute racial variation to the effects of lifestyle and environments, suggesting a dynamism in racial characters. In contrast, “polygenism” held that human races did not, in fact, share common ancestry. For polygenists, the story of God’s creation of Adam and Eve was, if true, just the story of the creation of the Caucasian race, and it was occasionally claimed that other races were created outside the Garden of Eden.16 For polygenists, racial differences were heritable, fixed, static, and innate. Polygenism was first propounded with speculative assertions by Voltaire (1694-1778) and Lord Kames (1696-1782), then in travelogues such as Edward Long’s (1734-1813) History of Jamaica (1774).17 By the mid-19th century, this notion had grown into scientific racism, in which measurements of body parts, especially heads, could supposedly define human racial differences and capacities.
Monogenists, including Blumenbach, Friedrich Tiedemann (1781-1861), and James Cowles Prichard (1786-1848), relied upon the impact of differing environments to explain human differences. Aside from the philosopher and critic of phrenology Sir William Hamilton (1788-1856), who filled skulls with sand to measure their volume, Tiedemann was the first, in 1836, to make a systematic racial comparison of the size of the interior of the braincase.18 By filling braincases with millet and then measuring the difference between the weight of the filled and emptied skull, Tiedemann estimated brain size by weight. After measuring over 400 crania from different races (using Blumenbach’s categories), Tiedemann concluded that the large overlap in brain measurements among the races suggested monogenism, and provided a scientific basis for ending the slave trade.19
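Tiedemann’s millet procedure amounts to simple arithmetic: the weight of seed that fills the braincase, divided by the seed’s bulk density, estimates the internal volume. A minimal sketch, assuming an illustrative bulk density of 0.75 g/cm³; the density figure, function name, and example weights are hypothetical, not Tiedemann’s own.

```python
# Assumed bulk density of millet seed (illustrative figure only).
MILLET_DENSITY_G_PER_CM3 = 0.75

def cranial_capacity_cm3(filled_weight_g, empty_weight_g,
                         millet_density=MILLET_DENSITY_G_PER_CM3):
    """Estimate internal capacity (cm^3) from the weight of millet
    needed to fill the braincase, as in Tiedemann's weighing method."""
    millet_weight = filled_weight_g - empty_weight_g
    return millet_weight / millet_density

# Hypothetical example: a skull weighing 650 g empty and 1700 g when
# filled holds 1050 g of millet, an estimated capacity of 1400 cm^3.
print(cranial_capacity_cm3(1700, 650))  # 1400.0
```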
At this same time, polygenists relied on heredity to explain human difference. Samuel George Morton (1799-1851), Josiah Nott (1804-1873), Louis Agassiz (1807-1873), and Paul Broca (1824-1880) all claimed that there were inalterable racial differences. Samuel George Morton’s craniological publications, Crania Americana (1839), Crania Aegyptiaca (1844), and The Catalogue of Skulls of Man and the Inferior Animals (1849) included measures of the “internal capacity” of the skull which contradicted Tiedemann’s findings.20 Morton claimed that his measurements of the volume of the braincase showed racial differences in average brain size. Morton further suggested that differences in skull size showed a ranking of races based on cranial size, and therefore intelligence: Caucasians (especially Germanic Anglo-Saxons) were most intelligent, followed by Mongolians, Native Americans, Malays, and “Negroes.”21
In his influential book, Crania Americana (1839), Morton presented descriptions, measures, lithographs, and woodcuts of over one hundred Native North and South American crania. Cementing his reputation as the world’s foremost cranial collector, Morton published Crania Aegyptiaca (1844) in which he studied skulls and mummies sent to him by self-taught Egyptologist George Gliddon (1809-1857). Through this study, Morton claimed that he could detect racial differences in the cranial form and brain size of ancient Egyptian remains, and that distinct racial differences had remained the same from ancient Egypt to the present.22 The implication was that the environment did not have an effect in shaping cranial form over time, suggesting that the physical differences among races had always existed.
After Morton, polygeny began to overtake monogeny in educated consensus.23 Morton’s collection of skulls grew to about 900 at the time of his death, making it the then-largest such collection in the world. Morton’s views were elaborated after his death by Agassiz, Nott, and Gliddon, who published Morton’s posthumous papers along with their own and others’ writings in the massive Types of Mankind (1854).24 This book was perhaps the most comprehensive statement of polygenist thought before Darwin. Although phrenology had already largely faded in educated opinion by the 1850s, the notion that cranial form could clearly be associated with intelligence and race stuck.
Unlike monogenists, polygenists regarded each “race” as a separate species. Thus, miscegenation (“the mixing of races”) was regarded as hybridity, analogized to the production of mules from horses and donkeys.25 In France, Broca (1864) devised anthropometric methods to find subtle quantitative differences among different degrees of supposedly “hybrid humans,” using both cranial and other bodily measures.26 In Sweden, polygenist Anders Retzius (1796-1860) devised the cephalic index to define racial types based on the ratio of the length and the breadth of the skull. He defined long-headed “dolichocephalics,” short-headed “brachycephalics,” and intermediates as “mesocephalics.”27
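Retzius’s cephalic index is a simple ratio, sketched below as a modern illustration. The cut-off values are conventional figures that varied among later authors, so they are assumptions here, as are the function names and example measurements.

```python
def cephalic_index(breadth_mm, length_mm):
    """Retzius's cephalic index: maximum head breadth expressed
    as a percentage of maximum head length."""
    return 100.0 * breadth_mm / length_mm

def classify(index, dolicho_max=75.0, brachy_min=80.0):
    """Classify a head by its index, using assumed conventional
    cut-offs: below ~75 dolichocephalic ("long-headed"), above ~80
    brachycephalic ("short-headed"), mesocephalic in between."""
    if index < dolicho_max:
        return "dolichocephalic"
    if index >= brachy_min:
        return "brachycephalic"
    return "mesocephalic"

# Hypothetical measurements in millimeters:
print(classify(cephalic_index(140, 190)))  # dolichocephalic (index ~73.7)
print(classify(cephalic_index(155, 185)))  # brachycephalic (index ~83.8)
```

Because the index could be taken with calipers on living subjects as well as on skulls, it became one of the most widely used measures in later craniometry.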
Charles Darwin’s (1809-1882) On the Origin of Species by Means of Natural Selection (1859) did not immediately change craniology and its claims about racial differences in intelligence.28 But it did signal a shift such that theories like Morton’s, which relied heavily on a Biblical chronology (for example, in proving that racial differences go back to nearly the dawn of history, in ancient Egypt), were no longer plausible. Even so, craniological methods were still frequently used in racial classifications, and even Morton’s own writings helped shape evolutionary accounts of racial difference. Thomas Henry Huxley (1825-1895), who was the first to publish an account of human evolution with his Evidence as to Man’s Place in Nature (1863), used Morton’s research on brain size to show that the distance between the ape and the human was not so great, thereby making the evolutionary connection between humans and apes more plausible. Using Morton’s published measurements, Huxley asserted that the difference between the brain size of the largest Caucasian and that of the smallest Aboriginal Australian was greater than the difference between the brain size of the same Aboriginal Australian and a large gorilla.29
Bolstered by Francis Galton’s (1822-1911) project of “eugenics” (coined 1883), the intentional direction of human evolution by selective breeding, refinements of craniometric and anthropometric measures continued through the late 1800s.30 Increasingly large studies, facilitated by measures that could be taken on the living as well as the dead, expanded craniology beyond cranial collections like Blumenbach’s or Morton’s.31 Even so, the collection of human skulls for craniology continued well into the 20th century. Thousands of Native American crania were shipped to museums from the American West, and colonial archaeological and anthropological projects supplied crania from around the world.32 Initially, craniology in the early 19th century largely responded to political and moral questions of slavery and the treatment of colonial subjects.33 However, after legal abolition in Great Britain’s colonies (in 1833) and the United States (after the Civil War), concerns about miscegenation, immigration, and linking racial histories with national ones came to the fore.34 With the development of readily printable photography, radiography, and standards of cranial measurement in the late 19th and early 20th century, craniological measurements became increasingly standardized and elaborate. For example, Rudolf Martin’s (1864-1925) comprehensive Lehrbuch der Anthropologie (1914) contained over 400 pages (about 2/5 of its total length) detailing measures, descriptions, and methods for the study of the skull.35 On the basis of these measures, various racial types were defined and re-defined. Attention to old measures of brain size and facial angle was augmented with considerations of nose and ear shape, detailed descriptions of hair texture and color, and more.
Not until Franz Boas’s (1858-1942) study of immigrants to the United States and their American-born children, which showed that the cephalic index was only weakly inherited, did discussion of racial characters of the cranium begin to recede.36 Once racial categories were recognized as changing through time, even in one generation, the older “typological” model of race which had characterized craniological study to that point became increasingly untenable.37 Even so, in the popular imagination, craniology remained an easy explanation for human differences. For example, in 1918 The Washington Post published an article entitled “Science Explains the Prussian Ferocity in War,” with contributions by American Museum of Natural History (New York) president and paleontologist Henry Fairfield Osborn (1857-1935) and anthropologist William King Gregory (1876-1970). This article explained that “gentle” long-headed Teutons had become a minority in the German population, while round-headed “savage” Prussians, who inherited their barbarity from “oriental hordes” traceable to prehistory, accounted for German obedience to authority, brutality, and lack of morality: “‘As a man thinketh in his heart, so is he,’ says the Bible, and science adds that according to the shape of a man’s skull, so he thinks.”38 Despite its popular appeal and the imprimatur of prominent Anglophone naturalists as late as the interwar years, racial craniology would soon disappear from mainstream professional science.
Racial craniology persisted through the 1940s and was a major feature of Nazi Germany’s race science.39 After the war, the race concept, and the science that studied it, was widely rejected. The “New Physical Anthropology” suggested the replacement of old typological, narrow classifications of race with a more fluid, evolving Darwinian concept of “population.”40 While this shift largely ended older craniological studies, many of the measures, methods, and standards developed in the century of craniology’s major expansion, from the early 19th to the early 20th century, persisted in studies of paleoanthropology, forensic anthropology, human variation, and medicine.41 The racial classifications and the link between brain size and intelligence supposed by Morton and fellow 19th century craniologists, as well as the links between broad cranial types (e.g. “savage round-heads” and “gentle long-heads”) and temperament, have been thoroughly discarded. Even so, many of the techniques, ideas, and even collections of human remains which were at the core of craniology persist into the present.