It’s Time to Stop Talking About “Generations”

From boomers to zoomers, the concept gets social history all wrong.
Treating age cohorts like cultural units is more confusing than clarifying. Illustration by Ben Wiseman

The discovery that you can make money marketing merchandise to teen-agers dates from the early nineteen-forties, which is also when the term “youth culture” first appeared in print. There was a reason that those things happened when they did: high school. Back in 1910, most young people worked; only fourteen per cent of fourteen- to seventeen-year-olds were still in school. In 1940, though, that proportion was seventy-three per cent. A social space had opened up between dependency and adulthood, and a new demographic was born: “youth.”

The rate of high-school attendance kept growing. By 1955, eighty-four per cent of high-school-age Americans were in school. (The figure for Western Europe was sixteen per cent.) Then, between 1956 and 1969, college enrollment in the United States more than doubled, and “youth” grew from a four-year demographic to an eight-year one. By 1969, it made sense that everyone was talking about the styles and values and tastes of young people: almost half the population was under twenty-five.

Today, a little less than a third of the population is under twenty-five, but youth remains a big consumer base for social-media platforms, streaming services, computer games, music, fashion, smartphones, apps, and all kinds of other goods, from motorized skateboards to eco-friendly water bottles. To keep this market churning, and to give the consulting industry something to sell to firms trying to understand (i.e., increase the productivity of) their younger workers, we have invented a concept that allows “youth culture” to be redefined periodically. This is the concept of the generation.

The term is borrowed from human reproductive biology. In a kinship structure, parents and their siblings constitute “the older generation”; offspring and their cousins are “the younger generation.” The time it takes, in our species, for the younger generation to become the older generation is traditionally said to be around thirty years. (For the fruit fly, it’s ten days.) That is how the term is used in the Hebrew Bible, and Herodotus said that a century could be thought of as the equivalent of three generations.

Around 1800, the term got transplanted from the family to society. The new idea was that people born within a given period, usually thirty years, belong to a single generation. There is no sound basis in biology or anything else for this claim, but it gave European scientists and intellectuals a way to make sense of something they were obsessed with, social and cultural change. What causes change? Can we predict it? Can we prevent it? Maybe the reason societies change is that people change, every thirty years.

Before 1945, most people who theorized about generations were talking about literary and artistic styles and intellectual trends—a shift from Romanticism to realism, for example, or from liberalism to conservatism. The sociologist Karl Mannheim, in an influential essay published in 1928, used the term “generation units” to refer to writers, artists, and political figures who self-consciously adopt new ways of doing things. Mannheim was not interested in trends within the broader population. He assumed that the culture of what he called “peasant communities” does not change.

Nineteenth-century generational theory took two forms. For some thinkers, generational change was the cause of social and historical change. New generations bring to the world new ways of thinking and doing, and weed out beliefs and practices that have grown obsolete. This keeps society rejuvenated. Generations are the pulse of history. Other writers thought that generations were different from one another because their members carried the imprint of the historical events they lived through. The reason we have generations is that we have change, not the other way around.

There are traces of both the pulse hypothesis and the imprint hypothesis in the way we talk about generations today. We tend to assume that there is a rhythm to social and cultural history that maps onto generational cohorts, such that each cohort is shaped by, or bears the imprint of, major historical events—Vietnam, 9/11, COVID. But we also think that young people develop their own culture, their own tastes and values, and that this new culture displaces the culture of the generation that preceded theirs.

Today, the time span of a generational cohort is usually taken to be around fifteen years (even though the median age of first-time mothers in the U.S. is now twenty-six and of first-time fathers thirty-one). People born within that period are supposed to carry a basket of characteristics that differentiate them from people born earlier or later.

This supposition requires leaps of faith. For one thing, there is no empirical basis for claiming that differences within a generation are smaller than differences between generations. (Do you have less in common with your parents than with people you have never met who happen to have been born a few years before or after you?) The theory also seems to require that a person born in 1965, the first year of Generation X, must have different values, tastes, and life experiences from a person born in 1964, the last year of the baby-boom generation (1946-64). And that someone born in the last birth year of Gen X, 1980, has more in common with someone born in 1965 or 1970 than with someone born in 1981 or 1990.

Everyone realizes that precision dating of this kind is silly, but although we know that chronological boundaries can blur a bit, we still imagine generational differences to be bright-line distinctions. People talk as though there were a unique DNA for Gen X—what in the nineteenth century was called a generational “entelechy”—even though the difference between a baby boomer and a Gen X-er is about as meaningful as the difference between a Leo and a Virgo.

You could say the same things about decades, of course. A year is, like a biological generation, a measurable thing, the time it takes the Earth to orbit the sun. But there is nothing in nature that corresponds to a decade—or a century, or a millennium. Those are terms of convenience, determined by the fact that we have ten fingers.

Yet we happily generalize about “the fifties” and “the sixties” as having dramatically distinct, well, entelechies. Decade-thinking is deeply embedded. For most of us, “She’s a seventies person” carries a lot more specific information than “She’s Gen X.” By this light, generations are just a novel way of slicing up the space-time continuum, no more arbitrary, and possibly a little less, than decades and centuries. The question, therefore, is not “Are generations real?” The question is “Are they a helpful way to understand anything?”

Bobby Duffy, the author of “The Generation Myth” (Basic), says yes, but they’re not as helpful as people think. Duffy is a social scientist at King’s College London. His argument is that generations are just one of three factors that explain changes in attitudes, beliefs, and behaviors. The others are historical events and “life-cycle effects,” that is, how people change as they age. His book illustrates, with a somewhat overwhelming array of graphs and statistics, how events and aging interact with birth cohort to explain differences in racial attitudes, happiness, suicide rates, political affiliations—you name it, for he thinks that his three factors explain everything.

Duffy’s over-all finding is that people in different age groups are much more alike than all the talk about generations suggests, and one reason for all that talk, he thinks, is the consulting industry. He says that, in 2015, American firms spent some seventy million dollars on generational consulting (which doesn’t seem that much, actually). “What generational differences exist in the workplace?” he asks. His answer: “Virtually none.”

Duffy is good at using data to take apart many familiar generational characterizations. There is no evidence, he says, of a “loneliness epidemic” among young people, or of a rise in the rate of suicide. The falling off in sexual activity in the United States and the U.K. is population-wide, not just among the young.

He says that attitudes about gender in the United States correlate more closely with political party than with age, and that, in Europe, anyway, there are no big age divides in the recognition of climate change. There is “just about no evidence,” he says, that Generation Z (1997-2012, encompassing today’s college students) is more ethically motivated than other generations. When it comes to consumer boycotts and the like, “ ‘cancel culture’ seems to be more of a middle-age thing.” He worries that generational stereotypes—such as the characterization of Gen Z-ers as woke snowflakes—are promoted in order to fuel the culture wars.

The woke-snowflake stereotype is the target of “Gen Z, Explained” (Chicago), a heartfelt defense of the values and beliefs of contemporary college students. The book has four authors, Roberta Katz, Sarah Ogilvie, Jane Shaw, and Linda Woodhead—an anthropologist, a linguist, a historian, and a sociologist—and presents itself as a social-scientific study, including a “methodological appendix.” But it resembles what might be called journalistic ethnography: the portrayal of social types by means of interviews and anecdotes.

The authors adopt a key tenet of the pulse hypothesis. They see Gen Z-ers as agents of change, a generation that has created a youth culture that can transform society. (The fact that when they finished researching their book, in 2019, roughly half of Gen Z was under sixteen does not trouble them, just as the fact that at the time of Woodstock, in 1969, more than half the baby-boom generation was under thirteen doesn’t prevent people from making generalizations about the baby boomers.)

Their book is based on hour-long interviews with a hundred and twenty students at three colleges, two in California (Stanford and Foothill College, a well-regarded community college) and one in the U.K. (Lancaster, a selective research university). The authors inform us that the interviewees were chosen “by word of mouth and personal networking,” which sounds a lot like self-selection. It is, in any event (as they unapologetically acknowledge), hardly a randomized sample.

The authors tell us that the interviews were conducted entirely by student research assistants, which means that, unless the research assistants simply read questions off a list, there was no control over the depth or the direction of the interviews. There were also some focus groups, in which students talked about their lives with, mostly, their friends, an exercise performed in an echo chamber. Journalists, or popular ethnographers, would at least have met and observed their subjects. It’s mystifying why the authors felt a need to distance themselves in this way, given how selective their sample was to begin with. We are left with quotations detached from context. Self-reporting is taken at face value.

The authors supplemented the student interviews with a lexical glossary designed to pick out words and memes heavily used by young people, and with two surveys, designed by one of the authors (Woodhead) and conducted by YouGov, an Internet polling company, of eighteen- to twenty-five-year-olds in the United States and the U.K.

Where there is an awkward discrepancy between the survey results and what the college students say in the interviews, the authors attempt to explain it away. The YouGov surveys found that ninety-one per cent of all persons aged eighteen to twenty-five, American and British, identify as male or female, and only four per cent as gender fluid or nonbinary. (Five per cent declined to answer.) This does not match the impression created by the interviews, which suggest that there should be many more fluid and nonbinary young people out there, so the authors say that we don’t really know what the survey respondents meant by “male” and “female.” Well, then, maybe they should have been asked.

The authors attribute none of the characteristics they identify as Gen Z to the imprint of historical events—with a single exception: the rise of the World Wide Web. Gen Z is the first “born digital” generation. This fact has often been used to stereotype young people as screen-time addicts, captives of their smartphones, obsessed with how they appear on social media, and so on. The Internet is their “culture.” They are trapped in the Web. The authors of “Gen Z, Explained” emphatically reject this line of critique. They assure us that Gen Z-ers “understand both the potential and the downside of technology” and possess “critical awareness about the technology that shapes their lives.”

For the college students who were interviewed (although not, evidently, for the people who were surveyed), a big part of Gen Z culture revolves around identity. As the authors put it, “self-labeling has become an imperative that is impossible to escape.” This might seem to suggest a certain degree of self-absorption, but the authors assure us that these young people “are self-identified and self-reliant but markedly not self-centered, egotistical, or selfish.”

“Lily” is offered to illustrate the ethical richness of this new concern. It seems that Lily has a friend who is always late to meet with her: “She explained that while she of course wanted to honor and respect his unique identity, choices, and lifestyle—including his habitual tardiness—she was also frustrated by how that conflicted with her sense that he was then not respecting her identity and preference for timeliness.” The authors do not find this amusing.

The book’s big claim is that Gen Z-ers “may well be the heralds of new attitudes and expectations about how individuals and institutions can change for the better.” They have come up with new ways of working (collaborative), new forms of identity (fluid and intersectional), new concepts of community (diverse, inclusive, non-hierarchical).

Methodology aside, there is much that is refreshing here. There is no reason to assume that younger people are more likely to be passive victims of technology than older people (that assumption is classic old person’s bias), and it makes sense that, having grown up doing everything on a computer, Gen Z-ers have a fuller understanding of the digital universe than analog dinosaurs do. The dinosaurs can say, “You don’t know what you’re missing,” but Gen Z-ers can say, “You don’t understand what you’re getting.”

The claim that addiction to their devices is the cause of a rise in mental disorders among teen-agers is a lot like the old complaint that listening to rock and roll turns kids into animals. The authors cite a recent study (not their own) that concludes that the association between poor mental health and eating potatoes is greater than the association with technology use. We’re all in our own fishbowls. We should hesitate before we pass judgment on what life is like in the fishbowls of others.

The major problem with “Gen Z, Explained” is not so much the authors’ fawning tone, or their admiration for the students’ concerns—“environmental degradation, equality, violence, and injustice”—even though they are the same concerns that almost everyone in their social class has, regardless of age. The problem is the “heralds of a new dawn” stuff.

“A crisis looms for all unless we can find ways to change,” they warn. “Gen Zers have ideas of the type of world they would like to bring into being. By listening carefully to what they are saying, we can appreciate the lessons they have to teach us: be real, know who you are, be responsible for your own well-being, support your friends, open up institutions to the talents of the many, not the few, embrace diversity, make the world kinder, live by your values.”

I believe we have been here before, Captain. Fifty-one years ago, The New Yorker ran a thirty-nine-thousand-word piece that began:

There is a revolution under way . . . It is now spreading with amazing rapidity, and already our laws, institutions, and social structure are changing in consequence. Its ultimate creation could be a higher reason, a more human community, and a new and liberated individual. This is the revolution of the new generation.

The author was a forty-two-year-old Yale Law School professor named Charles Reich, and the piece was an excerpt from his book “The Greening of America,” which, when it came out, later that year, went to No. 1 on the Times best-seller list.

Reich had been in San Francisco in 1967, during the so-called Summer of Love, and was amazed and excited by the flower-power wing of the counterculture—the bell-bottom pants (about which he waxes ecstatic in the book), the marijuana and the psychedelic drugs, the music, the peace-and-love life style, everything.

He became convinced that the only way to cure the ills of American life was to follow the young people. “The new generation has shown the way to the one method of change that will work in today’s post-industrial society: revolution by consciousness,” he wrote. “This means a new way of living, almost a new man. This is what the new generation has been searching for, and what it has started to achieve.”

So how did that work out? The trouble, of course, was that Reich was basing his observations and predictions on, to use Mannheim’s term, a generation unit—a tiny number of people who were hyperconscious of their choices and values and saw themselves as being in revolt against the bad thinking and failed practices of previous generations. The folks who showed up for the Summer of Love were not a representative sample of sixties youth.

Most young people in the sixties did not practice free love, take drugs, or protest the war in Vietnam. In a poll taken in 1967, when people were asked whether couples should wait to have sex until they were married, sixty-three per cent of those in their twenties said yes, virtually the same as in the general population. In 1969, when people aged twenty-one to twenty-nine were asked whether they had ever used marijuana, eighty-eight per cent said no. When the same group was asked whether the United States should withdraw immediately from Vietnam, three-quarters said no, about the same as in the general population.

Most young people in the sixties were not even notably liberal. When people who attended college from 1966 to 1968 were asked which candidate they preferred in the 1968 Presidential election, fifty-three per cent said Richard Nixon or George Wallace. Among those who attended college from 1962 to 1965, fifty-seven per cent preferred Nixon or Wallace, which matched the results in the general election.

The authors of “Gen Z, Explained” are making the same erroneous extrapolation. They are generalizing on the basis of a very small group of privileged people, born within five or six years of one another, who inhabit insular communities of the like-minded. It’s fine to try to find out what these people think. Just don’t call them a generation.

Most of the millions of Gen Z-ers may be quite different from the scrupulously ethical, community-minded young people in the book. Duffy cites a survey, conducted in 2019 by a market-research firm, in which people were asked to name the characteristics of baby boomers, Gen X-ers, millennials (1981-96), and Gen Z-ers. The top five characteristics assigned to Gen Z were: tech-savvy, materialistic, selfish, lazy, and arrogant. The lowest-ranked characteristic was ethical. When Gen Z-ers were asked to describe their own generation, they came up with an almost identical list. Most people born after 1996 apparently don’t think quite as well of themselves as the college students in “Gen Z, Explained” do.

In any case, “explaining” people by asking them what they think and then repeating their answers is not sociology. Contemporary college students did not invent new ways of thinking about identity and community. Those were already rooted in the institutional culture of higher education. From Day One, college students are instructed about the importance of diversity, inclusion, honesty, collaboration—all the virtuous things that the authors of “Gen Z, Explained” attribute to the new generation. Students can say (and some do say) to their teachers and their institutions, “You’re not living up to those values.” But the values are shared values.

And they were in place long before Gen Z entered college. Take “intersectionality,” which the students in “Gen Z, Explained” use as a way of refining traditional categories of identity. That term has been around for more than thirty years. It was coined (as the authors note) in 1989, by the law professor Kimberlé Crenshaw. And Crenshaw was born in 1959. She’s a boomer.

“Diversity,” as an institutional priority, dates back even farther. It played a prominent role in the affirmative-action case of Regents of the University of California v. Bakke, in 1978, which opened the constitutional door to race-conscious admissions. That was three “generations” ago. Since then, almost every selective college has worked to achieve a diverse student body and boasts about it when it succeeds. College students think of themselves and their peers in terms of identity because of how the institution thinks of them.

People who went to college in an earlier era may find this emphasis a distraction from students’ education. Why should they be constantly forced to think about their own demographic profiles and their differences from other students? But look at American politics—look at world politics—over the past five years. Aren’t identity and difference kind of important things to understand?

And who creates “youth culture,” anyway? Older people. Youth has agency in the sense that it can choose to listen to the music or wear the clothing or march in the demonstrations or not. And there are certainly ground-up products (bell-bottoms, actually). Generally, though, youth has the same degree of agency that I have when buying a car. I can choose the model I want, but I do not make the cars.

Failure to recognize the way the fabric is woven leads to skewed social history. The so-called Silent Generation is a particularly outrageous example. That term has come to describe Americans who went to high school and college in the nineteen-fifties, partly because it sets up a convenient contrast to the baby-boom generation that followed. Those boomers, we think—they were not silent! In fact, they mostly were.

The term “Silent Generation” was coined in 1951, in an article in Time, so it was never meant to describe the young people of the nineteen-fifties. “Today’s generation is ready to conform,” the article concluded. Time defined the Silent Generation as people then aged eighteen to twenty-eight—that is, those who entered the workforce mostly in the nineteen-forties. Though the birth dates of Time’s Silent Generation were 1923 to 1933, the term somehow migrated to later dates, and it is now used for the generation born between 1928 and 1945.

So who were these silent conformists? Gloria Steinem, Muhammad Ali, Tom Hayden, Abbie Hoffman, Jerry Rubin, Nina Simone, Bob Dylan, Noam Chomsky, Philip Roth, Susan Sontag, Martin Luther King, Jr., Billie Jean King, Jesse Jackson, Joan Baez, Berry Gordy, Amiri Baraka, Ken Kesey, Huey Newton, Jerry Garcia, Janis Joplin, Jimi Hendrix, Andy Warhol . . . Sorry, am I boring you?

It was people like these, along with even older folks, like Timothy Leary, Allen Ginsberg, and Pauli Murray, who were active in the culture and the politics of the nineteen-sixties. Apart from a few musicians, it is hard to name a single major figure in that decade who was a baby boomer. But the boomers, most of whom were too young then even to know what was going on, get the credit (or, just as unfairly, the blame).

Mannheim thought that the great danger in generational analysis was the elision of class as a factor in determining beliefs, attitudes, and experiences. Today, we would add race, gender, immigration status, and any number of other “preconditions.” A woman born to an immigrant family in San Antonio in 1947 had very different life chances from a white man born in San Francisco that year. Yet the baby-boom prototype is a white male college student wearing striped bell-bottoms and a peace button, just as the Gen Z prototype is a female high-school student with spending money and an Instagram account.

For some reason, Duffy, too, adopts the conventional names and dates of the postwar generations (all of which originated in popular culture). He offers no rationale for this, and it slightly obscures one of his best points, which is that the most formative period for many people happens not in their school years but once they leave school and enter the workforce. That is when they confront life-determining economic and social circumstances, and where factors like their race, their gender, and their parents’ wealth make an especially pronounced difference to their chances.

Studies have consistently indicated that people do not become more conservative as they age. As Duffy shows, however, some people find entry into adulthood delayed by economic circumstances. This tends to differentiate their responses to survey questions about things like expectations. Eventually, he says, everyone catches up. In other words, if you are basing your characterization of a generation on what people say when they are young, you are doing astrology. You are ascribing to birth dates what is really the result of changing conditions.

Take the boomers: when those who were born between 1946 and 1952 entered the workforce, the economy was surging. When those who were born between 1953 and 1964 entered it, the economy was a dumpster fire. It took longer for younger boomers to start a career or buy a house. People in that kind of situation are therefore likely to register in surveys as “materialistic.” But it’s not the Zeitgeist that’s making them that way. It’s just the business cycle. ♦

