
Labels act as lenses, shaping what we see. As Michel Foucault observed, “The power to classify is the power to control.”

Recently, Len Gutkin, a senior editor at The Chronicle of Higher Education, discussed a ruckus surrounding Loïc Wacquant’s 2022 study, The Invention of the “Underclass”: A Study in the Politics of Knowledge, which examines how American social scientists, policy analysts, philanthropists and politicians became fixated on the concept of the urban “underclass.” Wacquant argues that social categories are not just descriptive tools but instruments of power, designed to reflect and reinforce prevailing social, economic and political hierarchies.

The act of naming and classifying populations does more than categorize—it legitimizes inequalities and subtly controls marginalized groups by shaping how they are perceived, understood and treated. Labels like “underclass” don’t merely describe reality; they construct and regulate identities, creating hierarchies that drive social policy and shape individual lives. This process is a form of symbolic power, imposing normative expectations on groups and justifying policies that perpetuate inequality under a veneer of objectivity.

In this way, the creation and deployment of labels like “underclass” reveal more about societal anxieties and mechanisms of control than about the individuals they claim to describe. By examining the rise and fall of terms like “underclass” and “precarity,” we can see how social categories shift with changing economic forces and ideologies, illuminating society’s strategies for managing and marginalizing its most vulnerable populations.


The evolution of social labels—“pauper,” “beggar,” “bum,” “hobo,” “welfare queen,” “underclass”—reflects society’s shifting economic anxieties, moral judgments and values. These terms are not merely descriptive; they carry assumptions and judgments that shape how society perceives and treats marginalized groups, drawing boundaries between those deemed deserving of help and those viewed as threats to social order.

In early modern Europe, terms like “pauper” and “beggar” referred to the impoverished in need of charity, distinguishing between the “deserving” poor (e.g., the elderly or disabled) and the “undeserving” poor, seen as lazy or deceitful.

With industrialization, labels such as “bum” and “hobo” arose, framing transient or unemployed individuals as social outsiders unwilling to conform to the demands of steady work and stable life. These labels justified restrictive social policies, reinforcing the emerging capitalist ethic around labor and productivity.

In the 1970s and 1980s, terms like “welfare queen” and “underclass” introduced racial and gendered dimensions to the language of poverty, casting entire groups as culturally deficient and self-perpetuating in poverty.

The “welfare queen” stereotype portrayed Black single mothers as exploitative and irresponsible, reflecting anxieties about government spending and dependency. The “underclass” label described urban minorities as disengaged from mainstream society, framing poverty as a cultural rather than systemic issue and supporting punitive welfare reforms.

These labels served to define and confine identities, legitimizing policies that constrained social mobility and reinforced social hierarchies. Society’s classifications defined who was respectable and who was deviant, justifying welfare cuts, increased policing and restrictions on public spaces.

These labels implicitly positioned marginalized groups as morally deficient, in contrast to the “ideal” citizen—self-sufficient, hardworking, conforming to social norms.

The evolution of these terms aligns with Michel Foucault’s idea of discursive power, where language shapes reality and imposes social order. Terms like “underclass” or “welfare queen” create a “regime of truth” that frames poverty as a moral failing, influencing public discourse and policy.

Once embedded in public discourse, these labels influence policy in ways that further constrain and stigmatize those labeled. Welfare cuts, increased policing and punitive social policies become justified as responses to the perceived moral and cultural failings associated with the label, rather than responses to economic or structural conditions.

Although labels like these have receded, the impulse to classify remains, shifting today to terms like “precarity,” “working poor” and “gig economy workers,” which reflect current concerns over economic insecurity and productivity.

From “pauper” to “underclass,” the history of social classification reveals how language constructs a moral economy, determining who is worthy of support and who is marginalized. These terms reinforce social boundaries and power structures, shaping not only public attitudes but also the identities and opportunities of those at society’s margins.


The term “underclass” gained prominence in the 1970s and 1980s amid rising anxieties about urban poverty, social unrest and economic decline in postindustrial cities. Unlike earlier labels for the urban poor, such as “pauper,” “beggar,” “bum” or “hobo,” which emphasized individual moral failures, “underclass” carried cultural, racial and behavioral connotations. It signified a persistent, socially disconnected stratum marked not only by poverty but by behaviors deemed culturally deviant: violence, misogyny and neglect of family obligations, read as signs of a breakdown in social values.

The Racial and Cultural Coding of “Underclass”: The label was racialized, linked in public discourse to Black and Latino communities, reinforcing stereotypes that equated poverty with social deviance and moral deficiency. By pathologizing the family lives and behavior of the “underclass,” the term positioned poverty as a cultural failing rather than the result of structural forces like deindustrialization or systemic inequality.

The term suggested a cultural deficit, attributing behaviors like violence and drug use to personal failings rather than structural factors. It also pathologized family dynamics, focusing on single motherhood and “broken” homes, linking these issues to cycles of poverty and dependency.

Rise and Fall of the Term “Underclass”: “Underclass” resonated within the neoliberal political context of the 1980s, when welfare reform, stricter crime policies and reduced social programs dominated. This classification supported calls for personal responsibility over state intervention, legitimizing welfare cuts and increased policing rather than addressing the structural roots of poverty.

Over time, however, critics argued that “underclass” oversimplified poverty, blaming individuals rather than structural causes like deindustrialization and systemic racism. By the 1990s, the term began to fade, replaced by less stigmatizing language that aimed to capture the complexities of economic and social issues.

Similar Concepts That Rise and Recede: “Underclass” is not unique; other sociological terms have followed the same arc of emergence, influence and decline.

Several sociological terms have followed a similar trajectory, gaining traction during specific social or political climates, then receding as public perception shifts. The term “culture of poverty,” coined by Oscar Lewis, suggested that poverty could create a distinct, self-perpetuating culture. Although initially influential, it fell out of favor due to its perceived racial bias. The “welfare queen” stereotype, popularized in the 1970s and 1980s, represented a racialized caricature but receded as critics highlighted its distortion of welfare demographics. Other concepts, like “moral panic” and “social capital,” illustrate how language evolves with societal values and concerns.

Why These Concepts Emerge and Fade: This cycle—where terms like “underclass,” “culture of poverty” or “welfare queen” gain prominence, resonate with social anxieties, then decline amid criticism—illustrates the cyclical nature of social classification. These terms emerge to explain specific social, economic or political concerns and often serve ideological purposes, justifying policies that reinforce existing power structures. As contexts shift, terms lose relevance, replaced by new classifications aligned with evolving values and economic realities. In addition, as academic critique increases, outdated or stigmatizing terms often face scrutiny and decline.

The Political Economy of Social Differentiation: The classification of individuals into groups like “underclass” serves to manage surplus populations and reinforce existing power structures. The economic restructuring of the 1970s and 1980s marginalized specific social groups, leading to the emergence of terms that described them as socially disconnected and economically redundant. This classification supported a neoliberal agenda that emphasized personal responsibility over systemic change, stigmatizing the urban poor and legitimizing welfare cuts and increased surveillance.

The rise and fall of “underclass” reveals the political economy of classification, where labels are created to manage and understand populations in ways that align with political and economic interests. Terms change as societal needs and critiques evolve, showing that classifications are fluid tools, adapted to serve new ideologies and economic frameworks.

Foucault’s Notion of Discursive Power: Michel Foucault’s concept of discursive power sheds light on how the classification of the underclass functions as a tool of social control. Social labels like “underclass” wield “discursive power,” shaping perceptions, policies and social hierarchies. These terms are not neutral but reinforce dominant ideologies, defining what is “normal” and legitimizing control over marginalized groups. In this view, labels like “underclass” enable the state to monitor and manage populations deemed socially disruptive, using language as a tool of biopolitical control.

Discourse defines what is “normal” and shapes societal perceptions of urban poverty as a moral issue rather than a product of economic systems. This framework justified interventions aimed at controlling and reforming the urban poor, aligning with neoliberal goals.

The Sociology of Knowledge: Through the sociology of knowledge, we see that labels are powerful instruments, influencing public perception, policy and social control, shaping both how society addresses social issues and how individuals are understood within it.

The sociology of knowledge helps explain how terms like “underclass” reflect the interests of dominant groups and serve to maintain power relations. The term shaped public attitudes toward poverty, implying inherent deficiencies within certain groups. As critiques arose, particularly in the 1990s, awareness grew of the term’s limitations, leading to its decline as new frameworks emerged to better capture the complexities of poverty.

The Cyclical Nature of Social Classification: The history of the term “underclass” exemplifies a recurring pattern in social classification: terms arise in response to social, economic and political needs and eventually decline as critiques develop. This cycle reflects changing societal values and reveals how language shapes the discourse around social issues. As we redefine our understanding of poverty, contemporary terms like “precarity” reflect current concerns about economic insecurity in a globalized context.

The rise and fall of the term “underclass” illustrate how societies use language to frame social issues, embedding moral and racial judgments that reflect prevailing anxieties. By recognizing the power of these classifications and their influence on public perception and policy, we can move toward more nuanced and supportive understandings of social diversity and inequality.


Social labels carry immense power: In a world of labels, we are what we’re called. Before autism and Asperger’s syndrome were recognized, people with autistic-like traits were often labeled in ways that reflected society’s limited understanding of neurodiversity, attributing their behaviors to character flaws, emotional immaturity or intellectual deficits. Common terms for neurodivergent behaviors included:

  • Loner or recluse, which suggested voluntary isolation rather than social processing differences.
  • Odd or eccentric, which framed neurodivergent behaviors as quirky personality traits.
  • Withdrawn or aloof, which implied emotional disengagement instead of sensory or communication challenges.
  • Shy, timid or introverted, which simplified social hesitancy without recognizing neurodivergent complexities.
  • Simple or slow, which misunderstood intellectual abilities, implying cognitive limitation.
  • Feeble-minded or imbecile, early, offensive terms for cognitive and developmental differences, now obsolete.
  • Daydreamer or spacey, which attributed inattentiveness to lack of focus rather than sensory differences.
  • Absent-minded or inattentive, which suggested character flaws rather than attention challenges.
  • Emotionally disturbed or schizoid, which misinterpreted social withdrawal as psychological disturbance.
  • Socially awkward or social misfit, which blamed individuals for not adhering to social expectations.
  • Sensitive or nervous, which framed sensory issues as exaggerated emotional responses.
  • Idiot savant, which recognized specific skills but dehumanized individuals.
  • Hyperfocused or obsessive, which framed intense interests as fixations, ignoring their benefits.

In the 1940s, Leo Kanner and Hans Asperger introduced autism and Asperger’s syndrome as distinct neurological conditions. This shift from character judgments to neurodevelopmental understanding allowed for more nuanced support and interventions tailored to neurodivergent needs.


The recognition of attention deficit hyperactivity disorder followed a gradual shift from viewing ADHD-like behaviors as moral failings to understanding them as symptoms of a neurodevelopmental condition. Before ADHD’s classification, people with similar symptoms were often described using terms that reflected limited understanding and reinforced negative character judgments. Historically, various labels were applied to individuals with ADHD-like symptoms:

  • “Fidgety Phil” or restless: Terms from the 19th century used to describe children who couldn’t sit still.
  • Scatterbrained or flighty: For inattentive individuals, implying a disorganized mind.
  • Lazy, unmotivated or daydreamer: Used for children who seemed unfocused or lacked motivation.
  • Badly behaved, problem child or troublemaker: Applied to impulsive children, seen as undisciplined.
  • Hyperkinetic or hyperactive child: Terms from the 1950s and 1960s describing high physical activity but not yet linked to attention deficits.
  • Minimal brain dysfunction (MBD): A vague 1960s-’70s term for hyperactivity and impulsivity.
  • Distractible or easily distracted: Framed attention issues as weak willpower.
  • Overly sensitive or emotionally unstable: Attributed emotional dysregulation to immaturity.
  • Slow learner or underachiever: Applied to children struggling academically, implying laziness.
  • Immature: Viewed ADHD traits as signs of delayed development.
  • Absent-minded or spacey: Implied a lack of focus, often seen as forgetfulness.
  • Willful or disobedient: Framed impulsivity as intentional defiance.

Only in the 1980s, with the DSM-III (which introduced attention deficit disorder) and its revision, the DSM-III-R (which renamed the condition ADHD), did the diagnosis gain formal medical recognition, shifting society’s perception from moral judgment to a neurodevelopmental framework. This reclassification has since improved treatment, understanding and support, fostering a more empathetic and less stigmatizing view of ADHD.


Psychological labels like “autism” and “ADHD” offer crucial frameworks for understanding and supporting individuals but also carry risks of oversimplification, stereotyping and identity reduction.

A diagnosis can open doors to essential services, accommodations and interventions that might otherwise be inaccessible. Labels like “autism” and “ADHD” often qualify individuals for educational support, therapy and workplace accommodations under laws like the U.S. Individuals With Disabilities Education Act (IDEA), providing tools for academic and professional success. These labels create a common language, allowing families, educators and clinicians to discuss specific needs, share strategies and foster empathy. Public awareness can also reduce stigma, helping others understand these conditions as neurological, not disciplinary, challenges.

Labels enable tailored, evidence-based interventions that can lead to more effective, individualized care. For instance, children with ADHD may benefit from behavioral strategies, while those with autism may thrive in sensory-friendly settings. Labels also provide validation for individuals and families, offering answers to long-standing questions and a sense of belonging within supportive communities. In research, labels advance scientific understanding and enable advocacy for improved funding, policies and services.

However, labels can also stigmatize, leading to social discrimination and reinforcing stereotypes that overshadow individual strengths. For example, “autism” may evoke uncommunicativeness, while “ADHD” can imply unruliness. This labeling can reduce individuals to one aspect of their identity, leading to “identity foreclosure,” where a diagnosis becomes a dominant part of self-concept, limiting personal potential and self-esteem.

Psychological labeling can medicalize behaviors that might fall within normal human diversity, leading to unnecessary interventions with potential side effects. It also risks pathologizing traits, such as intense focus or high energy, that may be neutral or even advantageous in certain contexts. Labels may lag behind evolving understanding, as with autism’s recognition as a spectrum, and can be difficult to adjust if symptoms or self-perceptions change.

Labels can set limiting expectations, with educators or employers sometimes viewing a diagnosis as predictive of lower potential, which can constrain growth. Misdiagnosis is another risk, as overlapping symptoms can lead to inaccurate labels, resulting in interventions that fail to meet actual needs and further complicate individual challenges.

While psychological labels offer valuable tools for support, understanding and advocacy, they also require careful application to avoid restricting the diversity and potential of those they describe.


Labels carry profound power: They help us make sense of the world, but they also shape it in ways we may not intend. As tools for classification, labels can bring visibility and support, but they can also confine, reducing the richness of human experience to narrow definitions. Our reliance on labels reflects a fundamental desire to understand the world—yet this understanding is always incomplete. Labels serve as lenses, highlighting particular traits or behaviors, but they risk obscuring the full picture, imposing order where there is complexity and rigidity where there is fluidity.

As society evolves, so must our understanding of the labels we use. The power to classify is the power to define reality, and with that power comes a responsibility to avoid reducing human diversity to rigid categories. The language we use to classify is more than descriptive; it is prescriptive, shaping how society perceives, treats and values individuals. Recognizing the double-edged nature of labels allows us to redefine them not as boundaries but as bridges—tools for connection, understanding and empathy. Embracing labels as flexible tools rather than fixed truths lets us see each person in their full humanity, with every label as, at best, a guidepost—and never the whole story.

Labels are only as powerful as the meanings we attach to them. They can control, marginalize and limit, yet they also hold the potential to liberate, validate and connect. The history of labeling reveals a tension between order and individuality, between the need to categorize and the imperative to honor human uniqueness.

Let us strive for a future where labels are not walls that divide but doors that open to understanding, acceptance and possibility. Only then can we honor the full spectrum of human identity, using language to empower rather than constrain and creating a world where classification illuminates rather than confines.

Steven Mintz is professor of history at the University of Texas at Austin and the author, most recently, of The Learning-Centered University: Making College a More Developmental, Transformational and Equitable Experience.
