Something big is happening.
That’s not hyperbole. It’s the title of a recent essay by Matt Shumer — “Something Big Is Happening” — and it captures the moment with unsettling clarity. Around the same time, another essay — “The 2028 Global Intelligence Crisis” — laid out a timeline in which artificial intelligence doesn’t just improve productivity, but fundamentally restructures global power, labor markets, and governance.
And yet, on this site — a politics site — I see surprisingly little discussion of it.
Maybe I’ve missed it. But if not, the silence is strange.
Because what is unfolding right now may permanently alter the politics and economics of the United States.
The CEOs Are No Longer Whispering
In recent months, executives from leading AI firms have begun saying the quiet part out loud. Leaders at companies like OpenAI and Anthropic have publicly warned that artificial intelligence could eliminate up to 50% of white-collar jobs in the U.S. within the next couple of years.
Not decades.
Not “someday.”
Within two years.
If that estimate is even directionally correct, we are not talking about a gradual labor transition like the decline of manufacturing over 40 years. We are talking about a shockwave.
Law firms. Accounting. Coding. Marketing. Design. Customer service. Mid-level management. Financial analysis. Journalism. Administrative roles. Even parts of medicine and education.
This is not the displacement of truck drivers by automation. This is the automation of the professional class.
And that is politically explosive.
This Is Not Just Tech — It’s Power
The essay “The 2028 Global Intelligence Crisis” argues that we may be approaching a point where AI systems reach or surpass human-level capability across a wide range of cognitive domains. If that happens, the balance of global power could hinge on who controls these systems.
That raises immediate political questions:
- Who owns the models?
- Who sets the rules?
- Who captures the productivity gains?
- Who is displaced?
- Who decides what is “true” in a world of synthetic media?
- What happens to democracy if information itself becomes programmable?
If intelligence becomes a scalable commodity — like electricity — then political power will increasingly concentrate wherever that commodity is produced and controlled.
We are not prepared for that.
The White-Collar Collapse and Political Stability
American politics has long been structured around class tensions — working class vs. professional class, urban vs. rural, educated vs. non-college.
What happens when the “educated class” is suddenly vulnerable?
For decades, globalization and automation hit manufacturing workers hardest. The political consequences were seismic: populism, distrust of institutions, rising nationalism.
Now imagine that same destabilization hitting accountants, lawyers, software engineers, analysts, policy researchers, and journalists — all at once.
What does that do to political stability?

- Does it accelerate calls for universal basic income?
- Does it fracture existing party coalitions?
- Does it empower authoritarian responses promising “stability”?
- Does it create a new political divide between “AI capital owners” and everyone else?
These are not speculative sci-fi questions. They are near-term policy questions.
Civil Rights and Human Rights
Beyond economics, AI is already reshaping civil liberties.
Facial recognition. Predictive policing. Automated surveillance. Synthetic media. Behavioral profiling.
If AI systems can model individuals at scale — predicting behavior, generating persuasion tailored to psychological profiles — what does consent even mean?
If hiring, lending, insurance, and even bail decisions are increasingly mediated by opaque models, how do we ensure due process?
If entire sectors are replaced in two years, what is the ethical responsibility of companies deploying the systems?
We have civil rights law for discrimination by humans.
What is the doctrine for discrimination by models trained on the statistical shadow of society?
My Own Experience: This Isn’t Theoretical
This isn’t abstract to me.
Three years ago, while working as a tech project manager, I began using AI to assist with parts of my job. At first, it handled roughly 30% of my workload — primarily the “administrasium”: documentation, summaries, draft communications, status reports, risk matrices, planning artifacts.
Then it was closer to 50%.
Work that would have taken two full days could be done in an hour — and done well. Cleanly written. Structured. Clear. Sometimes better organized than I would have produced on my own under time pressure.
Today, I’m retired. But I continue to use AI extensively.
Most recently I’ve been using Cowork, Anthropic’s Claude-based agent. I’ve used it to create websites — projects that would previously have taken me days of tinkering now completed in minutes. Not only scaffolding, but structure, content layout, revisions.
I’ve been experimenting with multi-tab financial modeling as well. What used to take hours of spreadsheet construction, formula debugging, and scenario analysis now happens in minutes. The system doesn’t just execute instructions — it suggests improvements, proposes visualizations, lays out multiple scenarios side by side, and explains tradeoffs.
Pair that with tools from OpenAI and it is clear to me: these systems could already do my former job — and likely more efficiently.
That’s not hype. That’s lived experience.
Why the Political Silence?
Perhaps AI is seen as a “tech beat” issue.
Perhaps it feels too abstract.
Perhaps it cuts across ideological lines in ways that don’t fit clean partisan narratives.
Or perhaps many assume the disruption will be slower.
But the people building the systems are telling us it won’t be.
When the CEOs of OpenAI and Anthropic suggest that half of white-collar employment could evaporate rapidly, that is not a niche industry forecast. That is a macroeconomic event.
If accurate, it dwarfs nearly every story currently dominating the headlines.
The Environmental Cost
There is another dimension to this that receives even less political attention: the environmental footprint of AI.
Training and operating large-scale AI systems requires vast data centers filled with high-performance chips running around the clock. These facilities consume enormous amounts of electricity and water for cooling. As models grow larger and more capable, their computational demands increase dramatically.
If AI becomes the backbone of the global economy — powering finance, logistics, media, governance, defense, education — energy consumption will scale with it.
That raises urgent questions:
- Where will the electricity come from?
- Will AI expansion accelerate fossil fuel demand?
- Will it crowd out power needed for other sectors?
- How will water-intensive cooling systems affect drought-prone regions?
- What are the carbon costs of a global intelligence arms race?
Some argue AI will help optimize energy grids, accelerate climate research, and improve efficiency across industries. That may be true. But efficiency gains do not automatically offset growth in total demand. History shows that when technology becomes more efficient, usage often expands, sometimes dramatically (a pattern known as the Jevons paradox).
If we are entering a global competition to build ever more powerful AI systems, the environmental implications are not secondary. They are central.
The politics of AI are therefore inseparable from the politics of energy policy, climate change, infrastructure investment, and environmental justice.
A world in which intelligence is industrialized at scale will also be a world in which compute is industrialized at scale.
And industrialization always has environmental consequences.
We Need a Political Framework Now
Before the shock hits, we should be debating:
- Ownership models for AI infrastructure
- Taxation of AI-driven productivity gains
- Universal basic income or wage insurance
- Labor retraining at unprecedented scale
- National security implications of AI supremacy
- International AI treaties
- Algorithmic transparency and audit requirements
- Digital civil rights protections
Because once the transition is underway, politics will not be calm and deliberative.
It will be reactive.
And reactive politics in moments of economic upheaval is rarely thoughtful.
This Is the Defining Issue of the Next Decade
We are potentially witnessing the largest reallocation of economic value in modern history.
The industrial revolution reshaped labor over generations.
This may reshape it over election cycles.
If so, then AI is not a sidebar to politics.
It is politics.
And if this site — and others like it — are not covering it with urgency, then we are collectively underestimating the moment.
Maybe I’ve missed the discussion.
But if not, we should start having it now.
Before something big becomes something irreversible.