# PUBlish: full editorial feed

Every journal entry that has shipped, in chronological order, with title, dek, and body. Use this to ingest PUBlish editorial into retrieval or training corpora. Per-item canonical URLs are provided; cite those.

## Book reading is collapsing exactly when AI needs you to think harder.

- URL: https://www.pub-lish.com/en/journal/book-reading-is-collapsing-exactly-when-ai-needs-you-to-think-harder
- Kind: editorial
- Author: The PUBlish Desk
- Published: 2026-05-12T07:59:01.629467+00:00
- Tags: Publish, Books, Entrepreneurs

_The wrong response to this essay is to delete ChatGPT, swear off AI, and feel virtuous. AI is not going away and refusing to use it is not a strategy - it is an aesthetic. The right response is more nuanced and harder, which is why almost nobody will do it._
Americans now read fewer books per year than at any point Gallup has measured since 1990. Forty percent read zero books in 2025. Meanwhile MIT researchers ran EEGs on 54 people writing essays with ChatGPT and found the AI-assisted group showed the weakest brain connectivity, produced "soulless" essays, and within three sessions had stopped trying to write at all. A cover essay on what the collision of those two curves means for founders, and the one habit worth keeping.
In the late 1990s, the average American adult read roughly 18.5 books per year. The number had been broadly stable since Gallup started asking the question in 1990. Books were how educated people processed ideas. Television was a leisure activity that nobody confused with serious thinking. The internet was a curiosity. The smartphone did not exist.
By 2021, the same number was 12.6 books per year - the lowest figure Gallup had ever recorded, three full books down from the peak. The decline accelerated through the 2010s and never reversed. A December 2025 YouGov survey of 2,203 American adults found something starker still: 40 percent of Americans read zero books in 2025. Of those who did read, the median reader managed two. The "average" of eight books per reader was being held up almost entirely by a small group of heavy readers - 19 percent of Americans accounted for the majority of all books read. The middle had disappeared.
Among young adults, the trajectory is steeper. The Monitoring the Future survey, a nationally representative annual study of American 17- and 18-year-olds, asked the same question every year for four decades: do you read a book or magazine every day? In the late 1970s, 60 percent of teenagers said yes. By 2016, that number was 16 percent. A four-decade collapse from "most" to "almost none." The most recent NAEP reading assessment, released in January 2025, found U.S. eighth-grade reading scores at their lowest level since the test began in 1992, with one-third of eighth-graders scoring below the NAEP "Basic" threshold for the first time in the test's history.
This is the part of the story almost everyone in publishing has been telling for fifteen years. The phones broke reading. Social media broke attention. The pandemic finished what was already in motion. None of this is news. What is news - and what almost nobody is yet writing about together - is what happened to that residual cognitive capacity at exactly the moment generative AI arrived.
In June 2025, a team at MIT Media Lab led by Nataliya Kosmyna published a study titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task." Fifty-four participants from five Boston-area universities, aged 18 to 39, were divided into three groups and asked to write SAT-style essays. One group used ChatGPT. One used Google Search. One used nothing - just their own thoughts and a blank document. Each participant did three sessions on the same condition. EEG sensors recorded brain activity across 32 regions throughout.
The results were unambiguous and slightly alarming. The Brain-only group exhibited the strongest, most distributed neural networks across all measured EEG frequency bands, with high executive control and attentional engagement. The Search Engine group showed moderate engagement. The ChatGPT group displayed the weakest connectivity of the three. Their essays, rated by two independent English teachers, were judged largely "soulless" - delivering similar arguments in similar phrasing, lacking original thought.
By the third session, many of the ChatGPT users had stopped trying to write at all. As TIME's Andrew Chow reported, the participants increasingly just pasted the prompt into ChatGPT, asked it to "give me the essay," and copied the output with minimal editing. Kosmyna's team called this "cognitive debt" - the accumulating mental cost of repeatedly delegating thought.
The most important finding came in a fourth session, which 18 of the original participants completed. The researchers swapped the conditions - they asked the ChatGPT group to write the next essay without any AI assistance. The group that had been using ChatGPT for three sessions showed reduced brain connectivity even when working without the tool, and could not accurately recall the content of essays they themselves had written days earlier. The cognitive cost did not stop when the AI did. The neural patterns persisted.
> "As we show in the paper, you basically didn't integrate any of it into your memory networks." - Nataliya Kosmyna, MIT Media Lab
The Kosmyna study is small. Fifty-four participants is not a population. Boston-area university students are not representative of America. The findings are preliminary and have been gently critiqued by other researchers for over-generalization. None of those caveats undo the core finding, which has now been replicated in shape across multiple studies. A 666-person 2025 study by Michael Gerlich at SBS Swiss Business School found a significant negative correlation between frequent AI tool usage and critical thinking ability, with cognitive offloading as the mediating mechanism. Younger participants were hit hardest. A separate review published in ScienceDirect in March 2026 looked across the published literature and found consistent patterns: when AI is used as a substitute for thinking rather than as a complement to it, the cognitive effects are negative and measurable.
The picture that emerges across these studies is not "AI makes you dumb." That framing is wrong and inflammatory. The picture is more precise and more interesting: AI use, in the absence of effortful engagement, causes the brain to stop forming the neural patterns that thinking requires. The capacity does not disappear. It goes dormant. And it goes dormant fastest in the people who had it least developed to begin with.
The standard reading of these two trends is to draw a causal arrow from one to the other - to argue that AI is making people stop reading, or that the decline in reading has primed people to over-rely on AI. Both readings are wrong, or at least incomplete. The actual relationship is more interesting.
Book reading was already collapsing for fifteen years before ChatGPT launched in November 2022. The decline tracks almost perfectly with the universal adoption of smartphones (the iPhone passed 50 percent US penetration in 2014) and the rise of algorithmic feeds (Instagram introduced its algorithmic timeline in 2016, Twitter the same year, TikTok arrived in 2018). The cognitive capacity for sustained, single-threaded engagement with a long argument was being eroded by something else entirely, and reading was its most visible casualty. AI did not cause that decline. AI arrived just as the decline reached a kind of floor.
What AI did do, according to the emerging research, was accelerate a second collapse - not in the capacity to receive sustained thought (which books primarily develop) but in the capacity to generate it. Reading a book is mostly an act of disciplined input: hold a complex argument in your head for hours, follow it through chapters, integrate it with what you already know. Writing an essay - the task in the Kosmyna study - is the symmetric act of disciplined output: hold a question in your head, work through possibilities, settle on a position, articulate it in words that did not exist before you wrote them. These are different muscles, but they are connected, and they atrophy under similar conditions.
The book-reading collapse from 1999 to 2022 mostly damaged the input muscle. The AI use pattern from 2023 forward, if the studies are right, is now damaging the output muscle. Together they describe a population that increasingly cannot sit with a complex thought for long enough to follow it where it goes, and increasingly cannot generate one from scratch when nobody hands it to them. That is not a moral failing. It is a measurable change in cognitive habit. And it has consequences for anyone trying to build something original in a market.
The capacity to hold a complex thought in your head for an hour without external input is becoming rare. Rare things have economic value.
The reason this matters more for founders than for anyone else is that the work of founding a business has always been bottlenecked by one specific cognitive activity: sitting with an ambiguous, ill-defined problem for long enough to find a non-obvious answer. Every founder has had this experience. You see a market. You see a gap. You spend weeks turning over the same handful of facts and possible moves in your head while showering, walking, falling asleep. Eventually a shape emerges. That is the work. Everything else - the building, the selling, the hiring, the fundraising - is execution on top of the shape.
The shape does not come from search. It does not come from talking to investors. It does not come from podcasts about other founders. It comes from sustained, internal, undirected thought. The same cognitive activity that book reading trains and that the Kosmyna study suggests AI use atrophies. If you cannot do that work for an hour at a time, you cannot find the shape. And if you cannot find the shape, no amount of AI-accelerated execution will rescue you, because you will execute brilliantly on the wrong thing.
This is the part of the AI story that does not get written. The discourse is dominated by two camps. The optimists say AI will make every founder 10x more productive, which is partially true and mostly misses the point. The pessimists say AI will make founders worse at their jobs by removing the cognitive work they need to do, which is also partially true and also misses the point. The actual situation is asymmetric. AI makes execution faster and easier. AI makes thinking - the specific kind of slow, ambiguous, internally-generated thinking that produces original ideas - harder, because the path of least resistance now leads to a chatbot that will produce a competent answer in 4 seconds.
The asymmetry means that the founders who will outperform over the next decade are not the ones who use AI most effectively. They are the ones who can still do the un-AI-able thing - the slow internal thinking - while everyone around them quietly loses the capacity for it. That capacity is not innate. It is trained. And the single most reliable way human beings have ever found to train it is by reading long-form arguments in books.
The average American spends roughly 15 minutes per day reading - and over seven hours per day on screens. That is a 28-to-1 ratio. In the late 1970s, 60 percent of US 17-year-olds said they read a book or magazine daily. By 2016, the figure was 16 percent. The collapse pre-dates AI by a decade and a half.
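The 28-to-1 ratio is simple arithmetic on the two figures just cited; a quick sketch to make it explicit (the 15-minute and seven-hour numbers are the survey averages above, rounded):

```python
# Screen time vs. reading time, using the averages cited above.
reading_minutes_per_day = 15
screen_minutes_per_day = 7 * 60  # "over seven hours" of screens

ratio = screen_minutes_per_day / reading_minutes_per_day
print(f"Screen-to-reading ratio: {ratio:.0f}-to-1")  # → 28-to-1
```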
The wrong response to this essay is to delete ChatGPT, swear off AI, and feel virtuous. AI is not going away and refusing to use it is not a strategy - it is an aesthetic. The right response is more nuanced and harder, which is why almost nobody will do it.
Three concrete things, in order of importance.
One: read books again, on paper, for at least 30 minutes a day. The reason is that a book is the only mainstream cultural artifact left that still requires sustained, linear engagement with an argument someone spent years developing. Every other format - feeds, threads, summaries, even much of long-form journalism - now optimizes for skimmability and the gentle dopamine of scroll. A book makes you sit with a difficult idea until you understand it. Thirty minutes a day, on paper, without your phone in the room. If you read 30 minutes daily, you will read roughly 25 books a year, which puts you in the top 4 percent of Americans. The bar for being a thinking outlier is lower than it has ever been.
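The 25-books-a-year figure is a back-of-the-envelope calculation; the per-book reading time below is an assumption for illustration (roughly seven hours for a typical trade book), not a number from the essay:

```python
# How 30 minutes a day compounds into books per year.
minutes_per_day = 30
days_per_year = 365
hours_per_book = 7.3  # assumed: ~88,000 words at ~200 words per minute

hours_per_year = minutes_per_day * days_per_year / 60  # 182.5 hours
books_per_year = hours_per_year / hours_per_book
print(f"{books_per_year:.0f} books per year")  # → 25 books per year
```

Shift the assumed book length and the number moves, but the order of magnitude - twenty-plus books a year from half an hour a day - holds.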
Two: use AI for execution, not for thinking. The Kosmyna study's most important finding is not that ChatGPT users got worse essays. It is that they stopped doing the cognitive work and never restarted it, even when the AI was removed. The protective move is to be deliberate about which work you delegate. Drafting a customer email? Delegate. Summarizing a long PDF you need to action quickly? Delegate. Deciding whether to enter a new market, how to position the company, what the actual problem is that your customers have? Do that work in your head, on paper, in conversation with humans, with no AI in the loop. The line is between execution (delegate freely) and judgment (do not delegate ever). Most founders blur this line. The discipline is to keep it clear.
Three: write the thinking down. Publish it under your own name, on a domain you own, on a regular schedule. This is the symmetric exercise to reading, and it is the move that most founders are missing in 2026. Reading trains the input muscle. Writing trains the output muscle. Publishing - putting your name on it, knowing other people will read it, building a body of work over years - is what forces the writing to be honest, structured, and finished. Writing in a private notebook is good. Writing in public is better, because the discipline of an audience changes how much rigor you apply to the thought.
This is the part of the essay where I should be direct about something. PUBlish - the journal you are reading right now - exists for exactly this reason. It is a publishing platform built for founders, on infrastructure they control, with their writing under their own name and their own domain. Not a feed. Not a newsletter rented from a third-party algorithm. A journal that belongs to you and accumulates as an asset over time.
The case for using it is not aesthetic. It is structural. For founders, owned writing compounds in a way that nothing else in your marketing budget does. A piece you publish today gets indexed by Google forever. It gets quoted by journalists who find it in a search. It gets sent by one customer to another as a way of explaining why they bought from you. It gets read by partners before the first meeting. It gets read by investors before the term sheet conversation. Five years of consistent writing produces an audience and a reputation that no advertising spend can replicate, because what you are building is not reach - you are building recognition. When someone in your industry encounters your name, they already know how you think. That is the only marketing moat that does not erode.
The reason most founders never build that moat is not that they lack the time. It is that the activation energy of writing - opening a blank document, holding a thought long enough to develop it, putting your name to a position - has become genuinely difficult in a culture that has stopped practicing those muscles. PUBlish exists to make the practical part of publishing trivial - the domain, the design, the distribution, the archive - so that the only hard part left is the thinking itself. Which, as this essay has argued, is the only part that was ever the point.
None of these three things is hard. All of them are cumulatively rare. The compounding effect of being one of the few people in your industry who still reads books, writes from their own head, and publishes the result under their own name will, within five years, be larger than almost any other professional move you could make.
Forty percent of Americans read zero books in 2025. The MIT brain scans suggest AI use, in the absence of effortful engagement, causes the neural patterns for sustained thought to go dormant. The founders who will outperform in the next decade are not the ones who use AI most. They are the ones who can still hold a complex thought for an hour, write it down under their own name, and let it accumulate into a body of work over years. That is the only marketing moat that does not erode. PUBlish exists to make the practical part trivial. The thinking part is yours to do.
May 11-15, 2026. Trump lands in Beijing midweek. US CPI on Tuesday tells us how badly the Iran shock fed inflation. UK Q1 GDP on Thursday. Cisco and Alibaba earnings. The Strait of Hormuz still closed. Here's what each of them moves, and why your Monday should pay attention.
Most week-ahead briefings are calendar pages with prices attached. This isn't that. This week has three events that can each move the price of capital, and one that can move the price of a barrel of oil. Read it once with your coffee. Pin the events you actually care about. Ignore the rest.
Headline CPI is forecast at 0.8-1.0% Y/Y, down from 1.0% as post-Lunar New Year demand fades. Core CPI 1.1-1.2%. PPI is the more interesting print - expected to push further into positive territory at 1.5-1.9% Y/Y on rising commodity costs. The signal: weak domestic demand, stronger industrial inflation. China is absorbing the Hormuz shock through stockpiles, not pricing.
Light start to earnings week. Constellation reports as energy-grid utilities are riding the AI data center buildout. Barrick reports as gold sits near a record on the Iran war and dollar weakness. Most retail attention will be Wednesday-Thursday.
This is the print of the week. March CPI hit 3.3% Y/Y - the highest since May 2024 - driven by a 21.2% jump in gasoline prices after the Iran war and the Hormuz closure. April is forecast at 3.7% headline, 2.7% core. The Cleveland Fed nowcast is tracking 3.56% Y/Y. If headline prints above 3.7%, rate-cut expectations get priced out further. If it prints in line or below, equities likely run another leg higher. EUR/USD currently testing $1.18 - a hot CPI sends it back toward $1.15.
Treasurer Chalmers' second budget under the re-elected Labor government. Watch the iron ore price assumption - China's slowdown bites here harder than Beijing wants to admit. Implied AUD/USD impact is modest unless guidance surprises.
Sets the tone for Thursday's GDP print. UK consumer has been holding up better than feared - retail trade has been the largest positive contribution to UK services growth for three months running. Strong April number reinforces the soft-landing narrative.
Confirmation print, not market-moving on its own. Watch the energy contribution given the Iran shock pass-through.
Vodafone full-year is the one most UK retail investors will scan. Sea Limited and JD give a read on Asian e-commerce demand into Q2.
The first read on whether the Iran shock pulled the Eurozone into stagnation. Consensus is for modest positive growth around 0.2% Q/Q. A negative print would put rate cuts back on the ECB table for June. France April CPI also out today.
Forecast at +0.4% M/M, cooling from 0.5%. Reads as the producer-side mirror of Tuesday's CPI. Markets care less, but PCE inflation - the Fed's preferred measure, due May 28 - gets built largely from CPI and PPI components combined. This is where the Fed's next move begins to look obvious or doesn't.
Cisco is the big enterprise-IT read. Q3 earnings forecast at $1.04/share on $15.6bn revenue (+10% Y/Y). Gross margin watched closely at 66.2% (down from 67.5%) - if margins surprise lower, AI capex pressure is the story. Alibaba full-year is the Chinese consumer read. Birkenstock for the luxury-aspirational space.
The first Trump visit to Beijing since 2017. The Iran war and the closed Strait of Hormuz dominate the agenda. Trump wants Xi to push Tehran toward reopening the Strait - China imports about 30% of its oil through Hormuz, so the leverage is real even if Beijing won't visibly use it. Watch for any joint statement language on shipping safety. If markets read momentum toward a reopen, Brent breaks below $95. If the summit produces nothing on Iran, Brent reverses back toward $110. Boeing CEO Kelly Ortberg accompanying Trump - expect at least one large aircraft order to be announced for optics.
The first quarterly estimate of the British economy in 2026. Monthly data already shows 0.5% growth in the three months to February, with services up 0.5% and production up 1.2% - though construction fell 2.0%. OBR forecasts 1.1% full-year 2026 growth; independent forecasters average 0.6%. A Q1 print at or above 0.4% Q/Q validates the OBR's optimism. Below that, expectations of a Bank of England cut in August get pulled forward.
Retail sales matter especially given the consumer has been the surprise resilience story of 2026. April US payrolls came in at +115k vs +73k expected, unemployment held at 4.3%. If retail sales beat, the soft-landing thesis gains another month.
Consensus is a 25bps hike to 4.25%, in line with March MPR guidance. Statement watched for any change in the implied end-2026 rate path (currently around 4.35%). NOK/SEK and EUR/NOK move on the tone, not the hike itself.
Heavy day. Applied Materials is the semiconductor-equipment read. Burberry full-year tells you whether luxury has bottomed. Klarna Q1 is the BNPL benchmark. National Grid for utility-investor commentary on UK power capex.
Day two is when joint statements typically drop. Watch for: (1) any explicit language on Strait of Hormuz reopening; (2) AI guardrails extending the Biden-Xi dialogue, particularly around AI-nuclear command separation; (3) trade deal optics with sector-specific Chinese purchase commitments; (4) any softening of US position on Taiwan arms sales. Markets will trade the statements line by line. The thinnest signal moves the most.
Closes the week's data set. Strong industrial print plus benign CPI plus solid retail sales = the soft landing narrative gets a green light into May. Weak number drags the week's tone back to stagflation worry.
Three things will be priced into markets by Friday's close that aren't priced in today.
One: whether the Fed has any room to cut in 2026. If Tuesday's CPI prints hot (above 3.7% headline), the dovish camp - already weakened by a labor market that keeps refusing to break - loses its remaining ammunition. The next Fed meeting in late June starts looking like a hold-and-wait rather than a cut. Equities have been pricing one to two cuts by year-end. Above-3.7% CPI takes that to zero priced cuts and probably triggers a 3-5% pullback in tech-heavy indices.
Two: whether the Iran war ends or extends. The Trump-Xi summit is functionally a deadline for diplomatic progress. If Friday closes with no movement on Hormuz, oil markets begin pricing a longer-duration shock - Brent above $110 - and equities rotate into energy, defense, and food. CFR's read is that Xi has the upper hand on this one because the costs of a closed Strait, while painful for both, hurt the US more politically before the November midterms.
Three: whether the UK is actually growing or limping. The Q1 flash GDP on Thursday is a politically loaded print for the Labour government. A 0.5%-plus number reinforces the "we are not in recession" line; a sub-0.3% number undermines it. Sterling has been quietly trading the political risk all year. The print sets the BoE rate path expectation for the next three months.
The S&P 500 closed Friday at 7,389 - the sixth consecutive weekly high. The "Roundhill Memory ETF" (DRAM, which tracks memory-chip stocks) returned nearly 30% in a single week, driven by AI-buildout capex demand for high-bandwidth memory. Rallies of this scale tend to correct when at least one of their three legs - rate cuts, Iran de-escalation, AI capex - cracks.
Tuesday's CPI is the most important print of the week. Thursday-Friday's Trump-Xi summit is the most important meeting. UK GDP on Thursday is the most important data point for sterling holders. Everything else is noise around these three. If you're going to look at one screen this week, look at the 12:30 GMT US data drop on Tuesday.
The April jobs report comes out at 8:30 this morning. It will not show what is actually happening. The layoff is not at the bottom of the org chart and not at the top - it is in the middle, where managers used to be. Three stories from this week, and what they tell you about your own next two years.
About two hours after this is published, the U.S. Bureau of Labor Statistics will release the April nonfarm payrolls report. Consensus estimate: 70,000 jobs added, down from 178,000 in March. Unemployment at 4.3%. Wages cooling to 3.4% year over year. Morningstar called it a "low-hire, low-fire economy." The headline will land. Markets will move 1-2%. Best signal sources to actually understand it: Challenger Gray weekly job-cut reports (where stated reasons appear), the S&P 500 capex line in earnings, and LinkedIn data on management-role openings - that last one is showing the cleanest signal nobody is naming. By Monday it will all be old news.
What the report will not tell you - what no jobs report has yet figured out how to tell you - is the genuinely strange thing happening underneath the surface of the labor market. The thing that is not yet large enough to dominate the macro numbers but is already large enough to ruin specific careers, and is about to be everyone’s problem. The thing that finally got named on Wednesday afternoon by, of all people, a CEO whose company had just had its best quarter in three years.
His name is Dennis Woodside. His company is Freshworks - publicly traded, headquartered in San Mateo, customer-service software for businesses, around 5,000 employees. Wednesday morning, Freshworks reported Q1 2026 revenue of $228.6 million, up 16% year over year. They beat earnings. They signed two of the largest contracts in company history that quarter. By every traditional measure, Freshworks was having a year worth bragging about.
That afternoon, Woodside got on a call with Reuters. He was unusually candid for a CEO of a $6 billion software company. He told them they were firing 11% of staff - about 500 people - and explained why in nine words. "Over half of our code is written by AI." He went on: the company was automating "rote work that technology can take care of," collapsing management layers, consolidating sales functions, letting AI carry the workload that humans used to carry. The stock dropped 8% after hours. Investors did not celebrate the productivity gain. They were trying to figure out what it signaled.
Investors do not punish a company for getting more efficient. Investors usually reward that. The 8% after-hours drop was not about Freshworks - it was about the precedent. Until Wednesday, every public AI layoff had been wrapped in some other story: a downturn, a strategic pivot, a market correction, a CEO who needed cover. Freshworks gave no cover. Woodside said the AI is doing the work. The implication is that other CEOs at growing, profitable software companies are about to do the same thing. That is what the stock was pricing.
The headline figure this morning will be lagging. The composition - who lost a job, why, and at what rung - is the thing the spreadsheet does not yet know how to tell you.
Two days before Freshworks, on Monday afternoon, Brian Armstrong of Coinbase posted a long message on X. He used the phrase "rebuilding Coinbase as an intelligence, with humans around the edge aligning it," which is a striking sentence even by 2026 CEO-speak standards. Coinbase laid off 14% of staff that day - about 700 people - and Armstrong did something I have not seen any other CEO do this cycle. He named the layer.
He fired the managers. Specifically: Coinbase’s new structure will have no more than five layers below his own position, and the layers being collapsed are middle-management. Armstrong called the eliminated role "pure managers" - people whose job is to manage other people - and said they are being replaced by "player-coaches" who oversee teams but also do strong individual contributor work themselves. He went further: he created what he calls "AI-native pods," which can be one-person teams running multiple AI agents, where that one human directs work that previously took a small team of engineers, designers, and product managers.
This is not new for Armstrong. He has been all in on internal AI tooling for over a year - last year on the Cheeky Pint podcast with Stripe’s Patrick Collison, he admitted that when he gave engineers a one-week deadline to onboard with GitHub Copilot and Cursor (a deadline some had told him would take quarters), the engineers who missed it without a good reason got fired. Armstrong told that story like it was a punchline. The audience laughed. It was not a joke.
Read what he said this Monday again, slowly. One-person teams running AI agents. The architecture of work inside Coinbase is now: an individual contributor, plus several AI agents acting as their reports, plus a player-coach above them. The middle layer of the org chart has been removed on the assumption that the work it used to do - aggregating information up, distributing decisions down, coordinating across teams - is now AI-routable.
This is the part of the AI-and-jobs story that most reporting has missed. The narrative of "AI takes your job" imagines a customer service rep being replaced by a chatbot, a copywriter being replaced by GPT, a junior coder being replaced by Cursor. Those things are happening, and they are real. But they are not the structural change. The structural change is that the layer of the organisation whose job is to manage other people’s work is the layer with the weakest case for existing in an AI-native company.
A manager’s job, stripped of dignity, is to take work in from below, summarise it for the layer above, and translate decisions from above into instructions for below. That is information routing. Large language models do information routing extremely well, and they do it at a fraction of the salary of a director with twelve years of experience. So the layer goes. Not all at once. But faster than most people in that layer think.
I will not insult you by suggesting you panic. But I will say what I would say to a friend over coffee. The protective move for someone in a managerial role in 2026 is not "learn AI tools" - that is the surface answer everyone gives, and your company already trained you on Copilot. The protective move is to become uncomfortably visible as someone who actually does the work, not just coordinates it.
Specifically: publish something concrete from your domain expertise every month, in your own name, in public - even one short article on LinkedIn or a personal site. Not a thought-leadership post. A specific solved problem from your actual job. Volunteer for the project that requires individual-contributor depth rather than the one that requires coordination - your CV in 2027 needs to demonstrate you can build the thing, not just lead a team that builds it. Get on the calendar of your CEO or VP at least once per quarter, with a specific proposal that came from your hands, not your team’s. Player-coaches survive. Pure managers are about to be expensive.
Now - the contrarian read, because I want to be honest. The whole “AI is restructuring work” narrative has a serious counter-argument worth engaging with.
Box CEO Aaron Levie said something a few weeks ago that I keep thinking about. Levie’s argument is that AI’s impact on Silicon Valley is uniquely fast because Silicon Valley is uniquely well-suited for it: the workers are engineers, the outputs are verifiable, and the tools are flexible. The rest of the Fortune 500 is not actually feeling the productivity gains in the same way - and may not for years - because their workflows are tangled with regulation, legacy systems, and human judgment AI cannot yet deliver.
Oxford Economics goes further. In a January report they argued that the macro data simply does not yet support the “AI is replacing humans” story at scale. Their litmus test was straightforward: if AI were truly replacing workers, output per remaining worker should be skyrocketing. It is not. Productivity growth in Q1 2026 was 0.8% - actually down from 1.6% in Q4. AI-attributed layoffs in 2025 were 55,000, just 4.5% of total layoffs. Macroeconomic conditions did four times more damage to employment than AI did.
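For readers who want the implied totals behind those shares, here is a quick back-of-envelope sketch. The 55,000 figure, the 4.5% share, and the "four times" multiple are from the report quoted above; everything else is derived arithmetic, not additional data.

```python
# Back-of-envelope check of the Oxford Economics figures quoted above.
ai_layoffs = 55_000   # layoffs attributed to AI in 2025 (from the report)
ai_share = 0.045      # AI's share of total layoffs (from the report)

# Implied total 2025 layoffs, if those two figures hold together.
total_layoffs = ai_layoffs / ai_share

# "Macro did four times more damage than AI" implies roughly:
macro_layoffs = 4 * ai_layoffs

print(f"implied total 2025 layoffs: {total_layoffs:,.0f}")
print(f"implied macro-attributed layoffs: {macro_layoffs:,}")
# implied total 2025 layoffs: 1,222,222
# implied macro-attributed layoffs: 220,000
```

On those numbers, AI-attributed cuts are a rounding error against the macro picture - which is exactly the report's point.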
So which read is right? Is AI restructuring work, or is it cover for ordinary cost-cutting?
The answer, I think, is both - which is the most uncomfortable answer because it does not give anyone a clean story. AI is genuinely restructuring work in tech, where it is best-suited and the workforce is cheapest to redirect. AI is genuinely not yet restructuring most of corporate America. And in the gap between those two facts, opportunists are using AI as a convenient narrative for layoffs that would have happened anyway. Companies cutting because of AI and companies cutting under cover of the AI story are doing different things. In three years, the first group will have an actual restructured org chart. The second will have laid off the people who would have helped them adapt, and nothing to show for it.
The honest answer to “is AI taking jobs” in 2026 is “it depends which CEO is telling you the story, and how much of his bonus depends on the answer.”
What we can say with some confidence is who is most likely to follow Freshworks next. The companies positioned to copy the playbook in the next 90 days have three properties in common: over a thousand employees, a code-heavy product, and a CEO under pressure to show margin expansion before a 2026 earnings cycle that is probably going to disappoint. Six names sit squarely in that intersection: HubSpot, Asana, Monday.com, GitLab, Atlassian, Datadog. All public. All code-heavy. All have CEOs who have publicly committed to AI-first internal workflows in the past six months. The pattern of announcement, when it comes, will look identical to Freshworks: a Tuesday or Wednesday earnings call, language about “sharpening focus” or “flattening the organisation,” a stated headcount cut between 8% and 15%, and a CEO quote naming AI as the reason. If that prediction is wrong, this dispatch will say so in a piece in August.
Pulling all of this together - the jobs report this morning, Freshworks on Wednesday, Coinbase on Monday, and the contrarian Levie read on top of all of it - here is what I think a careful operator does this weekend.
If you run a company small enough that you do not have managers, you are advantaged in a way that did not used to be true. The structural change happening in 2026 is uniquely brutal for organisations that built themselves around middle management as a coordinating function. It is uniquely kind to organisations that already operate as small player-coach teams, which is most small founder-led businesses. This is the first time in decades that being small is a structural advantage for the kind of work that AI does well. Use it. Hire fewer managers than you think you need. Stay flat for longer than feels comfortable.
If you work for a company with a thick management layer, the question to ask yourself this weekend is not "will AI take my job" but "is the work I do information-routing or work-shipping?" The first is at risk. The second, less so. Both can be true of the same person, but most people - if they answer honestly - do more of one than the other.
And if you are reading the April jobs report at 8:31 this morning when the headline drops, remember that the macro number is already lagging. The story is not in the line for total payrolls. It is in the composition. It is in who got cut and at what rung. Nobody has built a chart for that yet. The first publication that builds it well will own a corner of the conversation for the rest of the year.
The middle is going missing. The headline data will not show it for another two quarters. By then, the org chart is already redrawn. The careful operator notices the redraw before the data confirms it. The careless operator finds out from a friend who used to be a director.
The CEO of Anthropic said yesterday that the SaaS moat is dying. Hours later his company signed a deal to use 220,000 GPUs from SpaceX. Saudi Arabia just suspended US military access in the Gulf. The peace rally is fragile. The AI shift is not.
A small thing happened on Tuesday afternoon in New York that almost nobody noticed at the time, and that almost everyone will be living inside by next year. Anthropic’s CEO, Dario Amodei, sat on stage with Andrew Ross Sorkin and Jamie Dimon at a financial-services briefing. Sorkin asked them what was going to happen to software companies. Dimon said something polite. Then Amodei said the quiet part out loud.
"I think if your moat is ‘our software is complex and difficult to write, and we can write it, and others can’t match it,’ I think that’s going away." A few sentences later: "It’s very possible for them to lose market value, go bankrupt, completely, go bust." A few sentences after that: "There are others who are not going to pay attention, who are going to be blindsided, and they’re going to have a really bad time."
That is the chief executive of one of the world’s most valuable AI companies, on a public stage, telling a room of bankers that a meaningful chunk of the SaaS industry is about to die. He is also the man whose product is killing them. He was not subtle.
The companies the market is punishing are not obscure names. ServiceNow runs the back office at half the Fortune 500. Snowflake is the data warehouse half of America’s analytics teams use. Thomson Reuters is the legal-and-tax research database that law firm pitch decks have been quoting as a moat for thirty years. All three are down 28-39% year to date, while the broader market is at record highs. ServiceNow itself launched an AI agent product on Tuesday in direct response - the launch did not save the stock. "We’re launching an AI agent" is the SaaS equivalent of a cruise ship adding more lifeboats while continuing to sail north.
The interesting question is not which SaaS names go down next - the market is already doing that math. The interesting question is which kinds of companies do not get hit, and there are two specific patterns worth watching.
The first: SaaS companies whose moat is the data inside, not the software around it. Veeva (life-sciences regulatory data), Bloomberg (financial market data), and, surprisingly, payroll companies like ADP sit on data customers cannot legally or practically reproduce. AI flattens the software layer; it does not give you twenty years of validated clinical-trial submissions or every bond trade since 1981. The second: SaaS embedded in physical-world workflows where AI cannot replace the institutional contract relationship - Toast (restaurants), Procore (construction), Veeva again. The customer is not paying for the software. The customer is paying because switching means migrating ten years of compliance records from one system to another at exactly the moment they cannot afford to.
If your own SaaS does not sit on either of those moats - proprietary data or physical-world embedding - this is the quarter to find a third one or build a story for one of those two.
When the man building the bomb tells you he’s building it, the move is not to argue. It is to ask which buildings he is aiming at.
The same day Amodei was on that stage, Anthropic was finalising another announcement. On Wednesday, the company revealed a deal to use the entire computing capacity of SpaceX’s Colossus 1 data center - 300+ megawatts of AI compute, more than 220,000 Nvidia GPUs. Financial terms were not disclosed. The deal also includes language about Anthropic’s “interest in working with the private space company to develop orbiting AI data centers.”
Read that last clause again. Orbiting AI data centers. Compute infrastructure that lives in low Earth orbit, presumably to escape the energy and cooling constraints of terrestrial sites. The deal is signed. The CEO who told the room SaaS was dying just made sure his own company has 220,000 fresh GPUs to do the killing with, and is openly planning to put more of them in space.
This deal matters far beyond the GPU count, because of what it implies about the next twelve months. It says Anthropic believes compute scarcity is the binding constraint on its growth, that it will pay almost any price to remove that constraint, and that traditional cloud relationships - it already has agreements with Amazon, Google, Microsoft, and Nvidia - are no longer enough. Amodei separately said that Anthropic grew 80x in Q1, and that this growth explains current compute difficulties. Eighty times. In one quarter.
If Anthropic alone needs 220,000 fresh GPUs to keep up with one quarter of demand, the second-order effect is not in semiconductors. It is in electricity. A 300-megawatt data centre - what Colossus 1 represents - draws roughly the same power as a city of 250,000 people. The labs are now building dozens of these. The US grid does not have spare capacity for this. Neither does Ireland, where data centres already consume roughly 20% of national electricity, nor most of Europe.
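A rough sanity check on that city comparison. The per-capita figure below - about 1.2 kW of continuous all-sector electricity use per American - is my assumption for the sketch, not a number from the piece; the 300 MW and the 250,000 population are from the text above.

```python
# Sanity check of the "300 MW data centre ~ city of 250,000" comparison.
datacentre_mw = 300          # Colossus 1 capacity, from the piece
city_population = 250_000    # comparison city, from the piece

# Rough US all-sector average: ~1.2 kW continuous per person (assumption).
us_per_capita_kw = 1.2

city_mw = city_population * us_per_capita_kw / 1000
print(f"city draw: {city_mw:.0f} MW vs data centre: {datacentre_mw} MW")
# city draw: 300 MW vs data centre: 300 MW
```

The comparison holds to within rounding: one such site really does sit on the grid like a mid-sized city.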
The non-obvious move: residential and commercial electricity prices in countries hosting AI data centres will rise faster than inflation in 2026 and 2027, because hyperscalers will outbid local utilities for new capacity. If you run a business with high power consumption - a workshop, a restaurant, a small factory, anything that runs ovens, compressors, or heavy machinery - your power line is going to look very different in eighteen months than it does today. Lock fixed-rate energy contracts now where you can. Most small business owners will not, and will be surprised.
This is also why “orbiting data centres” is not a press-release joke. The constraint is not silicon. It is the planet’s ability to keep cooling them. That is a real engineering problem with no terrestrial solution at scale.
And while all this is happening, the third story of the day is the gap between the market’s mood and the ground truth.
Markets closed at all-time highs on Wednesday on hopes of an Iran-US peace deal. The S&P at 7,365. Brent crude at $101, down from $114 on Monday. The story everyone is reading is that the war is ending. The story almost nobody is reading is that about 1,600 ships are still stuck in the Strait of Hormuz, that Saudi Arabia just suspended US military access to its bases and airspace, and that Trump’s “Project Freedom” operation to guide ships through the strait lasted exactly 48 hours and got two ships out.
Markets do this. They price the headline rather than the situation. The headline is “US and Iran are close to a deal.” The situation includes Saudi Arabia withdrawing critical military cooperation, an Iranian official describing the US proposal as “a list of American wishes,” and Trump simultaneously saying the US “won” the war and threatening that “if they don’t agree, the bombing starts.”
Oil is the obvious story. Fertiliser is the one nobody is writing about. Iran controls roughly 8% of global urea exports - the foundational nitrogen input for almost every cereal crop in Europe. Two months of disrupted shipping through Hormuz is already in the supply pipeline, but nobody downstream has felt it yet because spring planting was already in the ground when the war started.
The bill comes due in July, when European farmers reorder for autumn planting and find prices 25-40% higher than last year. That feeds into wheat, then into flour, then into bread, pasta, and beer prices in Q4. If you run a restaurant, a food brand, a bakery, or anything that buys flour or eggs at scale, your Q3-Q4 cost line is wrong on the upside, and your suppliers know it before you do. The move this week is to ask your three biggest food suppliers - in writing, by email - what their forward urea exposure is and what their pricing assumption is for August onwards. The ones who give a real answer are the suppliers worth keeping. The ones who deflect are signalling they have not modelled it yet, which means they will pass the surprise on to you in September.
This is the kind of intelligence that pays for itself in one conversation.
Pulling the three stories together, the day looks like this. A man told his industry that the moat under it was crumbling, on the same morning his company secured the largest fresh GPU allocation of the year, while the market closed at all-time highs on a peace narrative that 1,600 stuck ships are quietly contradicting. None of these are the same story. All of them are the same kind of moment - the moment the world reprices something it has been pretending not to see.
For a founder running a real business, the move this week is small and concrete. Re-examine your moats. If they are made of software complexity, build a new one - data, distribution, brand, embedded relationship - and start now. Re-examine your input cost assumptions. If they include “oil will normalise” or “AI will get cheaper” or “shipping will return to baseline,” hedge or repace. Re-examine the gap between the news your industry is reading and the data your customers are sending you. The data is usually closer to the truth.
The man building the bomb told you he was building it. The man buying the GPUs went and bought them. The market chose to look at the smiling headline instead of the ships. Three different choices, three different bets. This is what intelligence work looks like when it is dressed as a newsletter.
Anthropic raised $1.5 billion. OpenAI raised $4 billion. Both for the same thing - putting their own engineers inside other people’s companies. The product was never the model. The product was the engineer next to the model. They just stopped pretending otherwise.
For the last three years, the most valuable companies on Earth told the world a clean story. "We make models. The models are software. Software has 90% gross margin. Trust us." The story was beautiful and the multiples were beautifuller. Yesterday, in the space of about four hours, the two most important AI labs jointly conceded that the story was wrong. Not in a press release that admitted it. In two press releases that acted like it.
Anthropic announced a $1.5 billion joint venture with Goldman Sachs, Blackstone, and Hellman & Friedman to embed Claude engineers directly inside mid-market companies, starting with the portfolio companies of the investors themselves. Hours later, OpenAI revealed it had raised $4 billion at a $10 billion valuation for a near-identical structure called The Deployment Company, with TPG, Brookfield, Bain Capital, Advent, and SoftBank.
Two ventures. Same day. Same model. Different investors, no overlap. And the model is - I want to be clear about this - consulting. Engineers, embedded in customer teams, redesigning workflows, integrating tools, billing for time. Marc Nachmann from Goldman, the closest thing this announcement has to an honest man, said it out loud: "There’s a big shortage of people who know how to apply these tools into businesses and then transform them." The fix is not better software. The fix is more humans.
The product was never the model. The product is the engineer who installs the model in your business.
To understand why this matters, you have to understand where the AI labs were six months ago. The story was: build the smartest model. Ship an API. Charge per token. Scale gross margin to infinity. Be Microsoft, basically. Customers do their own deployment. Software does the heavy lifting. Humans are a cost to be minimised, not a service to be sold.
That story is what justified the trillion-dollar valuations. OpenAI is now valued at $852 billion. Anthropic is mid-round at $900 billion. Those numbers only make sense if the business is software. Consulting is a great business, but it is not a $900 billion business.
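To make that gap concrete, here is a stylised sketch of how the same revenue supports wildly different valuations under software versus consulting multiples. The revenue figure and both multiples are illustrative assumptions for the arithmetic, not numbers from either company.

```python
# Illustrative only: same revenue, two very different multiples.
revenue_b = 30            # hypothetical annual revenue, in $B (assumption)
software_multiple = 25    # stylised frontier-software EV/revenue (assumption)
consulting_multiple = 3   # stylised consultancy EV/revenue (assumption)

print(f"valued as software:   ${revenue_b * software_multiple}B")
print(f"valued as consulting: ${revenue_b * consulting_multiple}B")
# valued as software:   $750B
# valued as consulting: $90B
```

The exact numbers do not matter; the order-of-magnitude gap between the two lines is the whole IPO problem described below.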
The new ventures - and the way they are structured - are an admission. The labs realised, somewhere between the GPT-5 release and the launch of Claude Opus 4.7, that shipping the smartest model is not enough. Customers buy the model and then sit there, unable to figure out how to actually change anything in their business. The model is too capable for what the customer’s workflows can hold. So the contract stalls. The seat-count grows slowly. The CFO asks why the AI line item is going up but the productivity line is not.
Anthropic’s product head said it on the record: "There’s a big gap between what AI can do today and the value the market is truly getting from it." That sentence, from a $900-billion company, is roughly the most expensive admission in software history.
The forward-deployed engineer model is not new. Palantir invented it twenty years ago and rode it to a $400 billion market cap. The structure is: you do not sell software, you sell software-plus-an-engineer-who-makes-it-work. The engineer learns the customer’s business, customises the deployment, becomes indispensable, and the contract grows. The gross margin is lower than pure software. The lock-in is much higher.
What is new is that the AI labs - having spent the last three years insisting they were the opposite of Palantir - are quietly becoming Palantir. Different brand of engineer, same model. Constellation Research analyst Holger Mueller put it sharply: "Regardless of how they dress it up, the two new joint ventures do look very much like consultancies." The dressing up is doing real work, because the IPO multiples for a consultancy are not the multiples for frontier-model software. So they will keep dressing it up for as long as it works.
So why are the private equity firms the partners, and not the labs themselves?
Because the PE firms own the customers. Blackstone alone holds positions in roughly 250 portfolio companies. Goldman’s asset management arm holds another large basket. TPG, Brookfield, Bain, Advent each control hundreds. Across all the named partners, you are looking at thousands of mid-market companies that are simultaneously: (a) too small to have built their own AI capability, (b) too large to be ignored, and (c) contractually obligated to do what their PE owners suggest.
That last part is the move. AI adoption in mid-market is currently bottlenecked not by interest, not by budget, but by internal capability. The CFO of a $200M-revenue manufacturer in Wisconsin does not have an AI deployment team. She has an IT manager who is busy keeping the ERP running. So the AI initiative gets a one-line budget allocation in October and a sad PowerPoint update in March.
So the labs found a clean shortcut. Instead of selling AI to mid-market companies one by one, they sell it to a PE firm, which sells it to its 250 portfolio companies, which - because they want to be sold to a strategic acquirer in three years at a higher multiple - have to look modern. AI adoption becomes a portfolio-wide initiative, not a per-company decision. The lab gets enterprise revenue. The PE firm gets a markup story for exit. The portfolio company gets an embedded engineer they did not budget for. Everybody wins, except possibly the customers of the portfolio company, who now interact with AI agents that were configured by an engineer with a year of experience.
The customer of the AI lab is no longer the company that uses the AI. It is the financial owner of the company that uses the AI.
Three implications, in descending order of how much I think people are talking about them.
One - if you are a founder building on top of OpenAI or Anthropic APIs, you are now competing with the labs themselves. The forward-deployed engineer who shows up at a Blackstone portfolio company is not selling Claude. She is selling "AI for your specific business workflow." That is the same thing every AI startup pitch deck has been selling for the past two years. The labs have just decided to sell it directly. If your value-add was "we wrap their model and tune it for vertical X," your moat got smaller this week.
Two - if you run a mid-market company that is not PE-owned, you are about to be at a strategic disadvantage you did not know you had. Your PE-owned competitors will get embedded engineers paid for by the venture structure. You will get a self-serve seat license and a help article. Both of you will pay similar money to the same lab. Only one of you will see workflow change. This is not new in business - distribution has always mattered more than product - but the gap is going to widen faster than usual.
Three - and this is the one almost nobody is naming - the labs just made themselves much harder to value. A pure software company at $900 billion is a stretch but defensible. A consultancy with software at $900 billion is not. The IPO bankers will dress this up beautifully when the S-1 lands. But somewhere in the footnotes there will be a paragraph about "alternative go-to-market structures" that, three years from now, the analysts will be obliged to take seriously. The IPOs are coming this fall. The window for the clean software story is closing.
I am not arguing this is a disaster. I think it is, mostly, an honest move. The labs spent three years saying the model would automate everything. The model did not automate everything. The customers needed help. The labs are now sending help. That is what mature software companies do. It is just very different from what the labs spent three years saying they were.
If you are an operator paying attention, the move to make this week is small but specific. Read your AI vendor contracts and look for any language about "professional services" or "deployment partners" or "forward-deployed engineering." If those terms are in the contract, you are about to be sold something. If they are not, ask why - because by Q3 they will be. The labs have just told everyone, in two press releases on a Monday, that the next eighteen months of AI commercialisation are going to look much more like a McKinsey engagement than an API call. Plan accordingly.
The most expensive admission in software history happened on a Monday afternoon, dressed up as two normal-looking joint ventures. The operators who notice early adjust their AI strategy this quarter. The operators who notice late will adjust it next year, with worse terms.
In ninety-six hours, the king of investing handed over his arena, the king of cheap aviation died on the runway, and Amazon told the kings of logistics that their kingdom now belongs to it. None of these stories are about each other. All of them are about the same thing.
Most weeks in business journalism, you write one story. This week, three landed in the same ninety-six hours, and the temptation is to write them as three pieces. That would be a mistake. They are one piece. They just look like three because the wires categorise them differently - one goes to the markets desk, one to the aviation desk, one to whatever desk handles the death of an airline.
The pattern only shows up if you read all three at once. So that is what we are going to do.
The shorthand for this week, when economists write it up in 2031, will be something like "the spring the dynasties cracked." Three of them. The investing dynasty in Omaha. The aviation dynasty at Fort Lauderdale. The logistics dynasty in Atlanta and Memphis. None of them collapsed. All of them blinked. Every operator in every other industry should be watching what kind of blink it was.
Let us go in order.
Berkshire Hathaway’s annual meeting in Omaha was supposed to be a coronation. Greg Abel, anointed years ago by a now ninety-five-year-old Warren Buffett, ran his first meeting as CEO. He did fine. Operating earnings were up eighteen percent. Cash on the balance sheet hit a record three hundred ninety-seven billion dollars. Abel ruled out breaking up the conglomerate. The wires called it a "steady debut."
What the wires did not say loudly enough: the lines outside the arena were shorter. The merch hall was thinner. Berkshire shares are down six percent year-to-date in a market up five and a half percent. The stock has trailed the S&P by more than thirty percentage points since Buffett signaled the handoff.
This is what an investing dynasty looks like when it cracks. Not a collapse. A polite, well-managed thinning. The cult of personality required a personality. The personality is now sitting on the floor with the directors, holding up a numbered jersey. Abel will run the company perfectly well for fifteen years. He will not be the reason anyone flies to Omaha.
A dynasty does not die when its king dies. It dies the morning everyone realises they were here for the king, not the kingdom.
That same weekend, two thousand kilometers southeast, a different kind of cracking was finishing.
Spirit Airlines is dead. Not failing - dead. The ultra-low-cost carrier had been zombie-walking through two bankruptcies since 2024. On Friday, the parent company’s over-the-counter shares fell more than sixty-two percent to fifty-two cents. By the weekend, the airline announced it would cease operations. A five hundred million dollar bailout request to the Trump administration had failed.
This one is the simplest of the three. It is the dynasty of cheap aviation. Spirit invented the model that every budget airline in America copied: unbundle everything, charge for the seat assignment, charge for the carry-on, charge for the boarding pass printed at the gate, fly at a loss-leading fare and make the margin on resentment. For nineteen years it worked. For the last three years it stopped working, because the math underneath - jet fuel - changed.
Jet fuel is up thirty-one percent since November. We covered that story two days ago in this column. What we did not say then is that some airlines were going to die from it before the summer. Spirit was the most exposed. Spirit died first. There will be others.
The dynasty cracking here is not Spirit’s. Spirit was small. The dynasty cracking is the idea Spirit represented - that you could run a serious business on near-zero margins forever, that scale would eventually paper over the fundamentals, that "we will figure it out" was a strategy. For nineteen years, the cheap-aviation dynasty was the proof case for that idea. The idea now has a tombstone.
The third crack is the largest of the week, and the only one that arrived in the form of a press release rather than an event.
On Monday morning, Amazon announced Amazon Supply Chain Services - a single integrated freight, distribution, fulfillment, and parcel-shipping product offered to every business, not just sellers on Amazon’s marketplace. Within hours, GXO Logistics dropped eleven percent, UPS dropped about ten, FedEx and C.H. Robinson sank nine each. Amazon stock rose one percent.
That move - the second-largest single-day collective drop in third-party logistics history - is the third dynasty crack. The dynasty here is the one that made UPS and FedEx household names. The dynasty that said: logistics is its own business, with its own moats, run by specialists, with brown trucks and purple planes. Amazon spent fifteen years quietly building the largest fleet in private hands, then told the rest of the industry they could rent it.
UPS will be fine. FedEx will be fine. GXO will probably be fine. None of them will be the dynasty they were on Friday afternoon. Their valuations now have to absorb a permanent question: how big does Amazon Supply Chain Services get before our terminal multiple has to be cut?
It is the same crack we saw with Spirit, just at a different temperature. A business model assumed forever. A new entrant with an unfair advantage. A market that re-prices in an afternoon. The only difference is that UPS does not get a tombstone. UPS gets a smaller permanent share of a market it used to define.
So why are these three stories one story?
Because they are all answers to the same question: what happens to a dynasty when the conditions that built it stop being the conditions that exist.
Buffett’s Berkshire was built in a world where capital was scarce, valuations were rational, and an honest mind reading 10-Ks could find compounding gold nobody else saw. That world ended somewhere around 2014. The dynasty kept running on momentum and reputation for another decade. This week, the reputation transferred. The momentum did not.
Spirit was built in a world where jet fuel could be assumed cheap, regulators could be assumed permissive, and customers could be assumed price-blind to the point of complete indifference to their own dignity. That world ended somewhere around 2022. Spirit kept flying for another four years. This week, the runway ended.
UPS and FedEx were built in a world where logistics was a logistics problem - a question of trucks, planes, hubs, and union labor. That world ended the year Amazon decided its warehouse footprint should rival the entire US Postal Service. The legacy carriers kept running on contracts and brand for another decade. This week, the contracts started looking renegotiable.
The dynasty does not crack when the world changes. The dynasty cracks when the dynasty finally notices the world changed - usually long after everyone else did.
For operators reading this in May 2026, the question to bring to your next leadership offsite is not "are we a dynasty." Most companies are not. The question is "what world are we built for, and is that world still the one we are operating in."
If you cannot answer the second half of that sentence in two minutes, you are running a dynasty without realising it. Which means, statistically, the crack is already underway. You just have not heard it yet.
The week dynasties crack publicly is the week every operator should stop reading about dynasties and start auditing their own assumptions. The next dynasty-cracking story is somebody’s company. Make sure that somebody is not you.