How AI Spreads: Faster Than You Think, and From the Bottom Up

A reader asked why I’ve been writing about AI so frequently lately. I only write when I feel compelled to, and if there’s nothing to say, I might go a month or two without updating. The reason I keep writing about AI is that at its current rate of diffusion, many people’s life trajectories will change in the not-too-distant future.

On the question of whether AI will exert deflationary pressure on the economy, there are numerous perspectives, many of which are tied to the speed of AI diffusion.

AI model iteration is still accelerating. Boris Cherny, a core engineer on Claude Code (who, incidentally, comes from an economics background rather than computer science), said in an interview last month that you should “build products for the model six months from now” — meaning AI model updates are already so rapid that what today’s model can’t handle well will likely be a non-issue six months later. If six months is the current iteration cycle for AI models, the implications for the broader economy are self-evident, as many ordinary people have felt firsthand over the past three months.

But technological iteration speed is not the same as technological diffusion speed. Skeptics argue that AI diffusion faces several major sources of resistance:

First, the legacy of prior IT investments and the pursuit of system stability. Many traditional enterprises have been around for a long time and have already sunk substantial costs into IT infrastructure. Their existing ERP, CRM, and other systems would carry significant risk if overhauled in a short timeframe, so these enterprises tend to wait.

Second, traditional organizations typically believe their “institutional memory” (mentorship traditions, for instance) and cultural values are irreplaceable by AI, which weakens their motivation to proactively introduce AI for end-to-end transformation. This, too, leads them to wait.

Third, once a full-process overhaul is initiated, it entails large-scale training, reassignment, or layoffs — all of which generate additional costs and reputational risk. If the overhaul fails, the losses far outweigh the gains. So until the certain gains clearly exceed the costs, or until they have no choice, organizations will default to waiting.

Fourth, regulatory resistance. AI’s safety remains challenged on multiple fronts.

Academic discussions go further still, ranging from the Luddite resistance to machinery during the Industrial Revolution to the Solow Paradox (Robert Solow's quip in a 1987 book review: "You can see the computer age everywhere but in the productivity statistics").

My observation is that all of these perspectives analyze the question from the vantage point of organizational decision-makers or historical precedent. In reality, we need to examine it from a different, more granular vantage point: that of individual employees.

During the British Industrial Revolution, Luddism arose when workers saw that power looms purchased by factory owners were replacing their manual positions on a massive scale. The loom and the worker had a very clear substitution relationship. In fact, across the first three technological revolutions — steam engines, electrical equipment, and information-age computers — it was always the enterprise that proactively purchased equipment to boost productivity. There has never been a historical precedent of employees purchasing advanced equipment en masse to use in their employer’s workplace.

AI diffusion is different. It isn’t a case of AI companies first pitching labor-replacement products to other enterprises. Instead, many workers started using AI on their own after seeing that it could materially boost their productivity. This means that, with workloads held constant, they can dramatically reduce their working hours — which is obviously beneficial for the employees themselves. AI subscriptions are so cheap that this behavior — employees voluntarily purchasing advanced production tools to make their own lives more comfortable — quickly became a widespread reality.

Middle management tacitly accepts that employee productivity and output quality are steadily improving. Senior leadership, in most cases, has no strong justification for blocking this trend. This is the status quo at most traditional enterprises.

If we draw an analogy, AI currently functions more like outsourced labor that employees pay for out of their own pockets. A similar phenomenon occurred during the pandemic, when American remote workers outsourced their own tasks to programmers in India.

So this "gentle erosion" has faced remarkably little resistance, because during this honeymoon period the enterprise, its layers of management, and its employees are all benefiting. As a result, AI is diffusing far faster than anyone imagined.

The real resistance, then, is not all-encompassing. The barriers listed above only come into play when an enterprise contemplates a full-scale deployment.

This pattern of technology diffusion is indeed fundamentally different from previous revolutions. Companies can resist. They can invoke trade secrets or compliance requirements to prohibit employees from using AI. But in practice, such prohibitions are meaningless — employees will use it anyway, unless the entire work environment is completely disconnected from the internet.

This, in turn, forces all enterprises (and other organizations) to embrace the AI era as quickly as possible. If the AI tools an enterprise provides its employees are low-quality and hard to use, employees will proactively choose whatever works best for them. Employees’ self-driven, voluntary adoption is what makes AI diffusion categorically different. Frankly, the traditional, highfalutin concerns of enterprises are nothing more than reflexive risk-aversion thinking. In the face of the overall diffusion trend, these concerns are powerless — they can’t even slow it down. Management knows the future won’t look like today; they’re just fulfilling their present duties. And behind closed doors, how could any manager possibly resist AI’s allure?

When Robert Solow said in 1987 that computers’ impact on productivity was invisible in the statistics, it was still the MS-DOS 3.X era. Windows had just released version 2.0, and cutting-edge American office workers were still using WordPerfect. Forget the internet age — the PC software age hadn’t even arrived yet. After Windows 3.0 and MS Office launched in 1990, the world started looking different. From the 1990s onward, massive productivity gains were visible everywhere. So the Solow Paradox, as applied to AI diffusion, is largely irrelevant.

These are my main views on the speed and pattern of AI diffusion. AI’s impact on employment is a separate question.

That’s all.