I am stuck in London courtesy of the terrorist policies of Donald Trump and his…
AI is fast becoming another control tool of capitalism
My friend Alan Kohler, who is the finance presenter at the ABC, wrote an interesting article today about AI (January 19, 2026) – AI platforms like Grok are an ethical, social and economic nightmare — and we’re starting to wake up – in which he argued that while he thought climate change was “humanity’s biggest problem” at the onset of 2025, he now considers “AI is more pressing” and outlines the case where “problems — ethical, social and psychological — would still be horrendous.” I have been researching this question as part of the work I am doing on degrowth and its compatibility with a Capitalist economic logic. Regular readers will know that we will not be able to achieve environmental sustainability – getting the ecological footprint down to regenerative capacity – within a Capitalist mode of production. The logic of the aim and the logic of the accumulation system contradict each other. I also consider that AI inserts yet another contradictory logic, which will reinforce the economic crises that are endemic to Capitalism. If we are to broadly benefit from AI, then a new mode of production based on collective sharing will be required. That sort of system would also allow for a successful transition to a degrowth economy. Here are some preliminary thoughts on the AI issue as I research the topic more extensively.
Capitalism is a control system.
The urgency of maintaining control stems from the fact that the system is inherently conflictual.
Capital relies on the conversion of labour power (a potential) into labour (actual work) to produce surplus value and desires to offer the workers as little of the product produced as possible (the real wage).
The workers have other ideas.
They know they must work to live because, unlike capital, they do not have an independent means of survival without offering their labour power to a capitalist.
However, because they would rather be writing poetry or surfing down the beach, they desire to earn as much as they can while working as little as possible.
Hence the logic of Capitalism leads to a control function being necessary to ensure that the surplus value extracted each working day is as large as possible.
The early shift from cottage ‘putting out’ production to factory production was not about improved technology but about reducing the losses from workers operating out of decentralised working sites, where the workers had more discretion over what they did and for how long they would work.
The problem for capital, though, is that the operation of this control system, superimposed on the underlying drive to accumulate ever-increasing wealth for the owners of capital, renders the system susceptible to crises.
The surplus value is latent profit but only becomes realised when the production is sold for more than the costs of production.
And given that the workers comprise the vast majority of the population, that profit realisation process requires them to have sufficient means to actually buy the finished goods and services.
It also relies on capitalists continually reinvesting, which presents a further dilemma.
Capitalist investment adds to current expenditure and sales but also adds productive capacity which then requires further expenditure growth in the next period to absorb the extra production.
If the capitalists become uncertain about the future and stop investing, while they assess the situation and/or consumers reduce their expenditure because they fear unemployment or are trying to pay down debt accumulated from a previous period of overspending, then a crisis emerges.
Suppressing worker pay, which is a standard motivation under this control system, jeopardises the realisation process and then sets off a chain reaction where business investment is also stifled because capitalists realise that they have sufficient productive capital in place to meet the current sales demand.
An ‘overproduction’ crisis arises: capitalists expect sales to be greater than they turn out to be and hence overproduce final goods and services; once they discover the reality, they cut back production, lay off workers and use a range of other tactics within their ‘control suite’ to minimise the losses.
Enter AI.
An individual capitalist not only faces the conflict with the workers they hire but they are also in competition with other capitalists for market share and supremacy.
Over time, the weaker capitalists have gone broke and overall capital is increasingly concentrated in the hands of a few.
The problem though is that this competition between the individual capitalists blurs their focus on the macroeconomy and one of the defining features of the work of John Maynard Keynes in the 1930s was the recognition of the fallacy of composition when applied to economic thinking.
What an individual capitalist will think is good for them, may turn out to be disastrous when all of the capitalists employ the same strategy.
We have the famous example of an individual firm cutting its wages and increasing its profit rate because the reduction in costs was not offset by any lost sales resulting from the workers having less income.
The example abstracts from any morale issues that the firm might face – including sabotage, costly exit, etc.
But if all the firms cut their wages, then costs per unit might decline overall but so will sales because wages are both an element of cost and a crucial element of income, which defines the capacity of workers to consume.
So what might apply at the firm level will not translate into being applicable at the sectoral or economy-wide level.
That observation led to Keynes’s devastating critique of neoclassical economics and is still something that the dominant approach to macroeconomics (New Keynesian paradigm) fails to come to terms with.
It is why the mainstream profession really doesn’t have a coherent macroeconomic framework.
But the point of relevance here is that while an individual capitalist might see that deploying AI tools will reduce costs, perhaps increase productivity, and replace the irksome need to hire as much labour as before, if all capitalists pivot to this model, troubles will emerge.
In a capitalist system, labour creates value.
At present, AI systems are essentially building capacity based on past value created.
So I notice quite often now that if the topic of an AI query is Modern Monetary Theory (MMT), it will cite my work in its summaries.
But its responses to those queries relied on me doing that work.
Thus as it stands the AI technology is running on past value.
And knowledge comes from research.
Yes, AI is capable of conceiving and executing complex statistical research analysis – but only as an assistant.
Why?
The reason that human oversight is required is to validate outcomes and ensure that the research process followed was not just GIGO (Garbage-In, Garbage-Out).
But it is becoming clearer that AI can eliminate significant swathes of labour, particularly in the process or routine areas of activity.
Alan Kohler wrote:
It’s not a question of whether AI and robots will replace human jobs, but how many.
The question then is: where will the value come from?
Further, and of crucial relevance to this discussion, is where will the demand for goods and services come from?
For an individual capitalist, the incentive to reduce their wage bill by deploying as much AI as possible may not, taken in isolation, dent total spending capacity in the economy much.
But given all of them are competing with each other and introducing all the latest technologies as fast as they can, without much thought for the longer-term implications, then under current institutional arrangements for income distribution, a problem is going to emerge relatively quickly.
AI will probably provide capitalists with a massive wage-cutting capacity, but in doing so it introduces a further contradiction into the underlying logic of the system.
Individual capitalists will have to innovate AI as quickly as they can (and we are seeing the early manifestations of that).
But the overall system will not be able to maintain stability as that innovation unfolds.
The stability of the capitalist system requires that profits be realised.
But social stability requires that workers are rewarded adequately so they can live a reasonable life.
AI will concentrate income further to the top, given the control that a few IT companies seem to have on the technology.
The open source movement was able to provide excellent access to best-practice technological developments to the common folk for free, which in the early days of the Internet, before capital took control, generated significant potential for a new era that could move us beyond capitalism to a more cooperative, sharing society.
However, the investment required to develop the AI technology is on a larger scale than the sort of innovations that were common in the early days of the Internet.
Sure enough, a lot of that AI investment has used the work of others (including my own) without payment, so it is hard to compute exactly what the scale has been.
But I don’t see open source AI becoming the norm.
I see a lot of progressives seeing this dilemma as being a further justification for their advocacy of a basic income system.
Thus, AI displaces labour at multiple levels of each organisation and the workers go off and learn to do art or play harmonica while living it up on their basic income payment from government.
Quite apart from the indecency of the ‘privatise the gains, socialise the losses’ implication of this – that is, capitalism only could survive if subsidised by the state – I have seen no credible basic income proposal that would allow for the state to cover the complete wages bill that would be required to maintain economic activity levels and allow the profit expectations of the capitalist class to be realised.
Putting workers on some minimal basic income will not cut it I am afraid.
The question then is: can the capitalist distribution system function to ensure profit is realised as AI runs through employment?
My guess is that it cannot produce a stable outcome where all parties are sufficiently rewarded to forestall social instability.
Alan Kohler responded to comments from the CEO of OpenAI who said a “new economic model” would probably be required:
In other words, the leading AI person hasn’t got a clue about the harm of what he’s doing, he’s guessing, while acknowledging that it’s going to require a mysterious new economic model.
A related issue is how AI fits into the ‘control’ aspect of capitalism.
Clearly, capitalists see AI as a new tool to further consolidate their power vis-a-vis labour.
It is being used to intimidate labour into increased compliance with the needs of capital rather than being a liberating force for humanity.
We are now becoming increasingly aware of the way AI is being used to manipulate reality.
I sometimes consult YouTube to learn about how to do something – like yesterday, the cistern at our house malfunctioned and I spent 4 minutes studying a tradesperson instructing me on the problem and solution.
This is the educational Internet.
But I now observe so much fake Internet, driven by AI and pathetic ‘influencers’ trying to garner attention with tawdry videos.
It is nigh on impossible for the common folk to differentiate reality from fake.
And just as advertising was used by capitalists to manipulate our consumer preferences and, in many cases, outright deceive us into purchasing things we would not have purchased had we had the correct information, AI is accelerating that manipulation in the hands of the capitalists.
Further, it is being used to tilt the political process to advance the lobbying interests of the few, which distorts the decision-making of those who are unable to differentiate fact from fiction.
Apropos of my blog post last Thursday – Curbing the freedom of writers will not advance human rights (January 15, 2026) – there has been so much AI-generated manipulation from both sides of the conflict that it distorts perceptions in the public space.
Consider the case of the US firm Clock Tower X, which was hired by the Israeli government to manipulate ChatGPT and YouTube, among other platforms, to frame the genocidal actions of that government in a particular (positive) light (Source).
An expert on media analytics told Al Jazeera that:
What companies like Clock Tower X are promising is that, if they can flood the information space with sites and content sympathetic to Israel – what’s called RAG poisoning – there’ll be enough there to at least muddy the waters around what others see as a clear-cut genocide.
Conclusion
I am continuing to research this question in order to integrate these issues in the broader degrowth framework I am putting together.
It is clear to me that AI, while it has the potential to be a powerful force to advance humanity, is fast becoming another control tool of capitalism, which will reinforce the inherent contradictions of that system.
The problem is that the damage done while the system internally combusts will, likely, be massive.
The fake nudes and the rest of the slime are just part of this damage.
That is enough for today!
(c) Copyright 2026 William Mitchell. All Rights Reserved.
The theft of intellectual “property” and copyright by oligarchs is a key part of the AI modus operandi.
We have known since Bernays that the control of information is essential for political manipulation.
AI revenue-management tools (algorithmic price-fixing) are pushing up prices in the following commercial sectors:
Real estate rentals: RealPage (its revenue-management tools YieldStar and AIRM)
Hotels and casinos: the Rainmaker software from Cendyn
General retail (groceries, electronics, apparel): Yieldgo, 7learning, Priceshape
E-commerce: the software tools Feedvisor and Aura
Healthcare: software called MultiPlan
Concert tickets
The only way to stop this is via national legislation (make it illegal).
The need to throttle and drain the wealth flowing to capital (if capitalism is to be gone) is the nub of the problem: “under current institutional arrangements for income distribution, a problem is going to emerge relatively quickly”.
And I agree with Bill: “I see a lot of progressives seeing this dilemma as being a further justification for their advocacy of a basic income system.” A neoliberal’s dream for extracting public money as it flows through the intermediating hands of UBI recipients. The ability of the AI/LLM urgers of capital and their captive PMC to co-opt public money to consolidate their control, versus the ability of the workers to claw back the power of government for the benefit of the many ahead of the few, will play out. It’s not a hopeful outlook. The many are not well served by those they elect to protect their interests.
The imposition of the absolute fiscal power of the state to restrain the greed of capital (as exemplified by China and its dealings with such as Jack Ma) at the expense of the workers is already required with financialised capitalism in full swing. AI will merely accelerate the need for a rapid economic paradigm shift away from capitalism to at least a predominant socialism, if not a benevolent form of communism. What we are here for has to be more than:
Because markets;
Go die.
That will continue if the kakistocratic puppets of the plutocratic oligarch puppeteers keep running the show.
Bill, thanks
A reading recommendation, if I may:
Jonathan Taplin, The End Of Reality: How Four Billionaires Are Selling Out Our Future, 2023.
The muso in you may recognise the author, Jonathan Taplin.
Bill, your analysis of AI’s impact on capitalism highlights some profound structural challenges. Harari warns that widespread automation could make human labour largely irrelevant, leaving ordinary people economically powerless while wealth and property concentrate with those who control AI, data, and infrastructure. Given that household wealth is overwhelmingly tied up in property, if wages collapse, humans may struggle to maintain ownership, and survival could depend on redistribution. I understand that under MMT, governments don’t need taxation to fund the non-government sector and can create money as long as productive capacity exists. But if most productive capacity and property are concentrated in a few oligopolies, how do you see inflation being managed, and how can ordinary people retain meaningful economic leverage? Broadly, how do you envision this system working in practice without turning humans into largely dependent consumers while capital and property concentrate?
Bill, I realize my comment at 7:28 may contain some muddled thinking or imperfect framing. I’m asking genuinely, as I’m trying to understand this issue better — your response will help me think through how to engage with society and participate meaningfully in politics moving forward.
Thing is, I can tell you exactly how AI works, and once you understand that, this whole discussion gets turned upside down. Which, fittingly, is much like what happens when you learn about MMT.
First, an artificial neural network (ANN) is just a funky notation for a single equation. The nodes of the network are simple nonlinear functions with at least one coefficient; the book Deep Learning (Goodfellow et al., 2017) recommends f(x) = ax if x ≥ 0, else 0. The edges (lines, connections) of the network are function composition. “Deep” learning is “deep” because of the size of the network, but noting that an ANN is just an equation, what we really mean is the dimension (number of free coefficients and parameters) and, for lack of a better word, the “extent” of the nonlinearity or the “flexibility” of the function.
So now we can safely put the idea of neurons out of our heads, because ANNs have nothing to do with neurology. From this point forward we’ll only be talking about a high-dimensional nonlinear function.
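To make that concrete, here is a toy sketch in Python/numpy (my own illustration, not from the Goodfellow book; the weights are random, just to show the structure). A two-layer “network” is nothing but a composed nonlinear function, using the node function quoted above:

```python
import numpy as np

def act(x, a=1.0):
    # The node function described above: f(x) = a*x if x >= 0, else 0
    # (with a = 1 this is the standard ReLU).
    return np.where(x >= 0, a * x, 0.0)

def tiny_ann(x, W1, b1, W2, b2):
    # Two "layers" are just function composition:
    # output = W2 @ act(W1 @ x + b1) + b2
    return W2 @ act(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))   # 4 hidden nodes, 2 inputs
b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4))   # 1 output node
b2 = rng.normal(size=1)

x = np.array([0.5, -1.0])
y = tiny_ann(x, W1, b1, W2, b2)   # one scalar output, shape (1,)
```

Written out, `tiny_ann` is literally one equation in the coefficients (W1, b1, W2, b2); the network diagram is just a way of drawing it.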
Writing a computer program to generate text or images has always been about the probability of co-occurrence of letters, words, pixels, whatever. So the function we’re talking about is something of the form: given this sequence of words, what’s the most likely next word? And this has always been done by taking some corpus of text and calculating the incidence of word co-occurrences in that text. But how do you get from co-occurrence in the corpus to generating “new” text?
The answer is regression analysis. We have a big nonlinear function with billions of free coefficients, and we can numerically encode words to be able to parameterize the model. That’s all “training” a model means, it’s just the same thing you do anytime you’re doing curve fitting. You try to find coefficient values that minimize the error between your curve and your data. It’s just, in this case, the curve is an arbitrarily-selected billions-dimensional nonlinear function, and the data are the incidence rates of words showing up together in the corpus.
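As a toy illustration of “training is just curve fitting” (my own sketch, with made-up data, not anything to do with a real LLM): plain gradient descent on the mean squared error recovers the coefficients of a noisy line, and training a huge model is the same procedure scaled up.

```python
import numpy as np

# "Training" is just curve fitting: pick coefficients c that minimise
# the mean squared error between the model and the data.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50)
y = 3.0 * x + 0.5 + rng.normal(scale=0.05, size=x.size)  # noisy line

c = np.zeros(2)                  # model: y_hat = c[0]*x + c[1]
lr = 0.5
for _ in range(500):             # plain gradient descent on the MSE
    err = c[0] * x + c[1] - y
    grad = np.array([(2.0 * err * x).mean(), (2.0 * err).mean()])
    c = c - lr * grad

# c should now be close to the true coefficients (3.0, 0.5)
```

Swap the line for a billions-dimensional nonlinear function and the data for word co-occurrence counts and, structurally, that is the whole training loop.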
Hopefully your statistical spidey senses are going off at this point: isn’t this a recipe for disastrous overfitting? And the answer, of course, is yes—but…
In nonlinear regression, the quality of the fit is measured with a holdout set, what’s called out of sample validation in my world. Split the data into two subsets, use one to calibrate the model, use the other one to evaluate the fit.
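A minimal sketch of that procedure (my own illustrative data): calibrate on one subset, evaluate on the held-out subset.

```python
import numpy as np

# Out-of-sample validation: calibrate on one subset, judge on the other.
rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 40)
y = x**2 + rng.normal(scale=0.05, size=x.size)   # noisy parabola

idx = rng.permutation(x.size)
train, test = idx[:30], idx[30:]

# Fit a quadratic on the training subset only ...
c = np.polyfit(x[train], y[train], 2)

# ... and measure the error on the points the regression never saw.
holdout_mse = np.mean((np.polyval(c, x[test]) - y[test]) ** 2)
```

The point is that `holdout_mse`, not the training error, is what tells you whether the fitted curve generalises.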
The core of deep learning is something they call “model regularization,” and Goodfellow et al use this example to get the idea across: Imagine the classic overfitting case where you have ten points that roughly follow a parabola, and you fit a ninth-degree polynomial. Mean squared error (MSE) will be zero, but the moment you discover an eleventh datum the party suddenly ends. This is because a “good fit” only truly happens when there’s an ethereal correspondence between the “mathematical nature” of the phenomenon your data were measured from, and the function template you’re using for the regression.
Anyway, they instead note something else that happens in the ten points, ninth-degree polynomial case: the coefficient values get to be very large. So they say, hey, what if we changed the error function from MSE to something more like, E(c) = MSE + Pc*c, where c is a vector of the coefficient magnitudes, * is dot product, and P is a magical “hyperparameter” that’s under your control. Artificially inflate regression error based on the sum of squares of the coefficient values, which is the same thing the regression algorithm is computing as error is being evaluated, and you choose the degree to which coefficient magnitudes impact error. It’s usually possible to find a value of P that’ll force the regression to settle on coefficient values that, despite being a ninth-degree polynomial, locally look like a parabola. So if that eleventh point shows up, it’s no problem—as long as it shows up in the domain where you’ve coaxed your polynomial to masquerade as a parabola.
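Here is that ten-points, ninth-degree-polynomial example in code (my own sketch of the idea; the penalty form is standard ridge regression, and P is the hyperparameter described above):

```python
import numpy as np

# Ten points that roughly follow a parabola ...
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 10)
y = x**2 + rng.normal(scale=0.02, size=x.size)

# ... fitted with a ninth-degree polynomial (columns x^0 .. x^9).
X = np.vander(x, 10, increasing=True)

# Plain least squares interpolates the ten points exactly (MSE = 0),
# typically at the cost of wild coefficient values.
c_plain = np.linalg.lstsq(X, y, rcond=None)[0]

# Penalised error E(c) = squared error + P*(c . c).
# Closed form (ridge): c = (X'X + P*I)^-1 X'y.
P = 1e-3
c_ridge = np.linalg.solve(X.T @ X + P * np.eye(10), X.T @ y)

# The penalty forces the coefficient vector to shrink.
print(float(c_plain @ c_plain), float(c_ridge @ c_ridge))
```

The shrinkage is guaranteed: since the plain solution has zero residual, the penalised objective forces the ridge coefficients to have a smaller sum of squares.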
And with a wave of the magic wand, out of sample validation disappears! Instead the overfitting is being done by hand, via manipulation of any number of hyperparameters, so that the out of sample validation itself is being overfit.
And what are the characteristics of overfitting? Why, you produce results that look great in the lab—or the tech demo, in this case—then fall flat in the real world.
Or I guess you can become a participant in the great circle of overfitting, by calling AI an “assistant.”
Anyway, given this is what AI actually is, all these predictions that’re made about how it’ll shake up the world—it’s just more white noise from Big Tech. The AI bubble is not like the web bubble, because the web was always useful.
All this said, Bill, you should read the Goodfellow book, see if I’ve mischaracterized deep learning. Just be careful not to get lost in the sauce, stay focused on methodology, and remember, Dirac deltas don’t exist in discrete systems like digital computing.
Oh yeah, minor side note to all that: Nonlinearity is a strict requirement for the mini-functions “inside” the nodes of an ANN, because the whole thing only works if each composition increases the complexity of the function. Since the composition of two linear functions is another linear function, it’s a nonstarter. The whole method only works if there’s a ton of “excess flexibility” in the functional form of the ANN, since we need the “slack” for it to be possible to then narrow the model function back down to reproducing the holdout set.
We can also easily see why ANNs have to be feedforward, because regression analysis only makes sense on closed-form functions.
We have just seen ChatGPT roll out advertising, a tacit recognition that it doesn’t do anything that anyone wants to pay for. As my bro put it, ‘paid advertising: the last refuge of a scoundrel.’
Overall, AI is garbage in, garbage out. Just ask AI a question on something you know about and see the response. It spews out a lot of garbage because it draws on a lot of garbage. In many cases, it can’t answer a simple question you put to it because it claims that the required information is not available or cannot be found, which I find amusing.
If we start using AI outputs as AI inputs, it will become a degenerative positive feedback process, which is likely because AI cannot consciously test something for its veracity the way a human can. If someone says that A + B = C, AI cannot test it. Even if it could, for AI to be functional it must cease testing things in order to generate an answer.
You and I can test things, especially if we question the results (some people, like me, never accept anything at face value, which AI does, as evidenced by the garbage it spews out), simply by putting A and B together and observing the result. We can also choose when to cease testing in order to generate an answer. We have discretion. AI does not. If we programmed AI to test everything before generating a result, it would go on testing forever. If we could program AI to cease testing at some point, the cessation point would be arbitrary. Humans can decide when there is sufficient validity in the information being drawn upon to generate a credible answer. AI can’t. It is likely either to generate a false answer (by ceasing to test the information too soon) or to be unable to provide an answer (by testing the information forever).
Unless AI can replicate a human brain, which it can’t and never will, humans will forever be needed by humans. Sure, AI can perform tasks in a near instant that a human can’t, but so can a simple calculator, a modern phone, and a laptop computer. So can a hammer, an oven, a lathe, and a pen. These things have been around for ages and humans have still been needed by humans. When I was in my mid-teens in the late 1970s, my class at school was told that, thanks to computers, hardly any of us would have a job. That wasn’t the case at all.
Quite recently, fears of the impact of ‘automation’ on the employment of humans have resurfaced, as if automation is a new thing! AI, automation, etc., has never erased the need for humans by humans, and it never will. In instances where this stuff is useful, it has made humans more productive, not obsolete. So long as humans equitably share in productivity gains (which we don’t, and which is the real problem), we can all enjoy the benefits of material things without the need to work as many hours in a week. This leaves us with more time for leisure and more time to meaningfully devote to caring for dependents (old and young) and creating pleasurable things for people to enjoy, like music and various forms of visual art and literature, of which all but leisure can be remunerated (become a form of paid work). In other words, it can allow us to harness every human to meet the human needs served by material things (means) and meet other needs and desires that are going largely unmet by a world obsessed with expanding GDP (obsessed with means and losing sight of the ends). It would also shut-up the people who needlessly worry about an aging population and the rising dependency ratio.
AI also requires a lot of real resources, which will be its inevitable downfall. Should the rate of resource throughput ever be capped to something within the ecosphere’s regenerative and waste-assimilative capacities (and if we don’t cap it, ecological decline will drastically reduce the available rate of throughput whether we like it or not, which is why I pity future generations), we will be forced to make some difficult choices. We won’t be able to avoid these choices, as we often do at present, by bumping up the throughput (i.e., instead of X or Y, just have X and Y).
Choices will come down to whether we are willing to allocate a large share of the capped throughput to data centres at the expense of more important things. Provided the masses have some say in the allocation of resources – and that’s not guaranteed, since they have little say at present – they will eventually realise how overrated AI is and how little we ‘need’ it. AI will become a minor thing we give little attention to. Meanwhile, humans will still be required by humans!
AI will be a useful technology for some purposes. AI will only make life difficult for many people if the ruling psychopaths are able to use AI, as they have used all new technologies, to successfully engage in chrematistic endeavours, as they have done so since the advent of agriculture 11,000 years ago. In a world operating on oikonomic principles, which would include a cap on the rate of resource throughput, only useful things would see the light of day, meaning that only the useful aspects of AI would survive, and only if the large resource requirements to maintain it are worth the opportunity cost in terms of other useful things foregone.
I agree with the broad message and most of the detail of Bill’s post. I found several other posts in this thread interesting, especially the ones about AI. I don’t think AI will be (technically) as good or as bad as most pundits predict. But capitalism will certainly attempt to use AI for all the worst uses conceivable and possible, and it will do vast damage with AI.
I’m a Marxian and a CasP-ian in my political economy thinking. Like Marx himself, I do not call myself a Marxist. Being a CasP-ian refers to my general acceptance of the Capital as Power thesis of Shimshon Bichler and Jonathan Nitzan. I accept their main theses. I leave it to people to read the “Capital as Power” book and other articles easily available on the internet.
I am also a priority monist, an emergentist, evolutionist and complex systems thinker in my philosophy. I leave it to people to figure out how these positions might influence my thinking. Honesty compels me to admit I have no “letters” in any of these fields, nor in mathematics or AI. My B.A., taken about 45 years ago, and which I never put to use, involved media, cinema and literary studies plus a few life science subjects. I spent much of my spare time back then reading Marx and environmental science, neither of which I was actually studying for credit.
I recommend Capital as Power as an advance on and beyond Marx and also as a refutation of Classical and neo-Classical economics. I don’t recall that it addresses MMT but it certainly holds that Capitalism is a power and control system and nothing else (certainly not a valuing system) and it holds this in an entirely new and a specific scientifically valid way, IMHO. It also conceptualizes capital in a new way. Again, I recommend CasP and I leave it to people to read it.
Capitalism uses everything for its own purposes (the mere accumulation of capital, specifically money and financial capital for a tiny elite) and it ruins absolutely everything good stemming from the natural world including mankind and our own emergent creative complexity and production. That destruction of all for elite wealth accumulation is its true and only aspect and logic. It destroys everything that should be nurtured and sustained and that will nurture and sustain us. We certainly cannot save anything worthwhile, including the current complex and supporting Holocene biosphere, without the complete and utter abolition of capitalism.
Just a minor complaint: if by AI we mean LLMs, then they will become open source, and soon. In software this is the case: all infrastructure is open source. The web server market (which used to be predominantly closed source) is now open source. A major part of the OS market (which started off as closed source) is now based on open source standards (Linux for Android, BSD for macOS). The cloud is based 100% on Linux. The basic tools of machine learning (Meta’s PyTorch and Google’s TensorFlow) are open source, and these tools are used to build LLMs. Now, Llama, the LLM of Meta (i.e. Facebook, Instagram, etc.), is already open source as a program and partly open sourced as weights. AI will become open source in the end, especially after the relevant hardware costs fall. Right now I can run a relatively satisfactory neural net (of limited scope, not a huge LLM) on my not-extravagant PC. It will become easier and easier to run largish models on cheap computers. These models will be open source.
I don’t share the pessimism re AI. In fact, it represents one of the new internal contradictions of capitalism which will lead to the downfall of capitalism.
Ultimately any new tech is about enhancing human utility. Right now AI and big data are being used to tilt markets in favour of capital, but how long can that last? Capital uses AI to manipulate markets, and some form of allocation takes place. But what if we get rid of markets and simply allocate? AI and a million-fold increase in computing power now give us the means to do so.
We probably need a few hundred years of productivity gains, where the first law of system dynamics has a chance to have an effect. The first law of system dynamics is essentially doing more with less, with the ultimate theoretical objective of doing everything with nothing. The further along this trajectory humans move, the less sustainable the current economic framework becomes, because in the long run incentives won’t matter.
The current rules are set by generally heavyset men over the age of 50, who are generally all lawyers, where symbolic systems triumph over rationality, but their great-great-grandkids might take a different view and eliminate market allocation and debt on the grounds of efficiency and a human-first rather than a system-first ethic.
Like all things to do with humans, there is the possibility of a Bukharinian nightmare, where we move from the current capitalist hell to a socialist hell rather than a paradise, so a raising of consciousness needs to happen first.
Not sure if you’re a fan of Donald Fagen, Bill, but I agree with his vision of the future in I.G.Y.:
“A just machine to make big decisions
Programmed by fellas with compassion and vision”
We need Tibetan monks to write the algorithms that run a future society, rather than Elon Musk.
AI can actually go even deeper, to the core of democratic institutions, in a highly destructive way:
«World-first social media wargame reveals how AI bots can swing elections»
https://techxplore.com/news/2026-01-world-social-media-wargame-reveals.html