By Zartosht Ahlers
In the ‘Fragment on Machines,’ a passage from the notebooks that became the Grundrisse, Karl Marx envisions the role technology can play in the liberation of workers:
Capital employs machinery, rather, only to the extent that it enables the worker to work a larger part of his time for capital, to relate to a larger part of his time as time which does not belong to him, to work longer for another. Through this process, the amount of labour necessary for the production of a given object is indeed reduced to a minimum, but only in order to realize a maximum of labour in the maximum number of such objects. The first aspect is important, because capital here — quite unintentionally — reduces human labour, expenditure of energy, to a minimum. This will redound to the benefit of emancipated labour, and is the condition of its emancipation.
While vague, the idea makes intuitive sense: we can imagine a world in which technology takes on the brunt of the uncomfortable, meaningless work that defines modernity, freeing people to enjoy the products of automation in their ample free time. This is what Aaron Bastani terms ‘Fully Automated Luxury Communism,’ a future in which technology has created a post-scarcity economy.
Marx describes automation as a pathway toward utopia: through the reduction of labor, labor is empowered. (He writes that “[t]he saving of labour time [is] equal to an increase of free time, i.e. time for the full development of the individual, which in turn reacts back upon the productive power of labour as itself the greatest productive power.”) André Gorz envisions a future in which automation enables a society of liberated time, writing that the current society is one of “phantom work, spectrally surviving the extinction of that work by virtue of the obsessive, reactive invocations of those who continue to see work-based society as the only possible society and who can imagine no other future than a return to the past.” Increased automation allows an escape from this wage-based society.
But this is not the only possible vision of an automated future. As Schmelzer and Vetter write, “the promise of full automation does not provide an answer to the fundamental problems of the dominance of modern technology mentioned above, nor does it itself change the terms of ownership or the form of alienated labour, nor can it account for the resource, ecological, and global justice problems associated with full automation.” In other words, a fully automated future might be one in which the owners of capital, the automated factories and the algorithms ruling over those factories, would fully control society, with no need to placate Labor. Or the future might be one in which those who cannot afford it are excluded from the benefits of automation, stuck ‘outside’ society.
Throughout this semester, we have encountered a variety of solutions to this problem. Some thinkers, including André Gorz, have suggested a regular payment to all individuals, a Universal Basic Income (UBI), that would ensure that even if all capital was owned by a small number of individuals, all people would have a chance to consume. But UBI does not solve the automated future’s inequalities. As Alyssa Battistoni explains, UBI
does not challenge capital’s control over investment. It may distribute wealth more broadly, but it leaves the forces that generate wealth in private hands. It is therefore hard to see how a UBI could really constitute a ‘capitalist road to communism,’ as some of its champions have suggested. Rather, it seems more likely to be a sop to the poor in a world still run by private investors.
Additionally, UBI does not take meaningful steps away from the incentive structure that encourages the owners of capital to continue extracting wealth from us: a future abounding with ads awaits! Lastly, insofar as self-restraint is fundamental in preventing a fully automated world from consuming what remains of the world’s resources, UBI does little to enshrine a necessary culture of degrowth and frugality.
Other thinkers argue that the key to ensuring a utopian automated future is democratic control over technological development. Schmelzer and Vetter, adding a degrowth perspective, contend that
[T]hinkers critical of industrialism emphasize the need to gain democratic control over technological developments. And while this critique is not against automation per se – in the case of unpleasant, tedious, debilitating, or dangerous work, automation is desirable from a degrowth perspective – it also emphasizes the need to reconceptualize and transform work, so that we can see and enact the socially useful activities that sustain our lives as the fundamental form of participating in society, based on a logic of care.
Democratic control over the process of automation is surely important, but it too falls short of ensuring an equitable automated world. This makes intuitive sense: present-day America is largely democratic but is surely not an equitable society. Many factors explain this, and I am in no position to adequately pinpoint the failings of modern American democracy in creating an equitable society (at least not in a blog post), but I want to focus on the simple fact that values change. As a result, the democratic process creates laws that bind future people to present values, creating a persistent ‘conservative’ problem.
Now, this is not a huge problem in most present-day democracies. While it is troublesome that our present-day legal landscape is dictated by the values of people from the 18th century, laws can always change. This is, however, a huge problem when it comes to automation.
This is because of something called value lock-in. Value lock-in describes the risk that technological progress will ossify present-day values, inequalities, and biases for eternity. While the term is most commonly used to capture the existential risks of Artificial Intelligence, the concern is just as present in the context of automation. The principle is simple: the more advanced the technology, the more it will lock in past values. Artificial decision-makers, at the heart of any fully automated world, are eternal, self-reproducing, and, depending on the degree of automation, cannot be micromanaged. William MacAskill, the futurist and ethicist who popularized the term, argues that we have a (relatively) short window to decide what values and mores we want to commit our society to for the long-term future.
All of this is complicated by the fact that current-day values are unlikely to be correct. And even at today’s (relatively low) levels of technological advancement, the problems of ossification have already begun. Social media algorithms not only create norms and beliefs, but also entrench those ideas and beliefs into the very fabric of society. Similarly, algorithms that set bail amounts not only reflect past biases, but also dictate present-day and future outcomes based on those biases. And these problems are comparatively easy to fix! If we are concerned about the ‘values’ of our social media algorithms, we can press delete. Once a complex global automation algorithm handles international shipping routes and production schedules, it will be impossible to hit reset and figure out a better one. These issues are exacerbated by the perhaps intractable difficulty of AI explainability: we are really good at training algorithms to identify cats, but we cannot figure out how to have the very same algorithm explain to us what it is doing.
I will show my cards: I am oversimplifying some things. But not many. It is likely that a fully automated future will ossify injustice and inequality. The great thing about humans is how malleable we are from generation to generation. The values and beliefs of my parents are markedly different from my values and beliefs. And hopefully, future generations will think of our generation as bigoted and conservative. But as the degree of automation and the associated artificial decision-making increases, this ethical change will slowly come to a halt. And the risk? Millions of generations ruled by an artificial decision-maker with an ingrained bias.
A fully automated future is not a path towards emancipation. Full automation should only be attempted when we are darn sure that the values we hold and the society we envision are ones we feel comfortable beginning to enshrine. But automation cannot emancipate us if we cannot emancipate ourselves.
Fragment on Machines, p. 701.
Fragment on Machines, p. 711.
The Future Is Degrowth, p. 175.
The Future Is Degrowth, p. 175.
What We Owe the Future.