{"id":7448,"date":"2022-11-30T16:13:05","date_gmt":"2022-11-30T21:13:05","guid":{"rendered":"https:\/\/blogs.law.columbia.edu\/utopia1313\/?p=7448"},"modified":"2022-11-30T16:13:05","modified_gmt":"2022-11-30T21:13:05","slug":"zartosht-ahlers-perils-and-promise-of-automation","status":"publish","type":"post","link":"https:\/\/blogs.law.columbia.edu\/utopia1313\/zartosht-ahlers-perils-and-promise-of-automation\/","title":{"rendered":"Zartosht Ahlers | Perils and Promise of Automation"},"content":{"rendered":"<h2>By Zartosht Ahlers<\/h2>\n<p>&nbsp;<\/p>\n<p>In his essay \u2018The Fragment on Machines,\u2019 Karl Marx envisions the role technology can play in the liberation of workers:<\/p>\n<blockquote><p>Capital employs machinery, rather, only to the extent that it enables the worker to work a larger part of his time for capital, to relate to a larger part of his time as time which does not belong to him, to work longer for another. Through this process, the amount of labour necessary for the production of a given object is indeed reduced to a minimum, but only in order to realize a maximum of labour in the maximum number of such objects. The first aspect is important, because capital here &#8212; quite unintentionally &#8212; reduces human labour, expenditure of energy, to a minimum. This will redound to the benefit of emancipated labour, and is the condition of its emancipation.<a href=\"#_ftn1\" name=\"_ftnref1\">[1]<\/a><\/p><\/blockquote>\n<p>While vague, the idea makes intuitive sense: a world <em>can<\/em> be imagined in which technology will take on the brunt of the uncomfortable, meaningless work that defines modernity, freeing up people to enjoy the products of automation in their ample free time. 
This is what Aaron Bastani terms \u2018Fully Automated Luxury Communism,\u2019 a future in which technology has created a post-scarcity economy.<\/p>\n<p>Marx describes automation as the pathway towards a utopia\u2014that through the reduction of labor, labor is empowered. (He writes that \u201c[t]he saving of labour time [is] equal to an increase of free time, i.e. time for the full development of the individual, which in turn reacts back upon the productive power of labour as itself the greatest productive power.\u201d<a href=\"#_ftn2\" name=\"_ftnref2\">[2]<\/a>) Andr\u00e9 Gorz envisions a future in which automation enables a society of liberated time, writing that the current society is one of \u201cphantom work, spectrally surviving the extinction of that work by virtue of the obsessive, reactive invocations of those who continue to see work-based society as the only possible society and who can imagine no other future than a return to the past.\u201d<a href=\"#_ftn3\" name=\"_ftnref3\">[3]<\/a> Increased automation allows an escape from this wage-based society.<\/p>\n<p>But this is not the only possible vision of an automated future. As Schmelzer and Vetter write, \u201cthe promise of full automation does not provide an answer to the fundamental problems of the dominance of modern technology mentioned above, nor does it itself change the terms of ownership or the form of alienated labour, nor can it account for the resource, ecological, and global justice problems associated with full automation.\u201d<a href=\"#_ftn4\" name=\"_ftnref4\">[4]<\/a> In other words, a fully automated future might be one in which the owners of capital, the automated factories and the algorithms ruling over those factories, would fully control society, with no need to placate Labor. 
Or the future might be one in which those who cannot afford it are excluded from the benefits of automation, stuck \u2018outside\u2019 society.<\/p>\n<p>Throughout this semester, we have encountered a variety of solutions to this problem. Some thinkers, including Andr\u00e9 Gorz<a href=\"#_ftn5\" name=\"_ftnref5\">[5]<\/a>, have suggested a regular payment to all individuals, a Universal Basic Income (UBI), that would ensure that even if all capital were owned by a small number of individuals, all people would have a chance to consume. But UBI does not solve the automated future\u2019s inequalities. As Alyssa Battistoni explains, UBI<\/p>\n<blockquote><p>does not challenge capital\u2019s control over investment. It may distribute wealth more broadly, but it leaves the forces that generate wealth in private hands. It is therefore hard to see how a UBI could really constitute a \u2018capitalist road to communism,\u2019 as some of its champions have suggested. Rather, it seems more likely to be a sop to the poor in a world still run by private investors.<a href=\"#_ftn6\" name=\"_ftnref6\">[6]<\/a><\/p><\/blockquote>\n<p>Additionally, UBI does not take meaningful steps away from the incentive structure that encourages the owners of capital to continue extracting wealth from us\u2014a future abounding with ads awaits! Lastly, insofar as self-restraint is fundamental to preventing a fully automated world from consuming what remains of the world\u2019s resources, UBI does little to enshrine a necessary culture of degrowth and frugality.<\/p>\n<p>Other thinkers argue that the key to ensuring a utopian automated future is democratic procedural input. Schmelzer and Vetter, adding a degrowth perspective, contend that<\/p>\n<blockquote><p>[T]hinkers critical of industrialism emphasize the need to gain democratic control over technological developments. 
And while this critique is not against automation per se \u2013 in the case of unpleasant, tedious, debilitating, or dangerous work, automation is desirable from a degrowth perspective \u2013 it also emphasizes the need to reconceptualize and transform work, so that we can see and enact the socially useful activities that sustain our lives as the fundamental form of participating in society, based on a logic of care.<a href=\"#_ftn7\" name=\"_ftnref7\">[7]<\/a><\/p><\/blockquote>\n<p>Democratic control over the process of automation is surely important, but it too falls short of ensuring an equitable automated world. This makes some intuitive sense: current-day America is <em>largely<\/em> democratic but is surely not an equitable society. This is due to a variety of factors\u2014and I am in no position to adequately pinpoint the failings of modern American democracy in creating an equitable society (at least not in a blog post)\u2014but I want to focus on the simple fact that values change. As a result, the democratic process creates laws that shape future people by present values, creating a persistent \u2018conservative\u2019 problem.<\/p>\n<p>Now, this is not a <em>huge<\/em> problem in most present-day democracies. While it is troublesome that our present-day legal landscape is dictated by the values of people from the 18<sup>th<\/sup> century, laws can always change. It is, however, a huge problem when it comes to automation.<\/p>\n<p>This is because of something called value lock-in.<a href=\"#_ftn8\" name=\"_ftnref8\">[8]<\/a> Value lock-in describes the risk of ossifying present-day values, inequalities, and biases for eternity through technological progress. While the term is most commonly used in discussions of the existential risks of Artificial Intelligence, the concern of value lock-in is just as present in the context of automation. The principle is simple: the more advanced the technology, the more it will lock in past values. 
Artificial decision-makers, at the heart of any fully automated world, are eternal, self-reproducing, and, depending on the degree of automation, impossible to micromanage. William MacAskill, futurist, ethicist, and inventor of the term, argues that we have a (relatively) short window to decide what values and mores we want to commit our society to for the long-term future.<\/p>\n<p>All of this is complicated by the fact that current-day values are unlikely to be <em>correct<\/em>. And even at today\u2019s (relatively low) levels of technological advancement, the issues of ossification have <em>already<\/em> begun. Social media algorithms not only <em>create<\/em> norms and beliefs, but they also entrench those ideas and beliefs into the very fabric of society. Similarly, algorithms that set bail amounts not only reflect <em>past<\/em> biases, but also dictate present-day and future outcomes based on these biases. And these problems are comparatively easy to fix! If we are concerned about the \u2018values\u2019 of our social media algorithms, we can press delete. Once a complex global automation algorithm handles international shipping routes and production schedules, it will be impossible to hit <em>reset<\/em> and figure out a better one. These issues are exacerbated by the perhaps intractable difficulty of AI explainability: we are <em>really<\/em> good at training algorithms to identify cats, but we cannot figure out how to have the very same algorithm <em>explain to us what it is doing<\/em>.<\/p>\n<p>I will show my cards: I am oversimplifying <em>some<\/em> things. But not many. It is <em>likely<\/em> that a fully automated future will ossify injustice and inequality. The great thing about humans is how malleable we are from generation to generation. The values and beliefs of my parents are markedly different from my own. And <em>hopefully<\/em>, future generations will think of our generation as bigoted and conservative. 
But as the degree of automation and the associated artificial decision-making increase, this ethical change will slowly come to a halt. And the risk? Millions of generations ruled by an artificial decision-maker with an ingrained bias.<\/p>\n<p>A fully automated future is not a path towards emancipation. A fully automated future can only be attempted when we are <em>darn sure<\/em> that the values we hold and the society we envision are ones we feel comfortable beginning to enshrine. But automation cannot emancipate that from which we cannot emancipate ourselves.<\/p>\n<h1 style=\"text-align: center;\">Notes<\/h1>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\">[1]<\/a> Fragment on Machines, 701<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\">[2]<\/a> Fragment on Machines, 711<\/p>\n<p><a href=\"#_ftnref3\" name=\"_ftn3\">[3]<\/a> https:\/\/www.greeneuropeanjournal.eu\/questioning-the-centrality-of-work-with-andre-gorz\/<\/p>\n<p><a href=\"#_ftnref4\" name=\"_ftn4\">[4]<\/a> The Future Is Degrowth, pg. 175.<\/p>\n<p><a href=\"#_ftnref5\" name=\"_ftn5\">[5]<\/a> https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1111\/1467-923X.13169<\/p>\n<p><a href=\"#_ftnref6\" name=\"_ftn6\">[6]<\/a> https:\/\/www.thenation.com\/article\/society\/sarah-jaffe-aaron-benanav-automation-work\/<\/p>\n<p><a href=\"#_ftnref7\" name=\"_ftn7\">[7]<\/a> The Future Is Degrowth, pg. 
175.<\/p>\n<p><a href=\"#_ftnref8\" name=\"_ftn8\">[8]<\/a> What We Owe the Future<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Zartosht Ahlers &nbsp; In his essay \u2018The Fragment on Machines,\u2019 Karl Marx envisions the role technology can play in the liberation of workers: Capital employs machinery, rather, only to the extent that it enables the worker to work a&hellip; <a href=\"https:\/\/blogs.law.columbia.edu\/utopia1313\/zartosht-ahlers-perils-and-promise-of-automation\/\" class=\"more-link\">Continue Reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2322,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[38964],"tags":[],"class_list":["post-7448","post","type-post","status-publish","format-standard","hentry","category-resources-4-13"],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/posts\/7448","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/users\/2322"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/comments?post=7448"}],"version-history":[{"count":0,"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/posts\/7448\/revisions"}],"wp:attachment":[{"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/media?parent=7448"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.law.columbia.edu\/utopia131
3\/wp-json\/wp\/v2\/categories?post=7448"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.law.columbia.edu\/utopia1313\/wp-json\/wp\/v2\/tags?post=7448"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}