Hype about ChatGPT
I will never forget my introduction to the multiplication tables. For months, my world seemed to consist of nothing but questions from the teacher and, in close coordination, from my parents: how much is 7×6, 9×8 or 11×13? If I did not give the right answer quickly enough, I found myself in front of a sheet of paper, writing out the sixes, eights or thirteens, and if my father was in a bad mood, ten times over. To this day I can do mental arithmetic quite well, and I marvel at younger people who type 2×5 into a device that then tells them the result is 10.
When I transferred to the Rosensteingasse, an educational institution for the chemical industry, in the 1960s, I learned how to use a slide rule. From then on, multiplying and dividing were simple operations. Things became more exciting with logarithms; at last we could leave the dreaded logarithm tables in the school bag, even though the slide rule delivered far less accurate results. As a rule, the tables only had to be brought out again for school mathematics tests. For the teachers, it remained decisive that, despite the technical help available in everyday life, we were still able in principle to deliver results without external aids. A few “well-heeled” classmates whose parents could afford it bragged during the breaks about the first calculators; their use in class, however, was prohibited.
During a holiday internship in a laboratory, I gained my first experience with calculating machines that arrived at the correct result by means of mechanical inputs along a scale, a crank and recurring bell signals. It was left to the laboratory director, who showed up from time to time, to check the results with a slide rule.
By the time my sister, ten years younger than I, was at school, the pocket calculator had advanced much further. As an external teaching aid, it made the students' different living conditions plain, separating those whose parents “could afford it or wanted it” from those who had to see for themselves how to get ahead. Even more decisive was the almost unanimous fear among adults that students' ability to perform arithmetic operations independently would come to an end once the calculator entered the classroom: why should young people still learn arithmetic if they could delegate this knowledge to a machine without a second thought, especially since the machine was always ready to solve the task at hand faster, more reliably and more precisely?
And how am I supposed to evaluate the students now?
The worst hit, of course, were the “arithmetic teachers,” who had previously based their assessments on students proving that they could calculate without technical aids. Their only way out was to declare the use of the new technology cheating and to impose negative sanctions. Only in this way did they see a chance of still fulfilling their task of awarding halfway comprehensible grades.
In this way, they unintentionally mutated into the spearhead of a technology-averse cultural pessimism that all too easily conceals a generational hatred of the old toward the young. After all, it did them, today's old, no harm to have had to struggle with the basics of these cultural techniques. So why should the young have it any easier today?
The history of technology skepticism, and not only in the Austrian school system, did not end with my graduation. In recent years, digital media, with their seemingly inexhaustible possibilities, have been pushing into schools in ever new waves.
These developments did not take place in a vacuum; they were accompanied, indeed driven, by the neoliberalization of Western societies and the growing weight of direct economic interests. As a result, schools came under increasing pressure to prepare young people for the demands of a dynamically evolving job market that relentlessly declared proof of proficiency with digital media to be the ticket of admission. By implementing media- and communications-related content, albeit tentatively, education policy responded to demands from the business community, which expected schools to supply the largest possible pool of digitally skilled young people to recruit from.
Hand in hand with the growing economization of schools as a public good, progressive educational circles began to hope that the aggressive use of digital media could finally put teaching on a modern footing, if not revolutionize it. The establishment of new institutions in the field of teacher training, such as the Institute for Media and Communication Studies at the University of Klagenfurt in the 1970s, whose initial equipment resembled a futuristic technological laboratory, bears witness to this.
School as a humanistic enclave or as part of society
The most decisive factor, however, was probably the omnipresence of the so-called new media in the lives of the students themselves. For these “digital natives,” the categorical difference between the real and the digital has become largely fluid; as a rule, they now have more differentiated digital skills than their teachers, who must first acquire them on a kind of “second educational path.” It is no wonder, then, that new pedagogical concepts, which urge the comprehensive inclusion of young people's lifeworlds, suggest that the new technological possibilities should not be frantically kept out of the classroom, but that students should instead be empowered to deal with digital media in a self-determined way. And that the same should be demanded of teachers.
Admittedly, implementing such technology-based communication requires considerable and, moreover, rapidly changing resources, in terms of both hardware and software. This is a challenge that most schools are failing to meet in the face of an education policy that keeps falling behind. Even the technology-driven reactions to the school closures during the pandemic have not fundamentally changed this. Many initiatives remain limited to comparatively modest contributions, such as providing notebooks and tablets to socially disadvantaged students or offering special programs.
Education versus training
These efforts to open schools almost unconditionally to the new technological possibilities are countered by appeals to humanistic educational traditions. These traditions claim to focus on the human being and its potential for self-development, to be realized by freeing itself from dependence on its lifeworld and thus also on technology. Their representatives oppose a growing instrumentalization of human existence, which they see as standing in the way of comprehensive personality development. In the sense of a Humboldtian concept of education, the task would be to accompany young people in coming to terms with themselves in order to develop their particular potentials, rather than confining them to technology-based educational issues.
Paradigmatic for this are the reflections of the philosopher Günther Anders in “The Obsolescence of Man” (Die Antiquiertheit des Menschen), according to which humankind lost its sovereignty to the demands of its technological counterpart at the latest with the Second Industrial Revolution, and has since fulfilled the specifications of machines in one-sided dependence.
The culture industry and the tendency toward cultural pessimism
This skepticism toward technological progress is reflected in large parts of the culture industry, and it has not grown any smaller with the loss of audiences during the pandemic: “We want to return to the only true, because authentic, cultural experience, which should preferably not be mediated by technology,” goes the cry for help on all media channels. And yet it boils down to the desire not to have to change anything, at least not the basic structures, in order once again to recommend itself as a social reinsurance institution. Although advanced institutions such as Ars Electronica have for decades been demonstrating the new possibilities of art production, mediation and reception opened up by technology-supported processes, the culturally pessimistic tenor remains unmistakable, especially in the traditional scenes.
Such a basic hostility toward technology in the culture industry, which cultural policy has only ever countered half-heartedly, points to a particular selectivity in that industry's view of history. It can draw on a highly elaborate discourse about artistic lines of tradition; at the same time, it repeatedly refuses to acknowledge that, in the course of history, it was first and foremost technological developments “remote from culture” that drove the culture industry forward: the construction of railroads and the associated increase in mobility, the use of electricity and artificial light, the development of new media such as radio, records, film, television and digital media – all of these have had a more lasting influence on the position of the diverse forms of cultural expression in society than the achievements of one or another artistic genius.
If the cultural industry today recommends itself above all as a retreat into a supposedly better past, then its representatives are accepting that its main driving forces, creativity and innovation, are increasingly shifting to the research and development departments of technology corporations. These have long seen themselves as the real creative forces in society and act as such. With the help of resources that the cultural sector can only dream of, ever new experimental spaces are being opened up; the scenarios negotiated there – for example, in the context of the creation of new programs for so-called artificial intelligence – now tell us more about feasible designs for the future than even the most powerful (cultural) political programs can.
And now this too: machines take over writing
In this context, a program from the field of artificial intelligence research has recently brought about a surprising turn. With ChatGPT, not only a few specialists but any user can, with just a few clicks and a few inputs, have a program create texts that are indistinguishable from, or even surpass, self-produced results.
Such a breakthrough in the anticipation of human thinking is astonishing, and its lasting consequences for human communication cannot yet be estimated in any way. It is safe to assume that this technological innovation will be a great frustration for all those who make a living from writing and reading, especially artists and intellectuals. They are all being told that what they excel at can be produced just as well, or even much better, by machines.
The faction of cultural pessimists fires off one alarmist warning after another against an unmanageable flood of unverifiable fakes: after all, texts produced by machines can claim “everything” and thus also lend meaning to “everything” (as if this had not always been a human ability). They lack a (culturally constituted) framework of classification, especially since they are not subject to any moral restrictions unless such restrictions have been programmed into them. In this way, however, they also lack a decisive basis of trust (or distrust) that always resonates in the production and reception of texts made by humans.
And behind all this lies the fear that programs such as ChatGPT could open the door to a takeover of the central human form of communication, language, by the machine; the human being would be “incapacitated” in the truest sense of the word in his genuine power of articulation.
Social critics, in turn, point to a new quality of human exploitability if the underlying AI-guided writing program, with its mass use, were to mutate into a “huge database full of users’ desires, needs, and infatuations that can be tracked down, preloaded, archived, tracked, and exploited for all sorts of purposes at will.”
But what does this new technological phenomenon mean for the school system? After all, young people can from now on delegate almost any text-related task that can be even halfway standardized to the machine. As in the case of the calculating machine, the teacher is no longer able to give an individual assessment. This also renders the question of individual support obsolete, at least as long as the students are able to provide the machine with the correct specifications.
Once again, the collapse of traditional dependency structures between students and teachers is at stake. No wonder a number of schools have banned the use of such programs. And so we are experiencing a new edition of the struggle over the acceptance of support tools in schools, this time not on the basis of the cultural technique of arithmetic, but of writing. In both cases, we are dealing with the possibility of technologically outsourcing originally genuinely human abilities that the machine can perform faster and better. If the experience with calculating aids is anything to go by, there is little to suggest that the prohibitions pronounced to preserve an idealization of human thought and articulation will hold for long. It is much more likely that, in the case of ChatGPT too, schools will have to adapt to the new technological possibilities, even if this threatens to shake them to their foundations once again. The pro and con arguments can be carried over almost one to one from the introduction of calculating aids: economization, labor market reorientation, depersonalization, the rescue of an educational ideal not tied to immediate utility, …
Don’t worry, the concept of “human” will remain if….
But if, after arithmetic, writing (and with it a good part of standardized thinking) can now increasingly be delegated to machines, what remains of the genuinely human? As a first approach, Hannah Arendt's idea of “natality” comes to mind. According to her, each new generation is able to bring forth unpredictable aspects of the human, of which we have as little idea today as we do of the next technological achievements.
But this means that the relationship between humans and machines is not fixed once and for all; it must constantly be renegotiated. That is probably why it has never been so important to treat “technology assessment” as a central political task. The current naivety in dealing with ever new technological possibilities testifies above all to the unwillingness of political representatives to recognize the – increasingly technological – shaping of the community as the central category of their actions.
This has been true not only since the introduction of the “social credit” system in China, which bears witness to the fact that advanced technology has always been used politically to secure power (lessons in this regard can be drawn both from the technologically underpinned claims to modernity of National Socialism and from the beginnings of communist rule in the Soviet Union). But it also includes the fact that technological progress has made possible unimagined advances in human emancipation, for instance in overcoming humanity's natural limitations.
There is much to suggest that the mass introduction of programs such as ChatGPT points above all to a new way of dealing with errors. Perhaps it is error that will mark the central distinguishing category between human and machine in the future.
Not ChatGPT but How We Deal with Errors Will Decide the Future of Schools
Dealing with mistakes could thus become a central motif in the further transformation of both education and culture. It is the handling of errors that will determine how we want to interact with each other in the future. Machines will be better at representing, confirming and evaluating than humans, who are inherently contradictory. Humans, on the other hand, carry within them the shared endeavor to look beyond their own horizons, to engage with what is not yet, with what is possible, desirable and worth striving for – and with the mistakes we make on the way to its possible realization. Not only technology hubs, but also cultural and educational institutions could be outstanding experimental spaces for this.
So anyone who is bent on getting everything right will sooner or later be playing into the hands of a future in which the machine takes over the reins from humans. Because the machine always does what it can do better than humans; that is what it was designed for.
But those who rely on human fallibility, and thus on the human ability to learn from mistakes and to develop further, need not fear a future dominance of the machine.
It is their errors that, in all their limitations, make people human in the first place. There is much to suggest that people learn and develop primarily through mistakes. But this also means that they recognize each other by their mistakes. And that they know how to use them productively within the framework of shared agreements.
Image: © Chris Ryan / Getty Images