Academia: Is AI Hype? (Yes) - by Timothy Burke

Published 1 month ago · 4 minute read

Second, there is growing evidence that AI is being used as a tool for cheating in education, as a cost-effective generator of disinformation, as a way to generate routine communications or handle routine tasks (often obviously so and thus ineffectively), and in some cases, by people who are trying to save money on professional services (in generating legal briefs or legal representation, etc., again often ineffectively). Many of these uses remind me of people who believed that “cruise control” as a feature on cars, before autonomous driving, was itself an autopilot and who had accidents as a result. AI in these contexts “works” in some approximate sense because of a lack of scrutiny of the output (say, in a course where the instructors are grading five hundred essays that respond to a highly routinized prompt). Its users are not particularly well-served by AI in these contexts, but then again, in a fair number of these instances, the processes that are generating a need for AI are badly designed in the first place. If we were more self-aware collectively, we might almost use generative AI in these circumstances the way you use a bloodhound to track a criminal. Where we can use current-generation AI, we usually have a process or procedure that functions as an unnecessary obstacle with little purpose, or we have a predatory political economy collecting rents of some kind. This is not what generative AI is being touted for: identifying excess bureaucracy, mindless communication, or punitive make-work. But it is what it’s being used to handle.

Third, at least some of the promoters of generative AI inside academia are identifying an efficiency function where none was asked for or needed. Here I’ll focus in particular on claims I’m seeing about historical research, since that’s one thing I know well. There is no great crisis requiring historians to process a larger volume of archival data at increased speeds. Nothing depends on us doing so. More importantly, historical research requires constant adjustment of interpretations and questions in the process of reading a document. We don’t need and don’t want a single final meaning of a given document to be dumped on our doorstep as a service.

Later in this series of essays, I’ll have more to say about what I do think generative AI can deliver, and why I ultimately agree that it’s not hype in the sense of its current and near-future capabilities. As you will see, however, the most useful deployment of current and near-future generative AI in research and expression absolutely requires that you already know a great deal. This is not a new problem in research or in creativity. You couldn’t use a card catalog without knowing what you were looking for as well as knowing what a card catalog is. You couldn’t use Google search back at its height of effectiveness without already knowing enough to iterate your keywords, refocus your searches, or mine out the materials you consulted from one search to refine the next.

The problem with the hype about generative AI, and its headlong insertion into many tools and platforms, is that it is brutally short-circuiting the processes by which people gain enough knowledge and expressive proficiency to be able to use the potential of generative AI correctly. Many of the boosters of generative AI inside academia seem to me to dismiss this problem altogether, just as they ignore the likely consequences of AI-generated slop filling up existing databases and archives. People who don’t know what they want to know, and don’t know how to spot the difference between slop and knowledge, are being pushed to use AI as a substitute for processes of learning, acquisition, and agentive creation. By the time we collectively understand why that was a terrible thing to do, it will be too late to undo it.

That’s the hype. Companies making AI are desperate to have it seem needed and they are working to create a simulation of that need via indiscriminate deployment of their products and through the same kinds of networks of boosters, promoters, and institutional entrepreneurs that dutifully assembled during the 1990s to predict that digital technologies would usher in political and economic utopias through the intrinsic capabilities of those technologies. Those networks served us poorly then and they are serving us poorly now. The real use cases of generative AI are not as entry-level tools but as sophisticated extensions of human capabilities and skills that take years of intensive effort to develop.
