I detest today's "AI" / LLMs [1] for many reasons, primarily ethical and moral. At the same time, I am fascinated by the misuse of said generative LLMs.
In particular, I have recently seen a number of essays describing people misusing LLM output, or relying on an LLM to produce most or all of an #assignment of some sort - even when they know they mustn't use the LLM in this fashion.
Most of these have been in the context of #student use of LLMs to write assignments, even after being warned not to. One particularly egregious example was described by a university #professor who created an assignment designed so that it was easy to tell whether the submitted work had been produced with an LLM. A majority of the students - I don't recall the exact proportion, but it was something like 75%, a supermajority - used an LLM anyway. He discussed this with the class and had those who had generated their submissions with an LLM write a short essay (an apologia) -- and then found that something like half of them had used an LLM to write *that* as well.
Some professors have described their students' #thinking, #language, and #analysis #skills as having atrophied to the point that they cannot do even basic work. It seems to me that this resembles #addiction: they keep doing it despite knowing it is (a) forbidden, (b) easily detected, and (c) self-destructive.
1/x
[1] LLMs are in no way Artificial Intelligence. Calling them "AI" is a category error.
#AI #LLM #misuse #GenerativeAI