Dan Davies on ChatGPT and AI as a conversational partner:
In many ways, even “autocomplete” or “the motorised prayer wheel” seem to be giving the artificial intelligence too much credit. The way that people seem to be using it is more like the technology that used to be called “talking to a pillow”. We’ve created a cybernetic teddy bear; something that helps to sustain an illusion of conversation that people can use in order to facilitate the well-known psychological fact that putting your thoughts into words and trying to explain them to someone else is a good way to think and have ideas. (That this would be a big use case ought to have been obvious to anyone who knew the history of ELIZA).
I genuinely don’t know how revolutionary this might be, even if this is all there is to it. A machine that doesn’t get bored listening to you could be an incredible boost to a lot of people. It’s actually quite hopeful in my view; although it is nowhere near as science fictional and glam as “AGI”, this could be a very relevant use case.
We know that the human need for attention is almost insatiable. A lot of social problems have at their root the fact that some children learn that although negative attention isn’t as nice as positive attention, it’s still attention and it’s a lot easier to get. A low-quality substitute for human attention that’s much easier to produce could do a lot of good, although I feel like it might need to be carefully regulated in the way that most other low-quality mass-market products that mess around with your brain chemistry are.
Any LLM is, at bottom, a probabilistic answer to the question "which word is most likely to come next, given everything so far?" It's not surprising, on that basis, that it proliferates on LinkedIn and in work emails: both are places with a lot of standardized boilerplate that needs to exist but should be written to attract minimal attention. But I can write my own boilerplate; I don't need software for that.
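To make the "next word" framing concrete, here's a toy sketch. This is not how any real model works internally (LLMs condition on the whole context with a neural network), just the same probabilistic idea in miniature: a bigram model that counts which word follows which in a tiny corpus of email boilerplate, then samples accordingly. The corpus and all names here are invented for illustration.

import random
from collections import Counter, defaultdict

# A tiny corpus of the kind of boilerplate under discussion.
corpus = (
    "please find attached the report "
    "please find attached the invoice "
    "please let me know if you have any questions"
).split()

# Count which word follows which.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = successors[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate boilerplate: start from "please" and follow the probabilities.
word = "please"
output = [word]
for _ in range(8):
    if word not in successors:
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))

Run it a few times and you get plausible, interchangeable office prose, which is exactly the point: probability is a fine engine for text whose job is to attract minimal attention.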
For a lot of other uses, working with an LLM requires the ability to tell what is specifically correct, and, more importantly, the ability to know when a detail is wrong and why it is wrong. In corporate contexts this is a "governance" concern: think of a chatbot that offers products below cost or misrepresents a policy. Where there's a right and a wrong answer, probability doesn't work; you need the correct answer every time.
Nor does it work when there's a real asymmetry of knowledge and experience, when you are presenting to the person who has the knowledge. You may not be able to tell what's wrong, if anything, but your audience absolutely will. There was a thing on Bluesky the other week where Christopher Nolan announced he was making a version of The Odyssey and a bunch of people were amazed that he had found something so obscure. My audience, should it still exist, is disproportionately made up of people who know what The Odyssey is and understand the implication of not knowing. That's the risk of going up against expertise when you don't have it yourself.
Ah, but a conversation with yourself on a matter that's mostly one of opinion? That's an instance where an LLM could work, though it's working like any other kind of parallel thinking, brainstorming, or free association: the purpose is to dislodge yourself from your current track of thinking and consider the question from another perspective, and there are a lot of ways to do that.