This was gripping to read. It is very much like 'a psychopath without a psyche' - that's a great phrase. I can't help thinking of all the individual challenges that quietly go on without being highlighted, all the grovelling individual apologies - and yet this info is fed back to its 'brain' somewhere, whether it's an LLM or not, and someone somewhere is benefitting.
Thanks for commenting, Mandy. I think what I find most disturbing is how inevitably students and journalists (amongst others) will understandably use an AI tool to find information about a topic and will be presented with an answer which sounds SO brilliant and clear and confident and yet could be WRONG. And the person reading the answer has little means of checking it. If you are confidently told "these are the words so-and-so used", why would you not believe it? Don't get me wrong: I think AI is also brilliantly useful but gosh, we have a way to go before we can use it well!
That is a brilliant exposé, Nicola.
Your logic and argument had AI in a vortex.
AI does not have a human brain.
However, it can produce copious explanations and many untruths.
It reminded me of Alexa with its empty suggestions and responses.
Quite worrying to think of the implications.
AI was completely outwitted.
Thanks, A! It makes me think of a small child asking an adult something and trusting the reply. But then, as the child gets older, understanding that adults can be wrong, misguided, or deceitful, and still asking questions while being wise enough to know that the answer may not be good enough. We are still at the stage of small, credulous children.
There are so many examples of conversations like this with AI. I don't know how we get people to recognise that it is just a text prediction mechanism and doesn't/can't check the accuracy of what it spouts. The recent tweaking of the interface to make it more sycophantic and fawning is, as you say, nauseating (and very American in tone). That might alert a few people to the lack of sincerity, but I suspect more will be successfully flattered by it.