I'd love to see you or other specialists in cognition get asked a bit more about this stuff instead of the CEOs and many of the critics. I find many of the boosters and the critics alike to be deeply mistaken about how either the brain or LLMs work. In particular, it seems a very common mistake on both sides to imagine that there are "occasional hallucinations" when in fact the whole thing is "hallucination", if you want to look at it that way. Some critics who do understand this then think it invalidates the whole utility of the field, but that seems to me to be wrong on two important counts. First, these things are producing outputs that are statistical predictions, and as such they're just as "right" or "wrong" as the mean of a set of numbers is. That can be useful, but you don't want to be, as the old joke has it, "the statistician who drowned crossing a river that averages two feet deep".
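A toy numeric sketch of that joke (all numbers invented for illustration): the average can be exactly right and still tell you nothing about the one spot that matters.

```python
# Hypothetical river cross-section, depths in feet (invented numbers).
depths_ft = [1, 1, 1, 8, 1, 1, 1]

mean_depth = sum(depths_ft) / len(depths_ft)
print(f"mean depth: {mean_depth:.1f} ft")  # 2.0 ft - looks safely wadeable
print(f"max depth:  {max(depths_ft)} ft")  # 8 ft - drowns the statistician
```

The mean is a perfectly correct statistic; it just isn't a per-step guarantee, and neither is any single LLM output.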
And second: by the same token, the idea that humans aren't ALSO just hallucinating all the time seems bizarre to me, especially as an aphantasic. You could look at how people imagine, for example, the racial mix of the Roman or Victorian age to see how much they are just doing what LLMs are doing, with the added conviction that comes from having vividly envisaged something they actually have no direct knowledge of.
I'd really like to see someone provide a more nuanced take than the simplistic Pro- or Anti- positions that are prevalent.
I should add that some critics, like Prof. Emily Bender, don't seem to make these mistakes, but neither are they pushing back against the common assumption of greater reliability in humans, AFAICT.
"So, here's my rallying cry to all the psychologists, neuroscientists, and human behaviour experts out there:" ?
Everyone should participate and speak their mind; AI affects everyone. Any specialist, no matter what field they specialize in, has been taught the same narratives; it's like writing a book and printing millions of copies of it.
The world has transformed into a woke culture and a world of entitlement, and many of these so-called specialists you mentioned have become part and parcel of it.
Not because they want to, but because everyone else does: the collective mind, or should I say the adoption of a collective narrative.
As for AI, it can never become anything more than the narrative it's being fed, all based on what its creators want it to advocate.
It's like a super-fast information-retrieval machine, fact-checking to make sure everything fits the narrative of its creators and spoon-feeding those who allow it to.
AI will follow, and already is following, the same trend as the media, only super fast and far more efficient.
And whether AI leans to the left or the right is determined by people such as Mark Zuckerberg, Bill Gates, George Soros, and the like.
After all, AI is all about the algorithms that are fed into it.
Blaming AI is like blaming a car for reckless driving instead of the driver, or the programmer if it's automated.
I was interested to read this because I can't say I've read or seen any Tech CEOs' responses to AI. I've been far more interested in the end-user's perspective, that of everyone who's diving in and finding the tips, tricks, and limitations.
I do like the call-out to the human behaviourists. I am not one, by any professional means, but there is some overlap, for I am a poet, an observer and synthesizer of the human condition.
Professionally, I'm a proofreader, so it was concern for my career that first got my attention. As that concern has waned, so has my poetic interest grown, especially as I have contacts actively using ChatGPT, Midjourney, Photoshop, and Bing AIs to create artwork, speed up output, and delve into the possible.
I have only dabbled. I am far more interested in the limitations of AI, specifically in its inability to cope with the metaphor that we call 'space' - the concept that there is a vast stretch of nothingness between you and me, both physically and in terms of the knowledge we each have about the other. From what I have observed, 'space' is a concept that AI, as it currently stands, cannot fathom. Or perhaps that is a limitation of the underlying programming.
Though current AI has been trained on images and data that include depictions of nothingness, within the code, as in our own existence, there is some thing between every thing. As humans, we understand the lack of space, of knowledge, because we understand the metaphor of that which is zero. We may seek, then, to fill space, but we comprehend the patterns in the metaphor, and the depth of knowledge needed to fill those patterns. A gap in technical knowledge, for instance, must be filled by study and design.
Each gap-filler is the end result of time and effort. It cannot be made up on the spot, but must have foundation, structure, development - and we understand each of those words to have greater meaning than the sum of its parts. Predictive AI sees only the words or, more precisely, a string of binary code.
Thus, I continue to be delighted by the absurd: at base level, AI is a construction of ones and zeros, unable to interact with zero. Unable to comprehend that every single instance of one is the sum of much, much more.
...
I have typed more than I intended, yet arrived at an answer. Yes, I think CEOs are the wrong people. Ask instead the poet, the philosopher, the layman, who does not need to use AI, but is interested all the same.