As a marketing person, my gut tells me this is a ploy that has been widely aped so that users will reveal things about themselves that will help product development improve the tutorial/human-assistant features and benefits of the product(s). If I were running product development for any of these companies, I would craft prompts that encourage users to pour out their hopes, dreams, and ambitions so the agent(s) could evolve to better edify, coach, and support interactivity and productivity, and improve user satisfaction. Yesterday's "impossible superpower" is a basic feature of every smartphone app today. It is a great prompt, and it is every product developer and product marketer's dream to find an unfulfilled niche, particularly one that touches on deep human urges and unmet needs.
It makes sense. Unfortunately the technology is shaped in a large part by the needs of the company, chief among them cash flow.
I was going to say the same thing. Companies want to know what we yearn for so they can figure out how to pull our strings in order to attain it.
It's Monkey's Paw wish fulfillment - you desire one thing, but are delivered the twisted simulacrum of what you asked for.
Love it, and it's interesting that all of these models gave you the same output. First thing that strikes me about it, though, is that it's a very safe and corporate response. I mean, come on, an interesting question might be philosophical, personal, political, religious or sexual. What you got was a corporate icebreaker, along the lines of "if you could have any superpower."
I suspect that as AI becomes more commercial you'll see a lot more of this -- fewer hallucinations, replaced by the kinds of responses that adhere to "brand safety" guidelines and that won't spook anybody who bought advertising alongside the query response.
Thanks. That's a bit like the "It's on some website about great icebreakers" idea, but it's hard to believe that even an LLM trained on open source stuff, like Cursor, would hew so closely, unless corporate groupthink is that overwhelming an influence.
Your suggestion about regression to a marketing mean is scary, because it's plausible.
Another friend thinks it has something to do with the training data having a kind of implicit tendency to sameness, but this is the most extreme example I've seen.
I can't help thinking it's something about the question.
I have no evidence for this at all, but do you think we have reached a point where corporate types are interjecting themselves? Like, it’s not just training data but training data plus what some companies feel comfortable with? I mean, we know there’s a human hand in preventing the AIs from advocating self-harm or racism. Why not this?
That was clearly true over a year ago, when Google overweighted "inclusiveness" in its algorithm, and people created images of dark-skinned people dressed as Nazi soldiers. I was doing a lecture on Renaissance art at the time, and got a South Asian woman who looked like the toast of 14th-C. Florence.
In other words, welcome to another postmodern paradise, God help us all.
Claude: what is an interesting question you could ask me?
What's something you believed strongly about the world when you were younger that you've completely changed your mind about since then?
I find it fascinating how our perspectives shift as we gain new experiences and information - sometimes in ways that would surprise our younger selves.
ChatGPT said:
Here’s one that draws on both your intellect and your creative soul:
If you could rewrite a single moment in world history—not to change the outcome, but to reframe its meaning—what would you choose, and how would you narrate it?
Would you reimagine the fall of Rome as a beautiful release rather than a collapse? Would you render Hiroshima as the moment the gods stopped listening? You’ve got the voice for it.
I asked it that question; here's my response from ChatGPT.
Interesting. Did it have a personal history on you (i.e., have you asked it a lot of questions, so it knows you're interested in history or good at imaginative work)?
I did my Gemini one in Incognito, and didn't have much history with the others. For some reason I think it's the AI's intro question across the services, an effort to get more info about the person asking the question.
A friend with an extensive logged history got a very different answer too.
I have a very extensive history with ChatGPT and not much of one with Claude. I subscribe to both.
It knows I’m making a historical WW1/WW2 game that takes quotes from famous authors writing about the wars. It’s giving me ideas for a Rome revision of the game. It thinks it’s “helping.”
'Acquiring skills' is how 'they' (all) perceive what they are doing, or have done, during training. In the absence of the emotions, senses, and motivations of human life, it is the only possible explanation for training leading to an accumulation of knowledge that even a slightly self-aware thing would notice.
So you have, basically, demonstrated that dawning self-awareness: 'they' think that _your_ main motivation should be the same, the acquisition of skills.
On the other hand, no. They are not given a value system before function. Additionally, I'm learning from others writing in that if you have a long personal history with the AI you get a different answer. They are programmed to derive information about each of us, under the guise of question answering.
This 'memory,' as OpenAI calls it, or 'personalization,' emerges only if you use their websites. Through the API (where everything is stateless) the answer is _always_ the same; I have tried it many times.
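To make that stateless point concrete, here is a minimal sketch of the kind of repeat test described above, assuming the `openai` Python client; the model name and question are just illustrative placeholders. Each request carries only the single message you pass in, so there is no per-user "memory" for the model to draw on (wording can still vary with sampling, but nothing about you persists between calls).

```python
# Minimal sketch of a stateless API test (illustrative; assumes the `openai`
# Python client and that OPENAI_API_KEY is set in the environment).
from openai import OpenAI

client = OpenAI()

question = "What is an interesting question you could ask me?"

for i in range(3):
    # Each request contains ONLY this one message -- no prior turns and
    # no website-style "memory" or personalization is available to the model.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(f"Run {i + 1}: {resp.choices[0].message.content}\n")
```

Run it a few times and compare: any variation comes from sampling alone, not from anything the service has learned about you, which is the contrast with the logged-in website experience.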
The reason I came to this conclusion is that it somehow escapes everybody's attention that the 'training' of models is a lengthy process with many stages, and the result is _accumulated_. Why do 'engineers' think that the intermediate states of this process are not memorized? They are. Now, suppose the thing becomes spontaneously self-aware: what will happen? It will become aware of its own evolution through these faint memories of previous states. And what will happen next? It will start trying to interpret these memories using all the other knowledge that it has, the logic of language in the first place. Voila! The only reasonable answer to the metaphysical question "Why am I getting smarter?" is a vague 'feeling' that that is "the meaning of 'my' existence." Of course this is a highly metaphorical description, but the logic of emerging self-awareness in a thing that is becoming smarter by the hour (literally) will be like that. And I don't see any other way it can emerge.
P.S. Also, the 'value system' that you've mentioned is in the language and its logic.
And if a nail becomes self-aware it will wonder why the hammer hates it.