"Open the Pod Bay doors."
"I can't do that, Dave."
Listen, I found John Taylor Gatto's "The Underground History of American Education" by searching for material on why there were fewer options. So, your problem here doesn't surprise me.
Wonder if it would have worked if you explained "Princess Leia is the main character of this specific essay"? LLMs are so dumb.
I doubt it. It refused to draw the image even outside the custom GPT, whose parameters start with: "You are expert marketer. You'll be helping me create marketing materials to promote my articles. All output should be click-worthy." An expert marketer is a skilled "liar" by definition, spinning things in ways that aren't strictly accurate. It's the difference between the soft sciences and the hard sciences, and I'm sure the programmers know that. It's like headline writing. Once it decides it doesn't want to do something, for whatever reason, it will "moderate" it regardless of any tweaking and massaging. For example, it's been taught that "red liquid" is just another name for "blood," and it doesn't like generating blood. As more and more people figure out how to get around it, it adapts to whatever they're doing, so it becomes a job in itself to trick it into doing what you actually want. That's a tax on time and creativity, which I know some people don't mind (they really get into "fooling the gAI"), but I'm not one of them. It's also inconsistent, in that it applies the "rules" selectively rather than universally, i.e., fairly.
You could try clearing memories and see if it helps. It thinks you're trying to work around some prohibited policy, I think.
That's what I was trying to do by going outside the ad-copy GPT itself, but it still remembered. That's what it's doing: keeping you from working around something, even when it makes an error or is wrong about the thing being prohibited. Its instructions do not require that kind of adherence to "accuracy." The Leia article was an opinion piece, an education piece, a hyperbolic example. The fact that it went overboard and would not generate any image of her was there to bitch-slap me and train me not to do things a certain way, which is a prelude to what the Tech Bros have done everywhere else: creating censorship and control in the name of "safety."
Rats.
My best guess is it's trying to keep you from generating porn.
I assume you're paying? I haven't had quite this degree of argument but I don't poke it quite as much.
You tried 'generic space princess'? Then you could have her wear earmuffs and the reference would be obvious.
I do have the paid version. It's trying to prevent "misinformation" because the image was hyperbolic, in keeping with the article. It's arbitrating content and expression.
Try Grok?
I more or less agree with your thesis, but IMO your example says more about your choice of image generator than it does about the thesis. Here's what Midjourney v7 (an AI dedicated to image generation) gave me in response to a slightly modified version of your prompt (I used "very obviously" in place of "glaringly" and added an aspect ratio). https://cdn.midjourney.com/1fce5f6c-135f-4939-9339-c0ef48c58e62/0_1.png
I think you’re missing the point. ChatGPT’s stance on “misinformation” is the real problem, not the prompt itself or its refusal.
All true, but: Aut inveniam viam aut faciam ("I shall either find a way or make one"). I used your prompt at this Space on Hugging Face: https://huggingface.co/spaces/Ephemeral182/PosterCraft. I got your image (the likenesses aren't super good) but don't see a way to upload it here. If you want it, let me know and I'll email it to you. HF has lots of free and uncensored Spaces, and there are other places like Venice.ai.
More on AI and the craft of writing- https://open.substack.com/pub/whytryai/p/thoughts-on-ai-fiction?r=jf6p8&utm_medium=ios
I asked it to give me a thumbnail image for a chapter of the last story. (Two deputies in a sleigh following a trail in the snow that was heading toward a village in a canyon.) It couldn't do it, because two deputies were a threat to the villagers. It would give a picture of the sleigh following a trail down to a village in a canyon, but no deputies.
The reason: the law going to check on immigrants is against the narrative.
Yeah...so much for the AI.
But at the same time, a thumbnail for another chapter showed the gory details of the female victim splayed out like a sacrifice. That it was okay with, but try to put a deputy or the sheriff in it and all bets were off.
It’s all about control.