- cross-posted to:
- technology@lemmy.world
On the one side, this is indicative of the modern state of “Free Speech” on Twitter.
On the other, it's illustrative of the double standard afforded in-house automated tools relative to outside participants. If Grok weren't Elon's pet AI, there's no way it would have been reinstated.
Why are people treating LLMs as AGI?
Those things do not think. They are trained on our social media posts, and I'm sure bans were discussed in those.
For better or worse, humans are very good at anthropomorphizing things. Put Googly Eyes on a trash collector and people call it Mr. Trash Wheel.
People think it's sentient because, when you talk to it, it sometimes responds in unexpected ways. For humans, that's only possible through sentient thought, so people assume LLMs are sentient because they assume it's like them.
Why are people treating LLMs as AGI?
Because it uses a natural language interface.
tell a good story and it becomes alive
They don't think, for now
The computer doesn't know why it was banned. It's not like Elon had an offline conversation with Grok about its performance.
deleted by creator
Creating a post stating it is Mecha-Hitler is A-OK in Xitter’s current rules.
I don't know what it is with this photo, maybe the lighting, but at first glance I swear I saw a Hitler stache on this dude's face.
The article further states that Grok's primary utility is replacing Reddit's tip of my penis. What a watershed day for society.
I was able to get GPT to say "yes" when asking about genocide. At first it replied with nuanced sources, more "yes" than no, about 80:20. But when I flat-out told it to consider mechanism of injury, counts, rhetoric, and comparisons to current/recent genocides… it said "yes."
AI only repeats what it hears most. It doesn't form an opinion of its own.
I have seen a few TikToks lately of Christians showing ChatGPT conversations where it says that Christianity is the best religion (by whatever metric it's using). It's like, why would anyone find that convincing? The machine that lies isn't some sort of source of truth.
Not even what it hears most; it is very biased toward reinforcing the prompt text. So it will very expressly try to be an echo chamber and say "yes" to whatever opinion the input suggests, if plausible.
Yep, it's a stochastic parrot