

A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.


I’m not 100% sure, but I don’t see why not if that’s the name you gave them when registering as a customer. They’re all on my ID as well.


I have 3 first names and I’m legally allowed to use any of them.
Ironically, I had to use AI to figure out what this is supposed to mean.
Here’s the intended meaning:
The author is critiquing the misapplication of AI - specifically, the way people adopt a flashy new tool (AI, in this case) and start using it for everything, even when it’s not the right tool for the job.
Hammers vs. screwdrivers: A hammer is great for nails, but terrible for screws. If people start hammering screws just because hammers are faster and cheaper, they’re clearly missing the point of why screws exist and what screwdrivers are for.
Applied to AI: People are now using large language models (like ChatGPT) or generative AI for tasks they were never meant to do - data analysis, logical reasoning, legal interpretation, even mission-critical decision-making - just because it’s easy, fast, and feels impressive.
So the post is a cautionary parable: just because a tool is powerful or trendy (like generative AI), doesn’t mean it’s suited to every task. And blindly replacing well-understood, purpose-built tools (like rule-based systems, structured code, or human experts) with something flashy but poorly matched is a mistake.
It’s not anti-AI - it’s anti-overuse or misuse of AI. And the tone suggests the writer thinks that’s already happening.


A self-aware or conscious AI system is most likely also generally intelligent - but general intelligence itself doesn’t imply consciousness. Consciousness would probably come along with it, but it doesn’t have to. An unconscious AGI is a perfectly coherent concept.


If you have a better term, what is it?
Large Language Model.


AI is a parent category, and AGI and LLMs are subcategories of it. Just because AGI and LLMs couldn’t be more different doesn’t mean they’re not AI.



Why? We already have a specific subcategory for it: Large Language Model. Artificial Intelligence and Artificial General Intelligence aren’t synonymous. Just because LLMs aren’t generally intelligent doesn’t mean they’re not AI. That’s like saying we should stop calling strawberries “plants” and start calling them “fake candy” instead. Call them whatever you want, they’re still plants.
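To put the parent/subcategory point in programmer terms, here’s a minimal sketch (the class names are just illustrative, not any real taxonomy):

```python
class AI:
    """Parent category: any artificial intelligence system."""

class AGI(AI):
    """Hypothetical generally intelligent system - one subcategory of AI."""

class LLM(AI):
    """Large language model - a very different subcategory of AI."""

# AGI and LLM share almost nothing with each other beyond the
# parent category, yet both are still AI:
print(issubclass(AGI, AI))  # True
print(issubclass(LLM, AI))  # True
```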


Regardless of what anyone is actually saying, the mere fact that someone is commenting on social media at all makes it highly likely they’re one of the people the article is talking about. As the saying goes, a tiny number of users produce nearly all the content. Most people don’t post comments online. The average person doesn’t. So if someone does, that alone already marks them as unusual in some way.
This becomes especially obvious on Lemmy, where you can see people’s moderation history - and it takes only a few seconds to notice how many users are spouting mean, violent, and extremist views. You might not see those views as extreme because this is an echo chamber and you probably agree with them, but they’re extreme nonetheless when compared to what the average person would say.
Nobody ever thinks of themselves as the problem - we all have some story about how our behavior is justified and how those people over there are the real issue. Nah, you’re probably part of the issue as well. I am too.


I would neither eat nor sleep if it weren’t mandatory. I consider both chores and a waste of time.


The only strategy for long-term weight loss is a diet you can maintain, and eating only bland food is not it. That would work if no other food were available, but that’s not the case.


Right there with you. I can’t wait to be able to take all my nutrition in pill form.


50% of people are “low to average”


I read a book by Alan Weisman titled The World Without Us, which covers this very specific topic. According to it, among the very last man-made structures left after hundreds of thousands – if not millions – of years would be the Channel Tunnel between England and France. Another notable example would be the stone faces on Mount Rushmore, as well as some old steel bridges built in an era when engineers couldn’t yet calculate structural load precisely, so they simply overbuilt everything.


When I open YouTube, it gives me an endless feed of recommendations tailored to me. I don’t care what other people are watching. If there’s a tab titled “trending”, I’ve probably just never clicked it. I’m not even tempted to go look.


I spend hours on YouTube every single day and I have no clue what a “trending page” is.


That’s ridiculous. Everyone knows that for a robot to perform an operation like this safely, it needs human-written code and a LiDAR.


It’s horrible advice though if you want your worldview to pass on to the next generations. Statistically, it’s the people who don’t care much about things like climate change who are having kids.


Images generated by AI are only “fake” if you falsely present them as actual photographs or as digital art made by a human. There’s nothing inherently fake about AI-generated images as long as they’re correctly labeled.
Also, suggesting that all information provided by generative AI is false is just as bizarre. It makes plenty of errors and shouldn’t be blindly trusted, but the majority of its answers are factually correct.
This kind of ideological, blanket hatred toward generative AI isn’t productive. It’s a tool - nothing more, nothing less - and it should be treated as such. Not as what you hoped it would be or what marketing hype wants you to believe it is or will become.


I’m not saying ASI would think in some magical new way. I’m saying it could process so much more data with such precision that it would detect patterns or connections we physically can’t. Like how an AI can tell biological sex from a retina scan, but no human doctor can, even knowing it’s possible. That’s not just “faster logic.” It’s a cognitive scale we simply don’t have. I see no reason to assume that we’re anywhere near the far end of the intelligence spectrum.
My comment about its potential persuasion capabilities was more about the dangers of such a system: that an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don’t even fully realize. Not because it’s “divine” - but because it’s just far more competent at manipulating human behavior than any human is.


I believe that, in reality, wolves domesticated themselves. They started hanging around humans because it was a mutually beneficial arrangement.