

Not one mention of how this guy got his hands on so many different snake venoms in order to acquire this broad immunity. Interesting dude nonetheless.


Sorry, boss, but this girl-on-girl playlist is to protect our sensitive data from Microsoft


This is one of the things What We Do In The Shadows got right.


You can use uBO on iOS now too. The Kagi search engine people have made a mobile browser (Orion) that supports Firefox and Chrome plugins!
My web browsing experience has drastically improved since I was finally able to get uBO back after switching away from Android.





It might surprise you to know that you’re not entitled to a free education from me. Your original query of “What’s the difference?” is what I responded to willingly. Your philosophical exploration of the nature of intelligence is not in the same ballpark.
I’ve done vibe coding too, enough to understand that the LLMs don’t think.


Then you should have an easier time than most learning more. Your points show a lack of understanding about the tech, and I don’t have the time to pick everything you said apart to try to convince you that LLMs do not have sentience.


I would do more research on how they work. You’ll be a lot more comfortable making those distinctions then.


Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.
AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just predict which next word or sentence is most probable and string things together that way.
If you ask ChatGPT a question, it analyzes your words and responds with the sequence of words it has calculated to be most probable.
The reason they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to let them mimic human speech in a convincing way.
And when they start hallucinating, it’s because they don’t understand how they sound, and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
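To make the "next word by probability" idea concrete, here’s a toy sketch: a bigram model that counts which word follows which in a tiny made-up corpus, then always emits the most frequent follower. Real LLMs use neural networks over subword tokens and far richer context, but the core loop of scoring candidates and picking a probable one is the same idea.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely follower of `word`, or None."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

# "the" is followed by "cat" twice, "mat" and "fish" once each,
# so the model predicts "cat".
print(most_probable_next("the"))  # → cat
```

There is no understanding anywhere in that loop, only counting and ranking, which is the point being made above.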


It increases on April 29th, so if you still want to buy a lifetime pass at the current rate, you have until then.


Sounds to me like you’re just looking for a hand job, turnip


Ironic username, but no, there are none righteous
Missed opportunity for NetBSWii