• 0 Posts
  • 13 Comments
Joined 7 months ago
Cake day: June 4th, 2025

  • If you define “not normal” as “not having empathy” then your argument is vacuously true. Like “I’m a good person because I say I am”

    If you define normal as the average of everyone then statistically you’re wrong about empathy. The Stanford Prison Experiment or basically any other social experiment that is now banned proves you wrong (hence they had to ban them because people do shitty things to each other just because).

    A good one I could name (which was banned for causing stress to the participants via some amount of empathy) would be the [Milgram Experiment](https://en.wikipedia.org/wiki/Milgram_experiment). Most people will question their actions if they can directly see they are harming a stranger… unfortunately most people will also apparently hurt others even while hearing the victim scream and beg them to stop, just because an authority figure tells them to keep going and that it’s all part of the plan.

    I don’t think that people are sadistic or malicious by nature, but they sure as hell do not have strong empathy by default mate, and the prison experiment alone proves sadism is much more prevalent than you seem to think. As does the existence of the Holocaust, the genocide in Gaza, all the other genocides, the existence of Guantanamo Bay, the existence of capitalism in the first place, the need for a list of what is a war crime, war itself, etc.

    The reason any of these happen is because people care more about the status quo or themselves than certain other people. Soldiers kill soldiers because their desire to live and not be shamed as a defector outweighs any pain they’ll cause others. Ergo, there is seemingly an endless supply of people who will choose themselves/self-interest over others, in contrast to your hope that universal empathy is the default.

    You can feel bad for others and do shitty things just like you can be a psychopath and do kind things. Empathy doesn’t necessarily make someone good and the lack of it doesn’t make someone bad. Unless you define good and evil to mean that in which case there’s no shower thought just another definition of good and evil.


  • The claim that humans are always terrible by default is false, but claiming the polar opposite is also false.

    Many people have empathy, but not all, and it varies in strength/quality from one person to another.

    Many well-adjusted people do not feel empathy. Many people are depressed/over-stressed and not well-adjusted because they have empathy.

    As for PTSD, the ability (or inability) to adjust to or move on from traumatic experiences is not directly correlated to empathy.

    Furthermore the ability to kill those who wish you (or those you care about) harm is evolutionarily advantageous. Anger and violence in response to stress and pain allows you to fight off predators/enemies/sources-of-pain. The majority of humanity feels these emotions.

    When in a state of anger and pain it is harder for us to think about our actions. Your claim that someone with empathy will always feel conflicted about hurting others is therefore false.

    Now most people with empathy might feel remorse but if their mind doesn’t put enough weight on that moment to remember it, there’s nothing for them to feel sorry for later. Does that mean they don’t feel empathy? Nope, they can still empathize with friends and family and characters on TV shows, they just don’t have a mind that catalogues their guilt. (There are unfortunately many people like this)

    I do think many people cause significant pain to others. But out of ignorance, not malice. And therein lies a major problem with empathy. If you don’t think someone is actually hurting, you won’t feel empathy for them even if you feel empathy for others. So if you aren’t aware of the pain others might feel around you, you won’t experience empathic responses even if you might for other kinds of pain.

    People might not be generally good or generally bad but we are typically stupid.

    If you can convince someone that some person is “just faking it for attention” they won’t feel empathy. Now the reverse is also typically true: if you can convince a person with empathy that someone else’s pain is real, they’ll feel empathy. Unfortunately people don’t like being told they’re wrong, having to change their viewpoint, or listening to evidence rationally, so there are many people you cannot convince to feel bad for certain other people.

    Another thing to note is that many of the terms you’ve used are indefinite. What does well-adjusted mean? Psychopathy is prevalent in many fields and psychopaths can live healthy/stable lives. (Sadism and psychopathy are different btw) Are they well adjusted?

    What does good mean? The greater good or empathy? Because those two do not agree on everything. How far does empathy need to go for someone to be good in your opinion? Are people who eat meat evil because they lack empathy for animals?

    If there was a trolley problem-esque situation where you could save five lives but only if you killed a child with your bare hands, would your idea of a good person commit murder or let five people die because they couldn’t overcome their empathy?

    Lastly—and slightly unrelated—I’d like to note that I just had an odd thought: if you tried to logically dichotomize all actions into good or bad, you would need arithmetic to deal with the idea of a greater-good / utilitarianism. However by Gödel’s theorems, in any logical system in which arithmetic can be performed, there will be things that cannot be proven good or bad no matter how many axioms you add to the system. In other words it is actually by definition impossible to dichotomize actions into good or bad. Adding a third category won’t even fix it. Right? Any mathematician/logician/philosopher that can back me up or tell me I’m wrong?


  • Considering people seem to correlate scarcity with value, yeah, big time.

    I also doubt people would be willing to hunt/farm xenomorphs if they couldn’t get paid exorbitantly.

    Oh and I’d imagine people who have that eccentric desire to be the top of the food chain would probably think it’s the best food ever. “You’re not a real man ™️ till you’ve eaten xenomorph meat” lol


    Sidenote: I just had an idea for xenomorph farming:

    1. Find an asteroid with enough gravity to keep the xenomorphs from yeeting themselves into space.
    2. Place egg.
    3. Add hosts.
    4. Wait.
    5. Use a robot to retrieve an egg and make sure it stays entirely sealed away with no chance of human contact.
    6. Throw another asteroid at the main one fast enough that it liquefies both.
    7. Collect the obliterated xenomorph parts and cook them as they enter collection to make sure they’re dead.
    8. Sell to patrons for enough money to buy a new asteroid and repeat the process.
    9. Profit.
    10. Eventually make a mistake and die a horrible death.


  • I decided to look into this because I was curious.

    The unification and regulation of the French language came about in 1635 with the founding of the Académie Française, and it actually took a while for the revolutionaries to pivot from “liberty of language” to “the only language in France should be French.” English was already established by this time and the Great Vowel Shift was basically complete.

    According to Wikipedia, Middle French died out in the 17th century while Middle English died out in the 15th. Ergo: Modern English predates Modern French.

    If we check back farther, it seems the two languages developed similarly, though the arbitrary divides for each age of language (Old, Middle, Modern) show English reaching each stage first by roughly a century.

    Of course this is all arbitrary since language doesn’t evolve discretely. However, the Wikipedia entry for the oldest Gallo-Romance text (the precursor to French) is from 842 CE, whereas Old English poetry dates as early as 650-700 CE. Once again suggesting English predates French.

    Now there is a difficulty here with French because it originates from Vulgar Latin, which could be considered older than English, but I’m not sure many would call that French since lots of European languages branched from Vulgar Latin.

    As for silliness… yeah no arguments there lol


  • We should start saying that neurodivergence is a high dimensional manifold.

    The lines between OCD and ADHD and Autism and even like schizophrenia and generalized anxiety and depression are all kind of blurred and all related. Within each of those it’s still a multidimensional space.

    Do you have train autism or obsession over morality autism or both?

    Is being antisocial due to the autism or the anxiety?

    Do you need to have things organized because of the autism or the OCD?

    Do you have the I-can’t-sit-still ADHD or the I-can’t-get-myself-to-do-anything ADHD? (Or both)

    Do you have difficulty regulating emotion due to ADHD, autism, or bipolar depression?

    Do you feel no reward for achievement because you’re ADHD or depressed?

    Do you have the “I only know I’m hungry when I start shaking” kind of neurodivergence or the “I just eat instinctively and compulsively” kind of neurodivergence? Why? ADHD? Anhedonia? Autism? Anxiety? Depression?

    Point is that it’s very difficult to categorize neurodivergence. It’s a mess of symptoms that generally seem to correlate in certain groups but it’s not precise.

    What’s more important (imo) is how it affects your life and how you can deal with it. Have a shit ton of executive dysfunction (a symptom of ADHD)? Stimulant medication might help, but depending on the way your brain works maybe Adderall works better than methylphenidate, or maybe you’d be helped by an NRI or even an SSRI.

    If you have to use a different medication, then are you really the same kind of neurodivergent as the other ADHD people? Probably not neurologically, but it doesn’t matter; you’ll still be able to find support and community from those other ADHD folk.



  • Thanks, I almost didn’t post because it was an essay of a comment lol, glad you found it insightful

    As for Wolfram Alpha, I’m definitely not an expert but I’d guess the reason it was good at math was that it would simply translate your problem from natural language into commands that could be sent to a math engine that would do the actual calculation.

    So it basically acts like a language translator, but from typed-out math to a programming language for some advanced calculation program (like Wolfram Mathematica).

    Again, this is just speculation because I’m a bit too tired to look into it rn, but it seems plausible since we had basic language translators online back then (I think…) and I’d imagine parsing written math is probably easier than natural language translation.
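
    Something like this toy sketch is what I have in mind, with sympy standing in for the real math engine (again, purely my speculation about how Wolfram Alpha works, and it assumes sympy is installed):

    ```python
    # Toy "translate, then compute" pipeline: map worded math onto a symbolic
    # expression and let an existing math engine evaluate it. Entirely
    # speculative as a model of Wolfram|Alpha; sympy stands in for Mathematica.
    import sympy

    WORD_TO_SYMBOL = {
        "plus": "+", "minus": "-", "times": "*", "divided": "/", "by": "",
        "squared": "**2", "what": "", "is": "",
    }

    def answer(question: str):
        words = question.lower().replace("?", " ").split()
        expr = " ".join(WORD_TO_SYMBOL.get(w, w) for w in words)
        return sympy.sympify(expr)   # the math engine does the real work

    print(answer("What is 12 times 7 plus 3 squared?"))   # 93
    ```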


  • Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

    I loathe python irrationally (and I guess I’m a masochist who likes to reinvent the wheel programming-wise lol) so I’ve written my own neural nets from scratch a few times.

    Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
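
    Here’s a minimal sketch of that idea in plain numpy (a single linear layer, nothing to do with any real LLM): the weights get nudged against the gradient of the error between the desired and actual outputs.

    ```python
    # Gradient descent on a single linear layer with mean squared error:
    # compare actual output to desired output, compute the gradient, and
    # change the weights in the direction that shrinks the error.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))             # 100 training examples, 4 inputs each
    true_W = rng.normal(size=(4, 1))
    y = X @ true_W                            # the "desired outcomes"

    W = np.zeros((4, 1))                      # weights start at nothing useful
    lr = 0.1
    for _ in range(200):
        pred = X @ W                          # actual outcome
        err = pred - y                        # difference from desired outcome
        grad = X.T @ err / len(X)             # gradient of the mean squared error
        W -= lr * grad                        # step the weights downhill

    print(np.allclose(W, true_W, atol=1e-3))  # True: the weights converged
    ```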

    This sets up two major roadblocks for AGI: input size limits and determinism.

    The weight matrices are set for a certain number of inputs. Unfortunately you can’t just add a new unit of input and assume the weights will be nearly the same. Instead you have to retrain the entire network. (The research area that tackles this problem is called transfer learning, if you want to learn more.)

    This input constraint stands in the way of AGI because it means a network trained like this cannot accept an input larger than a certain size. That’s a problem, since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net. It’s also a problem from a size and training-time perspective, because increasing the input size rapidly blows up basically everything else (compute, memory, training time).
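
    A two-line illustration of that fixed-shape problem (toy numbers, nothing special about them):

    ```python
    # A weight matrix is shaped for exactly the inputs it was trained on.
    # Feed it one extra input feature and the multiplication simply doesn't
    # fit; "making room" means new weights and a full retrain.
    import numpy as np

    W = np.random.normal(size=(4, 8))   # trained for 4 inputs, 8 hidden units
    x_ok = np.random.normal(size=4)
    x_big = np.random.normal(size=5)    # one extra input feature

    print((x_ok @ W).shape)             # (8,) -- works
    try:
        x_big @ W                       # shapes (5,) and (4, 8) don't line up
    except ValueError as err:
        print("retrain required:", err)
    ```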

    Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word, which means there is a limit to their memory unless you retrain the entire net to know the answers you want. (And it’s slow af.) Doesn’t sound like a mind to me…
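
    Here’s a toy sketch of that illusion (made-up function names, no real API): every turn, the whole conversation gets re-fed to the model, and anything past the context limit silently falls off the front.

    ```python
    # "Memory" as re-feeding the transcript: the model only ever sees what fits
    # in the context window, so older messages are simply forgotten.
    CONTEXT_LIMIT = 8          # real models count tokens; messages keep the toy simple

    history: list[str] = []

    def fake_model(visible: list[str]) -> str:
        # stand-in for the network: it can only use what's in `visible`
        return f"(reply based on {len(visible)} visible messages)"

    def chat(user_msg: str) -> str:
        history.append(user_msg)
        visible = history[-CONTEXT_LIMIT:]   # everything older is dropped
        reply = fake_model(visible)          # the whole (truncated) transcript is reprocessed
        history.append(reply)
        return reply

    for i in range(10):
        print(chat(f"message {i}"))
    ```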

    Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They are literally just complicated predictive algorithms, like linear regression. I’m dead serious. It’s basically regression, just in a very high-dimensional vector space.

    ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.

    All these models do is what they were trained to do. Now, they were trained to be able to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers on Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers whose values were previously set by training to find the most likely next word.
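
    And this is roughly all that is happening at inference time (toy vocabulary, random stand-in weights): fixed numbers multiplied against the input, softmaxed, and the most likely next word picked. Nothing in here updates or reflects.

    ```python
    # Inference with frozen weights: pure multiplication plus a softmax to
    # pick the most likely next word. Same input, same output, every time.
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]
    rng = np.random.default_rng(1)
    W = rng.normal(size=(len(vocab), len(vocab)))   # fixed once training ends

    def next_word(word: str) -> str:
        x = np.zeros(len(vocab))
        x[vocab.index(word)] = 1.0            # one-hot encode the input word
        logits = W @ x                        # multiply the previously set numbers
        probs = np.exp(logits) / np.exp(logits).sum()
        return vocab[int(np.argmax(probs))]   # most likely next word

    print(next_word("cat"))                   # deterministic every single run
    ```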

    This is why LLMs can’t do math. Because they don’t actually see the numbers, they don’t know what numbers are. They don’t know anything at all because they’re incapable of thought. Instead there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words, because the model was never trained for that scenario.

    Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong,” because the model isn’t plastic (capable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn,” which again takes time and really isn’t learning or intelligence at all.

    Now there are some more exotic neural network architectures that could surpass these limitations.

    Currently I’m experimenting with Spiking Neural Nets which are much more capable of transfer learning and more closely model biological neurons along with other cool features like being good with temporal changes in input.

    However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which all run simultaneously) and you kind of have to train them slowly.

    You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).

    SNNs with time based learning rules (typically some form of STDP which mimics Hebbian learning as per biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete time dependent waves of continuous self modifying spike patterns which could theoretically be thoughts” not as in “we can make something that thinks.”
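
    For the curious, here’s a very rough toy of the kind of rule I mean (one synapse, made-up constants, nowhere near a research-grade model): a leaky integrate-and-fire neuron with pair-based STDP, where the weight changes in real time from spike timing alone, with no gradient descent anywhere.

    ```python
    # One leaky integrate-and-fire neuron driven by one input synapse.
    # Pair-based STDP: if the input spike precedes the output spike the weight
    # is strengthened; if it arrives after, the weight is weakened.
    import numpy as np

    dt, tau_m, tau_plus, tau_minus = 1.0, 20.0, 20.0, 20.0   # ms
    a_plus, a_minus, v_thresh = 0.05, 0.06, 1.0

    w = 0.5                        # synaptic weight
    v = 0.0                        # membrane potential
    pre_trace = post_trace = 0.0   # exponentially decaying spike traces

    pre_spikes = set(range(5, 100, 10))   # times (ms) the input neuron fires

    for t in range(100):
        # leak: traces and membrane potential decay toward zero
        pre_trace *= np.exp(-dt / tau_plus)
        post_trace *= np.exp(-dt / tau_minus)
        v *= np.exp(-dt / tau_m)

        if t in pre_spikes:
            v += w                        # input spike injects current scaled by w
            pre_trace += 1.0
            w -= a_minus * post_trace     # pre arriving after post -> depress

        if v >= v_thresh:                 # postsynaptic neuron fires
            v = 0.0                       # reset
            post_trace += 1.0
            w += a_plus * pre_trace       # pre arriving before post -> potentiate

    print(f"final weight: {w:.3f}")       # grew, since pre reliably precedes post
    ```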

    Like these neural nets are good with sensory input and that’s about as far as we’ve gotten (hyperbole but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so eventually maybe we’ll make a real intelligent being with them, but that day isn’t even on the horizon currently.

    In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.

    The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move towards AGI territory.

    Lying to say we are close to AGI when we aren’t at all close, however, is economically favorable which is why you get headlines like this.



  • I’ve come to the conclusion that suffering is really just anything that invades your focus without your desire for it to happen.

    Thinking about anything you would rather not think about is suffering. You get cut and your brain constantly reminds you of it because evolution is a bitch. Hatred, envy, anger, intrusive thoughts, headaches, itchy clothes, annoying noises in your environment, etc. Anything that steals your attention without your consent is suffering.

    So if you’re so focused on avoiding suffering that you aren’t able to focus on doing what you want, then yep, suffering.