• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: August 9th, 2023






  • I understand your skepticism, but gas-powered leaf blowers have annoyed the hell out of me for years. I live in a relatively small city in Northern California, and I can always hear and smell a leaf blower before I can even see it. I can’t overstate how strongly gas-powered leaf blowers smell. The smell of gas permeates my apartment, even with the windows closed, and it’s the kind of smell that gets stuck in my nostrils for hours. The noise is pretty disruptive, but the smell is way worse, to be honest. I’m not sure why they smell so much worse than other gas-powered things, but it’s like they’re just spewing gas out into the air.

    I have no problem with electric or battery-powered leaf blowers, just please use them at a reasonable time of day - after 8am and before 10pm.




  • This is state-sponsored terrorism. Absolutely despicably evil that anyone would wake up one day and think to themself “what should I do today….oh I know, let’s go traumatize some Palestinian kids by kidnapping and torturing them.” What the fuck.

    What sets this government apart is the level of support and encouragement it provides to settlers, whether through supplying them with weapons or funding the creation of new outposts. This backing has enabled and emboldened settlers to carry out attacks on Palestinians, with the aim of displacing communities and annexing their land.






  • It isn’t just you and me. Not even the people who designed them fully understand why they give the responses they give. It’s a well-known problem. Our understanding is definitely improving over time, but we still don’t fully know how they do it.

    Here’s the latest exploration of this topic I could find.

    LLMs continue to be one of the least understood mass-market technologies ever

    Tracing even a single response takes hours and there’s still a lot of figuring out left to do.


  • I highly doubt that, for so many reasons. Here are just a few:

    • What data would you train it on, the Constitution? The entirety of federal law? How would that work? Knowing how ridiculous textualism is even when done by humans, do you really think a non-thinking algorithm could understand the intention behind the words? Or even what laws, rules, or norms should be respected in each unique situation?
    • We don’t know why LLMs return the responses they return. This would be hugely problematic for understanding its directions.
    • If an LLM doesn’t know an answer, instead of saying so it will usually just make something up. Plenty of people do this too, but I’m not sure why we should trust an algorithm’s hallucinations over a human’s bullshit.
    • How would you ensure the integrity of the prompt engineer’s prompts? Would there be oversight? Could the LLM’s “decisions” be reversed?
    • How could you hold an LLM accountable for the inevitable harm it causes? People will undoubtedly die for one reason or another based on the LLM’s “decisions.” Would you delete the model? Retrain it? How would you prevent it from making the same mistake again?

    I don’t mean this as an attack on you, but I think you trust the implementation of LLMs way more than they deserve. These are unfinished products. They have some limited potential, but should by no means have any power or control over our lives. Have they really shown you they should be trusted with this kind of power?



  • Casey Newton founded Platformer after leaving The Verge around 5 years ago. But yeah, I used to listen to Hard Fork, his podcast with Kevin Roose, but I stopped because of how uncritically they cover AI and LLMs. It’s basically the only thing they cover, and yet they’re quite gullible and not really realistic about the whole industry. They land some amazing interviews with key players, but never ask hard questions or dive nearly deep enough, so they end up sounding pretty fluffy and ass-kissy. I totally agree with Zitron’s take on their reporting. I constantly found myself wishing they were a lot more cynical and combative.


  • That’s an interesting article, but it was published in 2022, before LLMs were on anyone’s radar. The results are still incredibly impressive, without a doubt, but based on how the researchers explain it, it looks like it was accomplished using deep learning, which isn’t the same as LLMs, though they’re not entirely unrelated.

    Opaque and confusing terminology in this space also makes it very difficult to determine which people, systems, or technologies are actually making these advancements. As far as I’m concerned, none of this is actual AI, just very powerful algorithmic prediction models. So claims that an AI system has itself made unique technological advancements, when these systems are incapable of independent creativity, prove to me that nearly all their touted benefits are still entirely hypothetical right now.


  • The article explains the problems in great detail.

    Here’s just one small section of the text which describes some of them:

    All of this certainly makes knowledge and literature more accessible, but it relies entirely on the people who create that knowledge and literature in the first place—that labor that takes time, expertise, and often money. Worse, generative-AI chatbots are presented as oracles that have “learned” from their training data and often don’t cite sources (or cite imaginary sources). This decontextualizes knowledge, prevents humans from collaborating, and makes it harder for writers and researchers to build a reputation and engage in healthy intellectual debate. Generative-AI companies say that their chatbots will themselves make scientific advancements, but those claims are purely hypothetical.

    (I originally put this as a top-level comment, my bad.)