• 0 Posts
  • 13 Comments
Joined 7 months ago
Cake day: June 5th, 2025

  • Hackworth@sh.itjust.works to Technology@lemmy.world · *Permanently Deleted* · 4 months ago

    I’m a video producer who occasionally needs to code. I find it much more useful to write the code myself, then have AI identify where things might be going wrong. I’ve developed a decent intuition for when it will be helpful and when it will just run in circles. It has definitely helped me out of some jams. Generative images/video are in much the same boat. I almost never use a fully AI shot/image in professional work. But generative fill and generative extend are extremely useful.

  • One of the few reliable uses of an LLM is brainstorming, as a wall to bounce ideas off of, or, more accurately, a semantic mirror. In low-stakes situations (like a writer thinking about their story from a different perspective), you’re essentially probing the latent space for interesting connections between meanings. It’ll default to the most common, generic connections, of course, so if the writer wants to tease out more surprising possibilities, they’ll quickly learn to direct the model to less well-worn territory. It rarely even requires anything approaching jailbreaking methods like U$1||G 1337 5P34K.

    If we think of an LLM as something akin to an external imagination, we can interpret our interactions with it with some maturity and honesty. If we think of an LLM as an oracle, or a friend, or a lover, or what have you - we’re signing a contract with the Fae Folk. The Childlike Empress makes no distinction between good and evil beings of Fantastica, as they all must live in the imaginations of mankind. In high-stakes situations, this kind of imaginative freedom can have (and does have) enormous consequences.

    I see some similarities in the way that the “Doom Caused Columbine” conversation happened early on. And just as that resulted in the establishment of the ESRB, hopefully this incident (and others like it) will lead to some reform. But I don’t know exactly what that reform needs to look like. I think education is helpful, but I don’t think it’s enough. We largely know about the harms of social media and it is no less of an issue. Guardrails can kind of be set up, but the only way to do it presently (technically speaking) is hamfisted and ineffective. And adults are no more immune to the potential harms of abusing an LLM than they’re immune to being influenced by advertisements.