• 1 Post
  • 296 Comments
Joined 11 months ago
Cake day: February 5th, 2025




  • the cost of this kind of individuality is.

    The thing is, it’s a rare individual who “benefits” from the direction we are continuing to move in. Unless they’re a bunch of sadists who like watching the rest of the world suffer, they gain little by insulating themselves behind security forces they can’t really trust - because where do you get the people to maintain that security?

    A society where the richest can walk down High Street in London without a thought to “personal security” is better for the people at the top, too. Unless they’re sadists.


  • “Back in the day” a lot more people went to church on a regular basis. They also beat their children on a regular basis, and a much larger percentage of those children grew up to perpetrate violence, domestic and otherwise, in their adult lives.

    The core teachings of Jesus, Buddha, the Dalai Lama, and the rest are good. People standing in the pulpit saying whatever it takes to fill the pews and get donations… less good on average. Theory is easier than putting that theory into faithful practice.


    Well, of course it’s different from a casino. It’s bigger, and it’s a longer-running game. But it still pushes those “get rich quick” addiction buttons. You’re right that there are addiction-awareness resources built up around traditional gambling channels, along with disclosure that “the house always wins.” In a sense, the stock markets are a long enough, slow enough game that many players actually do die before the longer-running Ponzi schemes collapse - so maybe the lack of addiction support groups is a little bit justified there.

    There’s also the distinction drawn between “day traders” and “long-term investors,” which is so fuzzy as to be meaningless anywhere near the boundary - if there even is a boundary. How can you tell whether your mutual fund is day trading?





  • The statement that “No one can own what AI produces. It is inherently public domain” is partially true, but the situation is more nuanced, especially in the United States.

    Here is a breakdown of the key points:

    Human Authorship is Required: In the U.S., copyright law fundamentally requires a human author. Works generated entirely by an AI, without sufficient creative input or control from a human, are not eligible for copyright protection and thus fall into the public domain.

    “Sufficient” Human Input Matters: If a human uses AI as an assistive tool but provides significant creative control, selection, arrangement, or modification to the final product, the human’s contributions may be copyrightable. The U.S. Copyright Office determines the “sufficiency” of human input on a case-by-case basis.

    Prompts Alone Are Generally Insufficient: Merely providing a text prompt to an AI tool, even a detailed one, typically does not qualify as sufficient human authorship to copyright the output.

    International Variations: The U.S. stance is not universal. Some other jurisdictions, such as the UK and China, have legal frameworks that may allow for copyright in “computer-generated works” under certain conditions, such as designating the person who made the “necessary arrangements” as the author.

    In summary, purely AI-generated content generally lacks copyright protection in the U.S. and is in the public domain. However, content where a human significantly shapes the creative expression may be copyrightable, though the AI-generated portions alone remain unprotectable.

    To help you understand the practical application, I can explain the specific requirements for copyrighting a work that uses both human creativity and AI assistance. Would you like me to outline the specific criteria the U.S. Copyright Office uses to evaluate “sufficient” human authorship for a project you have in mind?

    Use at your own risk; AI can make mistakes, but in this case it agrees 100% with my prior understanding.


  • First, how much that is true is debatable.

    It’s actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright.

    Second, that doesn’t matter as far as the output. No one can legally own that.

    Idealistic notions aside, this is no different from Pixar owning the RenderMan output that is Toy Story 1 through 4.


  • Nobody is asking it to (except freaks trying to get news coverage).

    It’s like compiler output - no, I didn’t write that assembly code, gcc did, but it did it based on my instructions. My instructions are copyrighted by me, and gcc’s interpretation of them is a derivative work covered by my rights in the source code. (A quick demo of the idea is below.)

    When a painter paints a canvas, they don’t record the “source code,” but the final work is still theirs - not the brush maker’s, the canvas maker’s, or the paint maker’s (though some pigment makers get a little squirrely about that…)
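
    To make the compiler analogy concrete, here is a self-contained demo, swapping in CPython’s bytecode compiler for gcc so it runs anywhere Python does; the `greet` function is made up for illustration:

    ```python
    # Same idea as gcc and assembly, but with CPython's bytecode compiler:
    # I wrote the source; the compiler mechanically derived the low-level output.
    import dis

    def greet(name: str) -> str:
        return "hello, " + name

    # Prints the bytecode CPython generated from my instructions - a derivative
    # of the source above, not an independent work by the interpreter.
    dis.dis(greet)
    ```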


  • Yeah, context management is one big key. The “compacting conversation” hack is a good one: you can continue conversations indefinitely, but after each compact it will throw away some context that you thought was valuable.

    The best explanation I have heard for the current limitations is that there is a “context sweet spot” for Opus 4.5 somewhere short of 200,000 tokens. As your context window fills past 100,000 tokens, at some point you reach “optimal understanding” of whatever is in there; as you continue toward 200,000 tokens, the hallucinations start to increase. As a hack, they “compact the conversation,” throwing out the less useful tokens to get you back to the “essential core” of what you were discussing, so you can continue to feed it new prompts and get new reactions at a lower hallucination rate. But with that lower hallucination rate also comes lower comprehension of what you said before the compacting event(s).

    Some describe an aspect of this as the “lost in the middle” phenomenon: compaction hangs on to the very beginning and very end of the context window more aggressively than the middle, so more “middle of the window” content gets dropped during a compacting event. A toy sketch of that idea is below.
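
    To picture what a compacting event does, here is a toy sketch in Python. This is a guess at the general shape of the heuristic, not Anthropic’s actual algorithm; the token budget and the 4-characters-per-token estimate are made up for illustration:

    ```python
    # Toy sketch of a "keep the ends, drop the middle" compaction pass.
    # NOT Anthropic's actual algorithm; budget and token estimate are illustrative.

    def estimate_tokens(text: str) -> int:
        return max(1, len(text) // 4)  # rough rule of thumb: ~4 chars per token

    def compact(messages: list[str], budget: int = 100_000) -> list[str]:
        """Drop messages nearest the middle until the window fits the budget."""
        kept = list(messages)
        # The message closest to the exact middle goes first, so the very
        # beginning and very end of the conversation survive the longest.
        while sum(estimate_tokens(m) for m in kept) > budget and len(kept) > 2:
            kept.pop(len(kept) // 2)
        return kept

    if __name__ == "__main__":
        convo = [f"message {i}: " + "x" * 4_000 for i in range(300)]
        compacted = compact(convo)
        print(len(convo), "->", len(compacted), "messages kept")
        print(compacted[0][:10], "...", compacted[-1][:12])  # the ends survive
    ```

    A real compactor summarizes the dropped middle rather than discarding it outright, but either way the “middle of the window” detail is what you lose.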


  • If you outsource you could at least sue them when things go wrong.

    Most outsourcing consultants I have worked with aren’t worth the legal fees to attempt to sue.

    Plus you can own the code if a person does it.

    I’m not aware of any ownership issues with code I have developed using Claude, or any other agents. It’s still mine, all the more so because I paid Claude to write it for me, at my direction.


  • the sell is that you can save time

    How do you know when salespeople (and lawyers) are lying? It’s only when their lips are moving.

    developers are being demanded to become fractional CTOs by using LLM because they are being measured by expected productivity increases that limit time for understanding.

    That’s the kind of thing that works out in the end, like outsourcing to Asia, etc. It does work for some cases, and it can bring sustainable improvements to the bottom line, but nowhere near as fast, as easy, or as cheap as the people selling it say.


  • As an experiment, I asked Claude to manage my git commits: it wrote the messages, kept a log, archived excess documentation, and worked really well for about two weeks. Then, as the project got larger, the commit process took longer and longer to execute. I finally pulled the plug when the automated commit process - which had performed flawlessly for dozens of commits and archives - irretrievably lost a batch of work: it messed up the archive step, deleted the work without archiving it first, and didn’t commit it either.

    AI/LLM workflows are non-deterministic, which means they make mistakes. If you want something reliable, scalable, and repeatable, have the AI write you code to do it deterministically - as a tool, not as a workflow (a minimal sketch of the idea is below). Of course, deterministic tools can’t do things like summarize the content of a commit.
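
    As an illustration of the tool-not-workflow point, here is a minimal sketch of the archive-then-delete ordering that would have prevented the failure above. The paths, repo layout, and commit message are all hypothetical; the invariant is what matters: verify the archive copy exists before anything gets deleted, in the same order every time:

    ```python
    # Minimal sketch of a deterministic archive-then-delete tool.
    # Paths and layout are hypothetical; the invariant is the point:
    # never delete the original until the archive copy verifiably exists.

    import shutil
    import subprocess
    from pathlib import Path

    ARCHIVE_DIR = Path("docs/archive")  # hypothetical archive location

    def archive_then_remove(doc: Path) -> None:
        """Copy doc into the archive, verify the copy, then git-rm the original."""
        ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
        dest = ARCHIVE_DIR / doc.name

        shutil.copy2(doc, dest)  # archive FIRST

        # Verify before destroying anything.
        if not dest.exists() or dest.stat().st_size != doc.stat().st_size:
            raise RuntimeError(f"archive verification failed for {doc}")

        # Only now remove the original, and record both halves in one commit.
        subprocess.run(["git", "rm", str(doc)], check=True)
        subprocess.run(["git", "add", str(dest)], check=True)
        subprocess.run(["git", "commit", "-m", f"archive {doc.name}"], check=True)

    if __name__ == "__main__":
        archive_then_remove(Path("docs/old-notes.md"))  # hypothetical file
    ```

    Every run does the same steps in the same order; the nondeterministic part - deciding what counts as excess documentation and summarizing it - stays outside the tool.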





  • I frequently feel that urge to rebuild from the ground (specifications) up, to remove the “old bad code” from the context window and get back to the “pure” specification as the source of truth. That only works up to a certain level of complexity. When it works, it can be a very fast way to “fix” a batch of issues, but when the problem/solution is big enough, the new implementation will have new issues that may take longer to identify than just grinding through the existing ones. It’s a “devil you know” kind of choice.