

Just launch a ton of ball bearings into orbit, and end space flight forever. Or at least for the foreseeable future.
Edit: Bearing balls, actually. Ball bearings are where the bearing balls go. But we want them in orbit.


They’re the ones selling shovels in this gold rush, though.


This is going to end up with the US defaulting, isn’t it?


Eh, the people around Putin know that they have better odds of keeping their lives and fortunes when no nukes fly.


And the AfD in Germany.


Current genAI? Never. There’s at least one breakthrough needed to build something capable of actual thinking.


He’s also already admitted that they’re out of training data. If you’ve wondered why a lot more websites will run some sort of verification when you connect, it’s because there’s a desperate scramble to get more training data.


It’s safe to assume that any metric they don’t disclose is quite damning to them. Plus, these guys don’t really care about the environmental impact, or what us tree-hugging environmentalists think. I’m assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don’t care about the environment, the problem with LLMs is how poorly they scale.
An important concept when evaluating how something scales is marginal values, chiefly marginal utility and marginal expense. Marginal utility is how much utility you get from one more unit of whatever. Marginal expense is how much it costs to get one more unit. And what an LLM produces is the probability that a token T follows a prefix Q, so P(T|Q) (read: probability of T, given Q). This is computed for all known tokens, and then, based on these probabilities, one token is chosen at random. This token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it's done talking.
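A minimal Python sketch of that sampling loop (the tiny vocabulary and the uniform next_token_probs stand-in are made up for illustration; a real LLM computes these probabilities from billions of weights):

```python
import random

# Hypothetical toy vocabulary; "<eos>" marks "done talking".
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_probs(prefix):
    # Stand-in for the model: returns P(T|Q) for every known token T.
    # Uniform here, purely so the loop below runs.
    return {tok: 1 / len(VOCAB) for tok in VOCAB}

def generate(prefix, max_tokens=20):
    tokens = list(prefix)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)  # P(T|Q) for all known tokens T
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<eos>":              # model signals it's done talking
            break
        tokens.append(token)              # append to the prefix, then repeat
    return " ".join(tokens)

print(generate(["the"]))
```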
If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. However, it's worth noting that this already displays a limitation of LLMs: even with this ideal LLM, we're just a few bad dice rolls away from saying something dumb, which then pollutes the context. The larger we make an LLM, the closer its results get to that actual value. A potential way to measure this precision would be to subtract P(T|Q) from P_calc(T|Q) and count the leading zeroes, essentially counting the number of digits we got right. Now, the thing is that each additional digit only provides a tenth of the utility of the digit before it, while the cost of each additional digit goes up exponentially.
So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.
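To put rough numbers on that (the tenfold cost growth per digit is an assumption for illustration; the exact base doesn't change the shape of the curve):

```python
# Toy numbers: each correct digit is worth a tenth of the one before it,
# while (by assumption) each extra digit multiplies the cost by ~10.
for digit in range(1, 8):
    marginal_utility = 10.0 ** -(digit - 1)   # 1, 0.1, 0.01, ...
    marginal_cost = 10.0 ** (digit - 1)       # 1, 10, 100, ... (assumed)
    ratio = marginal_utility / marginal_cost  # collapses ~100x per digit
    print(f"digit {digit}: utility {marginal_utility:g}, "
          f"cost {marginal_cost:g}, utility/cost {ratio:g}")
```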


Yes. A surprising number of people can nominate people for the Nobel Peace Prize. This isn't like the Oscars, where a nomination is already an achievement.


Nobody knows for sure.


AI models are trained on basically the entirety of the internet, and more. Humans learn to speak from much less input. So, there's likely a huge difference in how human brains and LLMs work.


Judge finds that Anthropic has to pay restitution to the Reddit users. Affirms that posts belong to users.
Well, I can dream.


So, what should Europe do?


Is Tesla sabotaging their own cars in order to commit insurance fraud?


I'm starting to think an article referring to LLMs as AI is a red flag, while referring to them as LLMs is a green flag.


First of all, copying or modifying somebody else's work without their permission isn't theft. Information cannot be owned in the way a physical object can be, as access to information is nonexclusive, meaning any number of people can use the same piece of information without impeding each other. Contrast that with physical objects, say a car. If I'm using your car, you can't use it, because I'm doing so. If I copy your book, you still have the original. Hence it's not theft.
Copyright is a legal privilege governments grant to artists, so that the artists can be paid for their work. (In practice, it mostly protects big publishers and a few wealthy artists. Most artists can't afford the legal battle necessary to get the state to actually enforce the legal privilege they've been granted.)
This is a weird thread. Lots of people are for artists quickly losing control of their creations, while simultaneously being against artists' creations being used by others without consent.
You are conflating copyright infringement and plagiarism. Plagiarism is claiming that you created the works of somebody else. This is morally wrong, regardless of whether you have the consent of the original author. By claiming that you created something you didn't, you are lying to your audience. (In fact, even disguising your earlier work as new is considered plagiarism.) The plagiarist is not a thief; they're a liar. When you put somebody's work into an LLM and claim you created the output, you have committed plagiarism, unless you credit every work used in the training of said LLM.
when I publish a book, to steal it is consenting to be Luigi’d; no matter how long ago it came out.
You do know that Luigi Mangione pleaded not guilty to the charges? And yet you use his name as a euphemism for murder. You can't own information; copying it is not stealing.
What’s that, and why should it be destroyed?