  • Not every system fits every country.

    France had republics before the current Fifth in which the president played a more ceremonial role. It did not work for us: both the Third and Fourth Republics ended up in political instability, with governments falling one after the other.

    The Fifth Republic deliberately gave more power to the president, to remedy the political instability France had seen under the previous systems. It works.

    No democratic system is perfect. The one Greece has, per your comment, sounds great in theory. But the day the three biggest parties can’t come to an agreement and elections don’t change the outcome, you’ll have an extended period of instability in which the government is unable to do anything. And that is absolutely awful for a country.

    It is great that Greece isn’t running into these issues. But France has, and the current system is a fix for that. Let’s not repeat bad history by reverting to a system we know does not work for us.



  • It’s more complicated than that. Don’t get me wrong, I voted for the left bloc and was pissed they didn’t get to form a government. But it really is more complicated than that.

    The president has to pick a government that will be able to pass laws with the parliament’s votes. While the left bloc had a plurality, the rest of the parliament would likely not have voted for its policies. Picking a government that would satisfy the rest of the parliament was the best move for stability and for having a government actually able to do something.

    That’s not anti-democratic. It’s actually the system used in most representative democracies, in different forms, which all boil down to the same thing: the head of state picks the government with the best chance of being accepted by the parliament.




  • Of course the “understanding” of an LLM is limited. The technology is new, and it is nowhere near able to understand at the level of a human.

    But I disagree with your understanding of how an LLM works. At its lowest level, it’s a bunch of connected artificial neurons, not that different from a human brain (see the quick sketch at the end of this comment). Please don’t read this as me saying it’s as good as a human brain. It’s definitely not, but its inner workings are not that far off. As a matter of fact, there is an active effort to make artificial neurons behave as closely as possible to human neurons.

    If it were just statistics, it wouldn’t be so difficult to look at the trained model and identify what does what. But just like with the human brain, that is incredibly difficult to understand. We only have a general idea.

    So it does understand, to a limited extent. Just like a human, it won’t understand what it hasn’t been exposed to. And unlike a human, it is exposed to a very limited set of data.

    You’re putting the difference between a human’s “understanding” and an LLM’s “understanding” in the meaning of the word “understanding” itself, which is just a shortcut to saying they can’t be compared. The actual difference is in the scope of the understanding.

    A lot of the effort in the AI field revolves around imitating the human brain. Which makes sense, as it is the only thing we know of that is capable of doing what we want an AI to do. LLMs are no different, but their scope is limited.
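
    To make the “artificial neuron” point concrete, here’s a minimal sketch in plain Python (illustrative only, not how any production LLM is actually implemented): a single neuron is just a weighted sum of its inputs passed through a nonlinearity, and a network is millions or billions of these wired together.

    ```python
    import math

    # Minimal sketch of one artificial neuron: weighted inputs, a bias,
    # and a nonlinear activation. An LLM stacks enormous numbers of these
    # (in more elaborate layers), but this is the basic unit.
    def neuron(inputs, weights, bias):
        # Weighted sum of the inputs, loosely analogous to synaptic strengths.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Nonlinear "firing" (a sigmoid here), the rough analogy to a
        # biological neuron activating.
        return 1 / (1 + math.exp(-z))

    # Example: one neuron with three inputs.
    print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))
    ```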