I’m sick and tired of the Capitalist religion and all its fanatic believers. The Western right-wing population is the most propagandized and harmful on Earth.

  • 0 Posts
  • 22 Comments
Joined 3 years ago
Cake day: June 8th, 2023





  • It was already infested with Capitalist slop, marketing and propaganda. It was a market-religious, propagandized garbage dump for a long, long time before AI… The sooner that Capitalist brain-rot disappears, the better.

    ‘AI slop’ is just an extension of the usual ‘Capitalist slop’ - garbage that has zero purpose outside a market battlefield…







  • Let’s hope so! They manipulated the first Deepseek model and re-inserted US propaganda and garbage lies into the LLM. So even before groqopedia (sigh), Perplexity was the first private AI corporation that openly felt they were competent/unbiased enough to decide what ‘truth’ is, and they took it upon themselves to ‘fix’ the information before it went to their users. Oh, thank you benevolent Perplexity Unt’s, for protecting us all against wewil! (sigh)

    After that ideological stunt, they simply can’t be trusted for ANY information! They have shown that THEIR interpretation of ‘truth’ is the real truth, and that they have the GOD-GIVEN right to alter other sources of truth on behalf of their users. So all their users are victims of that ideological sh*t perspective. Perplexity WILL just re-doctor information as they, and their sponsors, see fit. Perplexity is just another US propaganda outlet - opinions sold for profit - and the sooner they collapse, the better for humanity.

    Well, of course all the other big AI corporations and their Unt owners/CEOs do the same - just behind closed doors - so I’m eagerly waiting for the whole US AI bubble to collapse and pull down everything else with it… Go go China, BRICS+ and all of the global ‘south’!



  • Sims@lemmy.ml to Technology@lemmy.world · LLMDeathCount.com · English · 9↑ / 12↓ · 1 month ago

    I don’t think “AI” is the problem here. Watching the watchers doesn’t hurt, but I think the AI-haters are grasping at straws here. In fact, compared to the actual suicide numbers, this “AI is causing suicide!” framing seems a bit contrived/hollow, tbh. Were the haters as active in noticing the 49 thousand suicide deaths every year, or did they only now find it a problem?

    Besides, if there’s a criminal here, it would be the private corp that provided the AI service, not a broad category of technology - “AI”. People who hate AI seem to really just hate the effects of Capitalism.

    https://www.cdc.gov/suicide/facts/data.html (This is for the US alone!)

    If the image is not shown: Over 49,000 people died by suicide in 2023 - 1 death every 11 minutes. Many adults think about or attempt suicide: 12.8 million seriously thought about suicide, 3.7 million made a plan for suicide, and 1.5 million attempted suicide.






  • Compute prices are not going down. If you want budget-friendly compute, you may need to look at NPUs. They are slower, and are unfortunately also getting more expensive per TOPS. Electricity prices in the West are going up as the limited supply gets bought up by datacenters instead. The newest GPUs are more efficient in terms of power per TOPS, but they are not budget-friendly.

    I have not seen a recent comparison of GPU vs NPU (TOPS vs price vs power consumption), but on AliExpress I saw a small 40 TOPS NPU as an NVMe stick with 16 GB RAM that draws around 10 watts (search for ‘ai accelerator nvme’). This little thing can scan files 24/7, and your i5 can help out at peaks. Afaik you can run a distributed model that also runs on, and uses, your i5’s memory/compute, so if you max out the i5’s memory, perhaps the combined capacity is enough for a larger model? Maybe a few NPU sticks can work together on the same model?
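    As a rough sanity check, here is the kind of back-of-the-envelope memory math I would do before buying (a minimal sketch; the model sizes, quantization bits, 32 GB system RAM and the 1.2 overhead factor are all assumptions, not measurements):

```python
# Back-of-the-envelope check: can a quantized model fit in NPU RAM + system RAM?
# All figures are illustrative assumptions, not measured values.

def model_size_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate footprint of a quantized model: weights plus a crude
    overhead factor for KV cache and activations."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight * overhead / 1e9

npu_ram_gb = 16      # the NVMe-stick NPU mentioned above
system_ram_gb = 32   # assumed maxed-out i5 box

for params, bits in [(7, 4), (14, 4), (14, 8), (32, 4)]:
    need = model_size_gb(params, bits)
    total = npu_ram_gb + system_ram_gb
    verdict = "fits" if need <= total else "does not fit"
    print(f"{params}B @ {bits}-bit ≈ {need:.1f} GB -> {verdict} in {total} GB combined")
```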

    Alternatively, you could look for the next version of a Huawei GPU card (the first had teething problems afaik), or one from the other upcoming producers in China. They’ll come faster and faster, but are earmarked for domestic consumption first.

    Another suggestion is to buy one of the old P40/80 cards (from the ‘Pascal’ generation, I think) (or was it K40/80??). They should still support a decent range of modern quantization formats and often have 24 GB RAM. A refurbished miner card costs around $50-70+ - can’t remember exactly, though.

    Lastly, you could try something different. If the i5, with enough memory, can run a large model slowly, you could add a dedicated KV cache, so most of the prompt tokens won’t need to be recomputed. Memory and bandwidth matter most here, but any old server could be upgraded into a dedicated KV-cache server (might need a network upgrade, though!).
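    The idea in a nutshell (a purely conceptual sketch; `prefill` and `decode` are placeholder functions, not any particular inference server’s API):

```python
# Conceptual sketch of prompt caching: store the expensive "prefill" state
# keyed by a hash of the prompt, so repeated prompts/prefixes skip recomputation.
# `prefill` and `decode` are placeholders, not a real library API.
import hashlib

kv_cache = {}  # prompt hash -> opaque KV state (could live on a dedicated cache box)

def generate(prompt: str, prefill, decode):
    key = hashlib.sha256(prompt.encode()).hexdigest()
    state = kv_cache.get(key)
    if state is None:
        state = prefill(prompt)   # slow part: build the KV state for the whole prompt
        kv_cache[key] = state
    return decode(state)          # cheaper: generate new tokens from the cached state
```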

    Anyway, ideas are easy and cheap. If I were you, I would ask an AI for a little Python app where you can add a product and it returns a graph comparing it with other products, showing which is optimal over time given price, power and TOPS - GPU vs NPU. Good hunting…
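    Something along these lines (a minimal sketch; every product name, price, wattage and the electricity rate below is a made-up placeholder you would swap for real quotes):

```python
# Compare accelerators by total cost per TOPS over time:
# total_cost(t) = purchase_price + power_kW * hours(t) * electricity_price
# All product figures below are illustrative placeholders, not real quotes.
import matplotlib.pyplot as plt

ELECTRICITY_PRICE = 0.30  # $/kWh, assumed

products = [
    # (name, price $, TOPS, power draw W)
    ("Example GPU",       600, 200, 250),
    ("Example NPU stick",  90,  40,  10),
    ("Old miner card",     70,  45, 220),
]

months = list(range(0, 37))

for name, price, tops, watts in products:
    cost_per_tops = [
        (price + (watts / 1000) * (m * 30 * 24) * ELECTRICITY_PRICE) / tops
        for m in months
    ]
    plt.plot(months, cost_per_tops, label=name)

plt.xlabel("Months of 24/7 use")
plt.ylabel("Total cost per TOPS ($)")
plt.title("GPU vs NPU: purchase + electricity cost per TOPS over time")
plt.legend()
plt.show()
```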