

It also doesn’t help that the AI companies deliberately use language that makes their models seem more human-like and cogent. Saying that a model “thinks” in “conceptual spaces”, for example, is misleading imo. It exploits our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.
On this point I can highly recommend this open-access article, which is also written in accessible language: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)

Yes, thank you. It seemed bizarre to me as I was reading the article that this point is not brought up at all. Of course, it’s impossible to perform controlled, realistic experiments to disentangle the effects. But failing to even acknowledge this crucial limitation makes both the reporting and the research deeply flawed. To become more reliable, the research would need to take each conflict’s preconditions into account, which is a very daunting task. I understand this research is hard to do, but it’s only fair to demand that researchers temper their conclusions in light of its limitations. Sadly, that kind of rigorous approach doesn’t sell as many books or lead to as many media appearances.