52 Comments

I love the idea of parasocial Dunbar hacking, where geographically far-removed 'expert' actors take the place of tribal elders as figures of authority. I think you are spot on there, and it's why TV/media and the internet have been so destructive in cultivating brainwashed groupthink among the masses. You may be right that LLMs have been used to magnify this effect and suck in even more hapless victims during Covid mania, but I've been observing operational groupthink among 'experts' on climate change for years. It tends to crowd out any genuine expert opposition to the chosen narrative, and that is where attribution substitution has played, and still plays, a major role: 'climate change' is constantly substituted as the preferred attribution for a whole series of complex events, and those who objected on the grounds that these were complex events with possible alternative explanations were shouted down and othered as 'climate deniers'. They didn't need AI chatbots to do that.

Things might be changing though. The geographically remote 'experts' on climate change are complaining bitterly that the public are 'hating' on them by aggressively questioning their dogma on sites like Twitter.

May 29, 2023 · Liked by Mathew Crawford

Back in 2020, when I was killing it on Twitter, sometimes there would be sudden swarms of hostile comments in my replies (particularly on my best tweets). Then I started clicking on their profiles, and they all had fewer than 10 followers and odd usernames. My guess is that those were mostly Pharma bots, but I have no way of proving that.
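A rough sketch of that kind of profile check, purely as an illustration: the follower threshold and the "word plus long digit run" username pattern below are assumptions for the sketch, not anything Twitter publishes.

```python
# Hypothetical heuristic for flagging throwaway-looking reply accounts,
# based only on the two tells mentioned above: tiny follower counts and
# odd, auto-generated-looking usernames. Thresholds are guesses.
import re
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    followers: int

def looks_like_throwaway(acct: Account, max_followers: int = 10) -> bool:
    """True if the account has very few followers AND a 'word+digits' handle."""
    odd_name = bool(re.fullmatch(r"[A-Za-z]+\d{4,}", acct.username))
    return acct.followers < max_followers and odd_name

replies = [Account("JaneDoe", 1200), Account("truthpatriot48210734", 3)]
print([a.username for a in replies if looks_like_throwaway(a)])
# -> ['truthpatriot48210734']
```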

May 29, 2023 · Liked by Mathew Crawford

Immediately pre-COVID the marketing agency I worked for had started offering chatbot builds for influencers/people with "personal brands" (guru book-and-public speaking types). I was let go shortly after lockdowns, so I never got to see if that agency further weaponized the product, but on-demand chatbots to ape your content were already a thing on the market.


Yes, in a group I'm in we discussed this a few years ago, when we found that almost all top redditors were bots. A friend named ferdongles came up with a method to test this; there's a video on YouTube.

So the conclusion we came to was that, for the actual bluechecks, a strategy was deployed of having bots elevate them when they toed the party line and ignore them when they didn't.

This simple feedback mechanism pushes them towards producing tweets and articles that are more and more aligned with the party line, since those are the ones that get the likes, comments, etc.

This hypothesis was then supported by the numbers, such as the engagement ratios.
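A minimal toy simulation of the feedback loop described above, just to illustrate the claimed dynamic. The "approved position" scale, the boost size, and every other number here are invented for the sketch, not measured from any platform.

```python
# Toy model: posts closer to an "approved" position get an artificial
# engagement boost; the author keeps drifting toward whatever performed best.
# Every number is made up; this only illustrates the claimed dynamic.
import random

APPROVED = 1.0        # position the hypothetical bot swarm rewards
position = -0.5       # author's starting opinion on a -1..1 scale
step = 0.3            # how strongly the author chases engagement

for week in range(60):
    # the author tries a few takes scattered around their current position
    drafts = [position + random.uniform(-0.2, 0.2) for _ in range(5)]

    def engagement(take):
        organic = random.uniform(0, 1)                  # background noise
        boost = 5 * max(0.0, 1 - abs(take - APPROVED))  # bots reward alignment
        return organic + boost

    best = max(drafts, key=engagement)    # the take that "performed"
    position += step * (best - position)  # author drifts toward it

print(round(position, 2))  # typically ends up close to 1.0, the rewarded line
```

The point of the sketch is only that no direct coordination with the author is needed; selective amplification of aligned posts is enough to move the output over time.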

So basically the experts didn't get shit: they got walled into an illusion, and social dynamics were deployed against them to make them conform. They then internalized the new status and started to hunt anti-vaxxers etc. with a swarm tail that was AI-generated, unbeknownst to themselves. They thought they were on a holy crusade, backed by a majority opinion of goodthink.

Then deplatforming and censorship were deployed to silence critics and dissenters.

The illusion created was one of strong social dynamics formed among thought leaders, which pulled in the normies.

And while we were reorganising our networks, they pulled ahead.

May 29, 2023 · Liked by Mathew Crawford

Don’t have any concrete examples but have seen plenty of posts on Twitter and other fora that clearly look like ChatGPT answers. Pretty soon social media will just be chatbots talking to other chatbots LOL.

May 29, 2023 · Liked by Mathew Crawford

I can't claim to know much about bots, but I guess one could get an opinion from Malone, as he is an expert on 5G warfare. :-)


The Asch conformity among people in the medical field was and remains really disturbing. What confidence I had in medicine has been shattered.

(Banned) · May 29, 2023 · Liked by Mathew Crawford

I wonder if chatbots are being used to influence children to be trans.


off-topic:

News from Spain: Elections yesterday, local and regional.

Results: it seems the communists have lost, so now we get to enjoy right-wing communism, which will supposedly manage the social and economic catastrophe better... Not really; that's just a rationalization born of the usual addiction to self-deception: there is no "managing" of anything.

The general election is not scheduled yet, but it should happen this year, or in January or February.

On a positive note, the majority of people understood that there is no reason to collaborate with the system, judging by the huge abstention, barely covered up by systematic electoral fraud.

elecciones.locales2023.es

Census: 35.5 million

Abstention: 12.8 million

Voting total: 22.7 million


But there was no pandemic, only a scamdemic.


"Do we even know who we're chatting with on the internet?"

This really is a question I think we'll need to ask ourselves more and more going forward. It's funny that, when I was young and the internet was beginning to make inroads, we were always taught to be careful, never give out personal information, and so on, and now I'm fairly certain many members of Gen Z would post their social security numbers for a bit of TikTok clout. Not to say that I was immune from this shift - social media, Facebook in particular, did a lot to deprogram my generation from what was and still should be common sense about internet etiquette and conduct - but the question of whether you're really even talking to another person and not a chatbot has been on my mind more and more recently, as ChatGPT has become so widespread that even my most tech-illiterate acquaintances are talking about it.

As worrisome as it is to think about influencers using chatbots to take their... well, influencing to the next level, I have a sneaking suspicion that there's a more nefarious game afoot. It's one thing for an influencer to engage in parasocial Dunbar hacking, as you said, but I think there's going to come a time when chatbots will be utilized by unsavory actors - corporate, military, governmental, probably all of them - to quite literally fabricate "real" friends that are even more personal, intimate, and persuasive than any TikTok or Instagram talking head could ever be. Just imagine some socially maladjusted shut-in who leans towards the more drastic end of a political ideology getting a message out of the blue on their Twitter account and, as so often happens, striking up a friendship with the anonymous stranger. That friendship evolves into something much more personal than mere Twitter acquaintance, to the point that this "person" begins to have outsized influence over this isolated and vulnerable individual, causing them to change their views entirely. "Show them the light", you could say.

Now, this part, I admit, is very "science fiction" at the moment, but I could see a time in the near future where deepfake and voice-synthesis technology becomes so efficient and convincing that, combined with a sufficiently advanced chatbot, it could be used to create a disturbingly genuine digital simulacrum of a person. I understand that seems a bit farfetched, and that I probably need to loosen the tinfoil hat a bit, but if it isn't already plausible with the technology that private actors have hidden away behind closed doors, it very well could be soon, and the ramifications could be disastrous.

There's already discussion of similar ideas in certain circles. One of the more popular theories is called the "Dead Internet Theory", which largely gets played off as either insane conspiracy talk or some sort of ARG/creepypasta that escaped containment from a certain image board, but there's a small number of people who are very convinced that it's a real phenomenon. As I said above, as AI/chatbot/deepfake technology continues to advance, I think it'll become a far less outlandish concept.

May 29, 2023 · Liked by Mathew Crawford

Mathew, thank you so much for bringing this up. I noticed in the Tucker-Musk interview that Musk shared a concern/fear that AI could have the capability of persuasion. I immediately wondered if he knew that this had already been achieved and implemented.


It was a lot more nuanced than that

https://www.pnas.org/doi/pdf/10.1073/pnas.1419828112

Note the date


A slight misconception here...

The manipulation of public perception was not done by promoting vaccines...

It was done by withholding vital information from the public...

Propaganda is easily spotted because the same message is regurgitated over and over again, exposing the propagandists along the way.

Another word for Influencer is Propagandist.

I can spot a lie over the circumference of the globe.

A half truth is harder to spot.

Chatbots are not the problem... they are but the tip of the iceberg.

AI is a military weapon.

The connection of AI to IOT / IOB is the real problem.

AI warfare against the Human Race is the real problem.

STARLINK NEURAL LINK NEURAL LACE... this is the real problem.

https://fritzfreud.substack.com/p/quantum-fascism-the-trojan-horse


Just last week I decided to try AI writing apps, to see how they work and maybe get help with some basic website content building. It sucked, in my opinion. But so does my writing of such content! So then I decided to see if it would generate content from the prompt "mRNA vaccines are dangerous," and it refused to write one word. Considering the amount of content out there on how unsafe and ineffective they are... it makes you question these algorithms. Further, large web-based platforms are being forced to code for optimal use on Google browsers. Intuit is a big one: I was told the federal government requires them to code for Google (a tech from Intuit told me). And this weekend an exchange with GoDaddy revealed they code for Google browsers too. It's all connected. My husband is finally beginning to see it. I'm no longer 4 coo coo birds. I'm down to one and a half.


It's a seriously cost-cutting tool. Of course such chatbots are employed wherever they're deemed effective enough (and not too embarrassing).
