9 Comments
Jan 25, 2023 · Liked by Mathew Crawford

Thanks, Mathew. The Nicky Case vid made my head spin a bit, but fascinating that different numbers of iterations using essentially the same variables produce different outcomes (if I got it right). Truth is stranger than fiction.
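
For anyone who wants to poke at that observation numerically, here is a minimal sketch in Python. It assumes the payoffs Case uses in "The Evolution of Trust" (+2/+2 for mutual cooperation, +3/-1 when one side cheats, 0/0 for mutual cheating) and an illustrative 50/50 population of copycats and always-cheaters; the only thing that varies is the number of rounds per match.

```python
# Sketch: same strategies, same payoffs, different round counts -> different winner.
# Assumed payoffs (from Case's trust game): both cooperate -> +2 each,
# one cheats -> cheater +3 / victim -1, both cheat -> 0 each.
# The 50/50 population split is an illustrative assumption.

def copycat(opponent_last):       # tit-for-tat: mirror the opponent's last move
    return opponent_last

def cheater(_opponent_last):      # always cheat
    return "cheat"

def play_match(strat_a, strat_b, rounds):
    """Return (score_a, score_b) for one repeated match."""
    score_a = score_b = 0
    last_a = last_b = "cooperate"           # copycat opens by cooperating
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        if move_a == move_b == "cooperate":
            score_a += 2; score_b += 2
        elif move_a == move_b == "cheat":
            pass                            # 0 each
        elif move_a == "cheat":
            score_a += 3; score_b -= 1
        else:
            score_a -= 1; score_b += 3
        last_a, last_b = move_a, move_b
    return score_a, score_b

for rounds in (1, 5, 10):
    cc, _ = play_match(copycat, copycat, rounds)    # copycat vs copycat
    cd, dc = play_match(copycat, cheater, rounds)   # copycat vs cheater
    dd, _ = play_match(cheater, cheater, rounds)    # cheater vs cheater
    copycat_avg = (cc + cd) / 2                     # average over the two opponent types
    cheater_avg = (dc + dd) / 2
    winner = "copycat" if copycat_avg > cheater_avg else "always-cheat"
    print(f"{rounds:>2} rounds: copycat {copycat_avg:5.1f}, "
          f"cheater {cheater_avg:5.1f} -> {winner} does better")
```

With a single round the cheater comes out ahead; by five rounds the copycat's accumulated mutual cooperation dominates. Same variables, different outcome.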

Jan 25, 2023 · Liked by Mathew Crawford

You're the first person besides myself that I've ever seen promote or talk about Nicky Case's teaching games.


There is a point past which Game Theory just gets unwieldy, though: once uncertainty about [epistemic, quantitative, and doxastic] typespaces is introduced, the nut becomes uncrackable because there's no closed-form solution and no equilibrium.

Let's say that I _believe_ that my adversary _believes_ that our game has a specific payoff structure P, although my reading of the available information indicates that the actual payoff structure is P† with probability distribution ~A†(φ†) where φ† is a vector of parameters that characterise the (arbitrary) distribution A†.

My adversary _believes_ that I _believe_ that we're facing some other payoff structure P* ≠ P†, with a different distribution (A*) characterised by different parameters φ*.

My adversary states quite openly that he uses haruspicy and astrology as his key methods. The latter requires him to know my birthday - which I _believe_ he cannot possibly know. I also _believe_ that it wouldn't matter if he guessed my birthday correctly - given what I know about the usefulness of the set {haruspicy, astrology}.

Then I find out that his guess at my astrological sign is correct... 1 in 12 can be dumb luck (and has a 'tilt' if he guesses that I was born in the Upper Hemisphere - i.e., New Zealand[1]).

Does my belief about the usefulness of his method change?
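
For what it's worth, that question has a tidy Bayesian reading. In this sketch every number except the 1-in-12 chance rate is an illustrative assumption: the prior that the {haruspicy, astrology} toolkit is informative at all, and the hit rate it would have if it were.

```python
# Bayes update on "does one correct sign guess change my belief?"
# Assumed numbers: prior_informative and p_hit_if_informative are illustrative;
# only the 1-in-12 chance rate comes from the comment.

prior_informative = 0.01        # assumed prior: the method almost surely doesn't work
p_hit_if_informative = 0.50     # assumed hit rate if the method actually worked
p_hit_by_chance = 1 / 12        # one sign out of twelve, ignoring the hemisphere 'tilt'

numerator = p_hit_if_informative * prior_informative
evidence = numerator + p_hit_by_chance * (1 - prior_informative)
posterior = numerator / evidence

print(f"prior:     {prior_informative:.3f}")
print(f"posterior: {posterior:.3f}")   # ~0.057: the belief moves, but only slightly
```

One correct guess drags a 1% prior up to roughly 6%, so the belief does change, just not by enough to take the entrails seriously.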

One thing is clear: in multiplayer, multiperiod games with uncertainty, the 'rational' equilibrium will NOT HAPPEN unless

 ▪️ ALL of the participants behave 'rationally', AND

 ▪️ ALL of the participants believe that EVERY OTHER agent behaves 'rationally' (a rough numeric sketch of why this matters follows below).
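
Here is that sketch of the second condition, reusing the trust-game payoffs from the earlier comment (+2/+2, +3/-1, 0/0). The horizon N, the two candidate plans, and the probabilities are all illustrative assumptions, not a general proof.

```python
# Sketch: the backward-induction 'rational' play (defect every round of a finitely
# repeated game) stops being a best response once you assign probability p to the
# opponent NOT being that rational type (here: a tit-for-tat player instead).
# Payoffs assumed: +2/+2 mutual cooperation, +3/-1 one-sided cheat, 0/0 mutual cheat.

N = 10                                    # rounds in the repeated game (assumed)

def expected_payoffs(p_tit_for_tat):
    """Expected totals for two plans against a p : (1-p) mix of
    tit-for-tat and always-defect opponents."""
    # Plan 1: the 'rational' equilibrium play, defect every round.
    defect_all = p_tit_for_tat * 3        # +3 in round one vs tit-for-tat, 0 otherwise
    # Plan 2: cooperate until the last round, then defect once.
    vs_tft = 2 * (N - 1) + 3              # mutual cooperation, then one free defection
    vs_defector = -(N - 1)                # exploited every round until the end
    cooperate_mostly = p_tit_for_tat * vs_tft + (1 - p_tit_for_tat) * vs_defector
    return defect_all, cooperate_mostly

for p in (0.0, 0.2, 0.4, 0.6):
    da, cm = expected_payoffs(p)
    best = "defect throughout" if da >= cm else "cooperate until the end"
    print(f"P(opponent is tit-for-tat) = {p:.1f}: "
          f"defect-all = {da:4.1f}, cooperate-mostly = {cm:5.1f} -> {best}")
```

Once the believed probability that the other player is not the rational type crosses roughly 0.3 (in this toy setup), the 'rational' equilibrium play is no longer even a best response, so there is no reason to expect it to be played.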

The Stanford Encyclopaedia of Philosophy page on the Epistemic Foundations of Game Theory is interesting, especially the discussion of the paradoxes that arise from self-reference -> https://plato.stanford.edu/entries/epistemic-game/#ParSelRefGamMod

This is why Uncertainty Quantification is of paramount importance in all quantitative endeavours, but it has the (demotivating) paradoxical by-product that, if properly implemented, the outcomes of any realistic model have forecast bounds that look like an ear trumpet, and 'zero effect' is almost always in every useful confidence interval.
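
A toy Monte Carlo illustration of that 'ear trumpet', with every number assumed for demonstration rather than fitted to anything real: propagating uncertainty in a single effect parameter through a cumulative forecast widens the 95% band roughly linearly with the horizon, and zero never leaves it.

```python
# Toy uncertainty propagation: an uncertain per-period effect, drawn once per
# trajectory, pushed through a cumulative forecast. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Assumed parameter uncertainty: mean effect 0.2 per period, sd 0.5.
per_period_effect = rng.normal(loc=0.2, scale=0.5, size=n_draws)

for horizon in (1, 5, 10, 20):
    cumulative = per_period_effect * horizon            # effect accumulates over time
    lo, hi = np.percentile(cumulative, [2.5, 97.5])     # 95% forecast band
    print(f"t={horizon:>2}: 95% band [{lo:6.2f}, {hi:6.2f}], "
          f"width {hi - lo:5.2f}, zero inside: {lo <= 0.0 <= hi}")
```

The band's width scales with the horizon (the ear trumpet), and the 'no effect at all' trajectory sits comfortably inside it at every step.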

[1] New Zealand is a meme: it's not a real place. It was made up by Tongans and Samoans to prevent their islands from being over-run by Yanks seeking to escape world events. Their hope is that all the Yanks would head out into the ocean off Australia looking for this 'New Zealand' place, and would run out of food and water, and die.

They even invented a pretend-version of Polynesian (Maori) who aren't fond of Tongans and Samoans.

Crafty swine, those Coconuts.


I wonder if this could explain what is going on with Trump continuing to push the "jabs are good" position?


>In technical terms, we say that the Nash equilibrium is for all players to defect.

It's not just a Nash equilibrium. It's the dominant strategy, which is a much stronger solution concept.
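
A quick way to see the distinction in code: a strictly dominant strategy has to beat the alternatives against every opponent action, not merely at the equilibrium point. The payoff numbers below are a standard prisoner's-dilemma placeholder, not the article's advertising figures.

```python
# Check for a strictly dominant strategy in a 2x2 game.
# payoff[my_action][their_action] = my payoff; numbers are a generic PD placeholder.
payoff = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def strictly_dominant(matrix):
    """Return the action that strictly beats every other action against
    every possible opponent action, or None if no such action exists."""
    actions = list(matrix)
    for a in actions:
        rivals = [b for b in actions if b != a]
        if all(matrix[a][opp] > matrix[b][opp]
               for b in rivals for opp in actions):
            return a
    return None

print(strictly_dominant(payoff))   # -> 'defect': dominant, and therefore also the Nash play
```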

>Both companies rationally choose to spend on advertising. Nobody wins (except the ad executives).

In that payoff matrix both companies are actually better off if they both advertise than if they both don't: 100 customers vs. 0.

-----

The conclusion of the game you linked to mentions that face-to-face communication is extremely important for the development of trust. And we have lost a lot of that with lockdowns, work from home, and the general paranoia about being around other people.
