Thanks, Mathew. The Nicky Case vid made my head spin a bit, but fascinating that different numbers of iterations using essentially the same variables produce different outcomes (if I got it right). Truth is stranger than fiction.
You're the first person besides myself that I've ever seen promote or talk about Nicky Case's teaching games.
There is a point past which Game Theory just gets unwieldy, though: once uncertainty about [epistemic, quantitative, and doxastic] typespaces is introduced, the nut becomes uncrackable because there's no closed-form solution and no equilibrium.
Let's say that I _believe_ that my adversary _believes_ that our game has a specific payoff structure P, although my reading of the available information indicates that the actual payoff structure is P† with probability distribution ~A†(φ†) where φ† is a vector of parameters that characterise the (arbitrary) distribution A†.
My adversary _believes_ that I _believe_ that we're facing some other payoff structure P* ≠ P†, with a different distribution (A*) characterised by different parameters φ*.
My adversary states quite openly that he uses haruspicy and astrology as his key methods. The latter requires him to know my birthday - which I _believe_ he cannot possibly know. I also _believe_ that it wouldn't matter if he guessed my birthday correctly - given what I know about the usefulness of the set {haruspicy, astrology}.
Then I find out that his guess at my astrological sign is correct... 1 in 12 can be dumb luck (and has a 'tilt' if he guesses that I was born in the Upper Hemisphere - i.e., New Zealand[1]).
Does my belief about the usefulness of his method change?
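For what it's worth, the belief arguably should shift, just not by much. A minimal Bayes-rule sketch (all numbers are my own illustrative assumptions, not from the comment above):

```python
# Bayesian update on "his method actually works", after one correct
# zodiac-sign guess. Prior and likelihoods are illustrative.
prior_useful = 0.01           # prior that astrology 'works'
p_correct_if_useful = 1.0     # a working method nails the sign
p_correct_if_chance = 1 / 12  # dumb luck

posterior = (p_correct_if_useful * prior_useful) / (
    p_correct_if_useful * prior_useful
    + p_correct_if_chance * (1 - prior_useful)
)
print(round(posterior, 3))  # ~0.108: the belief moves, but not far
```

One correct guess multiplies the odds by at most 12, so a suitably sceptical prior survives it comfortably.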
One thing is clear: in multiplayer, multiperiod games with uncertainty, the 'rational' equilibrium will NOT HAPPEN unless
▪️ ALL of the participants behave 'rationally', AND
▪️ ALL of the participants believe that EVERY OTHER agent behaves 'rationally'.
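That fragility is easy to see in simulation. A toy iterated prisoner's dilemma (payoffs and strategies are my own illustrative choices): two grim-trigger players sustain cooperation indefinitely, while a single randomly-defecting ('irrational') player destroys it for everyone.

```python
import random

# Row/column payoffs for (my_move, their_move); standard PD values.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def grim(opponent_history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opponent_history else "C"

def random_defector(opponent_history):
    # An 'irrational' player who ignores the game entirely.
    return random.choice(["C", "D"])

def play(p1, p2, rounds=50):
    h1, h2, score = [], [], [0, 0]
    for _ in range(rounds):
        a, b = p1(h2), p2(h1)
        s1, s2 = PAYOFF[(a, b)]
        score[0] += s1; score[1] += s2
        h1.append(a); h2.append(b)
    return score

random.seed(0)
print(play(grim, grim))             # full cooperation: [150, 150]
print(play(grim, random_defector))  # scores fall well short of 150
```

One deviation is enough: once the random player defects, the 'rational' grim-trigger player abandons cooperation too, and both forgo the cooperative payoff for the rest of the game.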
The Stanford Encyclopedia of Philosophy page on the Epistemic Foundations of Game Theory is interesting, especially the discussion of the paradoxes that arise from self-reference -> https://plato.stanford.edu/entries/epistemic-game/#ParSelRefGamMod
This is why Uncertainty Quantification is of paramount importance in all quantitative endeavours, but it has the (demotivating) paradoxical by-product that, if properly implemented, the outcomes of any realistic model have forecast bounds that look like an ear trumpet, and 'zero effect' is almost always inside every useful confidence interval.
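The 'ear trumpet' can be sketched with a toy Monte Carlo forecast in which the growth parameter itself is uncertain, not just the period-to-period noise (all numbers here are made up for illustration):

```python
import random

# Toy multiplicative forecast: each simulated path draws its own
# uncertain growth rate g, then adds per-period noise on top.
random.seed(1)
horizon, n_sims = 20, 5000
paths = []
for _ in range(n_sims):
    g = random.gauss(0.02, 0.02)   # parameter uncertainty
    x, path = 100.0, []
    for _ in range(horizon):
        x *= 1 + g + random.gauss(0, 0.03)  # period noise
        path.append(x)
    paths.append(path)

def interval_width(t):
    # Width of the central 95% band across simulations at step t.
    vals = sorted(p[t] for p in paths)
    return vals[int(0.975 * n_sims)] - vals[int(0.025 * n_sims)]

print(interval_width(0) < interval_width(horizon - 1))  # True: it widens
```

Because the uncertain parameter persists along each path, honest propagation makes the band fan out with the horizon rather than stay parallel, which is exactly the ear-trumpet shape.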
[1] New Zealand is a meme: it's not a real place. It was made up by Tongans and Samoans to prevent their islands from being over-run by Yanks seeking to escape world events. Their hope is that all the Yanks would head out into the ocean off Australia looking for this 'New Zealand' place, and would run out of food and water, and die.
They even invented a pretend version of Polynesians (the Maori) who aren't fond of Tongans and Samoans.
Crafty swine, those Coconuts.
I wonder if this could explain what is going on with Trump continuing to push the 'jabs are good' position?
Note that I'm not looking to ignite a debate on Trump himself here but rather how game theory applied to geopolitics might explain the inscrutable behaviors we've witnessed across the globe the past 3 years.
People definitely did not understand what was happening as this kicked off in early 2020, and the emergency powers were not explained in real time. Intentionally so, I think.
Not until recently in my opinion.
>In technical terms, we say that the Nash equilibrium is for all players to defect.
It's not just a Nash equilibrium. It's the dominant strategy, which is a much stronger solution concept.
>Both companies rationally choose to spend on advertising. Nobody wins (except the ad executives).
In that payoff matrix, both companies are actually better off if they both advertise than if they both don't: 100 customers vs. 0.
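The distinction between the two solution concepts is mechanical to check: a strategy is dominant if it beats the alternative against *every* opponent move, whereas a Nash equilibrium only requires no profitable unilateral deviation. A sketch with the standard illustrative payoffs (not the article's actual numbers):

```python
# Row player's payoff for (my_move, opponent_move) in a standard
# prisoner's dilemma; these values are illustrative.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def dominant(action):
    # Strictly dominant: better than the alternative against
    # every possible opponent move, not just the equilibrium one.
    other = "C" if action == "D" else "D"
    return all(payoff[(action, opp)] > payoff[(other, opp)]
               for opp in ("C", "D"))

print(dominant("D"))  # True: defect wins regardless of the opponent
print(dominant("C"))  # False
```

Because defection dominates for both players, (D, D) is a Nash equilibrium automatically; the converse does not hold, which is why dominance is the stronger concept.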
-----
In the conclusion of the game that you linked to it mentions that face-to-face communication is extremely important for the development of trust. And we have lost a lot of it with lockdowns, work from home, and the general paranoia about being around other people.
I edited the Company A/B example for clarity. In order to be a prisoner's dilemma, the system cannot be better off from double defection (else there is no dilemma). The clarification is that these are customers taken from the other company, not "advertising as education" in which new customers are produced.
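That clarified condition can be stated with the standard T/R/P/S labels (temptation, reward, punishment, sucker's payoff): a matrix only qualifies as a prisoner's dilemma if mutual cooperation beats mutual defection, i.e. T > R > P > S (often plus 2R > T + S for the iterated case). An illustrative check:

```python
def is_prisoners_dilemma(T, R, P, S):
    # T: temptation (defect vs cooperator), R: reward (mutual C),
    # P: punishment (mutual D), S: sucker's payoff (cooperate vs D).
    return T > R > P > S and 2 * R > T + S

print(is_prisoners_dilemma(5, 3, 1, 0))  # True: a genuine dilemma
# If mutual defection beats mutual cooperation (R < P), there is
# no dilemma at all, as the edited Company A/B example notes:
print(is_prisoners_dilemma(5, 1, 3, 0))  # False
```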
Agreed that a great deal of interpersonal communication was lost during the pandemic. The sickness of it all was the forcing of people into parasocial information pools.
I should have included both Nash equilibria and dominant strategies in a single post, but as I mentioned, I plan to come back and write a more basic prisoner's dilemma article at another time. I wanted readers to jump as quickly as possible to the game in order to experience the difference in iterated trust games.