I was turned onto this book by an online book group; it sounded interesting. I enjoyed the writing–it was very easy to read. Amusingly, given the subject of so much of the book, it read so easily that it was a quick skim rather than something that stuck deeply in my mind. To a certain extent, that's intentional–the author frames it as getting better at water-cooler discussions, and the overview that sticks (so far) seems like it could be very handy for that purpose.
For half of the book, the discussion is about Systems 1 and 2. System 1 is automatic, visual, great at averaging, vigilant for danger, etc. It runs automatically and often produces opinions that can feel like considered judgement. System 2 is analytical and detailed, but lazy; it has trouble with any computation trickier than multiplying two two-digit numbers. The trick is that we all think of our lives as mostly System 2 phenomena… but it's usually System 1 throwing up an answer, with System 2 giving it a casual "yup, looks fine" certification.
There are a lot of interesting ideas that get explored at pop-culture length. (Which is probably all I could handle, since both fields are distant from my own.) There are a lot of fascinating asides and details, like intensity matching (which System 1 abuses for everything from evaluating how much to support a cause to how long a prison sentence should be) and WYSIATI (what you see is all there is), which leads to consistent biases in evaluation depending on "irrelevant" criteria like the order of presentation or the addition of extraneous, useless information. One of the trickiest things is that S1 hates not having an answer… so, instead of prompting you to think hard, it answers a similar but easier question. A trivial example is that "How is your life overall these days?" usually gets reduced to "How do you feel at this moment?"
Less of the book is devoted to two further persistent, consistent flaws. Economics makes a lot of assumptions about rational agents (i.e., everyone) and how they act, and Humans fail against that baseline in consistent ways. For example, reading contracts thoroughly is unusual–and while an Econ doesn't care about the font size, Humans do. A huge effect is reference dependence–we care more about how things stand relative to our current status than about absolute outcomes. That means you can twist a decision just by framing the same choice as a gain followed by a wager risking some loss, or as a wager from the current standing. (People are risk-averse and loss-averse.)
It’s a book I’m glad I read, and I hope that some of it sticks, despite how easily it went down.