Creativity and logic: a dynamic pairing for knowledge
Making sense of how we use different modes of thought to make decisions and discoveries
In our broader cultural imagination, the ideal thinker or problem solver would be someone like Sherlock Holmes: analytic, deductive, focused on piecing the evidence together. Rational and logical thinking are valued as the guides to scientific discovery and good decision making. On the other side, 'irrational' and 'illogical' are normally thrown around as aspersions or insults. However, there are a few problems with this as an ideal.
For one, there is an expanding field of research, mostly grounded in psychology, documenting all the ways that humans aren't rational - even when they are trying hard to be. This work is framed in terms of cognitive biases, and one of the more interesting accounts comes from the work of Daniel Kahneman and Amos Tversky - documented in Kahneman’s book Thinking, Fast and Slow. For those who aren't aware of it, there is good evidence that humans operate with two different thinking systems. System 1 thinking is fast as it relies on associations, habits and background context. It is energy efficient but also error prone. System 2 thinking is the slow thinking that is demonstrably more difficult but is careful, rational and reliable. Given the extra effort required for System 2, we all spend most of our time operating in fast thinking mode, so - on this account - most of the time we are not very logical and rational.
Another problem with painting the careful rational and logical thinker as the ideal arises when we look at the history of people whose ideas or discoveries have changed the world. Whether it was Archimedes leaping out of his bathtub, Isaac Newton sitting under an apple tree, Einstein getting his best ideas shaving, or Roger Penrose crossing a road on a walk with a friend - many of the most brilliant insights and discoveries in history took place away from the work desk and in odd situations. Creativity, inspiration and seemingly random connections have driven discoveries and problem solving over the course of human history. We all know the benefit of taking a break or sleeping on something, yet the ideal thinker in our imagination focuses on rationality, logic and hard work.
So what actually matters for problem solving, research and uncovering the truth? Is it careful rationality and logic? Or is it creativity and inspiration? The boring but correct answer is that we need both. But they need to be paired in a particular logical structure for the greatest effect. Many human weaknesses in thinking and problem solving arise when we don't harness them in the right order for the right purpose.
Different modes of thinking
In The Primacy of Doubt, which I reviewed late last year, Tim Palmer provides an intriguing account of different modes of thinking that shifts the focus from the account offered by Kahneman and Tversky. He argues (at times speculatively) that we have a low-power, less focused mode of thinking and a power-intensive mode. The power-intensive mode lines up very closely with Kahneman’s System 2 or Slow Thinking. In this mode, we focus closely on a particular problem and work it through step by step. This is difficult and tiring work as, in Palmer's account, we focus our energy narrowly and work according to classical logic and dynamics.
By contrast, in the low-power mode, our brains operate more randomly or noisily, make connections between different ideas or areas and (he speculates) take advantage of quantum computational effects. For Palmer, the low-power mode kicks in when we aren't focusing closely on anything, are relaxing or let our mind wander. While Kahneman and Tversky see the advantages of fast thinking primarily as energy and effort minimisation, Palmer argues that the low-power mode is essential for thinking. It is when we are operating in the low-power mode that we come up with new ideas, make connections and develop insights. As Palmer puts it:
in order to generate random new ideas, the brain must be susceptible to noise, and this appears possible only when the brain is in its low-power mode.1
In other words, we cannot do science or research without this less rigorous and less logical 'low-power' mode as we'll never come up with anything new or different.
While parts of his account - especially postulating the existence of quantum computational effects within our brains - are speculative, it lines up strongly with best practice in research and science. While there may not be a universal scientific method, good scientific reasoning progresses according to a standard pattern: develop a hypothesis or theory; then test that against evidence or experiments.
This suggests that scientific methods (or patterns of thinking) make use of Palmer's two modes of thinking in a structured way. To come up with hypotheses or new theories, we need to extend our thinking and come up with new ideas - which requires us to take advantage of the low-power mode. But once we have something, we rigorously test it through logic, evidence and experiment - using the power-intensive mode.
This process of cycling between hypothesis and experiment, or creativity and logical testing, is a coherent strategy that draws on the strengths of our two modes of thinking: low-power for hypotheses and creativity; power-intensive for experiments and logical testing. Or to use a different, widely popularised, account of how the brain operates - science depends on a dynamic relationship between left and right brain strengths.
Pairing creativity and logic
This explicit pairing of creativity and careful experiment, or randomness and logic, into a defined, repeating cycle of hypothesis and experiment forms a dynamic relationship at the heart of science. It couples two different human capacities, or brain modes, together to get more out of each of them in the pursuit of knowledge or truth. Yet this cycle isn’t something exotic or abstruse. It is very familiar in day to day life even if we rarely notice it.
For example, a rapid cycling through (random) ideas and testing is very common when we are making decisions. To pick one simple example, how do you decide what you want to eat for a given meal? Imagine you are going out to a restaurant with friends and you need to choose where. If there isn’t a default, a bunch of ideas are likely to be thrown out based on all sorts of random or well founded reasons (low-power thinking). Either as you go, or once you have a long list, you need to apply some logic, rigour and possibly research to sift through the options and make a decision. The level of rigour here will likely (hopefully!) not match the rigour in testing a scientific hypothesis, but the thinking pattern is essentially the same.
Once you start looking for it, I expect you will see this pattern in many things you do. You may source ideas internally, from browsing, or other people, but decisions - like new discoveries - are almost always based on the collection of ideas, including random ones. If we never allow new ideas, then we don't do anything different. But, at the same time, most of the new ideas get discarded once we test them, whether thinking them through, doing more research or trying them out. We all know that something that sounds brilliant when you first think of it often turns out to be mediocre in practice.
This simple, dynamic model of thinking, like the scientific method, reflects a principled epistemic humility. The reason we need new ideas, even random ones, is that our current state of knowledge is limited and likely flawed. If our knowledge was certain and completely reliable, we wouldn't need anything new. But, at the same time, we have to be humble about what occurs to us as many of the ideas we come up with are flawed. That is why we have to test them rigorously to find out which are good or true. We need new ideas and to test our thinking rigorously as our thinking is unreliable by itself.
We need both creativity and logic
This model also neatly predicts common flaws in human thinking - in a less granular but more easily applicable way than the literature on cognitive biases. The dynamic pairing of creativity and logic (or ideas and testing) only works when we pursue both sides strongly. We get into trouble when we don't do either one well.
One common human flaw is to think our ideas are great, or reliable guides to truth, without testing them properly. We have all been in situations where something went wrong because we didn't think it through or do our research. Not testing our ideas, whether they are our own ideas or inherited from others, underlies a long list of cognitive biases - from confirmation to anchoring and recency biases. It reminds us that one way to think better is to follow a simple principle: keep testing my (or our) ideas and don't just take what I (or we) think as true or reliable.
The mirror flaw on this model is that we aren't creative enough or don't consider a wide enough range of ideas. The truth is often strange or unexpected and we cannot uncover it if we restrict ourselves to the narrow range of ideas that we think are acceptable or we are comfortable with. If we are too careful or logical in identifying the options worth considering, we can easily miss the truth. To pick one example, the review of US Government intelligence failures leading up to the 9/11 terror attacks pointed some of the blame back to 'failures of imagination'. The intelligence community couldn't take advantage of the dynamic pairing of creativity and logic because they weren't sufficiently creative, and were too confident in their judgements about what was and wasn't possible.
This pattern of being both insufficiently creative and too epistemically confident to see the truth neatly describes many episodes in the history of science. There are many cases of scientists dismissed as lunatics or cranks who turned out to be right. To pick a few examples, Isaac Newton's theory of gravitation was originally derided as an occult 'action at a distance'; Ignaz Semmelweis' discovery that washing hands reduced mortality in childbirth and surgeries was deemed offensive to gentleman doctors; and Barry Marshall had to give himself stomach ulcers to prove they were caused by a bacterium. There is no a priori reason why the universe should match human thinking, knowledge and rationality, but in our common epistemic confidence we tend to assume that we have figured it out.
This obviously doesn’t mean that all, or even many, people with crazy ideas are right. There is a very long list of people who were rightly dismissed as cranks because their ideas didn't hold up under testing. Knowledge is hard and we have to be open to the fact that our ideas might be wrong - which means we sometimes need to take wacky ideas seriously, but also that we have to test all our ideas as rigorously as we can. Science, as a practice, is built around doing both of these well and thereby taking advantage of a dynamic connection of creativity and logic.
The practicalities of doing this well can be difficult, not least because there has been a lot published on both how to be creative and how to test ideas - including evidence that emotions often play a role. Nevertheless, this model gives us a useful and simple starting point: we, as humans, are built to be both creative and logical, and we need to couple these capacities together to build knowledge and get the most out of our thinking. Importantly, at least on Palmer's account, this requires patterns of concentration and relaxation, or work and rest. It also requires us to have epistemic humility, and accept that we don't know everything and often get things wrong.
Palmer, T., The Primacy of Doubt, Oxford University Press, 2022, p. 230.
Sherlock Holmes famously had another characteristic which you touch on at the end - he was also unemotional. Of all his traits, this was the one that struck me as least human. This, I guess, is in line with the versions created in Star Trek through Mr Spock (alien) and Data (machine).
The logic path you describe - creativity and then testing - has obvious attractions. But many of our decisions as humans (and as human societies) cannot be tested. Counterfactuals are often impossible to construct and many of our actions are one-offs. Even the choice of a restaurant cannot be truly 'tested' in the scientific way.
One of the underlying questions that arises for me is the extent to which the need for humility is driven by preference change (which has a strong emotional element). Human beings change their preferences constantly (counter to simple forms of economic theory) and contextually. What is desirable today is not desirable tomorrow. While it is arguable this flows from the type of increased knowledge you discuss, I see no compelling evidence of a tight causal connection. While I accept that this might be due to complexity masking the relationship, I suspect the relationship between knowledge and desire varies a lot across the decision-making continuum.
The importance of this, I think, comes into play when understanding or knowledge is linked to changing preferences about what is better (not just what is). Your restaurant example is a case of trying to determine what is better in the context of non-static preferences which drive the lived experience. In these cases, there is no truth because there cannot be a single truth.
Better stop now. My brain hurts. Over to you.