Neatly resolved. I wonder if it is worth wrestling some more with the dynamics that lead to this result. One element seems to be time. You mention the challenge of establishing knowledge at any point in time. Your description seems to presuppose two things: (1) the absence of 'complete' knowledge today; (2) the potential for more knowledge tomorrow. It also seems to lean heavily on our humanness, which itself has three dimensions: (1) we are not God, and therefore absolute (transcendent) knowledge is beyond us (which means that while more knowledge is possible, complete knowledge is not, no matter how much time we have); (2) we are diverse in our thinking traditions and experience (which naturally leads to differing theories and differing interpretations of the accuracy of theories); (3) we are interdependent (none of us operates as an individual thinking machine, so our view of 'knowledge' relies heavily on a set of relational (trust-based) conditions rather than a set of analytic ones). In combination, these factors mean that epistemic certainty is beyond our reach. An interesting question you may not have answered is why these factors (assuming I am close to understanding your proposition in the way you do!) do not lead to the conclusion that we should be epistemically skeptical. I suspect your answer lies in the practical (humility seems to work OK), but I am not sure you have fully made this case.
I've hardly resolved things! I'd be happy with 'intelligibly summarised', though.
That is something I'll have to explain better. One short answer is that we clearly do know things, such as information that keeps us alive. But there is more to say.
It is useful for me that you start with a simplified example of risk assessment. For quite a long time in my ancient past I was involved in risk assessments that required input from scientists (expert judgment). It was my experience that some scientists, irrespective of their level of expertise, had great difficulty doing risk assessment. Quite a few of them should be kept well away from the responsibility. Leaving aside the influence of social environment, at the time I put their problem down to doctrine(s): for the more meticulous, perhaps a doctrine based on an understanding that knowledge could only be acknowledged if sufficiently verified in theory. There was never enough.
Recently I have come across the work of Iain McGilchrist (see his massive argument base in 'The Matter with Things'), which might begin to offer an intelligible explanation for, among other things, 'doctrines'. I wonder if these doctrines might be related to the 'mental maps' that I understand neuroscientists look for, and find mostly, though not entirely, in human cognition. (Some other creatures appear to have mental maps.)
My second, related thought might contribute something. Animals have highly developed skills in discriminating, for example, food from non-food in complex reality, requiring little obvious training via trial and error. Despite this, error can be serious. We have cases in the UK where river gravels are contaminated with small lead shot and fishing weights. Swans need gravel for their digestion and are known to accumulate lead poisoning. Their otherwise very adequate and instant recognition and participatory skills fail them. 'Their world' does not tell them enough. I think we can address many of our own limitations, doctrines included, but even under the specific conditions of scientific investigation we continue to require, like the swans, what I tentatively call 'participatory knowledge'.
It is possible these days to 'copy and paste' reliable engineering design. 'Reliability' more generally, though, is difficult to mechanise (see Erica Thompson's recent 'Escape from Model Land'), and we are back to expert judgment and the in/ability to see the big picture, the complex reality (McGilchrist again?).