re: footnote 4, absolutely, there are some ways this resembles Bayesian reasoning: if I am working out how to update a coherent, unified belief by incorporating many related (second-hand) statements about states of the world, I can imagine using Bayesian updating to combine them. But there are also disanalogies with Bayesian reasoning. Second-hand knowledge bundles up many things: not just facts, but signals of allegiance or affection, *vibes*... and it usually does not come bundled with anything like a reasonable credence, but with something more like a fuzzy provenance. "My friend said that guy hates us" conceals a surprisingly long chain of reasoning. I think we find it easier to map human reasoning onto Bayesian models in constrained domains where we can make the content of speech, and its truthfulness, more "credibility-of-factual-statements"-like, as in, for example, scientific publishing or prediction markets. Formalising it more generally is, I think, difficult. Maybe ecosystem-like metaphors are better there? We can imagine which discourse organisms flourish in which environment, and it might be many trophic levels removed from the nourishment provided by the photosynthetic energy of the light of knowledge.
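To make the "constrained domain" point concrete, here is a toy sketch (my own, not anything from the post) of the easy case: combining several second-hand reports about a single binary fact with naive Bayesian updating, where each source comes with an assumed reliability. The function name and the independence assumption are both mine, chosen for illustration.

```python
def posterior(prior: float, reports: list[tuple[bool, float]]) -> float:
    """Combine conditionally independent reports about a binary fact H.

    Each report is (affirms_H, reliability), where reliability is both
    P(source affirms H | H) and P(source denies H | not H).
    Returns P(H | all reports).
    """
    odds = prior / (1 - prior)
    for affirms, r in reports:
        # Each report multiplies the odds by its likelihood ratio.
        odds *= (r / (1 - r)) if affirms else ((1 - r) / r)
    return odds / (1 + odds)

# Two fairly reliable sources affirm; one weak source denies.
p = posterior(0.5, [(True, 0.8), (True, 0.7), (False, 0.55)])
```

This works precisely because everything awkward (allegiance, vibes, fuzzy provenance) has been flattened into a single reliability number per source, which is exactly what second-hand knowledge usually refuses to give us.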
Broadly, I'm considering whether it is useful to analyse the way we store and process knowledge, information, etc. as a multi-part structure. For some belief A, we could treat it as a structure like [A, Credence(A), Source(A), ...]. It rapidly gets very complex and messy, but, at least at first glance, it looks as though it should have some structural similarities to Bayesian reasoning, given the latter's focus on [A, P(A)].
It also occurs to me that Bayesian probabilities could be interpreted as an edge case of credences, one that arises in contexts where we are dealing with 'factual statements' in domains with clear norms around objectivity and evidence. Is that too much of a stretch?
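A rough sketch of what the multi-part structure [A, Credence(A), Source(A), ...] might look like as a record type. All the field names here are my own hypothetical choices, just to show that a belief can carry more than a bare probability:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """Toy record for a belief A with more attached than just P(A)."""
    statement: str        # the content A
    credence: float       # roughly P(A), where the domain supports that reading
    provenance: list[str] # the fuzzy chain of sources, not a clean likelihood
    valence: str = "neutral"  # the bundled vibes / allegiance signals

b = Belief(
    statement="that guy hates us",
    credence=0.4,
    provenance=["friend", "overheard remark"],
    valence="hostile",
)
```

On this picture, the Bayesian case is the degenerate one where provenance collapses into well-normed evidence and valence drops out, leaving just [statement, credence].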
I wonder if it is worth exploring how your thinking might translate into building broadly trusted sources of societal information, and when these should and should not be used. Perhaps we should also be thinking about situations where reasoned thinking leads to different, incompatible conclusions.
On the Bayesian discussion (and you know I am not an expert): developing a model or structure seems worthwhile as a mechanism for explaining what you mean, but I doubt using the model would be viable beyond that.