Knowledge asymmetry in interdisciplinary collaborations and how to reduce it

Community member post by Max Kemman

Max Kemman (biography)

How can tasks and goals among partners in a collaboration be effectively negotiated, especially when one party is dependent on the deliverables of another party? How does knowledge asymmetry affect such negotiations? What is knowledge asymmetry anyway and how can it be dealt with?

What is knowledge asymmetry? 

My PhD research involves historians who are dependent on computational experts to develop an algorithm or user interface for historical research. They therefore need to be aware of what the computational experts are doing.

Four things everyone should know about ignorance

Community member post by Michael Smithson

Michael Smithson (biography)

“Ignorance” is a topic that sprawls across a grand variety of disciplines, professions and problem domains. Many of these domains have their own perspectives on the unknown, but these perspectives are generally fragmentary and often disconnected from one another. The topic lacks a home. Until fairly recently, it was a neglected topic in the humanities and human sciences.

I first started writing about it in the 1980s (e.g., my book-length treatment, Ignorance and Uncertainty: Emerging Paradigms), but it wasn’t until 2015 that a properly interdisciplinary compilation, the Routledge International Handbook of Ignorance Studies (Gross and McGoey 2015), finally appeared.

Given the wide-ranging nature of this topic, here are four things everyone should know about ignorance.

Making predictions under uncertainty

Community member post by Joseph Guillaume

Joseph Guillaume (biography)

Prediction under uncertainty is typically seen as a daunting task. It conjures up images of clouded crystal balls and mysterious oracles in shadowy temples. In a modelling context, it might raise concerns about conclusions built on doubtful assumptions about the future, or about the difficulty in making sense of the many sources of uncertainty affecting highly complex models.

However, prediction under uncertainty can be made tractable depending on the type of prediction. Here I describe ways of making predictions under uncertainty in order to test which conclusion is correct. Suppose, for example, that you want to predict whether objectives will be met. There are two possible conclusions, Yes and No, so prediction in this case involves testing which of these competing conclusions is plausible.
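
By way of illustration, here is a minimal sketch of that framing, assuming a hypothetical toy model with two uncertain inputs. The model, the parameter ranges and the target threshold below are invented for this example and are not taken from the post; the point is only to show what "testing which conclusion is plausible" can look like in practice.

    # Minimal sketch: treat prediction under uncertainty as a test between
    # two competing conclusions, "objectives will be met" (Yes) versus
    # "objectives will not be met" (No). All numbers are hypothetical.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def toy_model(demand_growth, efficiency_gain):
        """Hypothetical model: projected resource use (arbitrary units)."""
        baseline_use = 100.0
        return baseline_use * (1 + demand_growth) * (1 - efficiency_gain)

    # Plausible ranges for the uncertain inputs (assumed for illustration).
    samples = 10_000
    demand_growth = rng.uniform(0.00, 0.30, samples)    # 0-30% growth
    efficiency_gain = rng.uniform(0.05, 0.25, samples)  # 5-25% savings

    projected_use = toy_model(demand_growth, efficiency_gain)

    # The objective is met if projected use stays below a target threshold.
    target = 110.0
    objective_met = projected_use <= target

    share_yes = objective_met.mean()
    if share_yes == 1.0:
        print("Conclusion 'Yes' holds across all sampled assumptions.")
    elif share_yes == 0.0:
        print("Conclusion 'No' holds across all sampled assumptions.")
    else:
        print(f"Both conclusions remain plausible: the objective is met in "
              f"{share_yes:.0%} of sampled futures, so the answer depends on "
              f"which assumptions turn out to be right.")

The particular model does not matter here; what matters is the framing. Instead of asking for a single best-guess prediction, the sampling asks whether either conclusion can be ruled out across the range of plausible assumptions, which is exactly the kind of question that remains answerable even when the future is uncertain.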