Definition

The maximum time span over which a situation can be meaningfully forecast; beyond this horizon, outcomes are effectively random and forecasts do no better than chance.

Quote

Philip Tetlock on the 80,000 Hours Podcast (two episodes):

Philip Tetlock: The magician-statistician Persi Diaconis at Stanford once asked the question: “How many times do you have to shuffle a deck of cards before all information is lost?” So you open up a new deck of cards, and they’re all in exact order, deuces up through aces, every suit in the same sequence. And how many times do you have to do a proper shuffle (there is a precise definition of what a proper shuffle is) before all order is lost? I think the answer is five or six (ed: It’s 7).
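For the technically curious, the question Tetlock is recalling has an exact answer. Bayer and Diaconis (1992) showed that under the Gilbert-Shannon-Reeds model of a riffle shuffle, a permutation with r rising sequences has probability C(2^k + n - r, n) / 2^(nk) after k shuffles of an n-card deck, so the distance from a perfectly mixed deck can be computed in closed form. A minimal Python sketch (my addition, not from the episode) reproducing the famous table, where the distance stays near its maximum through four shuffles and then roughly halves with each shuffle after:

```python
from fractions import Fraction
from math import comb, factorial

def eulerian_row(n):
    """Eulerian numbers <n, k> for k = 0..n-1: the number of permutations
    of n items with exactly k descents (equivalently, k+1 rising sequences)."""
    row = [1]  # row for n = 1
    for m in range(2, n + 1):
        new = []
        for k in range(m):
            left = (k + 1) * row[k] if k < len(row) else 0
            right = (m - k) * row[k - 1] if k > 0 else 0
            new.append(left + right)
        row = new
    return row

def tv_distance(n, k):
    """Exact total variation distance from uniform after k riffle shuffles
    of an n-card deck, via the Bayer-Diaconis (1992) closed form: a
    permutation with r rising sequences has probability
    C(2^k + n - r, n) / 2^(n*k) under the GSR shuffle model."""
    uniform = Fraction(1, factorial(n))
    denom = 2 ** (n * k)
    euler = eulerian_row(n)
    total = Fraction(0)
    for r in range(1, n + 1):
        # comb() returns 0 when 2^k + n - r < n, i.e. the permutation
        # is unreachable in k shuffles
        p = Fraction(comb(2 ** k + n - r, n), denom)
        total += euler[r - 1] * abs(p - uniform)
    return total / 2

for k in range(1, 11):
    print(f"{k:2d} shuffles: distance from uniform = {float(tv_distance(52, k)):.3f}")
```

The abrupt drop around shuffle seven is the “cutoff phenomenon,” and it is a tidy model of Tetlock’s larger point: order persists for a while, then dissolves quickly into noise.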

I mean, there are things happening that are random, and the question is how much randomness. You’re not getting a full card shuffle every day or every month or every year; there are substantial pockets of stability in history. But the randomness is compounding at some rate, so the optimal forecasting frontier is going to be very, very close to chance once you reach a certain point. …

I mean, you can call me a pessimist, because I don’t think they’re going to do a very good job a century out, or even a generation out. Now, when they get to five to 10 years, maybe there’s going to be some advantage, but it’s going to be increasingly small. …
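A toy illustration of randomness compounding (my addition, with an assumed persistence parameter, not anything from the episode): in an AR(1) process x[t+1] = rho * x[t] + noise, the best possible forecast h steps ahead is rho^h * x[t], and the share of variance it explains decays geometrically, so the optimal forecasting frontier slides toward chance at a rate set by the persistence:

```python
# Toy model of compounding randomness: an AR(1) process
# x[t+1] = rho * x[t] + noise. The optimal h-step-ahead forecast is
# rho**h * x[t]; its correlation with the outcome is rho**h, so the
# variance it explains (R^2) is rho**(2*h) and decays geometrically.
rho = 0.9          # assumed "pocket of stability": persistence per step
horizons = [1, 2, 5, 10, 20, 50]

for h in horizons:
    explained = rho ** (2 * h)   # R^2 of the optimal forecast at horizon h
    print(f"horizon {h:3d}: optimal forecast explains {explained:7.2%} of variance")
```

With rho = 0.9 per step, a 5-step forecast still captures about a third of the signal, while a 50-step forecast is indistinguishable from chance, which is the shape of Tetlock’s “five to 10 years, maybe some advantage” intuition.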

Robert Wiblin: Yeah, I guess it just depends on the nature of the question. Because if you’re asking who’s going to be prime minister of the UK in 50 years’ time, no superforecaster is going to get that; everyone is back to chance, just guessing names at that point. But for something like which party will be in power, maybe you can get a little bit of resolution there.

Robert Wiblin: So for example, if you’re trying to forecast progress in artificial intelligence, forecasting at what point the algorithms will reach the level where you get transformative change is very, very hard. But forecasting just the amount of computational ability we will have, or how fast computer chips will be, seems like something we can potentially say something about even looking 50 or 100 years out, just because we have enough of a historical record and can extrapolate the trends there. It gets much harder, no doubt. But I think superforecasters might be able to do better than chimps throwing darts at dartboards.
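Here is a sketch of the kind of trend extrapolation Wiblin is pointing at (my addition; the data points are illustrative round numbers in the spirit of Moore’s law, not a vetted dataset): fit a straight line to the logarithm of a compute metric and extend it. The forecast is only as strong as the assumption that the generating process stays stable:

```python
import math

# Hypothetical, illustrative data: (year, transistors per chip) as round
# numbers in the spirit of Moore's law -- not a real, vetted dataset.
data = [(1971, 2.3e3), (1980, 3.0e4), (1990, 1.2e6),
        (2000, 4.2e7), (2010, 1.2e9), (2020, 5.0e10)]

# Ordinary least squares on log10(count) vs. year: if growth is
# exponential, the log-transformed series is a straight line, and
# extrapolating the line is extrapolating the trend.
xs = [year for year, _ in data]
ys = [math.log10(count) for _, count in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = ybar - slope * xbar

print(f"implied doubling time ~ {math.log10(2) / slope:.1f} years")
for year in (2030, 2050, 2070):
    print(f"{year}: ~10^{slope * year + intercept:.1f} transistors "
          "(if, and only if, the trend holds)")
```

The contrast with the algorithms question is instructive: transistor counts are a smooth aggregate of many independent engineering efforts, while “when do algorithms become transformative” is a one-off threshold event, which is exactly where long-horizon forecasting breaks down.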

Philip Tetlock: And maybe I’m naive, but I think when astronomers and astrophysicists tell me that the sun is going to go supernova in three or four billion years (ed: the Sun is too small to go supernova; it will become a red giant in roughly five billion years), I think they’re probably right. It’s going to come close to the Earth’s orbit, it’s going to destroy all life on the planet.

Robert Wiblin: Some things are kind of mechanistic.

Philip Tetlock: And yeah, there are some categories of things, right? There are timescales, levels of determinism, and certain operating laws where we have enough confidence that we think we can extrapolate out. I mean, where’s climate on that continuum?