Mix

Timelines

Katja Grace survey: a ~20% probability of this sort of AI by 2036; a ~50% probability by 2060; a ~70% probability by 2100. These match the figures Holden gives in his introduction.

Holden: thinks there's more than a 10% chance we'll see something PASTA-like enough to qualify as "transformative AI" within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).
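A minimal sketch restating the two sets of figures above side by side, keyed by year; Holden's 2036 entry is his lower bound ("more than 10%"):

```python
# Rough restatement of the two forecasts above, keyed by year.
# Holden's 2036 figure is a lower bound ("more than 10%").
forecasts = {
    "Grace et al. survey": {2036: 0.20, 2060: 0.50, 2100: 0.70},
    "Holden Karnofsky": {2036: 0.10, 2060: 0.50, 2100: 2 / 3},
}

for year in (2036, 2060, 2100):
    row = ", ".join(f"{who}: {probs[year]:.0%}" for who, probs in forecasts.items())
    print(f"by {year} -> {row}")
```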

Argument that AIs will be conscious

According to the PhilPapers Surveys, 56.5% of philosophers endorse physicalism, vs. 27.1% who endorse non-physicalism and 16.4% "other." I expect the vast majority of philosophers who endorse physicalism to agree that a sufficiently detailed simulation of a human would be conscious. (My understanding is that biological naturalism is a fringe/unpopular position, and that physicalism + rejecting biological naturalism would imply believing that sufficiently detailed simulations of humans would be conscious.) I also expect that some philosophers who don't endorse physicalism would still believe that such simulations would be conscious (David Chalmers is an example - see The Conscious Mind). These expectations are just based on my impressions of the field.

LLM sizes in millions of parameters

GPT 1 – 117

GPT 2 – 1,500

GPT 3 – 175,000

GPT 4 – ~1,700,000 (unofficial estimate; OpenAI has not disclosed the size)
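Since the list is in millions, the raw numbers are easy to misread; here is a small sketch converting them to the usual B/T units (GPT 4's count is the unofficial estimate noted above):

```python
# Parameter counts in millions of parameters, as listed above.
# GPT 4's figure is an unofficial estimate, not a disclosed number.
param_counts_millions = {
    "GPT 1": 117,
    "GPT 2": 1_500,
    "GPT 3": 175_000,
    "GPT 4": 1_700_000,
}

def humanize(millions: float) -> str:
    """Render a count given in millions as M, B, or T of parameters."""
    if millions >= 1_000_000:
        return f"{millions / 1_000_000:.2f}T"
    if millions >= 1_000:
        return f"{millions / 1_000:.1f}B"
    return f"{millions:.0f}M"

for model, millions in param_counts_millions.items():
    print(f"{model}: {humanize(millions)} parameters")
# GPT 1: 117M parameters
# GPT 2: 1.5B parameters
# GPT 3: 175.0B parameters
# GPT 4: 1.70T parameters
```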

Notes