
An important new research paper by IGP Director Dr. Milton Mueller conducts the first systematic evaluation of the assumptions and logic underlying the fears around Artificial General Intelligence (AGI).

The urgency surrounding so-called AI governance is driven in large part by the belief that AGI could lead to human extinction. Dr. Mueller’s paper examines the myths and misconceptions fueling that narrative, drawing on literature from computer science, economics, and philosophy.

The paper identifies three interrelated fallacies at the heart of AGI doomer scenarios:

  • The Concept of “General Intelligence” in Machines: why the idea of a machine possessing a generalized form of intelligence akin to human cognitive abilities is fundamentally flawed.
  • Anthropomorphism in AI: why the attribution of human-like goals, desires, and self-preservation motives to machines distorts our understanding of AI capabilities and agency.
  • The Power of Superior Calculating Intelligence: why the assumption that an AGI’s advanced computational abilities would grant it unlimited control over physical resources and social institutions is unrealistic.

This evaluation carries profound implications for public policy. The myth of an omnipotent AGI diverts crucial resources and attention away from addressing the tangible, immediate risks posed by specific AI applications. Moreover, it fosters a climate of fear that can lead to overregulation and the centralization of control over the digital economy, stifling innovation and competition. As policymakers, industry leaders, and the public grapple with the rapid advancements in AI, it is essential to critically assess the claims of AGI doomers. Their influence shapes policy frameworks and assumptions that could inadvertently harm the digital economy’s dynamism and openness.

We invite you to explore this comprehensive work and join the conversation on how to ensure digital governance is grounded in realistic assessments of risks and opportunities.

Read the paper here.