AGI in new clothes?

Peter Sweeney
1 min readJun 28, 2018

The vision of theory-free science, wrestling with complexities beyond human cognition, is a vision of AGI. Is it possible? Yes. Does it inherit all of the same challenges and unknown unknowns? Mostly.

To take one of many problems with this thesis: How do we ratchet up from predictive knowledge applied in closed systems (like games) to explanatory knowledge applied in open systems (like biology)? In your words, “How might we interpret the solutions offered and overlay them back to our ontologies of the world?” Indeed! Note that this challenge isn’t met by human-understandable generalizations of complex models. It demands conjectural leaps from phenomenological models to explanations of how and why these phenomena occur.

In your example from AlphaGo, recall that human explainers made this leap, not the machine. Such is our present reality: the only universal explainers are human. You’re quite right to cast these as “epistemological questions.” We don’t even understand the problem of creativity and conjectural knowledge, let alone the solution. From where do we draw confidence that the solution extends from deep learning?

I discussed this in more detail last year with David Weinberger, if you’d like to weigh in on those arguments.

Thanks in advance and for your thoughtful article!
