Peter Sweeney
1 min read · Feb 17, 2019


I found this argument confusing. The Church-Turing-Deutsch principle holds that a universal computer can simulate any physical process, minds included; it is this principle that grounds the possibility of AGI. And analog systems cannot be universal because they lack the facility for error correction.

Unlike that clear statement of what makes AGI possible and why analog computers are ruled out, the three “laws” of AI mentioned here seem a mish-mash of different concepts. Ashby’s Law is “a simple statement of a necessary dynamic equilibrium condition in information theory terms.” Von Neumann’s work provided a foundation for formalizing emergence from simple interaction rules. And the third law, uncredited, was derived (I believe) from a quote by Ian Stewart: “If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.” Yet this view presumes the very thing we’re trying to explain.

And the opposing position is more reasonable: if AGI is possible (which, in principle, it is), it’s unlikely that such a complex thing will be built until or unless it is first understood. Deutsch wrote, “Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.” And he offers good explanations for why that’s so.

I couldn’t find these three laws formulated anywhere as an integrated whole, and the parallel with Asimov’s “Three Laws of Robotics” left me wondering whether they’re more a rhetorical device than a coherent explanation.

