I’m surprised more attention isn’t directed to the underlying foundations of the debate, specifically whether empiricism is a sound basis for creating knowledge (and knowledge-creating machines). This is obviously a very old question, yet the prevailing empiricist position in AI is rarely challenged. What separates the “hard” physical sciences from the fields that aspire to that status is their explanatory foundations.
Can AI make continued and predictable progress without such a foundation, or do explanatory gaps account for the yawing back and forth, the trial and error?