I think formalisms (e.g. “everything is numbers”) play a subordinate role to explanations. And the explanations for why AI works have followed their own evolution. Consider Turing and universal computation, or Putnam and the computational theory of mind. These are explanations, formalized for the purposes of criticism, testing, and, yes, creating intelligent machines.
Computer science has tended toward instrumentalism, where explanations are denied and formalisms are merely applied as tools for prediction. This may lead one to view the philosophy of AI as just “a bunch of numbers.” But explanations survive this forgetting.
I agree that there’s a wide gap between today’s prediction engines and AGI (for which conjectural scientific knowledge is the exemplar). I also agree that belief systems (and bad philosophy) are holding things back.
But explanations, made precise by formalisms, drive things forward.
Thanks for raising the profile of these topics. They deserve more attention.