Can we say that we prepare training data based on our explanations of the underlying phenomena? What else would be the basis for “real patterns”? A cat means an animal with whiskers? A panda means a bear with distinctive fur coloration?
I’m excited about this research: it furthers the (inevitable) progression toward explanatory AI. But framing it as an attempt to explain data seems to invite an infinite regress?