Yes, the explanations that figure in how we learn (“pedal faster!”) are different from the inexplicit stuff that goes on in our subconscious (such as all the minute corrections needed to maintain our balance).
The article concerns the former but doesn’t deny the latter. It only highlights that explanations tend to figure prominently in these intelligent systems, even if the designers don’t recognize them as such. Explanations are often overlooked because they live outside the system proper (for example, in how the system is designed, how its data is selected, how environmental conditions are controlled, etc.). And when explanations go unexamined, it’s tempting to conclude they aren’t really needed.
So while an inexplicable system may be very useful (for example, as a “balance-maintaining” system), it’s less clear how such a system leads to new knowledge in the scientific sense (for example, how pedaling faster relates to angular momentum).
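(For the curious, here is a rough sketch of the relationship being alluded to, using the standard gyroscopic account rather than anything from the article. For a wheel with moment of inertia $I$ spinning at rate $\omega$:

$$L = I\omega \qquad \text{(angular momentum of the spinning wheel)}$$
$$\Omega = \frac{\tau}{L} \qquad \text{(precession rate under a tipping torque } \tau \text{)}$$

Pedaling faster increases $\omega$ and hence $L$, so a given tipping torque $\tau$ produces a slower lean rate $\Omega$. That’s the explicit, scientific-knowledge version of “pedal faster to stay up” — though gyroscopic action is only part of the story of bicycle stability; trail and steering geometry matter too.)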