Discussion about this post

Dhara Yu:

Great read - looking forward to the rest of the series!

This reminded me of this recent-ish essay: https://gershmanlab.com/pubs/NeuroAI_critique.pdf. The core argument here (as I understand it) is that the very notion of biological inspiration for AI is somewhat ill-defined, because characterizing biological principles in the first place requires coming to the problem with a computational framework to make sense of the data: it’s not like the principles are just “there”. The proposal here is that rather than attempting to use neural/cognitive plausibility as a source of design principles, we can instead use it as a tiebreaker between candidate algorithms, under the assumption that algorithms that are actually implemented by biological systems are generally better.

As a separate but related note: even if we did have detailed, integrative, computational theories of complex cognitive phenomena, it’s not obvious that we should strive to integrate those insights into AI systems, because the constraints imposed on biological intelligence and on AI systems are often quite different. For example, if a particular cognitive phenomenon is a byproduct of the fact that we humans have limited working memory, it’s not clear why we would try to bake that into AI, which is not bound by the same limitations. For some domains, the core problem being solved by both humans and AI may be similar enough that the solutions are transferable (and transferring them could be beneficial), but for others this seems less obvious.

Jeff Bowers:

I think an important problem with the field of NeuroAI is that strong claims are too often made on the basis of extremely weak evidence. For instance, you write that "language models can predict human imaging data remarkably well", but it is important to be clear about what the authors you cite actually found. They reported that models could account for nearly 100% of the explainable variance, but the explainable variance itself was very low (in most cases between 4-10%), and a similar amount of variance was observed in non-language areas, raising questions about how to interpret the findings. You also write that cognitive scientists have suggested that AI progress refutes one of the dominant linguistic paradigms. Again, that is true, but the evidence for this claim is extremely weak, as reviewed here: https://psycnet.apa.org/record/2026-83323-001. Do you think the papers you cite make a strong case for ANN-human alignment and challenge the role of innate priors? I think the field could use a bit more scepticism.

