Researchers built a model that behaves like a brain. Without being trained on neural data, the model produced a peculiar signal — one that was later discovered in actual brain activity.
This whole narrative is amazing -- especially in this early age of AI. What especially hits home for me is how these "counterintuitive" patterns in minds were found because the data was presented in a way that highlights their consistent presence across species.
I've been feeling that modern, corporate/government-driven science (the bulk of money-driven research) is often pointed towards confirmation bias, or at best, the clear binary of Right/Wrong.
This article somehow eases my own concerns about how people often seem to judge AI before even exploring (or during the exploration itself!) how much more impactful it could become when we aren't driven so clearly by our assumptions and biases... (But maybe that's part of where the fear comes from? What if AI is much more than we assume?)
"Give the system (human, bird, AI, etc) enough structural integrity, don’t force it to act like a spreadsheet with vibes, and it starts surfacing patterns that human confirmation bias would have edited out. Not “the AI is magical,” but: it inhabits a different geometry of attention, so it notices different things."
This is a beautiful thought piece that I appreciate! Thank you for sharing!
Absolutely stellar breakdown of the ICN discovery. The idea that our brains intentionally maintain neurons signaling "wrong" answers as a flexibility mechanism is wild, and kinda explains why pivoting under pressure feels less catastrophic than it should. I've noticed when working on complex problems that the best solutions often come from mental paths I initially dismissed. It's like having backup routes pre-cached before the main road closes.
Great study! Still, the computational model did not discover these; the scientists were able to notice them through the model. This is an important difference.
This is such a beautiful reminder that learning isn’t just about converging on the “right” answer, it’s also about keeping other possibilities alive. From a People-Based Learning lens, those incongruent neurons feel a lot like the voices, perspectives, and questions we hold onto even when a group seems to have settled. Humans do this socially all the time: we learn better when we’re exposed to disagreement, alternative interpretations, and near-miss ideas, not just efficient consensus. The fact that brains and brain-like models preserve this “productive wrongness” suggests it’s not noise, it’s a feature. Learning, whether neural or human, seems to depend on staying relationally and cognitively open to what might be, not just what currently works.
Time to use human models, not monkeys.
Impressive!