If you’re in a room with neuroscientists and somebody brings up philosophy, you can expect at least a few scoffs and eye rolls. I can see where this sentiment comes from. We’re strained from trying to make sense of our data as it stands, so hearing that we’re using the word "representation" wrong is frustrating – especially if it’s unclear whether it changes anything at all about our theorizing. Add philosophical jargon that floods the mind with syllables, and you’ve lost most of the room.
But neuroscientists engage in philosophy all the time – even if we don’t call it that. When we evaluate computational models, we do so in light of assumptions about their likeness to brains (an epistemological consideration). When a scientist accuses their colleague of blobology – the project of mindlessly mapping regions to tasks – they do so on the basis of all sorts of unspoken considerations. They might question the modularity of the mind, the explanatory strength of correlation, or the minimum criteria for what constitutes a mechanism. Each of these philosophical questions is at its core independent of neuroscience, and people with books and glasses even thicker than ours have strong views on them.
It is through the examination of implicit assumptions that philosophy molds the more macroscopic beliefs we have about the brain and how to study it. As a concrete example, Barack & Krakauer (2021) point out that representation entails detachability – representations are capable of existing without their typical causes. So, research showing that a brain region systematically lights up during the presentation of an object category is not enough to infer representation, and models in which a representation requires stimulus input are mistaken. These insights are gained through armchair analysis, which even brings along prescriptions about what type of research is needed. For example, it calls for work that tests whether the region lights up when people think about using the object (without being able to see it). As another example, Dijkstra & de Bruin (2016) discuss philosophical accounts of causality and show how they yield different judgments about which types of neuroscience experiments allow us to infer that a region is causally involved in cognition. Or take Carl Craver's Explaining the Brain, which shows how various philosophical considerations play into seminal findings in neuroscience, such as long-term potentiation (LTP). For me, it helped iron out a bunch of implicit worries, such as whether neuroscience is even in the business of finding mechanisms, since even cherished cases like LTP involve probability shifts rather than the lawlike effects I associate with the word "mechanism".
Besides molding our individual thinking, philosophy can sometimes have urgent and sweeping effects on research programs. Take a recent philosophical criticism of causal structure theories in the neuroscience of consciousness. Most famously, these include Integrated Information Theory (IIT), which roughly states that systems are conscious to the extent that they make a difference to themselves, measured by Φ. If the parts interact in such a way that Φ > 0, then the system is conscious; otherwise it is not. The theory maintains that systems with feedback connections can be conscious, while purely feedforward systems never are. The unfolding argument shows that causal structure theories are either false or unscientific (Doerig et al., 2019; Hanson & Walker, 2020; among others). In brief, any input-output relation can be achieved by both feedforward and feedback networks. So, any behavior we use to index consciousness (e.g., being awake or making a verbal report) could be equally instantiated by a feedforward network (Φ = 0) and a feedback network (Φ > 0). In fact, an awake system could be entirely feedforward and a sleeping system could be entirely feedback in nature. Thus, the unfolding argument shows that empirical observation is uncoupled from causal structure – Φ is neither necessary nor sufficient for indices of consciousness. If that’s right, IIT proponents have two possible responses. They could concede that consciousness can occur when Φ = 0, which falsifies the theory outright, or maintain that, despite identical observations, only the Φ > 0 system is conscious, which makes the theory unfalsifiable.
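The core move of the unfolding argument, that any fixed input-output mapping realized by a feedback network can also be realized by a feedforward one, can be sketched with a toy linear network. The weights, the three-step horizon, and the function names below are arbitrary illustrative choices of mine, not anything from the cited papers:

```python
# A minimal sketch of "unfolding": two networks with the same input-output
# behavior but different causal structures. Weights are arbitrary.
W_IN, W_REC = 0.5, 0.8  # input weight and recurrent (feedback) weight

def recurrent_net(inputs):
    """One unit whose state feeds back into itself each step
    (feedback causal structure; IIT would assign it Phi > 0)."""
    h = 0.0
    for x in inputs:
        h = W_IN * x + W_REC * h
    return h

def unfolded_net(inputs):
    """The same computation unrolled across three separate layers:
    no unit ever feeds back into itself (feedforward; Phi = 0)."""
    h0 = W_IN * inputs[0]
    h1 = W_IN * inputs[1] + W_REC * h0
    h2 = W_IN * inputs[2] + W_REC * h1
    return h2

inputs = [1.0, -2.0, 3.0]
print(recurrent_net(inputs))  # → 1.02
print(unfolded_net(inputs))   # → 1.02 (identical output, different structure)
```

Since any external observation of behavior only sees the input-output mapping, no experiment on outputs alone can distinguish the two architectures, which is exactly the gap the unfolding argument exploits.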
While there are criticisms of the unfolding argument (e.g., Tsuchiya et al., 2020), the debate is unresolved and has undoubtedly received too little attention relative to what is at stake. Millions of dollars continue to be poured into testing IIT as if nothing had happened. Yet, if the unfolding argument stands up to scrutiny, it will mean this money is being wasted on a false or unscientific theory of consciousness.
For a rundown of the unfolding argument and other problems with IIT, see Jake Hanson’s inspiring dissertation defense.