It's probably not reducible to neuroanatomy; it is a computational feature. It's a problem for neuroscience, not philosophy. A lot of modern philosophy consists of posturing; the greatest contribution the philosophers have made so far is the hard problem.
Information is not something tangible. I agree that the brain is like a receiver, but not in the sense that it picks up hidden frequencies in the world. Information is a property of neural networks in the brain; it is immaterial, a feature of basic cognition. The brain is a receiver not of tangible frequencies but of unordered chaos.
Take the calculation of entropy over the possible states of a random variable. Calculating this lets us determine the 'surprise' of each outcome: if an event has a lower probability of occurring, its surprisal is higher. Doesn't that seem like common sense? And yet such a thing could not have been known to the Greeks. Entropy in information theory arises from computational theory, which in turn arose from the paradigm of its time.
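To make the idea concrete, here is a minimal sketch in Python of surprisal and entropy as standardly defined in information theory (the function names are my own; surprisal of an outcome is -log2 of its probability, and entropy is the expected surprisal):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal (self-information) of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(probs: list[float]) -> float:
    """Shannon entropy of a random variable: the expected surprisal, in bits."""
    return sum(p * surprisal(p) for p in probs if p > 0)

# Each outcome of a fair coin carries exactly 1 bit of surprisal.
print(surprisal(0.5))                    # 1.0
# A rarer event is more surprising than a common one.
print(surprisal(0.01) > surprisal(0.9))  # True
# A fair coin has 1 bit of entropy; a biased coin has less.
print(entropy([0.5, 0.5]), entropy([0.9, 0.1]))
```

The lower-probability event always yields the higher surprisal, which is exactly the "common sense" intuition formalized above.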
It would seem that surprisal is an inherent property of the random variable, but it is not necessarily an innate concept. It required formal articulation, which rests on mathematical, logical, and philosophical assumptions. We must ask ourselves: can such a thing exist independently of mind? This is really a problem for the philosophy of science, but I feel it is relevant to mention here.
The brain is limited in its capacity to make valid inferences. The information available is determined by the inputs. The brain cannot function outside of this capacity.
For the sake of clarity, it's likely better to model our own minds under a constraint similar to Occam's Razor. I believe that any epistemic representation of the mind is circular to a degree. If I believe my mind to be equivalent to a radio transmitter, that model will certainly factor into how I rationalize my own thought processes, and in doing so solidify the model I have adopted. The same goes for viewing the mind as a computer, or as anything in particular.
For instance, if you were trying to explain gastrointestinal movement, you might develop a model to help visualize it. This is what psychologists call heuristic thinking: I take sense data, information, stimuli (the nomenclature is unimportant) and articulate it. But if I am trying to model cognitive processes, won't this very mechanism skew my thought? And even if I am aware of this, is there not danger in treating the model as an independent property of the mind? How can I develop a model of mind when my own preconceptions limit that ability?
Hence, from a philosophical perspective, I don't think much can be achieved. The most philosophy can do is answer inquiry with disproof. Neuroscience will likely prevail in this respect.