Why can we produce paradoxes of self-reference?

It's been a while. The reasons are the usual ones, so let me not go into lengthy explanations and apologies. I won't pursue the megalomaniacal project that I started the blog for, at least not for now. An essay I wrote in Oxford on something related, Kant's Mathematical Antinomies, and a couple of conversations with clever people, showed me that my basic position on these things is even more problematic than I had foreseen (again, the usual...), so I'll pause that one to think a bit longer. For now, I have some other, more miscellaneous stuff lined up, and I'll start with a little quirky idea, nothing with much force, but, I find, interesting.

Let me put on my Cartesian ruff and do some armchair musing about the physiology of abstraction, that is, let's look at whether we can say anything, by way of introspection, about what happens in the brain when we abstract (I know, it's 2016, I'm neither a philosopher of mind nor a neuroscientist, this is just ridiculous... but then, from the little researching I did, there seems to exist no scientific theory on this at the moment, and also it's a gloomy Sunday, so take it more as a form of hangover brain jogging...).

Concretely, I want to think about the following: Start by noting that there are such things as paradoxes of self-reference, which means that people have the ability to formulate paradoxes of self-reference. The formulation of such paradoxes usually involves some form of abstraction (more below). This means that, if introspective observations of this kind have any relevance at all for the formulation of physiological models of abstraction (which I think, in principle, they do, although I'm happy to admit that there are a zillion more pressing things to consider for the latter), then any such model would have to explain this ability of ours, ideally somewhat elegantly. Can we, then, playing the good ol' "inference-to-the-best-explanation" game, develop a simple model of abstraction that naturally gives rise to, and thereby explains, this ability?

To start with, despite my wearing a ruff, let me still state some working assumptions: I call "mental objects" - denoted, e.g., \( m(\text{giraffe})\) etc. - whatever specific objects ("the-cup-in-front-of-me"), abstract concepts ("love"), and everything in between I may have on my mind. A mental object is not the same as the concept it pertains to; for example, the mental object "love" and the concept "love" are different. But this distinction starts to become important only once I ask about the nature of "love" and other concepts. I don't ask this question, and distinguishing between them would introduce an unnecessary complication for my point. So \( m(\text{giraffe})\) may just as well denote the concept "giraffe". Mental objects come with fuzzy boundaries, but I assume that we can talk about them meaningfully and not necessarily always only as some "phenomenological whole". I assume that there exists a correspondence between mental objects and some physiological process - \( p(\text{giraffe})\) - in my body, especially my brain, some sort of supervenience relation. And I assume, more for conceptual convenience than out of necessity, that every specific mental phenomenon supervenes on a unique physiological state, i.e. one could hypothetically construct a function \( f(\cdot)\) that maps every mental state to a unique physiological state (note that this is stronger than mere supervenience), e.g. \( f(m(\text{giraffe})) = p(\text{giraffe})\). Furthermore, I assume that there exists some kind of mechanism - \( mech(\text{giraffe})\) - by which the physiological states on which mental states supervene are implemented. This could be a complex pattern of, or a rule for, neural firing, something working in some regions or distributed across many of them, a hormonal mechanism - I don't know, and I don't need to care for this post. I just assume that it exists. Again, I also assume that the mechanism is uniquely determined for every physiological state, and thus, by the above assumption, for every mental object. A "model" is just the tuple \( (mech(\cdot),f(\cdot),D)\), where \( D \) is the domain of the model, i.e. the set of all mental objects to which the model applies. So a model specifies a mechanism for some set of mental objects, together with mappings between physiological states and mental objects. Finally, I say that I "explain" the occurrence of a mental phenomenon if I have a model that predicts the occurrence of the corresponding physiological state of affairs.
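Since that is a lot of moving parts, here is a minimal sketch of the bookkeeping in code; every name in it (MentalObject, Mechanism, Model, etc.) is a hypothetical placeholder of my own, not a claim about how any of this is actually realised:

```python
# Minimal sketch of the bookkeeping above; all names are hypothetical
# placeholders, not claims about actual neuroscience.
from dataclasses import dataclass
from typing import Callable, Set

MentalObject = str         # e.g. "giraffe", "animal", "love"
PhysiologicalState = str   # stand-in for whatever p(.) turns out to be
Mechanism = str            # stand-in for whatever mech(.) turns out to be

@dataclass(frozen=True)
class Model:
    mech: Callable[[MentalObject], Mechanism]        # mech(.)
    f: Callable[[MentalObject], PhysiologicalState]  # f(.), mapping m to p
    domain: Set[MentalObject]                        # D, the objects covered
```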

The simple, quirky idea on the basis of which I want to develop such a model is this: good candidates for answering the above question are those models of which the following is true: the mechanism that implements the physiological states for any mental object is independent of the level of abstraction of the concept that corresponds to this mental object. Call such models "flat models".

What the heck is a flat model? Here is a little toy example: Take the mental objects \( m(\text{giraffe})\) and \( m(\text{animal})\). Now, \( m(\text{giraffe})\) and \( m(\text{animal})\) differ, I take it, in their relative degree of abstractness for most people, me included. Here, I think of the relative degree of abstractness of two mental objects - denote it \( a(m(\text{giraffe}))\) etc. - as being determined by whether one concept subsumes the other. So most people, depending on their favourite ontology or metaphysics, conceive either of \( m(\text{animal})\) as subsuming \( m(\text{giraffe})\) - "every giraffe is an animal but not every animal is a giraffe" - or the other way around. Then, a flat model is just any model in which \( mech(\text{giraffe}) = mech(\text{animal})\), even if \( a(m(\text{giraffe})) \neq a(m(\text{animal}))\), and the same for any other pair of mental objects. Specifically, what I have in mind here is that, whenever people think of some abstractum, they don't somehow think all the things that are subsumed by this abstractum at once; instead, thinking the abstractum is implemented by exactly the same mechanism as the one that would be implemented if one were to think of any thing subsumed by this abstractum.
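To make the toy example fully concrete, here is the flatness condition as a little check; the mechanism labels and the "subsumes" pairs are again purely hypothetical placeholders of mine:

```python
# Toy check of "flatness": a model is flat if thinking an abstractum runs on
# exactly the same mechanism as thinking anything the abstractum subsumes.
# All labels below are hypothetical placeholders.

def is_flat(mech: dict, subsumes: set) -> bool:
    return all(mech[abstractum] == mech[subsumed]
               for (abstractum, subsumed) in subsumes)

mech = {"giraffe": "pattern-P", "animal": "pattern-P"}  # one shared mechanism
subsumes = {("animal", "giraffe")}  # every giraffe is an animal
assert is_flat(mech, subsumes)      # flat: same mechanism on both levels
```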

How would flat models help us understand our ability to formulate self-referential statements? They do, I think, give a simple explanation for our ability to abstract from anything to anything, which is itself a key requirement for self-reference. For instance, in set theory, there exists the following principle:

Unrestricted comprehension principle (UCP):

\( \forall u (u \in \{ x \mid \phi(x) \} \leftrightarrow \phi(u))\), for all formulae \( \phi(x)\)

This principle, which is also called the "unrestricted abstraction principle", says that for any property (represented by the unary predicate \( \phi(\cdot)\), "to have property \( \phi\)") there is the set of those entities that satisfy this property. This is, to me, just a set-theoretic way of stating that I can form an abstract concept, a set, from any collection of entities. This is because, if you give me any entities \( x\) and \( y\), I can construct the property "being either \( x\) or \( y\)" and already, by the above principle, I have a new entity, "the set of \( x\) and \( y\)", that can then figure in new sets, and so on. Importantly, one can prove that any set theory containing the UCP allows for paradoxical self-reference (i.e. in such theories there exist sets that are members of themselves if and only if they're not members of themselves).
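For concreteness, the standard derivation here is Russell's paradox: take the formula \( \phi(x) := x \notin x \) and let \( R = \{ x \mid x \notin x \} \) be the set that the UCP grants for it. Instantiating the UCP with \( u = R \) gives

\( R \in R \leftrightarrow R \notin R \),

which is a contradiction, so any set theory containing the UCP is inconsistent.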

Now, flat models give, I think, quite a natural explanation for why the UCP would seem a natural feature of set theories. This is because their blindness to the relative degree of abstraction of concepts implies that any physiological limits to the ability to formulate new concepts cannot, in such models, depend on how abstract the concepts from which one abstracts already are. Compare this, for example, with a non-flat model in which the mechanism of thinking an abstractum is modelled by a simultaneous implementation of the mechanisms of all the objects that are subsumed by this abstractum. Such a model would involve an effectively exponential increase in the work that has to be done for thinking an abstractum, and one would not be surprised if this increase put a bound on just how abstract one's abstracta can become, thus undermining the UCP.
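To put a toy number on this: if every abstractum subsumes, say, \( k \) objects on the next lower level and I am \( d \) levels of abstraction above concrete objects, then the non-flat model just described has to implement on the order of \( k^d \) mechanisms to think a single abstractum, whereas a flat model always implements exactly one, no matter how large \( d \) gets.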

Another nice feature of flat models in this respect concerns not only the ability to produce self-referential statements, but the possibility for these statements to be paradoxical: When I produced "the set of \( x\) and \( y\)" above, I tacitly assumed that we are free to construct any property. Of course, if there were no such property as "being either \( x\) or \( y\)", then this would severely constrain the force of the UCP. It is only because I have a lot of freedom in constructing properties that I can construct those self-referential sets that are also paradoxical. Now, to answer the question of which properties exist and which do not, I need a rule. This is a bit like a language: knowing the vocabulary is not enough; I also need grammatical rules to tell me which words I can combine, and in what fashion. The second nice feature of flat models, then, is that their blindness to hierarchy gives this rule a very simple formulation: there exists a mechanism for every set of existing mechanisms. Non-flat models would, in general, have to specify the rule separately for different degrees of abstraction, which seems just unnecessarily complex.
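Here is a sketch of how simple that rule is to state when it is blind to hierarchy; once again, every name in it, including the combine() placeholder, is hypothetical:

```python
# The flat construction rule: for every set of existing mechanisms there is a
# mechanism. Note that the rule never asks how abstract its inputs already are.
# combine() is a hypothetical placeholder for however new mechanisms arise.

def combine(mechs: frozenset) -> str:
    return "mech(" + ", ".join(sorted(mechs)) + ")"

existing = {"mech-x", "mech-y"}
new_mech = combine(frozenset(existing))   # the property "being either x or y"
# ...and the same rule applies again, unchanged, to sets containing new_mech:
newer_mech = combine(frozenset({new_mech, "mech-z"}))
```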

And that's basically it, that's the whole idea. To briefly sum up: If we model the physiology of abstraction using flat models, then we get particularly simple explanations for why the UCP seems to be a natural property of set theories and for our ability to formulate self-referential paradoxes (with a similar argument holding for semantic and epistemic self-reference). What to make of this? If we were interested in finding a good model for the physiology of abstraction, and if we take the argument above to be plausible, then this would lend some abductive support to flat models. But, for one, such support could only be very weak, since there are many other, non-flat models for which a similar argument could be made. For two, and more importantly, this is 2016, and introspection is not really the kind of thing we can use today for supporting physiological models, right?

Maybe the argument could help us improve our understanding of paradoxes, or conceive of solutions to them? Well, the only thing that comes to my mind here is the following: The ability to abstract must have been crucial for humankind to develop, and so a neural architecture that makes abstraction very flexible and broadly available should correspondingly have been favourable. On the other hand, the ability to produce formal systems that do not suffer from inconsistency due to the possibility of self-referential paradoxes has, I take it, been much less important (in fact I can't think of a single such formal system). So, from the physiological perspective of the brain, which itself of course does not know of any contradictions - to which the thought "this thought does not happen" is just like "will they kick me out of this place if I don't order another coffee" - asking for a solution to self-referential paradoxes seems not only nonsensical but even counterproductive! At the same time, there is no obvious argument known to me that our semantics or set theories would have to be sensitive to our models of the workings of the brain. In the end, formal languages that are both complete and consistent might be the core conceptual tool of the coming centuries. I doubt it, but anything goes...

But then, what is all of this good for at all? Well, I find it striking that my limited knowledge of how neurons work and grow and inter-relate seems to go well together with how I experience my mental abilities (and this is certainly not by construction the case à la "I use neurons to think about how neurons function and therefore am bound to arrive at models that go well together"; I can indeed think of zillions of utterly incompatible ways for neurons to function, using the same neurons). And, in the end, ruffy musing like the above can still be helpful in formulating more rigorous research questions or challenges to existing evidence, along the lines of: if your favourite model for abstraction is non-flat, then how do you explain the fact that I can run my abstraction sausage machine on any input you hand me? Let me know what you think.