I work in the philosophy of logic and mathematics, and much of my work draws on ideas and methods from proof theory, a contemporary branch of mathematical logic. My work concerns the traditional question of the non-empirical justification of mathematical and logical knowledge, as well as the nature and content of the hypothesis that the meanings of terms are generated by their use.  These two topics are connected: one explanation for the special status of logical and mathematical knowledge is analyticity, the claim that such knowledge follows from the meanings of the words used; and the meaning of logical and mathematical concepts has often been thought to reside in aspects of use.  Proof theory is a natural but underutilised tool for addressing these issues, since it offers a sophisticated way of studying how logical and mathematical concepts are used in the construction of proofs and arguments.  I have four current research projects on these topics, which together form the four chapters of my dissertation.

Proof-Theoretic Validity and Inquisitive Logic

My first project investigates which principles are analytic under various notions of proof-theoretic validity. Proof-theoretic semantics promises to formalise meaning-as-use by identifying the meanings of words with formal proof rules governing their use.  One traditional proposal is that the meanings of the logical connectives are defined by their introduction rules. Proof-theoretic validity, first proposed by Prawitz (1973), is a formal explication of what it means to give the introduction rules this privileged position.  In its original formulation, Prawitz conjectured that the logic justified by proof-theoretic validity would be intuitionistic logic, the logic that formalises the ideas of constructive mathematics.  Piecha and Schröder-Heister (2018) recently disproved this conjecture.  This raises a pressing question: which logics are justified by proof-theoretic validity?  I have shown that there is a notion of proof-theoretic validity which provides a semantics for inquisitive logic, a formal system designed to unify the treatment of questions and assertions.  From this it follows that the strongest logic that could be captured by the standard notions of proof-theoretic validity is inquisitive logic.
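For illustration, the introduction rules that proof-theoretic validity privileges are the familiar rules of natural deduction; for conjunction and disjunction they read:

```latex
% Standard natural-deduction introduction rules for conjunction and disjunction
\[
\frac{A \qquad B}{A \wedge B}\,{\wedge}\mathrm{I}
\qquad\qquad
\frac{A}{A \vee B}\,{\vee}\mathrm{I}_1
\qquad\qquad
\frac{B}{A \vee B}\,{\vee}\mathrm{I}_2
\]
```

On the proposal at issue, these rules exhaust the meanings of $\wedge$ and $\vee$: any other rule for the connective is justified only insofar as it can be reduced to canonical applications of the introduction rules.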

Frege’s Theorem and Potential Infinities

My second project asks whether the potential infinite can answer an important objection to Neo-Fregeanism: that it makes the existence of an actual infinity of objects analytic. The great promise of Neo-Fregeanism is that the truths of arithmetic may be reduced to definitions, the rules of logic, and Hume’s Principle via an interpretation result called Frege’s Theorem (Frege 1893; Hale and Wright 2001; Heck 2011).  Hume’s Principle tells us that two sets contain the same number of objects if and only if their members can be lined up one-to-one, but it is true only if there are actually infinitely many objects.  Neo-Fregeans claim that our concept of number makes Hume’s Principle analytic. It would follow that it is an analytic truth that there are infinitely many objects.  This is a major objection to Neo-Fregeanism.  Recently, interest has grown in potential infinities in the foundations of mathematics (e.g. Linnebo 2018). The assumption of a potential infinity of objects is far more circumspect than the assumption of an actual infinity.  If a version of Frege’s Theorem that requires only a potential infinity is available, this offers a response to the worry about the status of Hume’s Principle. I show that first-order arithmetic is reducible to Hume’s Principle in a modal theory representing potential infinities, but the stronger arithmetic theory that Frege’s Theorem treats is not. In other words, the Neo-Fregean can avoid the commitment to an actual infinity, but on that basis can claim only rather elementary arithmetic to be analytic.
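In standard second-order notation, Hume’s Principle is the biconditional:

```latex
% Hume's Principle: the number of Fs equals the number of Gs
% iff there is a one-to-one correspondence between the Fs and the Gs
\[
\#F = \#G \;\leftrightarrow\; F \approx G,
\]
% where F \approx G abbreviates the second-order claim that some relation R
% is a bijection between the objects falling under F and those falling under G.
```

The need for an actual infinity is then visible directly: each natural number is defined as the number of a concept, and the sequence $\#[x \neq x], \#[x = \#[x \neq x]], \ldots$ requires ever more objects to exist.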

Property Theory with Currying Types and Formal Semantics

My third project documents difficulties with using Property Theory with Currying Types (PTCT) to explicate meaning-as-use.  PTCT, as proposed by Fox and Lappin (2008), takes the untyped lambda calculus and adds types.  The untyped lambda calculus is intended to formalise a notion of generalised computation that does not rule out functions being applied to themselves: one can think of it as a “naïve” notion of procedures.  I argue that PTCT’s use as a formal semantics is vitiated by its Turing completeness, since PTCT can represent every computable function.  By Rice’s theorem there is then no computable procedure for deciding when two terms of PTCT are identical, and so no uniform way to decide when two meanings are the same.  A further consequence of Rice’s theorem is that we cannot, in general, have a decidable procedure for determining whether a term is a member of a type.  Since types give us our grammatical categories, this means we have no general way to decide whether a term is e.g. a noun.  Syntactic matters, such as grammatical category, are thus undecidable in this formalism, which is an undesirable result.
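A minimal sketch of the phenomenon, using Python lambdas as a stand-in for untyped lambda terms (the encoding and names are mine, not Fox and Lappin’s):

```python
# The untyped lambda calculus, sketched in Python. Self-application is
# permitted, which is what makes the calculus Turing complete.

# Church numerals: the number n is encoded as "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)

# 'add(one)(two)' and 'succ(two)' are syntactically distinct terms that
# compute the same numeral; both decode to 3.
assert to_int(add(one)(two)) == to_int(succ(two)) == 3

# Self-application is expressible: omega = (λx. x x)(λx. x x) loops forever.
# Wrapped in a thunk so that it is not actually evaluated here.
omega = lambda: (lambda x: x(x))(lambda x: x(x))

# By Rice's theorem, no algorithm can decide in general whether two such
# terms compute the same function -- the equality checked above by brute
# evaluation has no uniform, always-terminating counterpart.
```

The assertion above succeeds only because we can evaluate these particular terms to completion; Rice’s theorem rules out any such test that works for arbitrary terms.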

Genus as a Complexity Measure for Proofs

A fourth project, in collaboration with Andrew Arana, concerns a proposed method for evaluating how complex a proof is to discover. Traditionally, logicians have used proof length, either the number of symbols or the number of steps, to measure how difficult a proof is to verify. However, Detlefsen (1990) proposed that there is another type of complexity: how complex it is to discover a proof (“discovermental complexity”).  Intuitions about complexity of discovery are not well served by measures of length, because length does not distinguish between steps that are entirely obvious and those that require creativity and thought.  Carbone (2009) claims that the genus of a proof graph is an alternative, geometric measure of discovermental complexity which overcomes some of these difficulties.  The genus of a graph measures how many holes a surface must have for the graph to be drawn on it without any edges crossing.  Arana and I argue that the genus of a proof graph is instead a measure of the structural components of a proof.  It therefore misses important features of a proof, such as its connectives and quantifiers, that contribute to its discovermental complexity.
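To give a concrete feel for genus as a graph invariant, here is a small sketch of the standard lower bound derived from Euler’s formula $V - E + F = 2 - 2g$ (the function name is mine; this bounds genus for simple connected graphs generally, not proof graphs specifically):

```python
from math import ceil

def genus_lower_bound(num_vertices: int, num_edges: int) -> int:
    """Lower bound on the genus of a simple connected graph.

    From Euler's formula V - E + F = 2 - 2g on a genus-g surface,
    together with the fact that every face is bounded by at least
    three edges (so 3F <= 2E), one gets g >= (E - 3V + 6) / 6.
    """
    return max(0, ceil((num_edges - 3 * num_vertices + 6) / 6))

# K5 (5 vertices, 10 edges) is non-planar: the bound gives genus >= 1,
# and K5 can in fact be drawn without crossings on a torus (genus 1).
assert genus_lower_bound(5, 10) == 1

# Any tree (E = V - 1) is planar: the bound gives 0.
assert genus_lower_bound(4, 3) == 0
```

The bound illustrates the point of our critique: genus responds only to the incidence structure of vertices and edges, so two proof graphs with the same shape receive the same genus regardless of which connectives and quantifiers label their nodes.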