This article is both a comment on Neyland’s ‘On Organizing Algorithms’ and a supplementary note to our ‘The Concept of Algorithm as an Interpretative Key of Modern Rationality’. In the first part we discuss the concepts of algorithm and recursive function from a point of view different from that of our previous article. Our cultural reference for these concepts is again Computability Theory. We give additional arguments in support of the idea that a culture informed by an algorithmic logic has promoted modern rationality both in science and in society. We again stress the importance of distinguishing between algorithms applied to quantifiable entities such as space, time and value, and those applied to ontological entities such as human actions. In the second case, the algorithm is applied outside its domain of definition and leads to social disaggregation. We finally show that our theoretical system is fully consistent with Neyland’s interesting analysis of algorithms at work.
We study some classical and modern aspects of computability theory, including priority constructions, lowness notions, hyperimmune Turing degrees, and K-triviality.
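For orientation, two of these notions admit short standard definitions (the formulations below are the usual textbook ones, not statements taken from the paper):

```latex
% Standard definitions, stated for orientation; K denotes prefix-free
% Kolmogorov complexity, A' the Turing jump of A, and A \upharpoonright n
% the first n bits of A.
\[
  A \text{ is low} \iff A' \equiv_T \emptyset',
\]
\[
  A \text{ is } K\text{-trivial} \iff (\exists c)(\forall n)\; K(A \upharpoonright n) \le K(n) + c .
\]
```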
Suppose that $p$ is a computable real and that $p \geq 2$. We show that in both the real and complex case, $\ell^p$ is computably categorical if and only if $p = 2$. The proof uses Lamperti's characterization of the isometries of Lebesgue spaces of $\sigma$-finite measure spaces.
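As background (the standard definition from computable structure theory, not quoted from the paper): a computable Banach space is computably categorical when any two of its computable presentations are computably isometric.

```latex
% Standard definition, stated for background; not quoted from the paper.
% B is computably categorical if, for any two computable presentations
% B_1 and B_2 of B, there is a computable surjective linear isometry
% between them:
\[
  \forall B_1, B_2 \;\exists\, T : B_1 \to B_2
  \quad \text{a computable linear isometry onto } B_2 .
\]
```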
Here is a brief exposition, in Persian, of the notion of decidability and of different models of computation, extracted from my M.A. thesis.
We present a novel method for extracting the dominant dynamic properties of crowded scenes from a single, static, uncalibrated camera using a codebook of tracklets. Our approach relies only on tracklets of fixed length which are generated based on sparse optical flow. A grid of points is placed on the image plane and local mean-shift clustering is employed to extract the dominant directions of tracklets in the neighborhood. A Gaussian Process (GP) is fitted to each tracklet, resulting in a codebook in which each codeword represents a local motion model. At test time, a mixture of weighted local GP experts is applied, providing multimodal density estimates for the next object location and simulation of full object trajectories. Our scenarios come from challenging crowded scenes, from which we extract dominant local motion patterns and use the model to simulate full object trajectories. In addition, we apply the learnt model to multiple object tracking. Random trajectories that match the learnt scene dynamics are sampled from the model. Minimum Description Length (MDL) is employed to pick the best trajectories in order to associate sparse detections over short time windows. We also modify a state-of-the-art multiple object tracking algorithm, leading to significant improvement. Our results compare favorably to a state-of-the-art algorithm, and we introduce a new challenging dataset for multiple object tracking.
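As a rough illustration of the codebook idea (a hedged sketch using scikit-learn; the tracklet length, kernel, and synthetic tracklets are illustrative assumptions, not the authors' implementation):

```python
# Hedged sketch: fit one GP per tracklet, mapping time step -> 2-D image
# position, so that each fitted GP acts as a codeword / local motion model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

TRACKLET_LEN = 10  # fixed tracklet length, as in the abstract

def fit_codeword(tracklet):
    """Fit a GP to one tracklet: input is time index, output is (x, y)."""
    t = np.arange(TRACKLET_LEN, dtype=float).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), alpha=1e-2)
    gp.fit(t, tracklet)  # tracklet: (TRACKLET_LEN, 2) array of positions
    return gp

# Toy codebook from two synthetic tracklets (straight-line motions + noise).
rng = np.random.default_rng(0)
tracklets = [
    np.c_[np.linspace(0, 20, TRACKLET_LEN), np.linspace(0, 5, TRACKLET_LEN)],
    np.c_[np.linspace(0, -10, TRACKLET_LEN), np.linspace(0, 15, TRACKLET_LEN)],
]
codebook = [fit_codeword(tr + rng.normal(0, 0.3, tr.shape)) for tr in tracklets]

# Predictive density for the next location under each local expert.
t_next = np.array([[TRACKLET_LEN]], dtype=float)
for i, gp in enumerate(codebook):
    mean, std = gp.predict(t_next, return_std=True)
    print(f"codeword {i}: next position ~ {mean.ravel()}, std {std.ravel()}")
```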
Motivation for this rather abstract work comes from the study of Genetics and Systems Biology. The data sets in these fields are usually small, extremely high dimensional, noisy and with complex interactions, which makes inferring causal interactions extremely difficult and unreliable. In this work, we approach non-parametric inference through Gromov’s Metric Measure Space Theory. We define the space of computable probability measures from this point of view and provide an invariant representation based on distance matrices, through which one can construct inference algorithms for any arbitrary data set, even with a mix of different kinds of data types. Using the theory of Ergodic Automorphisms over the space of metrics, we give an estimate for the metric. Finally, through concentration of measure phenomena, we are able to prove dimension independent convergence rates for our invariant representation. As a corollary we show that the approach in Kernel Embedding of Distributions in RKHS can be seen as a special case.
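The distance-matrix representation can be illustrated with a minimal sketch (illustrative code, not the paper's algorithm; the permutation check shows the sense in which the representation forgets point labels):

```python
# Hedged sketch: a sample is encoded, up to relabeling of its points,
# by its pairwise distance matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_matrix_representation(sample, metric="euclidean"):
    """Return the pairwise distance matrix of a sample (n points, d dims)."""
    return squareform(pdist(sample, metric=metric))

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 100))  # small n, very high d, as in genetics data
D = distance_matrix_representation(X)

# Relabeling the sample's points only permutes rows and columns of D:
perm = rng.permutation(5)
D_perm = distance_matrix_representation(X[perm])
assert np.allclose(D_perm, D[np.ix_(perm, perm)])
```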
Many scientific questions are considered solved to the best possible degree when we have a method for computing a solution. This is especially true in mathematics and those areas of science in which phenomena can be described mathematically: one only has to think of the methods of symbolic algebra for solving equations, or of laws of physics which allow one to calculate unknown quantities from known measurements. The crowning achievement of mathematics would thus be a systematic way to compute the solution to any mathematical problem. The hope that this was possible was perhaps first articulated by the 17th-century mathematician-philosopher G. W. Leibniz. Advances in the foundations of mathematics in the early 20th century made it possible, in the 1920s, to formulate precisely the question of whether there is such a systematic way to find a solution to every mathematical problem. This became known as the decision problem (Entscheidungsproblem), and it was considered a major open problem in the 1920s and 1930s. Alan Turing solved it in his first, groundbreaking paper "On computable numbers" (1936). In order to show that there cannot be a systematic computational procedure that solves every mathematical question, Turing had to provide a convincing analysis of what a computational procedure is. His abstract, mathematical model of computability is the Turing machine. He showed that no Turing machine, and hence no computational procedure at all, could solve the Entscheidungsproblem.
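Turing's model can be made concrete with a minimal simulator (a toy sketch for illustration; the example machine is ours, not Turing's):

```python
# A minimal sketch of Turing's model: a deterministic Turing machine
# simulator. The machine below appends a 1 to a unary string, illustrating
# the state/tape/transition formalism described above.
def run_turing_machine(transitions, tape, state="q0", halt="qH", steps=1000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move in {-1, +1})."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank "_"
    head = 0
    for _ in range(steps):
        if state == halt:
            break
        state, tape[head], move = transitions[(state, tape.get(head, "_"))]
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells).strip("_")

# Toy machine: move right past the 1s, write one more 1, then halt.
succ = {
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("qH", "1", +1),
}
print(run_turing_machine(succ, "111"))  # -> ('qH', '1111')
```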
In most accounts of the realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically.

In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
In this investigation we sharply differentiate between human reasoning and mechanistic reasoning by showing that Tarski's classic definitions admit finitary, evidence-based definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two hitherto unsuspected and essentially different ways:

(1) in terms of classical algorithmic verifiability; and

(2) in terms of finitary algorithmic computability.

We then show that the two definitions correspond to two distinctly different assignments of satisfaction and truth to the compound formulas of PA over N---PA(N, SV) and PA(N, SC).

We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both PA(N, SV) and PA(N, SC); and:

(a) that if we assume the satisfaction and truth of the compound formulas of PA are always non-finitarily decidable under PA(N, SV), then this assignment corresponds to the classical non-finitary standard interpretation PA(N, S) of PA over the domain N (accepted as consistent by Gentzen's argument); and

(b) that the satisfaction and truth of the compound formulas of PA are always finitarily decidable under the assignment PA(N, SC), from which we may finitarily conclude that PA is consistent.

We further conclude that the appropriate inference to be drawn from Goedel's 1931 paper on undecidable arithmetical propositions is that we can define PA formulas which---under interpretation---are algorithmically verifiable as always true over N, but not algorithmically computable as always true over N.

We conclude from this that Lucas' Goedelian argument is validated if the assignment PA(N, SV) can be treated as circumscribing the ambit of human reasoning about 'true' arithmetical propositions, and the assignment PA(N, SC) as circumscribing the ambit of mechanistic reasoning about 'true' arithmetical propositions.
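For orientation, the distinction at the heart of this abstract can be stated roughly as follows (our paraphrase of the paper's two notions, not a quotation):

```latex
% Rough paraphrase of the two notions, stated for orientation.
% F(x) is algorithmically verifiable over N if each finite initial
% segment of its instances can be decided by some algorithm, possibly
% a different one for each segment:
\[
  (\forall n)(\exists \text{ algorithm } A_n)\;
  A_n \text{ decides the truth of } F(0), F(1), \dots, F(n).
\]
% F(x) is algorithmically computable over N if one algorithm decides
% all instances uniformly:
\[
  (\exists \text{ algorithm } A)(\forall n)\;
  A \text{ decides the truth of } F(n).
\]
% Computability implies verifiability; the abstract's claim is that the
% converse fails for some PA formulas.
```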
This paper shows that a simple model of Ibn Sina's temporal sentences has some noteworthy features from the viewpoint of Recursion Theory. It also contains a brief critique of Nicholas Rescher's account of Ibn Sina's sentences.
Abstract: The recent literature on dialogical logic studies the case of tonk and the anti-realist concept of harmony. Since the publication of those texts, however, dialogical theory has been linked with Constructive Type Theory (CTT), which has its own means of responding to tonk. The main aim of the present article is to show that, from the dialogical perspective, the harmony of the CTT rules is a consequence of a more fundamental level of meaning at which the rules are formulated independently of the player who applies them.
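For context (a standard fact, not drawn from the article): Prior's connective tonk pairs the introduction rule of disjunction with the elimination rule of conjunction, so that any B follows from any A:

```latex
% Prior's rules for tonk (standard formulation, given for context).
\[
  \frac{A}{A \ \mathrm{tonk}\ B} \ (\mathrm{tonk}\text{-I})
  \qquad
  \frac{A \ \mathrm{tonk}\ B}{B} \ (\mathrm{tonk}\text{-E})
\]
% Chaining tonk-I and tonk-E yields A \vdash B for arbitrary A and B,
% which is why tonk is the canonical test case for harmony between rules.
```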
This book brings together young researchers from a variety of fields within mathematics, philosophy and logic. It discusses questions that arise in their work, as well as themes and reactions that appear to be similar in different contexts. The book shows that a fairly intensive activity in the philosophy of mathematics is underway, due on the one hand to disillusionment with traditional answers, and on the other to exciting new features of present day mathematics. The book explains how the problem of applicability once again plays a central role in the development of mathematics. It examines how new languages, different from the logical ones (mostly figural), are recognized as valid and experimented with, and how unifying concepts (structure, category, set) are in competition for those who look at this form of unification. It further shows that traditional philosophies, such as constructivism, while still lively, are no longer only philosophies, but guidelines for research. Finally, the book demonstrates that the search for and validation of new axioms is analyzed with a blend of mathematical, historical, philosophical, and psychological considerations.
This paper deals with the question: what are the key requirements for a physical system to perform digital computation? Oftentimes, cognitive scientists are quick to employ the notion of computation simpliciter when asserting, basically, that cognitive activities are computational. They employ this notion as if there were a consensus on just what it takes for a physical system to compute. Some cognitive scientists, in referring to digital computation, simply adhere to Turing computability. But if cognition is indeed computational, then it is concrete computation that is required for explaining cognition as an embodied phenomenon. Three accounts of computation are examined here: (1) formal symbol manipulation, (2) physical symbol systems, and (3) the mechanistic account. I argue that the differing requirements implied by these accounts justify the demand that one commit to a particular account when employing the notion of digital computation in regard to physical systems, rather than using these accounts interchangeably.
Turing’s contention that all mental functions can be reduced to computable operations seems to be questioned precisely by the application of computation to text processing. Criticisms have been addressed to the test proposed by Turing for an empirical verification of his conjecture, both from an objective and a subjective point of view, by Penrose and Searle respectively. Automated text processing allows us to transpose Searle’s objections into a linguistic context and to show that they raise the same questions as those brought up by Penrose’s objections, namely the problems of computability and indeterminacy. These very questions were among Turing’s last concerns, and he seemed to envisage a coupling of indeterminate descriptions of physical phenomena with scientifically computable predictions of their objective states. A suitable discussion of these problems requires, however, as S. Barry Cooper suggests, a full recognition of the new scientific paradigm emerging from the developments in physics in the 20th century. In this regard, Merleau-Ponty’s epistemological reflections, as well as, on a more formal level, the foundational implications of the new calculus of indications introduced by the English mathematician George Spencer Brown, seem most relevant indeed.
In mathematical literature, it is quite common to make reference to an informal notion of naturalness: axioms or definitions may be described as `natural', and part of a proof may deserve the same label (e.g.: ``in a natural way...''). Our aim is to provide a philosophical account of these occurrences. The paper is divided into two parts. In the first part, some statistical evidence is considered in order to show that the use of the word `natural' within mathematical discourse has increased considerably in recent decades. We then attempt to develop a philosophical framework to accommodate this evidence. In doing so, we outline a general method for dealing with vague notions of this kind, such as naturalness, as they emerge in mathematical practice.
In the second part, we mainly tackle the following question: is naturalness a static or a dynamic notion? Through two case studies, taken from set theory and computability theory, we answer that the notion of naturalness, as it is used in mathematics, is a dynamic one, in which normativity plays a fundamental role.
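The kind of corpus statistics described in the first part could be gathered along the following lines (an entirely hypothetical sketch; the corpus layout and file names are invented, not the authors' data or scripts):

```python
# Hypothetical sketch: rate of "natural(ly)" per 10,000 words in a corpus
# of mathematical texts grouped by decade. File layout is invented.
import re
from collections import Counter
from pathlib import Path

def natural_rate(text):
    """Occurrences of 'natural'/'naturally' per 10,000 words."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for w in words if w in ("natural", "naturally"))
    return 10_000 * hits / max(len(words), 1)

# Assumed layout: corpus/1970s/*.txt, corpus/1980s/*.txt, ...
rate_sum, counts = Counter(), Counter()
for path in Path("corpus").glob("*/*.txt"):
    decade = path.parent.name
    rate_sum[decade] += natural_rate(path.read_text(encoding="utf-8"))
    counts[decade] += 1

for decade in sorted(rate_sum):
    print(decade, round(rate_sum[decade] / counts[decade], 2))
```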
We classify the asymptotic densities of the $\Delta^0_2$ sets according to their level in the Ershov hierarchy. In particular, it is shown that for $n \geq 2$, a real $r \in [0,1]$ is the density of an $n$-c.e.\ set if and only if it is a difference of left-$\Pi_2^0$ reals. Further, we show that the densities of the $\omega$-c.e.\ sets coincide with the densities of the $\Delta^0_2$ sets, and there are $\omega$-c.e.\ sets whose density is not the density of an $n$-c.e.\ set for any $n \in \omega$.
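For orientation, the density in question is the standard asymptotic density (stated here from the usual definition, not quoted from the paper):

```latex
% Standard definition of asymptotic density, given for orientation.
\[
  \rho(A) \;=\; \lim_{n \to \infty} \frac{|A \cap \{0, 1, \dots, n-1\}|}{n},
\]
% when the limit exists; presumably a left-$\Pi^0_2$ real is, by analogy
% with left-c.e. reals, one whose left cut
% $\{q \in \mathbb{Q} : q < r\}$ is a $\Pi^0_2$ set.
```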
We show that, given a non-degenerate, finitely connected domain $D$, its boundary, and the number of its boundary components, it is possible to compute a conformal mapping of $D$ onto a circular domain \emph{without} prior knowledge of the circular domain. We do so by computing a suitable bound on the error in the Koebe construction (but, again, without knowing the circular domain in advance).
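The classical result in the background, stated here as a standard fact rather than a claim of the paper, is Koebe's generalization of the Riemann mapping theorem to finitely connected domains:

```latex
% Koebe's uniformization theorem for finitely connected domains
% (standard statement, given for background): every non-degenerate,
% finitely connected domain D in the complex plane is conformally
% equivalent to a circular domain, i.e. a domain all of whose boundary
% components are circles.
\[
  \exists\, f : D \to \Omega \ \text{conformal bijection}, \qquad
  \Omega \subseteq \mathbb{C} \ \text{a circular domain}.
\]
```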