We present Holographic Declarative Memory (HDM), a new memory module for ACT-R and an alternative to ACT-R's Declarative Memory (DM). ACT-R is a widely used cognitive architecture that models many different aspects of cognition, but it is limited by its use of symbols to represent concepts or stimuli. HDM replaces the symbols with holographic vectors. Holographic vectors retain the expressive power of symbols but have a similarity metric, allowing for shades of meaning, fault tolerance, and lossy compression. The purpose of HDM is to enhance ACT-R's ability to learn associations, learn over the long term, and store large quantities of data. To demonstrate HDM, we fit the performance of an ACT-R model that uses HDM to a benchmark memory task, the fan effect. We analyze how HDM produces the fan effect and how HDM relates to the standard DM model of the fan effect.
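For readers unfamiliar with holographic vectors, the following is a minimal sketch of the kind of vector-symbolic encoding HDM builds on (Plate-style holographic reduced representations): binding by circular convolution, superposition by addition, and retrieval by cosine similarity. The dimensionality, the vocabulary, and the cleanup step are illustrative assumptions, not HDM's actual implementation.

```python
# A minimal sketch of holographic (vector-symbolic) memory in the spirit of HDM.
# Dimensionality, vocabulary, and the cleanup-by-cosine step are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 1024  # vector dimensionality (assumption)

def random_vec(n=N):
    """Random holographic vector with expected unit length."""
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

def bind(a, b):
    """Circular convolution: combines two vectors into one of the same size."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def involution(a):
    """Approximate inverse used for unbinding."""
    return np.concatenate(([a[0]], a[:0:-1]))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Encode "the hippie is in the park" as a single superimposed trace and probe it.
vocab = {w: random_vec() for w in ["hippie", "park", "agent", "location"]}
trace = bind(vocab["agent"], vocab["hippie"]) + bind(vocab["location"], vocab["park"])

probe = bind(involution(vocab["location"]), trace)        # noisy version of "park"
best = max(vocab, key=lambda w: cosine(probe, vocab[w]))  # cleanup by similarity
print(best)  # expected: "park"
```

In this sketch, superimposing more bound pairs into the trace makes each unbinding noisier and lowers the cosine of the retrieved item, a toy analogue of the interference-style behaviour behind fan-like effects.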
This book of Proceedings contains the accepted papers of the second International Workshop on Artificial Intelligence and Cognition (AIC 2014), held in Turin (Italy) on November 26th and 27th, 2014. The AIC workshop series was launched in 2013 with the aim of fostering collaboration between researchers (from computer science, philosophy, engineering, psychology, neuroscience, etc.) working at the intersection of the Cognitive Science and Artificial Intelligence (AI) communities, by providing an international forum for discussing and communicating research results.
Numerous cognitive systems were analyzed for the paper "Patterns for Cognitive Systems".

Follow the link:

https://www.facebook.com/media/set/?set=a.217364258285578.55073.203359906352680&type=3
Architecture Diagram: Piagetian Autonomous Modeler - Paradigm Two
Error backpropagation is an extremely effective algorithm for assigning credit in artificial neural networks. However, weight updates under Backprop depend on lengthy recursive computations and require separate output and error messages -- features not shared by biological neurons and perhaps unnecessary. In this paper, we revisit Backprop and the credit assignment problem.
We first decompose Backprop into a collection of interacting learning algorithms; provide regret bounds on the performance of these sub-algorithms; and factorize Backprop's error signals. Using these results, we derive a new credit assignment algorithm for nonparametric regression, Kickback, that is significantly simpler than Backprop. Finally, we provide a sufficient condition for Kickback to follow error gradients, and show that Kickback matches Backprop's performance on real-world regression benchmarks.
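As a point of reference for the contrast drawn above, here is a minimal numpy sketch of recursive credit assignment under plain Backprop on a toy two-layer regression network: the forward pass sends output messages layer by layer, and the backward pass propagates a separate, recursively computed error message. The layer sizes, squared-error loss, and learning rate are illustrative assumptions; Kickback's own update rule is not reproduced here.

```python
# A toy illustration of Backprop's recursive error messages (not Kickback).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                     # toy regression inputs
y = np.sin(X.sum(axis=1, keepdims=True))         # toy targets

W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.05

for step in range(200):
    # Forward pass: "output" messages flow layer by layer.
    h = np.tanh(X @ W1)
    y_hat = h @ W2

    # Backward pass: a distinct "error" message is propagated recursively;
    # each layer's delta depends on the deltas of the layers above it.
    delta_out = (y_hat - y) / len(X)             # gradient of mean squared error
    delta_hid = (delta_out @ W2.T) * (1 - h**2)  # recursive step through tanh

    W2 -= lr * (h.T @ delta_out)
    W1 -= lr * (X.T @ delta_hid)

print("final MSE:", float(np.mean((y_hat - y) ** 2)))
```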
Consistent with the well-established tradition of cognitive pragmatics, this work hinges on the idea that human communication is inferential in nature. Starting from the empirically based insights of Relevance Theory, I focus on the role of pragmatic inference in real language use, specifically in conversation. To address this question, I pursue a twofold goal. On the one hand, I intend to characterize the nature of conversational exchanges by identifying the main features that underlie their unfolding. On the other hand, my goal is to provide some indications about the cognitive underpinnings of these conversational properties.
The Relevance-theoretic account states that language in context can be described as a matter of expressing and recognizing intentions, and that this process is driven by automatically processed expectations of relevance. In accordance with the claim that the core of conversation lies in conveying and grasping each other's intentions, I take into account the strategies employed by interlocutors and the cognitive mechanisms involved in this kind of process. Although Relevance theorists account for some important features of language in use, my hypothesis is that they falter in explaining some non-marginal aspects of real-time conversation because of two problematic issues: a) the propensity to emphasize the comprehension process while omitting to account for the production process; b) the idea that it all comes down to processing relevance by means of a modular, automatic device. Against these claims, I argue that a) conversation is a joint activity performed in coordination and requires complex abilities on the side of the hearer as well as on the side of the speaker; and b) automatic mechanisms cannot underlie some essential aspects inherent in conversation, which are better explained by the role of conscious processes. Although the relation between language and consciousness has traditionally been neglected, the idea of putting consciousness back into the reflection on language in context has important theoretical and empirical implications.
In cognitive science, automatic computational processes are widely considered to be the mechanisms underlying the functioning of communication. According to this interpretative model, formal computations can account for linguistic comprehension and production processes from a mechanistic perspective. In this sense, from a psychological point of view, communication hinges on what individuals actually say by expressing a sentence, irrespective of extra-linguistic components. Although automatic mechanisms driven by syntactic procedures may explain some key factors of language, these devices seem insufficient to account for other important elements at the level of discourse. Starting from the idea that to communicate means to produce and comprehend discourses, in this paper we argue against the computational model of language, even in its current revisited version. In particular, we claim that the pragmatic inferences involved in discourse interpretation cannot be explained by a strictly modular architecture. Taking as a case study the elaboration of a specific inference – namely, the scalar implicature – we propose the necessity of an alternative model that clarifies the cognitive architecture underlying discourse elaboration.
Visual object identification is a key cognitive process for intelligent virtual agents that evolve in virtual environments. This process allows the elaboration of internal representations of the environment for cognitive manipulation and the subsequent production of intelligent responses. Many existing architectures use memory modules to identify the visual elements of an environment as if they were invariant, which differs from how real humans process visual scenes. This document presents a description of the visual object identification task based on the current state of the art in neuroscience. This work is part of a proposal for a cognitive architecture that lets us endow virtual agents with more human-like behaviors. Finally, we realized an implementation that shows the afferent/efferent flow and processing of information in our proposed architecture.
A cognitive architecture is presented for modelling some properties of sensorimotor learning in infants, namely the ability to accumulate adaptations and skills over multiple tasks in a manner which allows recombination and re-use of task specific competences. The control architecture invented consists of a population of compartments (units of neuroevolution), each containing networks capable of controlling a robot with many degrees of freedom. The nodes of the network undergo internal mutations, and the networks undergo stochastic structural modifications, constrained by a mutational and recombinational grammar. The nodes used consist of dynamical systems such as dynamic movement primitives, continuous time recurrent neural networks and high-level supervised and unsupervised learning algorithms. Edges in the network represent the passing of information from a sending node to a receiving node. The networks in a compartment operate in parallel and encode a space of possible subsumption-like architectures that are used to successfully evolve a variety of behaviours for a NAO H25 humanoid robot.
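The following is a heavily simplified, illustrative sketch of the outer neuroevolution loop such an architecture relies on: a population of controllers whose parameters are mutated and selected on task fitness. The real-valued parameter encoding, Gaussian mutation, and toy fitness function are assumptions made for illustration; the grammar-constrained structural mutations, compartments of dynamical-systems nodes, and NAO-specific controllers described above are not modelled.

```python
# A toy mutate-evaluate-select neuroevolution loop (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, GENS = 20, 12, 50          # population size, controller params, generations

def fitness(params):
    """Toy stand-in for evaluating a controller on the robot task."""
    return -np.sum((params - 0.5) ** 2)

population = [rng.normal(size=DIM) for _ in range(POP)]

for gen in range(GENS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: POP // 2]                      # keep the fitter half
    children = [p + rng.normal(scale=0.1, size=DIM)   # parametric mutation only
                for p in parents]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```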
Memory is a crucial cognitive ability that supports the mere existence of humans. When they interact with other people or their environment in daily life, they can remember what happened, when it happened, and also recall this kind of information at a later point in time. This recalling helps them to continue interactions with an already encountered environment or remember details of a past experience. To evoke similar experiences between a human and a robot, it is evident that these robots should possess artificial memory systems that can mimic human-like memory characteristics. Inspired by the human cognitive ability of memory, this thesis aims to design and develop an artificial memory system that possesses certain properties of human memory. The memory system presented here is based on the theory of information processing in human memory. What remains central in these artificial memory systems is the representation of the vast amount of information exchanged between the users and the robot. We take advantage of a Semantic Web language, namely the Resource Description Framework (RDF), to represent this information. Our memory system translates this information into basic memory units, which are defined by RDF triples. The triples from a single interaction session between the user and the robot are appended to an RDF graph and form a single episode. These episodes are then stored in a triplestore for long-term persistence. We experiment with recalling the stored information based on several use cases. This memory structure provides homogeneity for the storage and retrieval of information across the entire memory system. Several experiments were carried out on moderate hardware to evaluate the performance of this memory system, and it showed fair runtime efficiency. This memory system also has the ability to scale to larger scenarios with longer periods of interaction, which aligns with the purpose of artificial companions: to turn interactions into relationships (Benyon & Mival 2000).
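As an illustration of the episode representation described above, here is a minimal sketch using the rdflib library: triples from one interaction session are appended to a graph that forms a single episode and can later be recalled. The namespace, property names, user identifier, and the use of an in-memory graph rather than a persistent triplestore are assumptions made for the example.

```python
# A minimal sketch of an RDF episode memory (hypothetical vocabulary and data).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

MEM = Namespace("http://example.org/memory#")   # hypothetical vocabulary
g = Graph()
g.bind("mem", MEM)

episode = MEM["episode-001"]
g.add((episode, RDF.type, MEM.Episode))
g.add((episode, MEM.withUser, MEM["user-anna"]))   # hypothetical user
g.add((episode, MEM.startedAt, Literal("2015-06-01T10:15:00", datatype=XSD.dateTime)))

# Basic memory units from the session, expressed as RDF triples on the episode.
g.add((episode, MEM.userSaid, Literal("I visited my sister yesterday")))
g.add((episode, MEM.topic, MEM["family"]))

# Recall: retrieve everything remembered about episodes involving this user.
for ep in g.subjects(MEM.withUser, MEM["user-anna"]):
    for p, o in g.predicate_objects(ep):
        print(ep, p, o)
```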
This paper critically examines Apperly and Butterfill’s parallel ‘two systems’ theory of mindreading and argues instead for a cooperative multi-systems architecture. The minimal mindreading system (system 1) described by Butterfill and Apperly is unable to explain the flexibility of infant belief representation or fast and efficient mindreading in adults, and there are strong reasons for thinking that infant belief representation depends on executive cognition and general semantic memory. We propose that schemas, causal representation and mental models help to explain the representational flexibility of infant mindreading and give an alternative interpretation of evidence that has been taken to show automatic, fast and efficient belief representation in adults.
In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so-called proxytype theory of concepts and combines it with the heterogeneity approach to concept representation, according to which concepts do not constitute a unitary phenomenon. The contribution of the paper is twofold: on the one hand, it aims at providing a novel theoretical hypothesis for the debate about concepts in the cognitive sciences by providing unexplored connections between different theories; on the other hand, it aims at sketching a computational characterization of the problem of concept representation in cognitively inspired artificial systems and in cognitive architectures.
The standard objection against naturalised epistemology is that it cannot account for normativity in epistemology (Putnam 1982; Kim 1988). There are different ways to deal with it. One of the obvious ways is to say that the objection misses the point: it is not a bug, it is a feature, as there is nothing interesting in normative principles in epistemology. Normative epistemology deals with norms, but they are of no use in practice. They are far too general to be guiding principles of research, up to the point that they even seem vacuous (see Knowles 2003). In this chapter, my strategy will be different and more in the spirit of the founding father of naturalized epistemology, Quine, though not faithful to the letter. I focus on the methodological prescriptions supplied by cognitive science in the re-engineering of cognitive architectures. Engineering norms based on mechanism design have not been treated as seriously as they should be in epistemology, and that is why I will develop a sketch of a framework for researching them, starting from analysing cognitive science as engineering in section 3, then showing functional normativity in section 4, and eventually presenting functional engineering models of cognitive mechanisms as normative in section 5. Yet before showing the kind of engineering normativity specific to these prescriptions, it is worthwhile to briefly review the role of normative methodology and the levels of norm complexity in it, and to show how it follows Quine's steps.