This paper takes part in the methodological debate concerning the nature and the justification of hypotheses about computational systems in software engineering by providing an epistemological analysis of Software Testing, the practice of observing programs’ executions to examine whether they fulfil software requirements. Property specifications articulating such requirements are shown to involve falsifiable hypotheses about software systems that are evaluated by means of tests which are likely to falsify those hypotheses. Software Reliability metrics, used to measure the growth of the probability that given failures will occur at specified times as new executions are observed, are shown to involve a Bayesian confirmation of falsifiable hypotheses about programs. Coverage criteria, used to select those input values with which the system under test is to be launched, are understood as theory-laden principles guiding software tests, here compared to scientific experiments. Redundant computations, fault seeding models, and formal methods used in software engineering to evaluate test results are taken to be instantiations of some of the epistemological strategies used in scientific experiments to distinguish between valid and non-valid experimental outcomes. The final part of the paper explores the problem, advanced in the context of the philosophy of technology, of defining the epistemological status of software engineering by conceiving of it as a scientifically attested technology.
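A minimal illustrative sketch (not from the paper) of the kind of Bayesian confirmation the abstract alludes to: a Beta-Binomial update of the hypothesised per-execution failure probability as new test executions are observed. The prior, the observation counts, and the function name are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: Bayesian updating of the belief that a program
# fails on a randomly selected input, as test executions are observed.
# Prior, counts, and names are assumptions, not taken from the paper.

def update_failure_belief(alpha, beta, failures, successes):
    """Return the Beta posterior parameters after observing test outcomes."""
    return alpha + failures, beta + successes

# Weakly informative Beta(1, 1) prior on the per-run failure probability.
alpha, beta = 1.0, 1.0

# Suppose 200 randomly selected test executions are observed, none of which fails.
alpha, beta = update_failure_belief(alpha, beta, failures=0, successes=200)

# Posterior mean of the failure probability; reliability grows as it shrinks.
posterior_mean = alpha / (alpha + beta)
print(f"estimated per-run failure probability: {posterior_mean:.4f}")
```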
This conversation between curator Hans-Ulrich Obrist, performance artist Marina Abramović and mathematician Gregory Chaitin is on pp. 29-44 of a collection of Obrist interviews published by Edizioni Charta in Milan, Italy, in 2003.
Model checking, a prominent formal method used to predict and explain the behaviour of software and hardware systems, is examined on the basis of reflective work in the philosophy of science concerning the ontology of scientific theories and model-based reasoning. The empirical theories of computational systems that model checking techniques enable one to build are identified, in the light of the semantic conception of scientific theories, with families of models that are interconnected by simulation relations. And the mappings between these scientific theories and computational systems in their scope are analyzed in terms of suitable specializations of the notions of model of experiment and model of data. Furthermore, the extensively mechanized character of model-based reasoning in model checking is highlighted by a comparison with proof procedures adopted by other formal methods in computer science. Finally, potential epistemic benefits flowing from the application of model checking in other areas of scientific inquiry are emphasized in the context of computer simulation studies of biological information processing.
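A minimal illustrative sketch (not from the paper) of what explicit-state model checking of a simple safety property looks like: a breadth-first search over a toy transition system that either finds no reachable state labelled "error" or returns a counterexample trace. The states, transitions, labels, and function name are invented for the example.

```python
# Illustrative sketch only: checking the safety property "no reachable state
# is labelled 'error'" over a hand-written transition system. All states,
# transitions, and labels below are made-up assumptions for the example.

from collections import deque

transitions = {
    "init": ["ready"],
    "ready": ["busy", "ready"],
    "busy": ["ready", "error"],
    "error": ["error"],
}
labels = {"error": {"error"}}  # atomic propositions holding in each state

def violates_safety(initial, bad_prop):
    """Breadth-first search for a reachable state satisfying bad_prop.

    Returns a counterexample path (as a model checker would report) or None
    if the safety property holds on all reachable states.
    """
    queue = deque([[initial]])
    visited = {initial}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if bad_prop in labels.get(state, set()):
            return path
        for succ in transitions.get(state, []):
            if succ not in visited:
                visited.add(succ)
                queue.append(path + [succ])
    return None

print(violates_safety("init", "error"))  # e.g. ['init', 'ready', 'busy', 'error']
```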
What is truth?
A survey of algorithmic information theory and its applications.
We offer a formal treatment of the semantics of both complete and incomplete mistrustful or distrustful information transmissions. The semantics of such relations is analysed in terms of rules that define the behaviour of a receiving agent. We justify this approach with reference to human agent communications and secure system design, and we further specify some properties of such relations.
First Paragraph: I live just off of Bell Road outside of Newburgh, Indiana, a small town of 3,000 people. A mile down the street Bell Road intersects with Telephone Road not as a modern reminder of a technology belonging to bygone days, but as testimony that this technology, now more than a century and a quarter old, is still with us. In an age that prides itself on its digital devices and in which the computer now equals the telephone as a medium of communication, it is easy to forget the debt we owe to an era that industrialized the flow of information, that the light bulb, to pick a singular example, which is useful for upgrading visual information we might otherwise overlook, nonetheless remains the most prevalent of all modern day information technologies. Edison’s light bulb, of course, belongs to a different order of informational devices than the computer, but not so the telephone, not entirely anyway.
Dragonfly is a simulation engine that extends the scope of current human-space interaction tools by encoding the basic principles of ecological psychology into an interoperable, interactive, CAD environment.
Complexity 1, No. 1 (1995), pp. 26-30.
"We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer science a branch of mathematics, an engineering discipline, or a natural science? Should... more
"We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer science a branch of mathematics, an engineering discipline, or a natural science? Should knowledge about the behaviour of programs proceed deductively or empirically? Are computer programs on a par with mathematical objects, with mere data, or with mental processes? We conclude that distinct positions taken in regard to these questions emanate from distinct sets of received beliefs or paradigms within the discipline:

    * The rationalist paradigm, which was common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their “correctness” by means of deductive reasoning.

    * The technocratic paradigm, promulgated mainly by software engineers, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically using testing suites.

    * The scientific paradigm, prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science, takes programs to be entities on a par with mental processes, and seeks a priori and a posteriori knowledge about them by combining formal deduction and scientific experimentation.

We demonstrate evidence corroborating the tenets of the scientific paradigm, in particular the inherently unpredictable (even chaotic) nature of a large class of computer programs. We conclude with a discussion of the influence that the technocratic paradigm has been having on computer science.

Key terms: philosophy of computer science, ontology and epistemology of computer programs, scientific paradigms
Early computing curricula in Norway were based on training courses in programming that were developed as computers were made available for research at universities and research institutes during the 1950s. These courses developed into formal curricula starting from the mid-1960s. The development differed between the universities, as is accounted for in the sequel. The paper describes the main points in the development of the research profiles and curricula at the four Norwegian universities.
Conceptual overview of metabiology (Video)
I defend Piccinini’s mechanistic account of computation against three related criticisms adapted from Sprevak’s critique of non-representational computation. I then argue that this defence highlights a major problem with what Sprevak calls the received view; namely, that representation introduces observer-relativity into our account of computation. I conclude that if we want to retain an objective account of computation, we should reject the received view.
C.S. Calude, Randomness and Complexity, from Leibniz to Chaitin, World Scientific, 2007, pp. 423-441.
Lecture given Thursday 22 October 1992 at a Mathematics - Computer Science Colloquium at the University of New Mexico. The lecture was videotaped; this is an edited transcript.
A tribute to Leibniz (Video). The Rome version is in the Featured Papers section.