Some steps towards a transcendental deduction of quantum mechanics

Michel Bitbol


Published in: Philosophia Naturalis, 35, 253-280, 1998

PDF file available on the Pittsburgh archive in the philosophy of science

Copyright Vittorio Klostermann (Frankfurt)

Table of contents:

  • Introduction
  • The functional a priori
  • Kant's concept of a transcendental deduction
  • A generalized transcendental deduction
  • Transcendental constraints, quantum logic and Hilbert spaces
  • Transcendental arguments about connection in time
  • Conclusion

    1-Introduction

    The debate on the interpretation of quantum mechanics has been dominated by a lasting controversy between realists and empiricists. The basic tenet of realists is that quantum mechanics tends to describe (either completely or incompletely) an intelligible reality underlying the phenomena. By contrast, some of the most consistent empiricists have considered quantum mechanics as a mere formal device enabling one to account as economically as possible for the statistical regularities of phenomena defined relative to certain experimental devices described in classical terms. As for physicists, they have often tried to combine some fragments of an ontological discourse with empiricist or positivist professions of faith.
    The problem is that none of these attitudes has reached a stage where it may be considered unproblematic, even by its most eager proponents.
    Surveying the realist interpretations of quantum mechanics, one can easily display their major defects. Empiricists are entitled to notice that the most efficient and popular hidden variable theory (Bohm's theory) is hardly testable against standard quantum mechanics, even as it drifts farther and farther from the classical ideal, making it less and less attractive for some of its original supporters. They may also point out that neither the many-worlds interpretation, nor Dieks' realist version of the modal interpretation, nor the spontaneous collapse interpretation, has yet proved that it can cope in its own terms (namely without invoking meta-theoretical regulative principles) with some specific difficulties such as the preferred basis problem. Finally, empiricists may remark that decoherence theories, which claim to be able to provide a solution to the previous difficulties, are pervaded by interest-relative postulates which do not make them liable to an ontological reading, unless a completely convincing strategy of "closing the epistemological circle" of subject(s) and objects has been provided (and this seems a remote prospect).
    Conversely, one may easily understand that realist philosophers, and many scientists as well, are very reluctant to accept an empiricist view of theories which (at least until the recent rise of Van Fraassen's constructive empiricism) has proved unable to account for what is so crucial in everyday research, namely a well-defined perspective, a clear direction, and a strong motivation. It is also no wonder that realist philosophers often emphasize the need for an explanation of the remarkable predictive success of a theory like quantum mechanics, thus criticizing hard-line empiricists who do not bother to look for one. Against these hard-liners, realists may point out that even a physicist and philosopher of science like Pierre Duhem, who used to advocate a purely instrumentalist conception of theories, eventually acknowledged that "the more (a theory) is improved, the more we feel that the logical order under which it brings experimental laws reflects an ontological order".
    Here, it is not my intention to take any definite position in this apparently endless debate. I rather wish to show that, even if one remains essentially neutral with respect to it, one can draw many philosophically interesting lessons from quantum mechanics.
    In fact, the two options on which the current debate relies are far from being exhaustive. There is at least one more position available; a position which has been widely known in the history of philosophy during the past two centuries but which, in spite of some momentous exceptions (see J. Petitot), has attracted little interest until recently in relation to the foundational problems of quantum mechanics. According to this third position, a theory can be much less than a description of reality, without its being reducible to a unified summary of efficient predictive recipes. In more positive terms, it says that one may provide a theory with much stronger justifications than mere a posteriori empirical adequacy, without invoking the slightest degree of isomorphism between this theory and the elusive things out there. Such an intermediate attitude, which is metaphysically as agnostic as empiricism, but which shares with realism a commitment to considering the structure of theories as highly significant, has been named transcendentalism after Kant. One may notice incidentally that Kant's undertaking was precisely meant to cut through the debate of the first half of the eighteenth century between empiricists and dogmatic rationalists. The purpose of transcendental philosophy was to take our intellectual functions much more seriously than empiricists did, without approving the risky metaphysical claim that our reason is able to disclose things as they are in themselves.
    Of course, I have no intention in this paper to rehearse the procedures and concepts developed by Kant himself; for these particular procedures and concepts were mostly adapted to the state of physics in his time, namely to Newtonian mechanics. I rather wish to formulate a generalized version of his method and show how this can yield a reasoning that one is entitled to call a transcendental deduction of quantum mechanics. This will be done in three steps. To begin with, I shall define carefully the word "transcendental", and the procedure of "transcendental deduction", in terms which will make clear how they can have a much broader field of application than Kant ever dared to imagine. Then, I shall show briefly that the main structural features of quantum mechanics can indeed be transcendentally deduced in this modern sense. Finally, I shall discuss the significance, and also the limits, of these results.

    2-The functional a priori

    Kant's classical definition of the transcendental attitude, as contained in the introduction to the second edition of the Critique of Pure Reason, develops thus: "I apply the term transcendental to all knowledge which is not so much occupied with objects as with the mode of our knowledge of objects, so far as this mode of knowledge is possible a priori". Such a reversal of focus, from objects to our knowledge of objects, is typical of what Kant called the Copernican revolution.
    Both transcendent and transcendental considerations go beyond what is immediately given in appearances. But whereas manipulating transcendent entities means trying to account for the link between appearances by invoking something outside the boundaries of human knowledge, using a transcendental strategy is tantamount to ascribing the unity of the manifold of appearances to something which definitely belongs to the human faculty of knowledge, namely to pure understanding. This shift enables one to stop wondering, or invoking pre-established harmony, when the remarkable agreement between the processes involving physical objects and our representations is at stake. Indeed, the greater part of this agreement arises automatically from the fact that, provided each object is construed as the focus of a dynamic synthesis of phenomena rather than as a thing-in-itself, its very possibility qua object depends on the connecting structures provided in advance by our understanding.
    Attractive as Kant's original strategy may appear, it has nevertheless some features which do not fit with current philosophical standards, and which will have to be modified if we want to proceed with the transcendental approach. Let us discuss two of these features, which are especially relevant to physics.
    Firstly, the element of passivity which enters in the way Kant said the objects are presented to us is excessive. True, he insisted that in physics "Reason must approach nature with the view of receiving the information from it, not however in the character of a pupil who listens to all that his master chooses to tell him, but in that of a judge, who compels the witnesses to reply to those questions he himself thinks fit to propose". But this way of anticipating the answers of nature was restricted to the intuitive and intellectual form of knowledge. Regarding what he called the matter of knowledge, Kant relied on the empiricist and Aristotelian tradition, and considered that it is passively received as sensations; that in other terms the objects are given to us by means of sensibility. Even though Kant's use of the concept of thing-in-itself can be read as a way of expressing that, in our knowledge of objects, we cannot separate what is provided by our cognitive capacities from what affects us, he never extended his remark one step further, namely from the cognitive forms to the form of experimental activity. And he therefore did not recognize that experimental activity is able to shape appearances and not only to select or order them; that in other terms experimental activity partakes of the constitutive role he ascribed to our cognitive capacities. The idea that phenomena cannot be separated from the irreversible operations of experimental apparatuses is to be ascribed to Bohr, not to Kant.
    This is one reason why, if we want to apply the transcendental method to quantum mechanics, we must adopt a thoroughly modernized version of it, such as Hintikka's version. According to Hintikka, what is needed to make the transcendental method acceptable nowadays is a shift of emphasis from passive reception and purely mental shaping to effective research activities and instrumental shaping. As he writes, "(...) the true basis of the logic of existence and universality lies in the human activities of seeking and finding". The definition he gives of the transcendental attitude is modified accordingly. The transcendental attitude no longer consists in reversing attention from the objects to our knowledge, but rather from the objects to our games of seeking and finding. As a consequence, the objects are no longer regarded as elements of our experience, but rather as (i) potential aims for our activities of research and resolution and (ii) sub-structures within the formal system by which we anticipate the results of our activities.
    The second point which does not fit with current philosophical standards concerns the Latin expression a priori. In Kant's definition of the term 'transcendental', the use of this expression is misleading. It may sound as if the forms or the connecting structures which we present in anticipation to the appearances are innate, or at least that they are uniquely determined "for all times and for all rational beings".
    Actually, Kant never went so far as to assert that the a priori forms of intuition and thought are innate. He even explicitly dismissed this idea in the Critique of Pure Reason. According to him, the forms of intuition and thought are not chronologically but only logically prior to experience. And the reason why they are logically prior to experience, the reason why they cannot be extracted from experience, is that experience is only possible under the condition that it has been shaped by them.
    It is true however that Kant maintained a uniqueness and invariability claim about his forms of intuition and thought. Now, it is precisely this invariability claim which makes Kant's version of transcendental philosophy so vulnerable to the criticisms of modern philosophers of science, who rightly notice that twentieth century physics has undermined many particular features of his original a priori forms, or at least that it has considerably restricted their range of application to the immediate environment of mankind. The transcendental approach could then only survive and develop in the kind of version proposed by Neo-Kantian philosophers such as Hermann Cohen or Ernst Cassirer, who both acknowledged to some extent the possibility of change of the a priori forms, as well as their plurality. Nowadays, there is also another flexible and pluralist conception of the a priori; it is the pragmatist version of transcendental philosophy as defined by Putnam after Dewey. According to Putnam, each a priori form has to be considered as purely functional. It is relative to a certain mode of activity, it consists of the basic presuppositions of this mode of activity, and it has therefore to be changed as soon as the activity is abandoned or redefined. Putnam calls it a quasi-a priori when he wants to emphasize this flexibility. This conception of the a priori may easily be combined with Hintikka's characterization of the transcendental attitude as a process of redirecting attention from the objects to our activities of seeking and finding, and I shall thus retain it as part of a coherent neo-transcendentalist approach.

    3-Kant's concept of a "transcendental deduction"

    In the first edition of his Critique of Pure Reason, Kant presents us with two varieties of the deduction. The first one develops as an argument from the possibility of experience, and it is called "objective"; the second one is based on the necessity of the unity of apperception (namely the fact that all representations have to be related to their common subject), and it is called "subjective". The first one is weaker than the second one, but also less controversial. Indeed, the "objective" variety of the deduction only aims at deriving the background presuppositions of an experience which just happens to be organized as we know it, whereas the "subjective" variety somehow purports to demonstrate that this organization must obtain. Here, I shall mainly discuss the "objective" variety, but later on I shall also make use of a thoroughly modified version of the "subjective" variety.
    Kant's statement of the "objective" variety of his deduction is as follows: "The transcendental deduction of all a priori concepts has (...) a principle according to which the whole enquiry must be directed: to show that these concepts are a priori conditions of the possibility of all experience". In other terms, borrowed from Charles Taylor, a transcendental deduction is "(...) a regression from an unquestionable feature (...)" of our knowledge to "(...) a stronger thesis as the condition of its possibility". Now, what is the central unquestionable feature from Kant's standpoint? What is the characteristic mark of what he calls experience as against pure fleeting appearances? It is objectivity, since experience has been taken by Kant as equivalent to objective empirical knowledge. Now, transcendental philosophy defines objectivity in two ways. These two ways are closely interrelated in Kant's writings, but it is very important to emphasize the distinction in the context of a study of quantum mechanics. According to the first definition of objectivity, something is objective if it holds for any (human) subject. According to the second (more restrictive) definition, objectivity amounts to the possibility of organizing certain sets of appearances in such a way that their succession can be ascribed selectively to (a plurality of) objects. In order to find the pre-conditions of experience in Kant's sense, one must therefore enquire into how it is possible to represent something as an object.
    The heart of this enquiry is concentrated in the section of the Critique of Pure Reason entitled The Analytic of Principles. There, Kant explains that in order to be construed as "objective", a connection of perceptions has to be regarded as universal and necessary. For if it were not the case, if the connection were particular and contingent, nothing could prevent one from ascribing it, at least partly, to the idiosyncratic and temporary situation of the subject of perceptions. Prescription of a necessary temporal connection between appearances according to principles of pure understanding is thus what makes it possible to consider our representations as objective, and more specifically as representations of (a plurality of) objects. It is what gives rise to knowledge properly speaking, provided knowledge is defined as the relation of given representations to well-defined objects. In Cassirer's terms, "The necessity of the judgement does not stem from the unity of an object behind and beyond the cognition, but this necessity is what constitutes for us the only conceivable sense of the thought of an object". Since no empirical study goes beyond the mere statement of regularity, one cannot hope to derive the necessity of successions from it; then, if our understanding did not impose the mark of necessity on certain successions, one would never treat them as if they were the expression of something which occurs to an object independently from any subjective position.
    Particular deductions are then carried out by Kant for the three modes of connection in time, namely permanence, succession, and simultaneity; and they yield respectively the principle of the permanence of substance, the law of causality, and the law of reciprocity of action.
    These a priori laws of understanding, which are rules for the employment of categories, are not to be mixed up with the laws of physics. Empirical information is needed in order to know the particular laws of nature. However, "all empirical laws are only specific determinations of pure laws of the understanding", since the pure laws of understanding are after all what make possible the very objects whose behaviour is supposed to be ruled by empirical laws. In his Metaphysical Foundations of Natural Science, Kant then gave a hint of how Newton's three laws of motion can be taken as specific determinations of the three mentioned laws of understanding when the latter are applied to the empirical concept of material body. This procedure may be considered as a step towards a transcendental deduction of Newtonian mechanics. Admittedly, however, this deduction is doomed to remain partial, not only because a momentous empirical element (the concept of material body) has been used to derive the laws of motion, but also because, once the laws of motion have been obtained, one has to introduce further empirical material (i.e. Kepler's laws) in order to derive the inverse-square law of gravitation.

    4-A generalized transcendental deduction

    At this stage, our problem is the following: can one transpose Kant's partial transcendental deduction of Newtonian mechanics to quantum mechanics, by mere substitution of the empirical elements which serve to determine the basic laws of understanding? Things are certainly not so simple. Kant's reasoning has to be altered much more than that in order to become applicable to quantum mechanics. But such an alteration need not be deplored. For it yields two substantive advantages with respect to Kant's original undertaking. Firstly, it broadens considerably the scope of the transcendental method, thus making it liable to an increasing number of applications. Secondly, as we shall see later, it allows a transcendental deduction of quantum mechanics which is in many respects more extensive than Kant's deduction of Newtonian mechanics.
    Let us first recapitulate the two major steps of the original transcendental deduction. Its departure point is the fact that the flux of appearances is unified in such a way that it has the character of experience, or of representation of objects. And its end result is a set of laws of understanding considered as the conditions of possibility of experience. Both steps have to be thoroughly modified in order to meet the requirements of a transcendental deduction of quantum mechanics.
    To begin with, let us emphasize that the organization of phenomena in such a way that they can be regarded as appearances of a plurality of interacting physical objects having properties is by no means an indispensable ground of scientific activity. True, this organization is an 'unquestioned feature' of our everyday life; and, as Kant noticed, it is also a basic presupposition of judgments considered as the elementary units of our language. But this feature, which nothing in the manipulations and observations we perform in our immediate environment has ever forced us to question, does not have any reason to remain unchallenged in every domain of experimentation. In some scientific situations, such as contemporary microphysics, the cost of maintaining an object-like organization of phenomena is out of proportion to its advantages. Instead of contenting ourselves with the unquestioned fact of the object-like organization presupposed in our acting and speaking, we should thus try to figure out what is the basic function it fulfils in our lives and in classical science. Once this is done, the familiar object-like organization of the surrounding world is likely to appear as a very restricted class of the structures which are able to fulfil this function.
    What is then the minimum task the object-like organization carries out in our everyday lives? As I have already suggested in §2, this organization enables us to orientate our activities by anticipating the outcome of each act we perform, in such a way that the rules of anticipation can be communicated and collectively improved. That objects operate in our experience as anticipative frameworks has long been noticed by philosophers of the phenomenological tradition. But they are by no means the most general anticipative frameworks one may conceive. Indeed, their anticipative function is embodied by predicates which (according to Carnap's partial definition method, or S. Blackburn's quasi-realist approach) can be construed operationally as dispositions to manifest again and again a well-defined set of appearances when the same object is put under specified conditions. The anticipative function of the objects thus relies on the possibility of reidentifying a bearer of predicates across time; and the procedure of reidentification in turn requires a sufficient amount of continuity and determinism in the evolution of phenomena. When doubts are raised about the latter condition's being fulfilled, a substitute for the objects qua anticipative structures is required. This substitute can be afforded by the concept of a reproducible global experimental situation. Now, replacing the concept of identity of an object by that of reproduction of experimental situations does three things. It releases, as required, the constraint on reidentification of bearers of predicates; it substitutes the most general sense of objectivity (universal validity of statements) for a restrictive one (object-like organization of phenomena); and it allows the concept of anticipation to be generalized to its broadest version, namely probabilistic anticipation. Popper's concept of propensity, which characterizes types of experimental arrangements rather than individual objects, and which provides probabilistic predictions rather than exclusively deterministic predictions, implements this kind of change.
    However, everything is not settled at this point. For, if the previous kind of operationalistic anticipative framework is to be efficient at all, it must be grounded on a reliable procedure for ascertaining that (experimental) situations are effectively reproduced. Of course, this procedure could itself amount to describing and performing a second-order experiment, whose anticipated outcome is precisely the instrumental set-up of the first-order experiment. But the regress has to be stopped somewhere. It is at this point that the object-like organization of experience and discourse rises again. Indeed, predicating a property of an object is a way of implying the class of situations in which the appearances arising from the dispositional content of this property are observed. As Kant claimed repeatedly, referring to objects and properties is not tantamount to stepping back into a 'cosmic exile' (that is, into no worldly situation at all), thus talking about things as they are in themselves; it only means that one endorses implicitly the sort of situation which is common to every sentient and rational being inhabiting the environment of mankind. Describing an experimental set-up in terms of reidentifiable objects possessing properties is therefore a natural way of stopping the regress of explicitly stated situations and anticipations, by means of their implicit use.
    We can then see clearly that the familiar object-like organization of the surrounding world is not only one among the many structures which are able to afford communicable anticipations. It is also designed to be the last-order one. Bohr's insistence on everyday language and concepts to describe the experimental apparatuses, and Wittgenstein's remark in On Certainty that "no such proposition as 'there are physical objects' can be formulated", are two ways of expressing this special limiting status of the object-like organization.
    Now we can state precisely what we take as the departure point of our transcendental deduction of quantum mechanics. As a first step of such a deduction we shall not choose a supposedly 'unquestionable' feature of knowledge (such as the object-like organization of the whole field of phenomena), but rather a basic requirement bearing on the mode of anticipation of the results of our game of seeking and finding. The latter requirement can be stated by means of a language which only presupposes the object-like behaviour of the experimental devices, not of the field of investigation. Actually, if one took (as Kant did) the object-like organization of the field of investigation as an unquestioned departure point, this would already be a way of requiring implicitly something specific about the mode of anticipation of the result of our game of seeking and finding. Therefore, the type of departure point which has just been suggested for the extended version of the transcendental deduction is nothing more than a proper generalization of Kant's.
    The departure point of the new kind of transcendental deduction having been chosen, let us now ask what kind of result we should expect from it. In Kant's reasoning, the end-product of the deduction was a set of laws of understanding, of which the laws of physics are specific determinations. The most crucial among the a priori laws of understanding are those which concern relations in time, especially the law of causality which concerns succession. But one must be careful at this stage. If one does not pay sufficient attention to Kant's writings, some misunderstandings may arise. Indeed, some of his sentences sound as if, in order for experience to be made possible at all, one's understanding had to impose, say, the law of causality onto the succession of appearances. Actually, things are more subtle. The a priori laws of understanding which concern succession in time are called analogies of experience; they are not constitutive of the content of our intuition, but rather regulative of investigations. They do not allow us to construct the existence of consecutive phenomena, for this would only be acceptable in the most extreme form of idealism; they only provide something like "(...) a rule to guide me in the search of (a phenomenon) in experience, and a mark to assist me in discovering it". As a consequence, the a priori laws of understanding do not have to be valid in the absolute within the field of appearances. In order to make experience possible, in order to constitute experience, it is sufficient that we presuppose that appearances necessarily occur according to these laws, and that we always look for them according to such a presupposition. This qualification arises more or less explicitly from many sentences in Kant's deduction of the law of causality; for instance: "When we know in experience that something happens, we always presuppose that something precedes, whereupon it follows in conformity with a rule. For otherwise I could not say of the object, that it follows (...)". When carefully analyzed, Kant's laws of understanding thus do not bear directly on some passively received material of knowledge, but rather on the strategies of action and anticipation that we must use in order to get something which deserves to be called objective knowledge. They are not descriptive laws but rather law-like prescriptions; and moreover they are prescribed not so much to the phenomena as to our research-behaviour. Let us retain this idea for our modern variety of the transcendental deduction: the end-product of a transcendental deduction is a strong structure of anticipation which is prescribed to our activity of seeking and finding.

    5-Transcendental constraints, quantum logic, and Hilbert space

    To recapitulate, a generalized transcendental deduction is a regression from a set of minimal requirements about the process of anticipation of phenomena, to a strong anticipative structure as the condition of possibility for these requirements to be satisfied. Accordingly, in physics, a transcendental deduction is a regression from a set of constraints imposed on the prediction of experimental phenomena, to a strong predictive structure as the condition for these constraints to be obeyed. As we shall see, quantum mechanics construed as a predictive structure can mostly be derived this way, provided a small number of very general constraints is imposed on the prediction of phenomena.
    What are these constraints?
    To begin with, the phenomena which have to be anticipated are contextual phenomena. This looks like a very drastic constraint indeed; one by which an essential ingredient of quantum mechanics is introduced in the reasoning from the outset, thus threatening our deduction with the charge of circularity. But I think this judgment is wrong. Saying that the phenomena to be anticipated are relative to an experimental context is tantamount to removing a familiar constraint, rather than introducing an additional one; it is tantamount to removing the constraint of de-contextualization. Let me explain this by means of a historical example. As Descartes and Locke realized, large classes of phenomena can only be defined relative to a sensorial, perceptive or instrumental context. They correspond to the so-called secondary qualities. Kant later generalized this remark in his Prolegomena. According to him the spatial qualities, which were considered as primary or intrinsic by Locke, have also to be construed as appearances, although Kant does not say that they are relative to a particular sensory structure of ours but rather that they are relative to the general form of empirical intuition. It was thus widely accepted among philosophers, from the end of the seventeenth century onwards, that a phenomenon is usually (or even always) relative to a certain context which defines the range of possible phenomena to which it belongs. However this epistemological remark, with all the momentous consequences that its generalization could have had, did not change the way classical physicists conceived their objects. One reason for this indifference is easy to figure out. As long as the contexts can be combined, or at least as long as the phenomena can be made indifferent to the order and chronology of use of the contexts, nothing prevents one from merging the distinct ranges of possible phenomena relative to each context into a single range of possible conjunctions of phenomena. This being done, one may consider that the new range of possible compound phenomena is relative to a single ubiquitous context which is not even worth mentioning. Then, once one has forgotten the ubiquitous context, everything goes as if phenomena were reflecting intrinsic properties.
    One should nevertheless notice that taking for granted the possibility of combining all the contexts, and/or the perspective of a perfect indifference of phenomena to the order of use of the contexts, means imposing a drastic constraint. It is equivalent to imposing what we have called the constraint of de-contextualization. The structure of propositions in ordinary language, which allows us to ascribe several characteristics to a single object as if they were intrinsic properties (independent of any context), presupposes that this constraint is obeyed. Now, as can easily be shown, this presupposition is closely associated with Boolean logic; for the logical operations between the propositions of a language underpinned by such a presupposition are isomorphic to the set-theoretical operations between corresponding subsets of states of affairs. Moreover, the same presupposition is also closely associated with a Kolmogorovian theory of probabilities; indeed, Kolmogorov's theory relies on classical set theory (or on a logic isomorphic to classical set theory) for the definition of the 'events' on which the probabilistic valuation is supposed to bear.
    Now what happens if the constraint of de-contextualization is removed? In this situation, the rules of Boolean logic and of the Kolmogorovian theory of probabilities may still subsist, but in a fragmented form. To each experimental context, one may associate a given range of possible determinations, and a range of propositions which depend on a Boolean sub-logic. And to determinations chosen within each such range, one may associate real numbers in such a way that they obey the axioms of the Kolmogorovian theory of probabilities. But it is no longer possible to organize the whole set of experimental propositions, depending on several incompatible contexts, according to the structure of a single Boolean logic; nor is it possible to organize the whole set of probabilistic valuations as if they bore on a single Kolmogorovian domain of events.
    At this point, we must introduce the second constraint (or rather the real constraint, since the first one was no constraint at all) in order to overcome the previous dismantling of the logic and probability field. This constraint is that to each experimental preparation, univocally described by means of a language which presupposes the familiar object-like organization, there must correspond a unified (non-Kolmogorovian) mathematical tool of probabilistic prediction, irrespective of the context associated with the measurement which follows the preparation. The sought unification of the predictive tool under the concept of a preparation may be expressed either by means of a single symbol allowing one to calculate the list of probabilities corresponding to any context, or by using transformation rules for the probabilistic valuations from one context to another. The first strategy is usually carried out by associating a single "state vector" to each preparation, and the second strategy is tantamount to adopting Dirac's "transformation theory".
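    A minimal numerical sketch may help fix ideas at this point. It assumes, purely for illustration, a two-dimensional predictive space and two mutually incompatible measurement contexts (the eigenbases of the spin observables along z and along x); the single symbol psi stands for the unified predictive tool associated with a hypothetical preparation, and Born's rule (whose justification is only sketched further below) supplies the contextual valuations:

```python
# Sketch: one predictive symbol (a "state vector") yields a separate probability
# valuation for each of two incompatible measurement contexts.
import numpy as np

# Hypothetical preparation: an arbitrary normalized predictive symbol.
psi = np.array([0.8, 0.6 * np.exp(1j * 0.3)])
psi = psi / np.linalg.norm(psi)

# Two incompatible contexts: the eigenbases of spin along z and spin along x.
basis_z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
basis_x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def contextual_valuation(state, basis):
    """Probability valuation for one context (Born's rule)."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

p_z = contextual_valuation(psi, basis_z)
p_x = contextual_valuation(psi, basis_x)

print("z-context valuation:", np.round(p_z, 3), " sum =", round(sum(p_z), 6))
print("x-context valuation:", np.round(p_x, 3), " sum =", round(sum(p_x), 6))
```

    Each contextual list of numbers is non-negative and sums to one, so it is Kolmogorovian within its own context; yet no single Boolean domain of events underlies both lists, and only the predictive symbol itself is common to the two contexts.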
    The previous constraint can be considered as a generalized equivalent of Kant's departure point for his so-called "subjective" transcendental deduction of the categories. The difference is that, whereas Kant demanded "(...) that all the manifold in intuition be subject to conditions of the originally synthetical unity of apperception", we demand that the manifold of probability assignments which bear on measurements following a given type of experimental preparation be subject to the unity of this type of preparation. The unifying pole is no longer a mentalistic entity (the apperception, or the "consciousness of oneself"), but rather the objectified end-product of an experimental activity (the preparation). And the elements to be unified are no longer passively received contents of intuition, but rather formalized acts of anticipation.
    Taking into account the two preceding constraints, namely contextuality and unification of the predictive tool under the concept of a preparation, the basic structure of quantum mechanics is close at hand. Here, I shall only give a hint of how the reasoning proceeds, in two steps: the first one concerns quantum logic, and the second one concerns the relation between vectors in Hilbert space and probability valuations.
    1) As Patrick Heelan noticed, meta-contextual languages able to unify two or more contextual languages are isomorphic to Birkhoff and von Neumann's quantum logic. To show this, he used the following set of general assumptions:
    To begin with, let us consider two Boolean experimental context-dependent languages L_A and L_B. Then, let us define a relation of implication (which clearly operates at a meta-linguistic level "ML"), in such a way that one language implies another language iff every sentence of the first one is also a sentence of the second one. After that, we consider two other languages: L_O, which is such that it implies any language, and L_AB, which is such that it is implied by all the other languages, including the set-theoretical complements L'_A and L'_B of L_A and L_B in L_AB. The crucial assumption is that L_AB is broader than a language made of all the propositions of L_A and L_B and their logical conjunctions or disjunctions. This assumption aims at expressing context-dependence; indeed, in the case of context-dependence, a combination of contexts must yield experimental consequences which are definitely distinct from mere combinations of what occurs when each context is used separately. Finally, we define two functors x and + in the meta-contextual language ML, which are the exact equivalents of "and" and "or" in a first-level language: x stands for "least upper bound" (of the relation of implication) and + stands for "greatest lower bound". With these definitions and assumptions, it is easy to show that the structure of the meta-contextual language ML can only be an orthocomplemented non-distributive lattice. Then, if this structure is projected onto the first-level language, it takes the form of the familiar "quantum logic". To summarize, the specific structure of "quantum logic" is unavoidable when unification of contextual languages at a meta-linguistic level is demanded. In this sense, one can say that quantum logic has been derived by means of a transcendental argument: it is a condition of possibility of a meta-language able to unify context-dependent experimental languages.
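    The non-distributivity at stake can be exhibited concretely. The following sketch is a numerical illustration rather than a reproduction of Heelan's construction: it takes one-dimensional subspaces of a two-dimensional space as stand-ins for contextual propositions, with meet as intersection and join as linear span, and shows that the distributive law fails:

```python
# Sketch: the lattice of subspaces (meet = intersection, join = linear span)
# is orthocomplemented but NOT distributive -- the structural core of
# Birkhoff and von Neumann's "quantum logic".
import numpy as np

def join(U, V):
    """Linear span of two subspaces, each given as a matrix whose columns span it."""
    if U.size == 0:
        return V
    if V.size == 0:
        return U
    M = np.hstack([U, V])
    q, _ = np.linalg.qr(M)
    return q[:, :np.linalg.matrix_rank(M)]

def meet(U, V):
    """Intersection of two subspaces: eigenvectors of P_U + P_V with eigenvalue 2."""
    P_U = U @ np.linalg.pinv(U)
    P_V = V @ np.linalg.pinv(V)
    w, vecs = np.linalg.eigh(P_U + P_V)
    return vecs[:, np.isclose(w, 2.0)]

def dim(U):
    return 0 if U.size == 0 else int(np.linalg.matrix_rank(U))

A = np.array([[1.0], [0.0]])               # the ray spanned by e1
B = np.array([[0.0], [1.0]])               # the ray spanned by e2
C = np.array([[1.0], [1.0]]) / np.sqrt(2)  # a "tilted" ray

lhs = meet(A, join(B, C))                  # A and (B or C)
rhs = join(meet(A, B), meet(A, C))         # (A and B) or (A and C)
print("dim of A and (B or C):", dim(lhs))          # 1: equals A itself
print("dim of (A and B) or (A and C):", dim(rhs))  # 0: the null subspace
```

    With A and B two orthogonal rays and C a tilted ray, A and (B or C) is A itself, whereas (A and B) or (A and C) is the null element; this failure of distributivity is exactly what distinguishes the lattice mentioned above from a Boolean algebra.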
    2) As J.L. Destouches and P. Destouches-Février argued convincingly, the formalism of vectors in a Hilbert space, together with Born's correspondence rule, is the simplest predictive formalism among those which obey the constraint of unity in a situation where de-contextualization cannot be carried out. To show this, J.L. Destouches starts from a list of distinct context-dependent probability valuations for the results of each possible subsequent measurement performed after a given preliminary measurement (or, more generally, after a given preparation). The problem is that each probability valuation does not hold beyond a certain couple [PREPARATION_V, MEASUREMENT_W]. In order to overcome this lack of unity, one is led to define a set X in such a way that (a) an element X_V of this set is associated to each preparation with index V, and (b) the probability valuation P_VW for a couple [PREPARATION_V, MEASUREMENT_W] is a function (indexed by W) of X_V. X_V is then called an "element of prediction" associated to the V-th preparation. Then, J.L. Destouches demonstrates that, provided one adds enough elements to X to transform it into a vector space X*, the procedure for calculating a probability valuation P_VW from an element of prediction X_V can be simplified as follows. Firstly, one defines special elements of prediction X_VW(i) such that the probability of obtaining the result W_i, if experiment W is performed after preparation V, is equal to 1. Secondly, one replaces the element of prediction X_V by the linear superposition Σ_i c_i X_VW(i) (where the c_i can be either real or complex). In the simplest case (which corresponds exactly to the Hilbert-space formalism of quantum mechanics), one identifies X_V with Σ_i c_i X_VW(i). One can then show that in any case the sought probability valuation P_VW is given by: P_VW(W_i) = f(c_i).
    The next problem is to determine the function f. At this point, P. Destouches-Février demonstrates that, when the probability valuations to be obtained out of a given element of prediction bear on magnitudes which may be "incompatible" (namely magnitudes which may be such that they cannot be measured simultaneously with an arbitrary precision), the function f is unique, and takes the form f(c_i) = |c_i|². The demonstration relies on a generalized version of Pythagoras' theorem in the space X*.
    To summarize, the formalism of vectors in a Hilbert space associated with Born's rule affords the simplest unified meta-contextual probability valuation algorithm, if the contexts are sometimes incompatible (in the above sense), and each contextual probability sub-structure is Kolmogorovian. It is a structural condition of possibility of a unified system of probabilistic predictions, whenever the constraint of de-contextualization has been released.
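    To make the end-product of this argument concrete, here is a small sketch of the resulting predictive algorithm, under simplifying assumptions of my own (a three-dimensional prediction space, and an arbitrary Hermitian operator standing for the measurement context W); it illustrates the structure just described rather than the Destouches-Février derivation itself:

```python
# Sketch: an "element of prediction" X_V is expanded over the special elements
# X_VW(i) attached to a context W, and the valuation is read off as |c_i|^2.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
W = M + M.conj().T                    # any Hermitian operator defines a context W
_, special = np.linalg.eigh(W)        # columns: the X_VW(i), each certain to yield W_i

X_V = np.array([0.5, 0.5j, np.sqrt(0.5)])   # a hypothetical element of prediction (unit norm)

c = special.conj().T @ X_V            # expansion X_V = sum_i c_i X_VW(i)
probs = np.abs(c) ** 2                # f(c_i) = |c_i|^2 (Born's rule)

print("coefficients c_i:", np.round(c, 3))
print("valuation P_VW  :", np.round(probs, 3), " sum =", round(float(np.sum(probs)), 6))
```

    The sum of the |c_i|² equals the squared norm of X_V (the generalized Pythagorean relation invoked above), which is what guarantees that each contextual valuation remains Kolmogorovian.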

    6-Transcendental arguments about connection in time

    Of course, everything is not settled at this point. The formalism of vectors in a Hilbert space, construed as a meta-contextual probability theory, is not enough to constitute quantum mechanics properly speaking. Many elements have to be added to it. To begin with, we need a law of evolution of the probabilistic predictive symbols, namely the vectors themselves. Now, it can be found in many textbooks that under several assumptions ensuring: (i) that the numbers computed by means of Born's rule obey Kolmogorov's axioms at all times (i.e. that the evolution operators are unitary), and (ii) that the set of evolution operators has the structure of a one-parameter group of linear operators (where the one parameter is time), one obtains the general form of both Schrödinger's and Dirac's equations, leaving open the structure of the Hamiltonian. The Hamiltonian can eventually be obtained either by means of an application of the correspondence principle with classical physics, or by introducing directly the fundamental symmetries which underlie classical mechanics and/or relativistic mechanics.
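    The following sketch illustrates how conditions (i) and (ii) work together, under the simplest possible assumptions (a two-dimensional predictive space and an arbitrarily chosen Hermitian generator standing for the Hamiltonian, whose detailed structure is left open exactly as in the text):

```python
# Sketch: a Hermitian generator H defines a one-parameter group of unitary
# evolution operators U(t) = exp(-i H t / hbar) acting on the predictive symbol.
import numpy as np

hbar = 1.0
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = M + M.conj().T                        # a hypothetical Hamiltonian (any Hermitian matrix)
w, V = np.linalg.eigh(H)

def U(t):
    """Evolution operator built from the spectral decomposition of H."""
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)

# (i) Unitarity: the Born-rule numbers remain a probability distribution at all times.
print(np.isclose(np.linalg.norm(U(2.7) @ psi0), 1.0))
# (ii) One-parameter group law in the time parameter: U(t1 + t2) = U(t1) U(t2).
print(np.allclose(U(1.2 + 0.8), U(1.2) @ U(0.8)))
# The generator reproduces the general (Schrödinger) form of the law of evolution,
# i*hbar d/dt psi = H psi, checked here by a finite difference.
dt = 1e-6
dpsi = (U(dt) @ psi0 - psi0) / dt
print(np.allclose(1j * hbar * dpsi, H @ psi0, atol=1e-4))
```

    The structure of H itself is not fixed by this argument; it is precisely the part that has to be supplied by the correspondence principle or by symmetry considerations, as indicated above.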
    It is not very difficult to convince ourselves that at each step of this mode of derivation of the law of evolution of the predictive symbol, transcendental arguments play the key role.
    Some of them are transcendental arguments per se, e.g. the requirement of trans-temporal stability of the probabilistic status of the predictive tool (otherwise, one would simply have to give up the attempt at providing enduring probabilistic valuations for experimental events).
    The other ones are bridging transcendental arguments. They establish a bridge between the specific form of transcendental deduction which was used by Kant within the direct spatiotemporal environment of mankind, and the generalized sort of transcendental deduction needed in domains of scientific investigation which may go beyond the human Umwelt. This is especially clear for the correspondence principle, because it ensures a proper connection between (a) the basic (last-order) object-like organization which is common to both everyday life and classical mechanics, and (b) the contextual and meta-contextual organization of quantum mechanics. This is also clear for certain symmetry requirements such as time, space, and rotation invariance, which, as Eugene Wigner wrote, "(...) are almost [a] necessary prerequisite that it be possible to discover, or even catalogue, (...) correlations between events". Finally (even though this is much less obvious), the abstract statement according to which the set of evolution operators must be a one-parameter group of linear unitary operators can also be read as a bridging transcendental argument. Indeed, this condition is tantamount to splitting up the transcendental demand of unity of the predictive tool under the concept of a preparation, according to the three Kantian modes of connection in time (namely permanence, succession, and simultaneity). This can be seen quite easily, provided one realizes that imposing the structure of a one-parameter group of linear unitary operators (the parameter being time) on the set of evolution operators has the three following consequences:
    (1) It amounts to projecting the continuity of the parameter 'time' onto the domain of the probabilistic predictive tool (namely the state vector).
    (2) It entails that the evolution of this predictive tool is deterministic.
    (3) It enables one, by the linearity of the evolution operators, to maintain the structure of the linear superpositions of state vectors across time.
    Let us analyze these three consequences more precisely.
    (1') Continuity provides the possibility of identifying a certain state vector as the time-transform of the state vector which was initially associated with a given preparation; it fulfills the function of the category of substance, applying it to the predictive tool rather than directly to phenomena.
    (2') Determinism of the state-vector evolution ensures that a state vector at a certain time follows state vectors at previous times according to a univocal rule; it fulfills the function of the category of causality, again applying it to the predictive tool rather than directly to phenomena.
    (3') As for the constant structure of the linear superpositions of state vectors across time, it means that there is an enduring internal relation between the predictive contents of two or more preparations when they have been combined into one single compound preparation; it fulfills the function of the category of reciprocity, by applying it to the predictive content of coexisting preparations, rather than directly to coexisting phenomena.
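    Point (3') can be checked directly: because the evolution operators are linear, the superposition relating two predictive contents is carried along unchanged in time. A short sketch, with an arbitrary Hermitian matrix again standing for the Hamiltonian:

```python
# Sketch: linearity of U(t) preserves the superposition structure across time.
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -0.5]])        # a hypothetical Hamiltonian
w, V = np.linalg.eigh(H)

def U(t):
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

psi1 = np.array([1.0, 0.0], dtype=complex)     # predictive content of preparation 1
psi2 = np.array([0.0, 1.0], dtype=complex)     # predictive content of preparation 2
a, b = 0.6, 0.8j                               # coefficients of the compound preparation

t = 3.1
evolved_superposition = U(t) @ (a * psi1 + b * psi2)
superposition_of_evolved = a * (U(t) @ psi1) + b * (U(t) @ psi2)
print(np.allclose(evolved_superposition, superposition_of_evolved))   # True at any t
```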
    To summarize, imposing that the set of evolution operators have the structure of a one-parameter group of linear unitary operators is tantamount to shifting the locus of the categories of understanding, and especially the analogies of experience, from the phenomena to the predictive frame. This move explains Schrödinger's (quasi-) realist construal of ψ-functions, and it is in good agreement with G. Cohen-Tannoudji's remark that Hilbert space, not ordinary space, is the proper place of quantum objectivity. A similar idea was also advocated by P. Mittelstaedt.
    At this point, it is interesting to draw some philosophical consequences from the fact that the formalism of quantum mechanics, together with some appropriate boundary conditions, enables one to derive both quantization conditions and prediction of wave-like distributions of phenomena. In the light of the way in which the formalism has been justified, these two effects acquire a meaning which is thoroughly different from what is usually implied in the loosely realist mode of expression of the quantum physicists. Here, wave-like distributions and quantization no longer appear as contingent aspects of nature. They are a necessary feature of any activity of production of contextual and mutually incompatible phenomena whose level of reproducibility is sufficient for its outcomes to be embeddable in a unified and time-connected meta-contextual system of probabilistic anticipation.
    Of course, not everything in the quantum predictions can be transcendentally deduced. Just as in Kant's transcendental deduction of Newtonian mechanics, an empirical element has to be introduced somewhere. However, there are interesting differences between the empirical elements which had to be added to get Newtonian mechanics and the empirical elements which we must introduce to get standard quantum mechanics. In order to complete his deduction of Newtonian mechanics and to obtain the gravitation law, Kant had to add both an empirical concept (that of material body) and a set of empirical laws (Kepler's laws). But in order to complete the transcendental deduction of quantum mechanics construed as a predictive formalism bearing on global experimental situations, we do not need the concept of an object of the investigation. Even less do we have to introduce any empirical law-like structure; for the basic law-like structure of standard quantum mechanics (i.e. Schrödinger's equation) has already been obtained. We only need one very simple, and non-structural, empirical ingredient, namely the value of the Planck constant. And we also need some additional ("internal") symmetry principles whose empirical or transcendental status is at present unclear.
    True, these are crucial ingredients. Let me insist on the value of the Planck constant, which clearly appears to be empirical. This constant sets quantitatively, through Heisenberg's relations, the possibility of partially compensating for the mutual incompatibility of experimental contexts. If it were just equal to zero, measurements of conjugate variables would be indifferent to the order of measurements, and a basic condition of de-contextualization would then be fulfilled. Conversely, the non-zero value of the Planck constant means that the de-contextualization of experimental outcomes can only be performed up to a certain precision. Hence the need to regard Kant's original transcendental deduction, which started from de-contextualized premises, as a particular case, and to generalize it to a situation where contextuality becomes unavoidable.
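    The role played by the value of this constant can be illustrated numerically. The sketch below computes, for a Gaussian wave packet, the spread of the position-context valuation and the spread of the conjugate momentum-context valuation (obtained by Fourier transform, with p = ħk); the product of the two spreads tracks ħ/2 and would vanish in the limit ħ → 0, where the two contexts become jointly sharp. The numerical set-up (grid size, packet width) is of course an arbitrary choice of mine:

```python
# Sketch: the uncertainty product Delta_x * Delta_p for a Gaussian packet scales
# with hbar; as hbar -> 0 the two conjugate contexts could be de-contextualized.
import numpy as np

def uncertainty_product(hbar, sigma_x=1.0, n=4096, box=40.0):
    x = np.linspace(-box / 2, box / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma_x**2))
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize the position density
    var_x = np.sum(x**2 * np.abs(psi)**2) * dx         # spread in the position context
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)            # wave numbers; momenta are p = hbar*k
    phi = np.fft.fft(psi)
    p = hbar * k
    dp = hbar * 2 * np.pi / (n * dx)
    prob_p = np.abs(phi)**2
    prob_p = prob_p / (np.sum(prob_p) * dp)            # normalized momentum-context density
    var_p = np.sum(p**2 * prob_p) * dp                 # spread in the momentum context
    return np.sqrt(var_x * var_p)

for hbar in (1.0, 0.1, 0.01):
    print(f"hbar = {hbar:5.2f}   Delta_x * Delta_p = {uncertainty_product(hbar):.4f}   hbar/2 = {hbar / 2:.4f}")
```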
    Now, we must not limit our investigation to the framework set by Kant's Critique of Pure Reason. The Critique of Judgment introduced a new kind of transcendental argument which is admittedly weaker than the former one. This new variety of transcendental argument is not 'determinative' but 'reflective', and it is explicitly non-objective. Indeed, according to Kant, it is grounded on our subjective need to think nature as a systematic unity, and to presuppose a teleological order for that purpose. Can't the value of Planck's constant be obtained this way, thus complementing the set of transcendental arguments which lead to quantum mechanics? The answer is positive, provided one uses the modern version of the teleological argument for the determination of the universal constants, namely the weak anthropic principle.
    In the end, there is but one element which is bound to remain beyond the reach of any variety of transcendental argument, be it grounded on subjective requirements: it is the occurrence of a particular outcome after each single run of an experiment. This is not very surprising. As R. Omnès rightly pointed out, the actuality of each particular phenomenon cannot be accounted for by any physical theory. The only thing a physical theory does, and the only thing it has to do, is to embed documented actualities in a (deterministic or statistical) framework, and to use this framework to anticipate, to a certain extent, what will occur under well-defined experimental circumstances. What we have shown in this paper is that, at least in the case of standard quantum mechanics, such a framework can be justified as a structural condition for a minimal set of constraints on the prediction of phenomena (and on their predictor) to be obeyed.

    7-Conclusion

    To conclude, I shall briefly discuss the benefits we can draw from the kind of transcendental deduction I have just outlined, and also its limits. I think the specificity of a transcendental argument is that it starts from our engaged situation in the world, and then derives the basic pre-conditions of our orientation within this situation. In this respect, it is quite at variance with any variety of ontological attitude, be it the positivistic ontology of facts or the realist ontology of objects. Indeed, ontological attitudes systematically favour a disengaged outlook, even though their very undertaking is grounded on the presuppositions of an engaged activity. As Charles Taylor emphasizes, "With hindsight, we can see (Kant's transcendental deduction) as the first attempt to articulate the background that the modern disengaged picture itself requires for the operation it describes to be intelligible, and to use this articulation to undermine the picture". But how does the transcendental approach manage to undermine the pictures so cherished by the supporters of the ontological (disengaged) outlook? It does so by showing that the predictive success of some of our most general scientific theories can be ascribed, to a large extent, to the circumstance that they formalize the minimal requirements of any prediction of the outcomes of our activity, be it gestural or experimental. The very structure of these theories is seen to embody the performative structure of the experimental undertaking. As a consequence, there is no need to further explain their efficiency by their ability to reflect in their structure the backbone of nature. The inference to the best explanation, which is the most powerful argument of scientific realists, looks much weaker, because the choice is no longer between the realist explanation of the efficiency of theories and no explanation at all. A third alternative has been proposed: it consists in regarding the structure of the most advanced theories as embodiments of the necessary pre-conditions of a wide class of activities of seeking and predicting.
    In the latter perspective, the project of ontologizing certain theoretical entities appears as a mere attempt at hypostasizing the major invariants of these activities. True, ontologizing theoretical entities enables the philosopher to make sense of the intentional attitude and the seriousness with which the physicist aims at his hypothetical objects. However, by doing so too dogmatically, one takes the risk of freezing the ontological structure. Intentional attitudes call for objects, but it would be very imprudent to assert that, conversely, self-existent objects are what justify the intentional attitudes. As for seriousness, it calls for a sense of the absolute, but it would be very imprudent to assert that, conversely, the existence of an absolute self-structured reality 'out there' is what justifies seriousness in our striving for structures.
    By contrast, the transcendental approach is able to afford both a non-metaphysical explanation of the structure and efficiency of theories, and a satisfactory account of the intentional directedness of scientific research in each paradigmatic situation, provided one associates it with some variety of internal realism in Putnam's sense.
    Now let me give a hint of the (alleged or true) shortcomings of the transcendental approach. I can see three of them.
    (1) The transcendental account comes too late. It can make sense of physical theories only ex post facto and it is thus no instrument of discovery. My answer to this criticism is twofold.
    On the one hand, I accept the criticism to a certain extent, although I think that this is the fate of every sound philosophical argument. As Wittgenstein would have it, philosophers only have to describe (the scientific activity) and leave it as it is. One must acknowledge that, during the preparatory phase of a scientific revolution, the realist discourse and representations prevail. One must also acknowledge that it is by criticizing some of these representations and testing other representations instead, that scientists are able to cross the boundary between the old paradigm and the new one. They do not use directly, during the initial stage of their process of discovery, the pragmatic transcendental method which consists in taking the basic presuppositions of a certain experimental activity as a departure point and obtaining a theoretical structure as a condition of its possibility. This is so because in order to carry out such a procedure one would have to define the type of activity whose norms are to be formalized, before the corresponding theory has been formulated. But the exact nature of the shift in the type of experimental activity is usually clear only after the theory has been stated. As long as the theory has not been fully formulated, physicists usually act as if they were only probing farther and farther into a traditional domain of objects which can be thought of as one possible projection of the norms of the old mode of experimental activity. It is the gap between the findings of the scientists and their general expectations about these putative objects which motivates a move towards radical changes. And it is by an analysis of the new paradigm that the philosopher is able to disclose retrospectively the shift in the type of experimental activity which made the changes unavoidable.
    On the other hand, it is not true that philosophy in general, and transcendental philosophy in particular, have had no role whatsoever in the major advances of science. Careful philosophical reflection may contribute, and has contributed in the past, to modifying the language-game of scientific research, thus favouring the evolution of heuristic representations. Transcendental approaches are especially efficient in weakening the ontological rigidities which hinder the major changes needed when the presuppositions of experimental activities have been so widened that their outcomes exceed by far the domain of validity of the accepted theoretical framework. As I mentioned previously, this ability did not give the transcendental approaches any importance during the preliminary phase of scientific revolutions. But it enabled a special variety of transcendental procedures, namely the use of principles of relativity, to play a key role during the central phase of the major scientific revolutions of the 17th and 20th centuries. Indeed, principles of relativity operate as a way of emancipating law-like structures from particular situations, thus stating improved conditions of objective knowledge without recourse to ontologization (and even bypassing older ontological systems). Galileo's principle of relativity bypassed Aristotle's ontology of natural place. As for Einstein's principle of special relativity, it bypassed Lorentz's ontological-like electrodynamic explanation of the contraction of moving bodies and the slowing down of moving clocks. The only circumstance which prevented one from seeing clearly the transcendental nature of these principles of relativity is that their formulation was usually followed by a phase of renewal of ontological-like discourses: discourse about the kinematic and dynamic properties of bodies in the case of classical mechanics, and discourse about the properties of four-dimensional space-time in the case of relativistic mechanics. But in quantum mechanics, the recovery of an ontological-like mode of expression raises an impressive number of problems, and this may make transcendental approaches more permanently attractive in this case than in most other cases.
    (2) The pragmatic or functional version of the transcendental approach apparently leads to relativism. It looks as if it were possible to justify any (right or wrong) physical theory this way. The recipe is simple: take a mathematically coherent theory, display its normative structure, and invent an activity which goes with it.
    Actually, things are not so straightforward. The reason is that not every type of activity counts as an acceptable experimental activity. When defining an experimental activity, one has to take certain constraints into account, the most fundamental of them being that the activity must be so selected as to satisfy the requirement of a sufficient degree of reproducibility and universality. Other constraints, expressed by irreducibly empirical universal constants, lead one to adopt certain classes of activities and their associated physical theories. For instance, the finiteness of the constant c is naturally associated with the (typically relativistic) practice of comparing ruler and clock readings from one inertial frame to another. As for the non-zero value of the constant h, it had the consequence that traditional practices, which presuppose the possibility of manipulating and studying reidentifiable bearers of properties, were explicitly or implicitly superseded by activities of production of (partially incompatible) contextual phenomena.
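    To make the role of these two constants slightly more concrete, the following formulas (added here purely as an illustration, not as part of the original argument) show where they enter the corresponding formalisms. The finite constant c appears as the invariant speed in the Lorentz transformation relating the ruler and clock readings of two inertial frames moving with relative velocity v along their common x-axis:

    \[ x' = \gamma\,(x - v t), \qquad t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} . \]

    The non-zero constant h (through \(\hbar = h/2\pi\)) appears in the canonical commutation relation, which encodes the mutual incompatibility of position and momentum contexts:

    \[ [\hat{X},\hat{P}] \;=\; \hat{X}\hat{P} - \hat{P}\hat{X} \;=\; i\hbar . \]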
    But isn't acceptance of such constraints tantamount to acknowledging that there exists a pre-given independent reality 'out there' which imposes its structures on us, and which we ultimately have, as far as we can, to represent faithfully? This consequence does not follow. Saying that an experimental activity is submitted to constraints does not amount to saying that certain structural patterns are imposed by something external. When he tried to make sense of the rules of arithmetic, Wittgenstein provided many important insights which clarify this point. To summarize, he indicated that even though the rules of arithmetic cannot be considered as true to a set of independent facts, they fit elegantly with certain constraints which appear from within the practice of applying them: the 'facts' which constrain these rules do not exist prior to their being used. In the same way, even though the present physical theories cannot be considered as describing a set of intrinsically existent properties, they fit elegantly with certain constraints which appear from within the experimental practice. It is especially manifest in the quantum case that the 'facts' which constrain the norms of its associated experimental practice do not exist prior to the enactment of this practice, for they are contextual, and their contextuality cannot in general be compensated for, owing to the non-zero value of the Planck constant. As for the value of the Planck constant, which sets quantitatively the degree of incompatibility of contexts, it can be considered, from the point of view of the weak anthropic principle, as arising from within the generic situation of mankind (which defines the range of possible human practices), rather than as a completely extrinsic datum. This being granted, a theory like quantum mechanics no longer appears as a reflection of some (exhaustive or non-exhaustive) aspect of a pre-given nature, but as the structural expression of the co-emergence (see Francisco Varela's work) of a new type of experimental activity and of the 'factual' elements which constrain it.
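    As a brief illustration of the claim that the Planck constant sets quantitatively the degree of incompatibility of contexts (again an illustration added here, not drawn from the original text), the commutation relation displayed above entails the Heisenberg-Robertson inequality

    \[ \Delta X \,\Delta P \;\geq\; \frac{\hbar}{2} , \]

    so that the contextuality of position-like and momentum-like phenomena could be fully 'compensated' only in the limit \(\hbar \to 0\); it is the empirically non-zero value of h which blocks this compensation.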
    (3) Charles Taylor writes that "There are certain ontological questions which lie beyond the scope of transcendental arguments". Actually, we could even assert that transcendental arguments are designed to avoid having to answer ontological questions in the metaphysical sense. But is not this refusal quite unsatisfactory? One might accept the conclusion of the transcendental deduction in its stronger version, namely that the structure of a theory reflects exclusively the necessary pre-conditions of experimental research, and still feel uneasy. For, even if the theory cannot claim to have captured any structural feature of reality, but only the basic underlying structures of a wide class of research activities, it remains that we partake, with our bodies and our experimental apparatuses, of something broader which we can but call 'reality'. Furthermore, the former notion of a co-emergence of an experimental activity and its constraining 'factual' elements, which is so closely akin to the transcendental method, raises the temptation to adumbrate a picture of 'reality' as an organic whole made of highly interdependent processes. Could one not hope to get an insight into this real reality? I think that such a project is not merely doomed to failure by some contingent boundary between us and the "thing-in-itself"; it is hopeless because it is self-defeating. It is tantamount to assuming that it makes sense to seek what reality is independently of any activity of seeking, or to characterize reality relative to no procedure of characterization at all. Now, let us imagine that this paradoxical search can nevertheless be undertaken. The result one naturally expects in this case is that 'reality is A' as opposed to 'reality is not-A', for, if this were not the case, the whole process would have led to nothing worth mentioning. But is not the very statement that reality in the absolute is either A or not-A extremely daring? I should not venture to think that it is even likely.