Here we attempt a cosmonomic (Dooyeweerdian) understanding of the nature of computers and information, together with what this might say for such questions as whether the computer might possess intentionality (at some future time).
This page is an excerpt of a chapter of a book I am writing, a version in which my thoughts are more or less sorted, but still rather tatty. You might find references to other bits and pieces, and also diagrams missing. I will add them later. Also, sections 2,3 etc. are 'bolted on' and in need of integration with the rest.
He would have rejected an Aristotelian account of the computer as substance (and accidents) because, as we have briefly glimpsed, it leads inevitably to futile arguments in AI. He rejected all essentialist approaches because of their presupposition of self-dependent essence, and he rejected philosophical realist approaches because they presupposed a detached observer [Henderson, ====]. The human knower is part of the reality that is known.
He would reject a Scholastic approach, in which things in the realm of Nature cannot be understood without reference to the realm of Grace, because this would subject 'natural' things like hardware, bits and software to encroachment and domination by the sphere of Grace; for example, it would disallow any meaningful discussion about the possibility of computers resembling human beings. Conversely, he also rejected the materialist, empiricist and rationalist approaches that remove God and often ethics from discussion.
He rejected a Kantian separation of noumenon from phenomenon because he believed that we can truly know the thing that is the computer - though never perfectly nor with absolute reliability ({D.5.3.5}). A purely subjectivist account of the nature of computers would be untenable because, though the meaning of computers depends on the human being, he held that the human ego is never autonomous.
His approach was more akin to that of existential phenomenology: we understand being as being-in-relationship. But whereas Heidegger assumed it was the entity side that gives a thing its being, Dooyeweerd held it was the law side that does so.
Since both Meaning and Law are exhibited in diverse aspects ({D.3}), to ask these questions involves determining in what ways the computer is meaningful and enabled in respect of each aspect. The proposal of this chapter is that the aspects in which the computer functions provide a framework for understanding the nature of computers. A possible benefit of this approach is that it provides a link with the framework for understanding human use of computers that we developed in chapter {F1}, which likewise was based on Dooyeweerdian aspects.
But what do we mean by 'functions'? As we made clear in chapter {F1}, in all post-physical aspects the computer functions only as object and not as subject -- but it nevertheless does so in meaningful ways, so it is valid to speak of the computer doing and being in these aspects.
The computer functions as subject in the physical aspect; does this mean that this is the aspect which unlocks for us our understanding of the nature of computers? No. It certainly explains the (normal) predictability of computer behaviour, but it cannot by itself differentiate a computer from a moon rock, let alone from other electronic devices like radios. That which unlocks a thing's meaning for us is the qualifying aspect ({D.3.3.9}). In {F1} we found three different qualifying aspects, depending on whether we were considering human-computer interaction (HCI), engagement with represented content (ERC) or human living with computers (HLC). However, since the qualifying aspects in the case of ERC and HLC depend on the type of application, and we wish to understand the nature of computers irrespective of application, we must focus our attention on HCI.
In HCI the qualifying aspect is the lingual. While the subject-functioning of the computer in the physical aspect might explain the predictable quality of the computer's activity (see later), it is the lingual aspect that accounts for its functioning and existence as a computer. But, along with these two aspects, we must also take the intervening aspects into account.
Doing this is what provides a cosmonomic starting point of a framework for understanding the nature of computers. We will begin with that part of the computer which the human user most directly experiences, the user interface (UI), then extend this way of thinking to the innards of the computer. We explore a number of issues, including the application of Dooyeweerd's entity theory to the 'things' that constitute a computer system.
>CE Table 1.4. Some computer things and activities meaningful at each aspect
>IG "Work:Research/Dooy/Book/Portions/tables/f2.aspectsUIIn.iff" -w4.37 -h4.37 -c -ra
We could argue that our framework for understanding the nature of computers needs to differentiate hardware from both (physical) materials and (psychic) bits and signals, and that between these two aspects lies, very conveniently, an empty slot which is the biotic aspect. But that is merely philosophical convenience and we need rather sounder reasons than that.
It is obvious that the computer, not being alive, does not function as subject in the biotic aspect. But it is also difficult to see how the computer functions as object in this aspect, since it is not a means of life for any living thing, nor is the content of the program it is running necessarily about a biotic topic. We seek to identify the biotic meaning that is germane to the computer being a computer, if such exists. It is interesting to note that Dooyeweerd had a similar problem in his discussion of Praxiteles' sculpture of Hermes and Dionysus [NC,III:112ff.].
We have three reasons for treating the hardware of the computer as its object-biotic aspect. The primary line of reasoning goes as follows.
However, there is a secondary reason to support this view, in that one important feature of living things is that they maintain a distinctness from their environment even as they interchange physical material and energy with it. Unlike many physically qualified things like areas of rock or river currents, organisms maintain a distinct spatially bounded physical structure and an active equilibrium state that differs from that which would result from purely physical processes like diffusion and energy flow. And organisms repair themselves and thus maintain their integrity as organisms. Computers too are spatio-physically distinct, maintain a different active equilibrium and maintain their integrity (e.g. by checksums built into memory cells).
Naïve experience - to which we must always listen sensitively in a Dooyeweerdian approach - offers a third reason for maintaining that hardware is the biotic-object functioning of the computer. For several decades it has seemed meaningful to compare and contrast machine with human body, which suggests they lie within the same sphere of meaning (i.e. aspect).
For these reasons, therefore, the hardware that is the computer is best seen as its biotic object-aspect. However, to differentiate between the biotic aspect involving living things and this hardware aspect, we will use the word 'organic', usually in single quotation marks, when referring to the latter aspect. Our reasoning above might be of interest to Dooyeweerdian scholars. [ben]
One is the rather obvious reasoning that since it is we who placed the files on disk, they must be there. But this account does not allow much precision; what, for example, is the difference between 'my book' and 'a file', both of which are 'in' my computer? Moreover, it presupposes that we know what we mean by 'files' and 'disk' and even 'placed on' and 'there'.
The second is by analogy with human functioning in each of these aspects. The physical aspect of both is the same (which is why there is a subject-subject relationship between human and computer in this aspect; see {F1.4.2}). In the 'organic' aspect, human nerve cells equate with memory chip gates. In the psychic aspect neuronal states of activation equate with memory cell (byte) bit patterns, and the activation sent from one neurone to another equates with signals. In the analytic aspect, human basic distinctions (concepts) equate with pieces of data. In the formative aspect, the structuring of concepts equates with data structures. In the lingual aspect, the semantic meaning to us of our concept structures equates with that of the data structures -- both of which might be seen as knowledge. But analogy does not provide a sufficiently reliable foundation for a framework of understanding.
The third is that since we experience something at the UI (in each aspect) then something 'inside' the computer must cause, or be caused by, what we experience. For example, the pixels we see that make up the screen are given the colours they have by the display hardware from a bitmap in computer memory, and the amount 55693 that I see on the screen expresses the current value of a numeric variable in the program I am running (or it might be calculated from a few other such variables), and the fact that I can recover the email I received yesterday from my friend rather strongly suggests that the computer 'has' that email.
But these three directions of attack do little more than argue that these things exist or occur inside the computer. They fail to account for what these things are and why they are what they are, and even for what 'existence' 'inside the computer' means. These issues are not simple. In MSDOS, the difference between a file 'existing' and 'not existing' (having been deleted) is that one byte of its name is flagged with a special bit pattern in the latter case; the rest of its name and all its data remain in place. (Note: that statement presupposes what we mean by byte, bit pattern, and data.)
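The MSDOS example can be made concrete with a minimal sketch (in Python, assuming the FAT convention that the marker is the byte value 0xE5; the text above does not specify the value, so treat it as an illustrative assumption):

DELETED_MARKER = 0xE5  # the assumed special bit pattern, 11100101

def entry_status(directory_entry: bytes) -> str:
    # Interpret the first byte of a 32-byte FAT directory entry.
    first = directory_entry[0]
    if first == 0x00:
        return "never used"
    if first == DELETED_MARKER:
        return "deleted (rest of name and all data still on disk)"
    return "exists"

# A hypothetical 32-byte entry for a file LETTER.TXT, then 'deleted':
entry = bytearray(b"LETTER  TXT" + bytes(21))
entry[0] = DELETED_MARKER
print(entry_status(bytes(entry)))   # deleted (rest of name and all data still on disk)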
The fourth direction does provide philosophical grounds for these issues. The problem is that these things are 'hidden' from our naïve experience. Dooyeweerd addressed this very question.
First, the fact that we cannot experience a thing directly with our sensory functioning does not mean it is inaccessible to naïve experience.
Dooyeweerd [NC,III:28-36] warned against confusing naïve experience with sensitive functioning (he criticised Naïve Realism for confusing the two, and especially in Bertrand Russell [NC,III:22-23]). Even though it is through sensitive functioning that most of our experience comes, "Naïve experience ... is by no means restricted to the sensory aspect of its experiential world." [NC,III:102, footnote] So the fact that we cannot directly experience a thing does not rule it out from being part of our naïve experience, and does not mean it is in any way a mere theoretical abstraction.
But, if they are hidden, how can we experience them? Though our sensory function is the primary means of sensing, it can be 'opened' by means of techniques and technological apparatus such as microscopes, telescopes and developed physical theory (for experiencing cells, galaxies and atoms, respectively; there are various reasons why things may be 'hidden'). Our experience of such things may be indirect but it is still everyday and not theoretical; even though such techniques and technological apparatus are the product of the theoretical attitude, their concrete actualisation in life brings them into the sphere of our naïve experience. "The naïve attitude cannot be destroyed by scientific thought. Its plastic horizon can only be opened and enlarged by the practical results of scientific research." [NC,III:31]
Thus the fact that such things as bits, data structures and files are not directly experienced sensorily does not mean they cannot be part of our naïve experience. We have apparatus by which we may experience the innards of the computer indirectly -- such as memory dump software, or even File Manager or the Linux ls program.
Therefore we have adequate grounds for extending each of our aspects of the UI of a computer to the innards of a computer, as we did in the third column of Table 1.4. More generally:
These may be seen as 'levels' of the computer, which may be called respectively the physical level, hardware level, bit level, data level, formative level, and knowledge level; we will discuss the close similarity to Newell's levels later.
Remember that in all aspects after the physical, the computer functions only as object within human subject-functioning. The human who so functions might be the user, the programmer, a service engineer, a hacker who manages to connect to the computer across the Internet and so on. Which of the above aspects interest them depends on what they are doing.
The innards of the computer have meaning to the user only in the aspects in which the user is able to interpret their functioning or being. To be able to interpret the innards in a particular aspect, the user must possess both the information to know how to do so, and the necessary tools. For example:
This case of the computer illustrates quite clearly Dooyeweerd's claim that aspects are modes of being. The computer as such exists as materials, as hardware components, as memory etc., as raw pieces of data of various types, as structured data, as applications content, all at the same time. But what are the relationships among these things?
>CE Table 1.6 Aspectual Beings of Computer
>IG "Work:Research/Dooy/Book/Portions/tables/f2.Levels+Aggn.iff" -w4.37 -h4 -c -ra
But it does not seem quite right to think of the relationship between the operating system and the programs as part-whole, at whichever level we choose to view it. In one way, the OS is to be seen as part of the program. The program makes system calls when it requires resources such as printing, network access, memory allocations and access to files, and these system calls are the operating system within the program. But in two ways a program seems to be part of the operating system: as one of the tasks that the OS manages and as a special type of resource that provides core functionality of a certain kind to other programs in the same way that resources of the OS provide functionality like printing. Much of the functionality that once was provided by separate programs has now migrated into the OS, and we can expect the trend to continue.
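A minimal sketch (in Python; the file name is illustrative) makes the point concrete: even a few lines of ordinary application code are shot through with requests to the operating system -- 'the operating system within the program':

import socket

# Each innocuous-looking statement below is, underneath, a request to the
# operating system for one of its resources.
with open("report.txt", "w") as f:   # file system: open/write/close system calls
    f.write("quarterly figures\n")

buffer = bytearray(1024)             # memory allocation, ultimately obtained from the OS

s = socket.socket()                  # network access: the socket API is an OS service
s.close()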
Dooyeweerd offers a way of resolving this paradox, in his notion of correlative enkapsis. As we discussed in {D.4.5.5}, enkapsis is a relationship between meaningful wholes, and correlative enkapsis is that which exists between forest and its denizens, the trees, animals, insects, other plants, etc. The forest is both composed of, and provides a habitat, an Umwelt, for its denizens, which live in it and rely on it ({D.4.5.5}). In like manner, the operating system is composed of the pieces of software that 'live' in it and rely on it. This is why the distinction between OS and program becomes blurred and functionality migrates from one to the other.
This correlative enkapsis may be seen in several aspects, including the psychic, in which both OS and programs are seen as allocated memory and machine code operations, the analytic, in which both OS and programs are data, and the formative, in which both are seen as structures and processing.
However, the operating system has a strong economic aspect, in that it manages the programs and other resources and provides resources for the programs. This might provide us with a way of discussing the quality of operating systems: a good operating system may be seen as one that best fulfils the norm of this aspect, which is frugality. Operating system quality is a topic not often addressed theoretically, though it is often commented upon in everyday life, for example in the widespread antipathy to Microsoft's Windows operating system. That this antipathy is fuelled by reference to much that is meaningful within the economic aspect, notably inefficiency and waste of resources, suggests that this proposal emerging from Dooyeweerd might not be wide of the mark.
In computers, this means that as we can move along the aspectual sequence, we 'add' the meaning of each aspect to what we already have, and it is a different kind of meaning with each move. Thus, starting with the physical aspect (materials like silicon, phosphor, glass):
01100001
is the number 97 under binary coding.
And in the reverse direction, we may speak of 'implementation' (loosely defined): profit level is 'implemented' in numbers, which are 'implemented' in binary-coded bit patterns, which are 'implemented' in voltages, which are 'implemented' in silicon components. A similar account, in both directions, may be made of the user interface screen, beginning with phosphor and glass to make a cathode ray tube.
Thus we have a two-way mapping between each pair of neighbouring aspects. From the characteristics of aspects discussed in {D3.2}, several things may be said about this relationship between the levels of a computer system.
Because of the fundamental irreducibility in meaning between aspects ({D.3.2.3}), how an aspectual being in one aspect may be implemented in an earlier aspect is not determined. For example,
01100001
is the letter 'a' under the ASCII code.
Moreover, what is one being in one aspect might be many beings and even many activities in the earlier aspect. What seems a static aspectual being in one aspect might involve not just earlier static aspectual beings but also earlier aspectual dynamic functioning. For example, the information 'profits last year' might not be stored in the database or computer as a single datum (analytic aspect) but, whenever it is called for, a quick calculation of profit is made on the basis of two other figures, income and expenditure. Such 'virtual data', though a single lingual aspectual being, is stored as multiple analytic aspectual beings together with the formative aspectual functioning that is the subtraction process. Another example: whereas in most computers a memory bit is implemented as a static electric charge, in one digital system I once worked on in the mining industry, where there is much electrical interference, the single bit was implemented as a phase change in alternating current waveforms. (Remember that the Whiteheadian distinction between process and entity is dissolved in Dooyeweerd ({D.4.3.5}).)
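A minimal sketch of such 'virtual data' (in Python; the names income, expenditure and profit follow the example above): profit is never stored, but is calculated by a small piece of formative functioning whenever it is asked for:

class Accounts:
    def __init__(self, income, expenditure):
        # Two stored data (analytic aspectual beings) ...
        self.income = income
        self.expenditure = expenditure

    @property
    def profit(self):
        # ... and one piece of 'virtual data': stored nowhere, but calculated
        # (formative functioning, the subtraction process) whenever requested.
        return self.income - self.expenditure

accounts = Accounts(income=120000, expenditure=95000)
print(accounts.profit)   # 25000 -- appears to the user as a single datum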
That the relationship between beings in different aspects is not determined, neither in the one-to-one mapping nor in the number of things in the earlier aspect that contribute to the thing in the later, gives the software designer considerable freedom.
But this same irreducibility makes it impossible to interpret something in one aspect unambiguously in the next aspect if we do not import meaning from that aspect. For example, given a memory dump (bits expressed in hexadecimal: psychic) one has no idea what data is held there (analytic aspect), unless one knows (a) where in memory it is held and (b) what coding system is used. Likewise, given a piece of program code like 'x = y + z' (analytic and formative aspects) we can have no idea what it stands for (lingual aspect) - the curse of programs written without comments!
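A short sketch illustrates this (in Python; the byte values are arbitrary): the same bytes from a memory dump yield quite different data depending on the coding system we bring to them:

import struct

dump = bytes([0x61, 0x62, 0x63, 0x64])    # hex 61 62 63 64 -- just a bit pattern

print(dump.decode("ascii"))               # 'abcd'      -- if we assume ASCII text
print(struct.unpack("<I", dump)[0])       # 1684234849  -- if we assume a little-endian integer
print(struct.unpack("<f", dump)[0])       # a large float -- if we assume a 32-bit float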
It is the irreducibility between the analytic and psychic aspects that makes randomizing and file compression possible. Randomization involves an operation on the bit pattern of a number that makes sense in the psychic aspect (for example involving exclusive-or) but makes no sense at the analytic aspect. File compression involves bit-level (psychic aspect) operations that alter the coding without altering the analytic data: for example compressing a file encoded as ASCII characters as a ZIP file.
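Two minimal sketches (in Python; the bit pattern, mask and text are illustrative): an exclusive-or operation that is well defined at the bit level but has no analytic meaning as an operation on the number, and a compression step (using the deflate algorithm that ZIP also uses) that changes the bits entirely while leaving the analytic data recoverable unchanged:

import zlib

# Bit-level (psychic) operation: exclusive-or with a mask.  As a manipulation
# of the bit pattern it is well defined; as arithmetic on the number 97 it has
# no analytic meaning of its own.
n = 0b01100001                  # 97
scrambled = n ^ 0b10110100      # a typical step in simple randomizing schemes
print(bin(scrambled))

# Compression: the stored bit pattern changes completely, but the analytic data
# survives and is recovered exactly on decompression.
text = b"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"   # highly repetitive ASCII data
packed = zlib.compress(text)
print(len(text), len(packed))                # fewer bytes after compression
assert zlib.decompress(packed) == text       # same data, different bits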
This also enables us to understand errors of various types. What is an error at one level is explicable at the next lower level. For example, if a program's memory cell is overwritten by another program (such as a virus) then, from the point of view of the psychic aspect, all that has occurred is that a bit pattern has changed, and in principle we could know which program did this. But at the analytic aspect of data, the value in the variable has suddenly changed, and the change is completely inexplicable even in principle.
A thing seen in one aspect will always have an implementation in earlier aspects, but the converse is not necessarily true. Things in earlier aspects might not have any meaning (with regard to the computer system) in later aspects. For example, the dust that settles inside the computer's casing functions physically, but it has no meaning at the bit, data or knowledge levels of the computer system.
(Strictly, under Dooyeweerd, we should say that the dust's symbolic meaning has not been opened up; for example, I could write a message in the dust to remind myself to do something when I return to the computer next time - but we will not consider such possibilities here.)
This approach can provide a philosophical account of a number of things that we experience day by day, and the differentiation of part-whole from enkaptic relations can help prevent confused thinking and category errors in our arguments as we consider the nature of computers and information. [ben]
This view can, moreover, affirm and enrich various extant views. First, that most of these views presuppose a progress (or 'hierarchy' [Alavi and Leidner, 2001]) from data to information to knowledge parallels Dooyeweerd's notion of inter-aspect order and dependency ({D.3.2.5}).
Second, our view echoes Alavi and Leidner [2001:109] when they suggest "data is raw numbers and facts, information is processed data, and knowledge is authenticated information", in that processing is of the formative aspect and to 'authenticate' information presupposes that we know what it is about, hence its lingual qualification.
Langefors [1966] used the term 'capta' to denote what is captured from what is perceived, and noted that, by a process of interpretation, this can become information. This echoes the above Dooyeweerdian view of the difference between psychic perceptions and the rest, if by 'interpretation' we mean the 'adding' of the meaning of later aspects.
But there is a subtle difference. While Langefors speaks of capta 'becoming' information by a process of interpretation, as though some raw material is transformed over time, Dooyeweerd would say that the perceptions, raw data, information and knowledge are all just different ways of speaking of the same thing ({D.3.3.1}). While there is often some temporal process involved, especially when engaged in theoretical thinking, there is also often a Gestalt immediacy in which the bit-perception is the data is the information is the knowledge. We are here more concerned with the everyday stance, with its immediacy, than the theoretical stance.
The inadequacy of presupposing temporal process as necessary to the above sequence is highlighted by Alavi and Leidner [2001:109] who draw attention to Tuomi's [1999] "iconoclastic argument" that "the often-assumed hierarchy from data to knowledge is actually inverse: knowledge must exist before information can be formulated and before data can be measured to form information." This echoes both Dooyeweerd's critique of theoretical thought as presupposing the meaningfulness of the cosmos (what the information is about) in order to make the necessary distinct concepts about which to think ({D.5.5.x}), and also Dooyeweerd's notion of dependency in the anticipatory direction ({D.3.2.5.2}), especially when it is remembered that Dooyeweerd's view is that we function as whole human beings rather than as brains, information processors, etc. In particular, bits anticipate data, which anticipates (processed and structured) information, which anticipates signification (knowledge).
We should note that Dooyeweerd's use of the word 'symbol' differs from that in computer science or artificial intelligence. Symbol, to Dooyeweerd, is necessarily related to the signification that is the kernel meaning of the lingual aspect, and without signifying something it cannot be a symbol as such. By contrast, the word 'symbol' in programming refers to an abstract, syntactic token without necessary reference to what it stands for.
Finally, we should also note that since none of the aspects are absolute, neither is our functioning in them ({D.5.3.5}). This means that we can never capture the entire and exact meaning that we wish in symbolic form, we cannot ever structure things precisely as we intend, we cannot even distinguish things perfectly (respectively: lingual, formative, analytic). So our data, information and knowledge can never be perfect, and we would do well to recognise this in our design and use of computer systems. To cope with this limitation, we should not strive to maximize our data, information and knowledge structures, but rather design our systems in such a way that the user's human living with the computer and engagement with represented content is not overly hindered by this limitation. For example, the wording with which the computer presents a result could invite the user to question it. This is considered further in chapter {F3}.
We can detect a similar, but even more complex, state of affairs with the computer. The computer as such (ignoring its application) could be seen as a semi-manufactured product like a plank, but one in which there are several levels of semi-manufacture: the hardware formed out of physical materials, the bits founded on the hardware, the raw data founded on bits, the information founded on data, giving the potential to be used lingually to create the application, just as the plank of wood gives the potential to create the chair.
This is just a different way of saying what has already been said, because both involve irreversible foundational enkapsis. Also, both the plank of wood and structured, processed information are qualified by the formative aspect.
We will not pursue it further here for fear of confusing the reader, though we touch on it again in chapter {F4}. However, if any reader wishes to pursue it, the whole edifice above could be rewritten in terms of levels of semi-manufacture rather than of aspectual beings, and it might prove very fruitful to do so. But such an exercise enters uncharted territory. Dooyeweerd did not critically analyse semi-manufacture beyond noting the philosophical issue as raised by the plank of wood. While his "at least" above suggests he was aware of the possibility of more complex cases, he offered no principle by which the levels can be differentiated. Also, it is not clear that the notion of semi-manufacture is itself useful, because rush chairs, for example, are made directly out of natural raw materials rather than from semi-manufactured products. And are the aggregations in table 1.6, as well as the inter-aspect additions of meaning, to be seen as semi-manufacture? Such questions are still to be addressed. We will thus continue with our approach as above because it clearly differentiates the levels and also highlights other issues that the semi-manufacture approach might not. [dooy resch]
This does, of course, confirm our naïve experience of the computer as technology, but is that all? No. It enables us to differentiate the diversity of technological processes and capabilities that have been necessary to bring computers and information technology to its current state. It is too simple to merely point to the formative aspect; in considering the nature of computers and I.T. as a technological phenomenon it is useful to recognise the diverse ways in which we function in the formative aspect, and to differentiate these in our study. [ben]
The answer is that the latest aspect in which the computer functions as subject rather than as object (i.e. 'by itself' without reference to human beings) is the physical and that the laws of this aspect are determinative (excepting quantum effects). All its other functioning that is meaningful in other aspects is an interpretation that we human beings place on the deterministic physical functioning. We interpret such physical activity in later aspects as activity in those aspects, and that later-aspect activity therefore is reliably predictable. Even in the non-determinative aspects computer 'behaviour' is determined.
This state of affairs gives us activity in these non-determinative aspects on which we can rely. It is what enables these 'intelligent' and 'useful' machines we call computers to exist as such. Thinking God's thoughts after him, as Newton tried to do, I find myself in awe at this wonderful cosmos!
In both cases, one can say that the computers have the requisite 'knowledge' but what I want to know might not be represented directly within the computer. Some processing must be carried out, in the first a mathematical extrapolation and in the second a text search with numeric comparisons. Both are what, in database terminology, is called virtual data.
What this means is that, especially at the knowledge level (lingual aspect), dynamic activity is usually an essential component of the knowledge that the computer imparts to us.
This activity is no mere manipulation of data (analytic aspect) but is deliberate shaping of it directed to some end (formative aspect). Something is formed, something is brought into being that was not there before -- new structures or new values and states of data. This processing depends on the analytic anticipatory propensity of data to be placed in relationship and to be changed in value. It is the formative aspect that enables computer systems to seem 'intelligent' and 'responsive' and it is its functioning in the lingual aspect that makes such intelligent responsiveness meaningful to us. Knowledge, said Newell [1982], is 'generative'; the formative aspect explains why this is so. As we shall see later, this insight is useful in enriching Newell's ideas.
What is a computer program? We can immediately differentiate two uses of this concept. One is that set of written instructions, rather like a recipe or music score:
BEGIN.
Open Window.
Ask "Would you like me to greet you? ".
Obtain answer.
if answer = 'y' then Print "Hello World!" in Window;
else Print "Bye bye!" in Window.
Close Window.
END.
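For concreteness, the same written program might look like this in an actual high-level language such as Python (a minimal sketch, using the console in place of a window):

def greet():
    answer = input("Would you like me to greet you? ")   # ask and obtain answer
    if answer == "y":
        print("Hello World!")
    else:
        print("Bye bye!")

greet()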
The other is the program actually running in the computer, when our example actually prints 'Hello world!' in a window on screen. The former 'exists' at least as long as does the medium on which it is written (be it paper, disk, CD), and may be said to be lingually qualified. The existence, or rather the nature, of the latter is more problematic. What is the 'existence' of the program as it actually runs on the computer? Running a program involves several stages and the bringing into being of new entities: the file that contains instructions like the above, the compilation of the above readable instructions into machine code instructions in a separate file, the machine code in computer memory, and the execution of that code. (In interpretive languages like Basic, and in visual languages like LabVIEW [Green, 1996] or Istar [Basden and Brown, 1996], the computer seems to run the actual text or graphical symbols of the written program, but this is not so, and some translation into machine code always occurs.)
How do all these relate to each other? In our previous discussion of the aspects of the computer, it is the machine code in memory being executed that we have been talking about -- the program-in-memory -- and we treated it as a psychic aspectual being enkaptically bound within the meaningful whole that is the computer system. But what of the written form (program-written), on paper and in file, and the file containing the machine code? And if that code is executed twice (perhaps with a different user's answer each time) is it more useful to see these two executions as same being or different beings?
In Dooyeweerd's discussion of performance art [NC,III:110], we might find useful insight:
"It would be incorrect to assume that all works of fine are display the structure of objective things. This will be obvious if we compare plastic types (i.e. painting, sculpture, wood carvings, etc.) with music, poetry and drama. "Works of art belonging to the last category lack the constant actual existence proper to things in the narrower sense. They can only become constantly objectified in the structure of scores, books, etc. .. such things as scores and books, are, as such, symbolically qualified. They can only signify the aesthetic structure of a work of art in an objective way and cannot actualize it. This is why artistic works of these types are always in need of a subjective actualization lacking the objective constancy essential to works of plastic art. Because of this state of affairs they give rise to a separate kind of art, viz. that of performance, in which aesthetic objectification and actualization, though bound to the spirit and style of the work, remain in direct contact with the re-creating individual conception of the performance artist. The latter's conception, as such, cannot actualize itself in a constant form, though modern technical skill has succeeded in reproducing musical sound-waves by means of a phonograph."
(Remember that Dooyeweerd's use of 'objective' and 'subjective' has nothing to do with whether something is fact or opinion; see {D.3.3.4.1}.)
A computer program is not fine art, but there are striking parallels.
Just as there are many 'languages' in which to write music, so there are many languages in which to write programs. The program may be written in what used to be called a high level language, from the classic Algol to the recent Java, all of which express what the program should do in terms of raw data and their structures and processing (which we held to be meaningful at the analytic and formative aspects). Such written programs must then be 'compiled' into (some version of) machine language, in which the instructions concern how bit patterns in memory and registers are manipulated as bit patterns rather than as data. It is at this level that the program is run. The same program may also be written in assembler language, which is a way of expressing machine code in symbolic form. As we have made clear, the written program must not be confused with the program as such as it resides in computer memory and is (ready to be) run.
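The gap between the written form and the lower-level form that is actually run can be glimpsed with Python's own disassembler (a sketch; Python compiles to bytecode for a virtual machine rather than to real machine code, but the principle is the same):

import dis

def add(y, z):
    x = y + z        # the written, humanly readable form
    return x

dis.dis(add)         # prints the lower-level instructions actually executed,
                     # e.g. LOAD_FAST, BINARY_ADD (or BINARY_OP), STORE_FAST, RETURN_VALUE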
Music is qualified by the aesthetic aspect; programming is not. But there is an aesthetic in writing programs. Computer scientist Donald Knuth recalls [====:130]
"I got hold of a program .. written by Stan Poley. That program was absolutely beautiful. Reading it was just like hearing a symphony, because every instruction was sort of doing two things and everything came together gracefully. I also read the code of a compiler that was written by ..: that code was plodding and excruciating to read, because it just didn't possess any wit whatsoever. It got the job done, but its use of the computer was very disappointing. So I was encouraged to rewrite that program in a way that would approach the style of Stan Poley. In fact, that's how I got into software."
But the aesthetic of writing a program is not the same as the aesthetic in writing music. The symbols of the music score signify that which is aesthetic, the music itself. But the symbols of a program signify the job to be done, and it is mainly the style in which that job is done, and the style in which the program is written, in which the program's aesthetics is found.
That aspects are irreducibly distinct in their meaning gives the program designer an enormous freedom in how they create their program. The irreducibility between the lingual aspect of aboutness and the formative aspect of process and structure is particularly important. Any given data structure may be used to represent a variety of meanings, and vice versa. For example, "
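As one hypothetical illustration (our own sketch, in Python): the same simple structure of pairs can stand for quite different things, and nothing in the structure itself decides which:

pairs = [("alpha", 12), ("beta", 7), ("gamma", 43)]

# Read as stock levels in a warehouse ...
stock = {name: quantity for name, quantity in pairs}

# ... or as marks awarded in an examination: identical formative structure,
# entirely different lingual meaning (aboutness).
marks = {student: score for student, score in pairs}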
Note that the process of programming usually involves running a program development system (PDS), which might comprise, for example, a text editor and a compiler. Therefore, programming is seen as an application task, of the type that we discussed in {F1}, and the programmer is seen as a user of the PDS -- but not of the program s/he is writing. This fascinating process of programming, in its wider context of knowledge elicitation and system development, will be the topic we seek to understand in chapter {F3}.
By reference to the psychic to lingual aspects we were able to provide a useful definition of bits, data, information and knowledge. This led to understanding the Being of the computer system as an enkaptically bound meaningful whole, in which even the thoughts of the user could be meaningfully involved. We considered the dynamics of the computer in the various aspects, and then the nature of the computer program, both as resident in the computer and in its written version, by likening it to music. The irreducibility of aspects gives the programmer much freedom.
Computer systems seem to exhibit greater complexity than any of the types of entity that Dooyeweerd himself discussed (the linden tree, the marble sculpture, utensils, books), partly because of their activity, partly because they have more aspects -- and in addition, we must then take account of the applications aspects (ERC and HLC). Therefore, we may offer computer systems to Dooyeweerdian philosophers as a case study that might raise issues that Dooyeweerd himself never saw or clarify issues that he only glimpsed, and thus extend or refine Dooyeweerdian theory.
The two Dooyeweerdian ways of viewing computer systems above, as aspectual functioning or aspectual being, are largely equivalent, and it is best to use them together. They have different characteristics and sometimes we will find one more helpful, sometimes the other. The view based on aspectual being enables us to speak about computers in nouns rather than verbs, such as allocations of memory, which enables us to speak about parts more easily. It also opens up for us the issue of implementation and encapsulation. But under this view, there tends to be a separation between things and their activity that can be unhelpful, making it difficult to account for virtual data. The view based on aspectual functioning draws no distinction between virtual data and data as stored at the formative and later levels, though it does allow us to differentiate them at pre-formative levels. It is a distinct advantage of the aspectual functioning view that it clearly distinguishes between the computer's object-functioning in the post-physical aspects on the one hand and its subject-functioning in the physical aspect on the other, whereas under the aspectual being view the distinction is blurred and it is too easy to think of the computer as having or being made up of things in these aspects that have their own substance independent of human beings. Under the aspectual functioning view, it is always clear that the post-physical descriptions of the computer refer ultimately to human functioning. Finally, the view in terms of aspectual functioning integrates normativity with being, enabling useful guidelines for the design of computer systems to be drawn up.
First, the question what a computer 'actually is' presupposes a notion of Being. If we presuppose Being to be unitary and primary, as under the matter-form ground-motive, then we have no grounds other than dogma for differentiating between the several views, and must eventually end up denouncing all but one as 'wrong'. But since in Dooyeweerd, Being arises from Meaning, we have grounds for digging deeper than 'actually is', and since to Dooyeweerd such Meaning is multi-aspectual, we have grounds for allowing distinct views of what a computer 'is' to stand side by side. The being of computers thus arises from aspects, and as we have seen, its being is multiple.
Specifically, second, it seems that Schuurman is speaking of the computer's psychic aspect while Searle speaks of its analytic or formative aspect. That rationality occurs only within an aspect and never between aspects precludes the type of rational discourse between the two views that would conclude that one is right and the other wrong. However, since both fit within the aspectual framework, they can both be brought into multi-aspectual discourse, in which each accepts that the other is speaking about something different. On these grounds, we can find both views right and yet both wrong. Thus Schuurman, Searle and others are all correct in their views of what a computer 'is' -- but we must disagree when they imply that their view is more correct than the others, by using wording like "Rather, what it does is" [Searle, 1990:85] or "the basic structure of the computer is this" [Schuurman, 1980:21], because both close off debate.
Third, Dooyeweerd urges us to differentiate two possible thrusts of the statements about what a computer 'actually' is: do they refer to its subject-functioning (what the computer 'is' without reference to human beings) or to its meaningful functioning, regardless of whether it is subject or object (what a computer 'is' to us)? To Dooyeweerd, the computer only functions as subject in the physical (and pre-physical) aspects, and thus is 'in itself' merely physical. But it may function as object in any aspect, especially from the psychic to the lingual, which allows the multiple answers that we mentioned above.
"Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."
Our approach, which states that the computer's subject-functioning lies only in the physical aspect, gives validity to those who argue against strong AI.
But it also gives some validity to those who argue for strong AI, and that computers possess intentionality. To see why, we must first ask, is Searle right when he claims that a computer "has a syntax"? Is not this syntax also "solely in the minds of those who program them and those who use them"? The computer functions only as object in the formative aspect of syntax, and what we see as its syntax is also ascribed to it by the very people Searle speaks of. In like manner, we also question Schuurman's belief that the computer "basically" manipulates bits because he implies (though does not say) subject-functioning that is meaningful in the psychic aspect. However, we do affirm that the computer "has a syntax" and manipulates bits, and that computers can sense, distinguish, plan, create -- so long as we remember that these are object-functioning.
But if we affirm these for the psychic and formative aspects, can we not also affirm similar statements based on the lingual aspect? Statements like "The Prospector program found a molybdenum deposit" are as meaningful and as valid as "Jim Smith found a molybdenum deposit using Prospector" -- so long as we see the first as object-functioning and the second as subject-functioning. To do so is neither anthropomorphism nor a metaphor. In terms of meaning, computer and human are alike, because both function within the same meaning-framework, but they function in different ways, one as object, one as subject. Likewise, it is valid -- under lingual object-functioning -- to say that the computer 'knows', 'understands', 'has an intention towards', and the like.
We can compare the various views in table 2.1, which shows those of Schuurman, Searle and strong AI and the two Dooyeweerdian views: S-F is the subject-functioning, and M-F is the meaningful functioning (whether as subject or object).
>CE Table 2.1 Views of Functioning of Computer
>IG "Work:Research/Dooy/Book/Portions/tables/F2.AspsComputer.iff" -w4.27 -h1.75 -c -ra
Despite his claims to the contrary, it would seem from Table 2.1 that Searle is not very different from strong A.I. According to Dooyeweerd, the difference between humans and computers is not primarily a difference in causality (biotic or physical), as Searle contends, so much as a difference in the aspects in which each functions as subject. But Searle's suggestion might indeed show some validity, in that 'causality' could refer to the aspectual laws to which the entity responds as subject, and at the physical-biotic boundary the difference is that one functions as subject in the biotic aspect while the other does not -- but this is merely one of many differences.
If, however, we rephrase the question of whether a computer can exhibit certain human properties to "Could a computer ever be identical with a human (except in the material of which it is made)?", our answer must be "No!". This is not because of differences in properties, behaviour or 'being' but because of different subject-functioning: never will a computer function as subject in all aspects. But to put the question this way is meaningless, according to Dooyeweerd, because it presupposes that we can theoretically analyse the human 'self' (as distinct from what the self does, exhibits or is in an aspectual sense), and Dooyeweerd argued that the human self is beyond theoretical probing because it transcends the aspects within which theoretical analysis takes place (see {D.7.3.2}).
The implication so far is that we see the computer as something in which human beings have explicitly represented knowledge; what of subsymbolic computing such as neural nets? Here, the computer is not so much programmed as trained to give desired output to certain inputs. Both a KBS and a neural net (NN) have a number of inputs and outputs with a network of inferences from one to the other, but whereas in a KBS the nature and strengths of the inferences in the network are set to represent explicated expert knowledge, in a NN the weights are modified by the computer itself during a training period, in which thousands of sets of inputs are presented to it along with what the outputs should be for each set. A suitable algorithm works out how the weights should be modified for each training input. NN technology is useful for example for control of machinery and for recognition of speech, in which the input-output mapping is not known by any expert. The challenge for our framework is to account for its validity and its usefulness. Surely it cannot be seen as an object of human functioning in the later aspects, and yet it obviously functions in these aspects.
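A minimal sketch of the contrast (in Python: a single artificial neurone trained by the classic perceptron rule on made-up training data): the programmer writes only the training algorithm; the weights -- the 'knowledge' -- are settled by the computer itself during training:

# Training data: inputs and the outputs they should produce (here, logical AND).
training = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                                  # the training period
    for (x1, x2), target in training:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        # The algorithm, not the programmer, decides how the weights change.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)    # the learned 'knowledge', never explicated by any expert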
Thus our framework accommodates neural net computing. A possible benefit of it might lie in recognising that the mapping need no longer be seen as a fluid 'soup' nor as something mysterious but as a coherence of the aspects. Such a framework might also help guide the choice of input and output variables.
While our approach holds that the knowledge that is 'in' the computer is that which has been programmed, represented in some symbolic form (even when it might have undergone much subsymbolic transformation), this does not mean that the computer holds propositional knowledge, and certainly does not mean that all knowledge employed when using the computer system is "reducible to propositional knowledge". This is for two main reasons.
First, any knowledge that is 'in' the computer is not its own, but is always an object in human subject-functioning, an interpretation under one or more aspects of the computer's physical activity. Even that which has been generated by the computer's formative processing is still treated as knowledge only by reference to our subject-functioning in the lingual and other aspects. It is central to our approach that the computer system cannot be understood in isolation from its users or other humans, and our knowledge is not wholly propositional.
Second, the idea that the computer contains propositional knowledge presupposes the possibility that the proposition can capture perfectly and entirely what its writer intended. But, as has already been emphasized in {1.8}, this is impossible; especially, different people mean different things by the same symbol. Worse, what we usually mean by propositional knowledge in such debates has a strong abstractive Gegenstand ({D.5.4}), in which the human writer and reader of the proposition are no longer engaged with it nor with what it is about, so the proposition inherently distorts and narrows what it should represent.
Thus we affirm the main thrust of Adam's theme. But it may be that our view allows for a somewhat richer view of embodiment. As we saw in {D.5.3}, Dooyeweerdian philosophy contends that each aspect provides a distinct way of knowing. Most feminist writers who discuss embodied knowledge are at pains to distance themselves from the Cartesian view and its over-emphasis on analytic knowledge at the level of types of knowing, and thereby tend to emphasise a few types that counteract it, such as psychic knowledge (instinct). The Dooyeweerdian approach, by contrast, rejects the Cartesian view at the deeper level of ground-motive and can thus accept the Cartesian way of knowing alongside psychic knowing as just two ways among many others. See Fig. 2.4.
>CE Figure 2.4. Ways of knowing and embodied knowledge
A strong implication of this Dooyeweerdian framework is that all types of knowing should be able to be encapsulated in computer systems (as well as knowledge of all kinds of things), even though never perfectly. But how this may be achieved is not easy to see. Subsymbolic computing is offered, by its supporters, as the answer, but success in this seems to have been limited. Some feminist writers suggest robotics, on the grounds that the physical aspect is important, but Adam [ibid.:181] questions this, as we discuss in {F.5.4.1}. It may be achievable not so much by representation as by usage, and it may be that aspectually-oriented technologies, discussed in chapter {F4}, can help.
Support for this view may be found within the agent community, though not using Dooyeweerdian terminology. To Milewski [1997], the definition of an intelligent agent is becoming less clear. The general consensus in the past has been based on the traits and characteristics that an agent must possess, but such attempts at definition inevitably lead to confusion as people from different task disciplines argue about which traits are essential and which are not. According to Milewski's survey of traits argued for, agents should be asynchronous and autonomous, change their behaviour according to accumulated knowledge, take initiative, make inferences, know about general goals and preferred methods, and have natural language interfaces and personality.
Milewski suggests a new perspective from which to understand what agents are: that of delegation. This perspective focuses on the relationship between the user and the agent and resonates strongly with Dooyeweerdian philosophy. Milewski speaks in terms very like aspectual functioning, and indeed most of his suggestions align closely with certain aspects: benefits of delegation must exceed the cost (economic aspect), delegation depends on sophisticated interactive communication (lingual aspect), delegation requires trust (pistic aspect), performance controls are a key part of delegation (formative aspect), and delegation depends on personality and culture (which, in Milewski's case, refer to things that are meaningful in the social, aesthetic and ethical aspect ====do: check that in the ppr). We can thus see that most of the later aspects are explicitly included.
It might not be just coincidence that Milewski chose the word 'delegation' to express his perspective. It is a juridical term, implying some kind of responsibility, and is thus centred on some law-concept. This is precisely what Dooyeweerd's philosophy is: centred on the notion of Law and giving primacy to law-side over entity-side ({D.1.4.3}). Thus we see, in Milewski's work, a move away from an entity-orientation towards a law-orientation, a move away from the traditional position to one akin to Dooyeweerd's.
# say: this section is for those who are technically aware of the issues
# so what? this shows how aspectual analysis can be applied
We must first understand the Internet in terms of its aspects. The physical aspect covers the electrical functioning of conductors spanning continents, electromagnetic waves passing between earth and satellites, and, in each computer, the operation of millions of P-N semiconductor junctions and field effects. The 'organic' aspect covers the hardware connection, such as modem, cables, phone socket and the telephone exchange making a direct connection to the modem of the Internet Service Provider (ISP), and also the more or less permanent connections this computer's communications devices have with other computers, either by cable or by satellite, and so on throughout the Internet. The psychic aspect covers the bit streams that flow as signals between my computer and the ISP computer; at this level the modems, the telephone exchange, the cables and the satellites are all invisible. The analytic aspect covers the data that flows back and forth, the formative aspect covers its structure and purpose, and the lingual aspect covers its application content, such as the content of web sites.
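A minimal sketch of the difference between these aspects of the traffic (in Python; the bytes are an illustrative fragment of a web response): at the bit-stream level the cables, modems and satellites have already disappeared from view, and at the level of content even the bytes disappear:

# What arrives at my computer, seen at the bit/byte level (psychic aspect): a stream.
stream = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<p>Hello from a web site</p>"

# Analytic and formative aspects: distinguishing and structuring the stream
# into header and body.
header, body = stream.split(b"\r\n\r\n", 1)

# Lingual aspect: the content the web site is actually about.
print(body.decode("utf-8"))      # <p>Hello from a web site</p>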
The ISO layers may be seen as relating to specific aspects, as shown in Table F2.8.8.
But to understand the conundrum that my computer is part of the Internet at the same time as the Internet is part of my computer, we need Dooyeweerd's entity theory. My computer may be said to be part of the Internet when viewed from the psychic aspect of both being source or destination of streams of bytes. But from the lingual aspect of content the Internet may be said to be 'within' or part of my computer. However, neither view is entirely satisfactory, and the relationship between any single computer and the Internet is not a part-whole relation but one of correlative enkapsis. In this kind of enkapsis, the individual entities like computers together form an Umwelt, which cannot exist without them, while they cannot exist (as connected computers) without it.
Understanding this as aspectual progress can help us in two ways in research strategy. First, recognising that each works in a different aspect, the Dooyeweerdian approach suggests that we would be wise to take account of the particular strengths and limitations of each aspect, and in particular the way in which it is not absolute. Second, recognising that there are yet further aspects, such as the social, might suggest future different types of search engine.
Aspectually guided research strategy is not restricted to search engine research, of course, but may be applied in like manner to many areas.
Analog computers in which voltages and currents carry symbolic meaning may be accounted for, not by reducing continuous voltages to binary numbers but more directly, by treating the voltages as themselves symbols that signify some continuous amount (e.g. 0 - 5 volts might map to a level of activation). This assigning of lingual meaning (semantics) to voltages involves the intervening aspects, but in a rather simple one-to-one way, with the result that it can be difficult to separate out the different aspects:
Device level = physical aspect,
'Logic' level (bit, signal) = psychic aspect,
Symbol level = analytic and formative aspects
Knowledge level = lingual aspect
and between Newell's notion of levels and Dooyeweerd's theory of modal aspects:
Further, four similarities may be found in the two approaches. First, to Newell the role of a symbol is to give 'distal access' to an entity; this has echoes of Dooyeweerd's idea that meaning is 'referring beyond' ({D.1.4.2}). Second, "the knowledge level does not itself explain the notion of aboutness; rather, it assumes it." [Newell, 1982:123] This is reminiscent of the kernel meaning of the lingual aspect (signification) as graspable not by theoretical thought but only by intuition ({D.3.2.1}). Third, Newell claimed that his levels are not derived from a priori theory but derived primarily from years of practice in artificial intelligence [ibid.:92]; Dooyeweerd's aspects are derived from years of reflection on everyday life ({D.1.4.5}). Fourth, Newell made a strong ontological claim for his suite of levels (though he recognised that this claim could "range from dead wrong to slightly askew, in the manner of all scientific claims" [ibid.:99]). Likewise, Dooyeweerd made a similar, though subtly different, claim, that the aspects are not just a point of view, though his suite of aspects should be subject to criticism and refinement (Dooyeweerd, 1955, II:556). The subtle difference is that the aspects cannot be said to 'exist' so much as 'pertain', since they are the very framework that makes existence possible ({D.4.3.3}).
That Newell's logic level corresponds with the psychic aspect (our bit level) becomes clear when we recognise that psychic activity involves states of neurones, signals sent along axons and signal processing. Our description of the inner workings of the computer at the logic level is almost exactly in these terms: digital states, signals and signal processing. However, we do not claim that computer memory cells and processors actually function as subject in the psychic aspect. Rather, the computer functions as object in the post-physical aspects; it functions as subject only in the physical aspect (electric charges etc.) and we interpret that physical functioning as states and signals. The same applies to later aspects.
The single symbol level corresponds with two aspects, the analytic and formative. In computer science this manifests itself in the difference between basic types of data -- integer, boolean, text, etc. -- and data structures and algorithms. Interestingly, Newell had trouble (1993) with a dichotomy within his symbol level that he could not account for (and indeed tried to deny), but which might be explained by the difference between the analytic aspect of distinction and the formative aspect of deliberate shaping.
Finally, we missed out the circuit level (hardware) and the biotic aspect, yet they occupy the equivalent position in each suite. Might they correspond with each other? If we can only see the biotic aspect as life functions, then the answer must be "No." But if, instead, we define a limited version of the biotic aspect (which we may call the organic aspect) as concerned with maintaining the integrity of the organism distinct from its environment, then they might indeed correspond. We might note Dooyeweerd's own difficulty in identifying the biotic aspect of Praxiteles' sculpture (1955,III:112ff).
Nevertheless, despite these differences, we have good reason to propose that Newell's levels are remarkably similar to Dooyeweerd's aspects.
We can also extend or enrich Newell's theory in a principled manner. For example:
Newell held that knowledge is generative. We have seen how this can be understood in terms of the formative aspect, by which the pieces of data in the computer are processed in pursuit of some end, and this increases our knowledge as we use the computer. But Newell linked generativeness to logical closure (the total of whatever could conceivably be deduced from the pieces of data represented in the agent -- which could be infinite in amount), which presupposes complete absence of purpose in whatever logical deductions are made, and which in turn suggests a pre-formative aspect, viz. the analytic. It requires the formative aspect to give the deductions some end. That Newell failed to differentiate between the formative and the analytic aspects here might account for why his definition of knowledge (as logical closure) was so counter-intuitive.
Second, we saw that Newell wished to account for the apparent loss of determinism between the symbol level and knowledge level behaviour, by resorting to logical closure and a dogma that the knowledge level has the special property among levels of being incomplete. In this he merely side-stepped the issue. Dooyeweerd, by contrast, provides a framework in which neither determined nor non-determined behaviour requires explanation, because he refused to accept the Nature-Freedom dualism. The computer's physical subject-functioning is deterministic, but its object-functioning in the later aspects, especially the lingual (knowledge level) is our interpretation of this, and our own functioning in these aspects is not determined. For example, it is not determined whether we interpret a circle on screen as a letter 'O' or a number '0'.
But this does not yet quite solve the problem. What Newell was trying to account for was that an agent's own knowledge level behaviour is non-determined while the agent's symbol level behaviour is determined, where 'agent' could be either human or computer. But Dooyeweerd urges us to question the two presuppositions we noted earlier ({F2.2.7.2}).
Thus our earlier questioning of Newell's assumption that the knowledge level behaviour of the computer is non-deterministic is brought into focus: the subject-functioning of the computer is determinate (physical aspect), and its knowledge level (lingual) behaviour is object-functioning, which is to say it is our ascribing of meaning to that physical functioning. Thus, if we set aside the non-determinacy of our ascription, the knowledge level behaviour of the computer would be deterministic. Likewise, our earlier question whether human symbol level behaviour is deterministic is brought into focus: human (subject-) functioning in the analytic and formative aspects is not determined. At the very least, though Dooyeweerd would affirm Newell in speaking of knowledge level behaviour of computers, he would question these assumptions of Newell's.
There is an increasing recognition of the irreducible levels we experience, for example in Bunge's system levels, Boulding's levels and Hartmann's strata. Humanist thinking seems to be reaching for a meaningful, cohering diversity, but has yet to recognise clearly that this is the case. Though the detailed work has yet to be carried out, it seems that the Dooyeweerdian notion of aspects is able to account for and enrich these theories and provide a sound philosophical foundation for them.
I first discovered Newell's levels in the early 1980s, before I returned to academic life and long before I discovered Dooyeweerd, and immediately liked them and felt they accounted for what I was experiencing in information systems at the time. When I returned to academic life, I used the levels to structure my undergraduate and postgraduate teaching in a number of modules, in order to ensure that I covered a wide range of relevant issues in a way that did not confuse them, and to impart the 'wisdom' that integrates the human and ethical with the technical. I still do so. Table 3.3.3 shows how I structured various courses according to the levels (aspects); others might find it useful to do this. Stefik [====], likewise, now structures his treatment of artificial intelligence according to some of Newell's levels.
>CE Table 3.3.3. Structure of Courses
>IG "Work:Research/Dooy/Book/Portions/tables/F2.LevelsTeaching.iff" -w4.53 -h4 -c -ra
In several ways, Dooyeweerdian philosophy can help us understand the various views and place them in relation to each other. Dooyeweerd held [NC.III:109] that to seek a fixed point in temporal reality is to seek a mirage, a supposed thing-reality.
Both supporters and opponents of strong AI seek a "fixed point" in some kind of causality or "thing-reality", but because this is a mirage, suggests Dooyeweerd, both have problems in accounting for meaning (semantics) and must rely on dogma or mystical connection. But Dooyeweerd's approach, which presupposes meaning, does not need to find such a connection, since such connections occur especially in the inter-aspect relationships.
Dooyeweerd's notion of ground-motives ({D.6.4}) can throw light on the diversity of ways in which this question is addressed; see Table 3.2.1. Under the matter-form motive, X is mind and information, and Y is the physical matter of which the computer is made. Under this dualistic ground-motive, the only way to harmonise X and Y is by giving absolute priority to one and, if necessary, reducing the other to it. Materialists give priority to matter, while holders of the Cyberspace perspective give priority to mind. Under the nature-grace motive, X is the sacred 'divine spark' and Y is profane. The sacred-profane divide implies a normative and not just ontic divide, so those operating under this motive hold as a dogma that they must not attempt to see computers as similar to humans. Under the nature-freedom motive, X is non-determinacy and Y is determinacy. Various ways have been attempted to harmonise these. Some merely hold as a dogma that all freedom is illusory. Others suggest that even physical behaviour is non-determined owing to quantum probability. Newell (1982) tried to explain how the apparent determinativity of the symbol level was lost when we move to the knowledge level.
Searle's answer, that X is biological causality and Y is physical, and these are fundamentally different, however, does not fit into these ground-motives. As we have seen ====tbw above, he seems to offer no grounds for this difference, holding it as a dogma, and offers no explanation of why it is that biological causality can "process information" while physical causality can only "manipulate symbols". However, Dooyeweerd's notion of aspects solves both these problems. Each aspect enables a different type of 'causality' ({D.3.3.6}), which accounts for the fundamental difference between biotic and physical causalities. Inter-aspect dependency ({D.3.2.5}) explains why biotic causality is necessary for lingual functioning, and our view of the computer as aspectual object-functioning in all aspects explains why this is also true in a computer. Though Searle holds the dogma that human and computer are fundamentally different, in Dooyeweerd, as we have seen, the difference is maintained and accounted for in terms of subject-functioning, while the similarity is also maintained and accounted for in terms of meaningful functioning (see Table 2.1). Computers do not possess intentionality 'in themselves', but do exhibit intentionality when seen in terms of human functioning. Thus Dooyeweerd can not only provide philosophical underpinning for Searle's view but also take it further. In claiming that biological and physical causality are fundamentally different, Searle was perhaps reaching for what Dooyeweerd offered in his claim that the biotic aspect (sphere of law and meaning) cannot be reduced to the physical.
# ==== shorten this, and place somewhere around here: We will discuss this answer later, but it may be instructive to consider why it is that everybody seems to have missed it. Searle, those who suggested the first set of answers, and Boden all seem constrained by an Aristotelian substance concept (including not just static stuff but also dynamic process). The way the question is asked and answered presupposes some self-dependent thing that can possess in itself the property of 'understanding Chinese' or '(Chinese) semantics', or some self-dependent process or causality that generates intentionality. As Boden puts it [p.103], the main question they must all address is "What things does a machine (whether biological or not) need to be able to do in order to be able to understand?" The strong AI position suggests intentionality may be rooted in physical or logical causal processes, Searle claims it must be rooted in biological causal processes, and Boden suggests it is rooted in symbolic causal processes in which "the brain is the medium in which the symbols are floating and in which they trigger each other." [Boden:99]. But to say that the understanding of Chinese is in the rule book presupposes, not some self-dependent thing or causality, but rather an author of the rule book, which puts this answer into a completely different philosophical scenario, the grounds for which we discuss later.
>CE Table 3.2.1 Accounting for Extant Views of Human and Computer
>IG "Work:Research/Dooy/Book/Portions/tables/F2.AIviews.iff" -w4.27 -h2.12 -c -ra
A Dooyeweerdian framework can also overcome problems or mistakes in extant views. With a Dooyeweerdian approach we can perhaps resolve the problem in Newell's attempt to explain how determinacy is 'lost' between the symbol level and the knowledge level. Since aspects are irreducible to each other in their meaning, there is no inner causal link from one aspect to the next, and this breaks the supposed link of emergence between the symbol level and knowledge level. Since the lingual aspect is non-determinative, non-determinacy at the knowledge level needs no explanation. But, contrary to Newell, since the analytic and formative aspects are also non-determinative, so also is the symbol level. Thus we no longer need to define knowledge (counter-intuitively) as the logical closure of all that is represented.
Likewise, Boden [1990] accuses Searle of a category error in that it is people who possess intentionality and not brains or minds. Dooyeweerd would agree. But Boden offers no sound philosophical account of why it is valid to say that people rather than brains exhibit intentionality, whereas Dooyeweerd does so, in his stance that functioning in the aspects is carried out by meaningful wholes ({D.4.3.3}). He would also suggest that Boden's accusation is misdirected because she herself still seems to be assuming that intentionality arises from the self-dependent substance of human beings, whereas Dooyeweerd holds that our intentionality arises because we are subject to multi-aspectual law-promise ({D.3.3.11}).
# Boden p.13: sodium pumps.
# Dooyeweerd: Newell PSS and Searle and Boden -> untenable 'substance'.
There is also a problem with Searle's belief that computers can manipulate symbols but cannot process information. If we cannot allow that they process information, on what grounds may we allow that they manipulate symbols? According to Dooyeweerd, both their symbol manipulation and their information processing are object-functioning, in different aspects. There is no fundamental reason why we allow one and not the other (both object-functioning), but there is a fundamental reason why we agree with Searle that one should not be reduced to the other (distinct aspects).
Dooyeweerd's approach can also throw new light on Searle's [1980] Chinese Room thought experiment that we discussed in F2.1.x. "Where," asks Searle rhetorically, "in this room is the understanding of Chinese?" Because, as Dooyeweerd would see it, understanding involves some of the human aspects (analytic, formative, lingual and others; see {D.5.3}), we should seek to locate understanding (of Chinese) in some object of the human functioning that is understanding Chinese. This directs us very quickly to the rule-book itself (i.e. computer memory), which is a lingual expression of the meaning of the author of the rule-book. It is that author who understands Chinese, and it is her understanding that is expressed, which the Searle-in-the-room program follows. But, of course, we must remember what we stressed earlier: the understanding located in the rule book or computer memory, as functioning in a human aspect, is object- rather than subject-understanding. It is the author of the rule book who understands as subject-functor; the computer may only be said to understand Chinese as object-functor -- but such attribution of intentionality is not invalid.
That neither Searle nor Boden [1990] seriously examine this possible answer may be explained by their assuming a Cartesian rather than cosmonomic version of the subject-object relationship and, as mentioned earlier, a presupposition of some self-dependent substance-concept.
We see here an example of how a Dooyeweerdian framework can bring together apparently incommensurable views, because it repositions both views in a way that allows the claims of both some validity. This approach provides a basis on which debate among the various views may take place. Colburn [2000:80-81] sums up the debate about whether computers can understand with:
"If the idea that mental processing can be explained by computational models of symbol manipulation is an AI ideology, there seems to be a countervailing ideology, embodied by Searle and many others, that no matter what the behavior of a purely symbol manipulating system, it can never be said to understand, to have intentionality."
Dooyeweerd sought a philosophical method of dialogue that avoided clashes of ideology, not by denying ideologies but by understanding them using immanent critique and setting them within the same framework, so that they no longer, in Colburn's words, "talk past each other".
# We have examined one possible Dooyeweerdian view of what computers, information, programs, etc. might be, and related it to extant views of the nature of computers and to various issues. In line with Dooyeweerd's primacy of Meaning over Being, we did not attempt to find some fundamental 'essence' of the computer, but rather looked at what the computer means, cosmically. I believe his philosophy gives us an approach to the question of the nature of computers. We have explored one possible Dooyeweerdian answer and found that it is quite serviceable in a number of ways and can do justice to the variety of computers. For example, it provides a different approach to Searle's Chinese Room.
# We have used Dooyeweerd's novel notion of the subject-object relationship to understand the nature of computer systems. In contrast to the Cartesian notion of human, thinking subjects separated from inanimate objects, to Dooyeweerd the main relationship is that between subject and law, by which an entity responds to the laws of various aspects to generate subject-functioning, and an object responds to this subject-functioning to undertake object-functioning. So it is made clear that a computer system can function as subject only in the physical aspect, but that it can function as object in other aspects, particularly from the organic to the lingual, in response to human subject-functioning. Both subject- and object-functioning are meaningful, and so we can say meaningfully that a computer "displays text on screen", "works out a probability", or "knows that Paris is the capital of France". Such statements do not imply that the computer has its own subject-intentionality, but rather that it has object-intentionality.
This recognition of distinct aspects of the computer's functioning, especially when reified into the notion of aspectual beings, then leads us to something similar to Newell's levels but without some of the problems that his theory possessed; indeed Dooyeweerd can provide philosophical underpinning for Newell.
Boden MA (1990) "Escaping from the Chinese Room", pp.89-104 in Boden MA (ed.) The Philosophy of Artificial Intelligence, Oxford University Press.
Clapham AR, Tutin TG, Moore DM (1989) Flora of the British Isles, 3rd edition. Cambridge, UK: Cambridge University Press.
Langefors B (1966) Theoretical Analysis of Information Systems. Lund, Sweden: Studentlitteratur.
Marr D (1982) Vision. Cambridge, MA: MIT Press.
Milewski AE (1997) "Delegating to Software Agents", Int. J. Human Computer Studies 46:485-500.
Searle J (1980) "Minds, brains and programs", The Behavioral and Brain Sciences 3:417-57.
Searle J (1990) "Minds, Brains and Programs", pp.67-88 in Boden MA (ed.) The Philosophy of Artificial Intelligence, Oxford University Press; first published 1980 in The Behavioral and Brain Sciences 3:417-57; reprinted pp.282-306 in Haugeland J (ed.) (1981) Mind Design.
Tuomi I (1999) "Data is more than knowledge: implications of the reversed hierarchy for knowledge management and organizational memory", Proc. of the Thirty-Second Hawaii International Conference on Systems Sciences, IEEE Computer Society Press, Los Alamitos, CA.
Instructions ('machine code') are themselves bit patterns, in which subsets of bits control the working of the CPU. For example, the Appendix below gives three 68000 CPU instructions and their meanings.
The effect of these three instructions is to shift the bit pattern in a memory cell left by one byte (eight bit-places), the purpose of which is not known from this perspective. If the contents of that cell are meant to be a binary number, their purpose might be to multiply that number by 256, but if the contents are meant to be four ASCII characters, then their purpose might be to shift those characters one space to the left, losing the first character. What is important here is to note that the bit pattern could, in principle, mean any symbol - numbers, text, and so on. This emerges as a practical problem when trying to interpret memory dumps or disk contents.
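A minimal C sketch of this ambiguity (the particular bit pattern 0x41424344, the ASCII codes for "ABCD", is chosen purely for illustration) shows the same left shift read either as multiplication by 256 or as a character shift:

#include <stdio.h>
#include <inttypes.h>

/* Print a 32-bit word as four characters, most significant byte first
   (the byte order of the 68000). */
static void print_as_chars(uint32_t w)
{
    for (int shift = 24; shift >= 0; shift -= 8)
        putchar((int)((w >> shift) & 0xFF));
    putchar('\n');
}

int main(void)
{
    uint32_t cell = 0x41424344;   /* could 'be' the number 1094861636 or the text "ABCD" */

    printf("as a number: %" PRIu32 "   as characters: ", cell);
    print_as_chars(cell);

    cell <<= 8;                   /* the shift performed by the three instructions */

    printf("as a number: %" PRIu32 "   as characters: ", cell);
    print_as_chars(cell);         /* original times 256 (modulo 2^32), or "BCD" plus a NUL byte */
    return 0;
}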
The expression "z = x * y" could mean "force is mass times acceleration" or "total cost = price times quantity" or a host of other things, and conversely, the latter could be represented, for example, by:
"cost = price times quantity",
"z = x * y",
"c = p * q;", or
"c=0; for (j=0;j
The difference between these lies not in the symbolic meaning but in the structure and means of enacting the meaning (which is the formative aspect in service of the lingual). The first three differ merely by virtue of employing differently named symbols. The last uses a different way of calculating the cost, by repeated addition rather than multiplication (in the C programming language). A similar freedom is possible between other pairs of aspects, which is why computer programming is such a creative, fun and fascinating process for those who engage in it fully.
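As a minimal runnable sketch (the price 7 and quantity 5 are assumed purely for illustration), the two formative structures can be seen to enact the same lingual meaning:

#include <stdio.h>

int main(void)
{
    int p = 7;      /* price (value assumed purely for illustration)    */
    int q = 5;      /* quantity (value assumed purely for illustration) */
    int c, j;

    /* Cost by multiplication */
    c = p * q;
    printf("by multiplication:    c = %d\n", c);

    /* Cost by repeated addition, as in the last fragment above */
    c = 0;
    for (j = 0; j < q; j++)
        c = c + p;
    printf("by repeated addition: c = %d\n", c);

    return 0;
}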
1.14 Reflections
We have set out one possible Dooyeweerdian framework for understanding the nature of computers, information and programs, based on the aspectual object-functioning of the computer as it is meaningful as part of the user's subject-functioning. To achieve this, we worked from the aspects of HCI to yield an understanding of the multi-aspectual nature of the user interface, and from there to the aspects of what we called the 'innards' of the computer, which we cannot directly experience through our senses. The reliability of computers arises from the determinacy of the latest aspect in which the computer functions as subject, the physical, but the usefulness of the computer arises from its predictable functioning in its object aspects. The meaning we give it within these later aspects may be said to be 'inscribed' in its physical materials and processes.
2. Some Issues
We are now in a position to consider a number of issues.
2.1 Clarifying what a Computer 'actually is'
We noted that while both Searle and Schuurman sought to differentiate computers from intentionality, they disagreed over what a computer 'actually is'. Can the aspectual framework we have developed provide grounds on which such disagreements may be resolved or at least discussed?
2.2 The Intentionality of Computers
In the lingual aspect, a computer does not function as subject but only as object. So, we mainly agree with Searle when he says [1990:83]:
2.3 Subsymbolic Computing
The issue:
2.4 On Bodies and Non-Propositional Knowledge
It should be clear by now that the nature of computers is tied closely to the nature of knowledge. Adam [1998:180] maintains "The way that a number of aspects of knowing are not reducible to propositional knowledge, but rely instead on some notion of embodied skill, points to the role of the body in the making of knowledge." Thus far it might seem that this framework assumes that the computer contains propositional knowledge, on the grounds that we have given pride of place to the lingual aspect of symbolic representation. But this would be to fundamentally misunderstand it. Dooyeweerd rejected the Cartesian view of knowledge ({D.5.1.1}) that Adam and other feminist thinkers also reject.
2.5 Agents
====tbrw
In the agent perspective, we see the computer as able to undertake human-like tasks (whether or not it is seen as possessing intentionality). In Dooyeweerd's view, this amounts to the computer functioning in all aspects, doing so as object. This has been discussed in chapter {F1}, and the nature of the computer is that of an object-functor in these aspects.
2.6 An Analysis of The Internet
# say: users and IS developers increasingly see IN at symbol level and knowledge level. Esp this is marketed by society as 'the grand solution===='. It hides the psychic bit level at which time and distance of access are visible. This leads to silly expectations. When developers have this view it is dangerous - and increasingly they do since it is marketed that 'oh its easy to develop appls for IN.'
#==== ensure that time and distance of access are mentioned in bit level of innards.
# along with this: semantic web. ====
Psychic (bits)
2.7 Envisioning and guiding development of new ideas
Early Internet search engines mainly compared words or aggregations thereof. Current engines like Google also take into account the number of links to each page, which is to say, the structure of the Internet. Now research is underway to produce search engines that take the semantic meaning of the content of pages into account. This may be seen as progress along the sequence of aspects in which the search engine mainly functions: from the analytic (distinct words), to the formative (structure), to the lingual (signification) [====].
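A toy C sketch may make the first two steps of this progression concrete (the page text, the query word, the link count and the weighting are all invented for this sketch; the lingual step, scoring by what a page signifies, is precisely what current research seeks to add):

#include <stdio.h>
#include <string.h>

/* A page reduced to what the first two kinds of engine can 'see':
   its text (analytic: distinct words) and how many pages link to it
   (formative: the structure of the web). */
struct page {
    const char *text;
    int incoming_links;
};

/* Count occurrences of a query word in the text (crude substring match). */
static int word_matches(const char *text, const char *query)
{
    int count = 0;
    size_t len = strlen(query);
    const char *p = text;
    while ((p = strstr(p, query)) != NULL) {
        count++;
        p += len;
    }
    return count;
}

int main(void)
{
    struct page pg = { "dooyeweerd on aspects, computers and aspects", 12 };
    const char *query = "aspects";

    int analytic_score  = word_matches(pg.text, query);             /* word matching only */
    int formative_score = analytic_score * (1 + pg.incoming_links); /* weighted by links  */

    printf("word-match score:    %d\n", analytic_score);
    printf("link-weighted score: %d\n", formative_score);
    return 0;
}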
2.x Hacking
Other aspects come into play in hacking and the breaking of security features. For example, a user's passwords can sometimes be guessed by knowing something of the life of the individual. Much of the hacker's craft involves finding unexpected ways to achieve things. Dooyeweerd's suite of aspects might help guide us in guessing what these unexpected methods might be. We will not discuss this further, but merely note that Dooyeweerdian philosophy does at least provide a useful framework by which to consider this issue.
3. Relating to Extant Frameworks
A Dooyeweerdian framework can accommodate a number of extant perspectives on the computer, and it also provides a basis for examining the views themselves to ascertain their strengths and possible weaknesses.
3.1 The Hardware Perspective
The hardware perspective is readily accommodated within our Dooyeweerdian framework, in viewing the computer from the 'organic' and perhaps the physical aspect. Hardware errors are explained, as above. But because this perspective is now just one among several, it no longer needs to carry the burden of accounting for all else that we experience, such as information. Ownership and rights, however, are seen as vested not in the hardware itself but in the juridical aspect.
3.2 The Bit-Manipulation Perspective
The bit-manipulation perspective is likewise accommodated under the Dooyeweerdian approach, as the psychic aspect of computers. Its limitation, that we cannot unambiguously interpret bit patterns as types of symbol or as what they refer to, is precisely what we would expect from the psychic aspect being an irreducibly different sphere of meaning from the analytic and lingual aspects.
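A small C sketch illustrates the point (the bit pattern 0x42480000 is arbitrary; nothing at the bit level itself decides which reading is 'right'):

#include <stdio.h>
#include <inttypes.h>

/* The same 32 bits read under three different typings. */
union cell {
    uint32_t as_uint;
    float    as_float;           /* assumes the common 32-bit IEEE-754 float */
    unsigned char as_bytes[4];
};

int main(void)
{
    union cell c;
    c.as_uint = 0x42480000;

    printf("as unsigned integer: %" PRIu32 "\n", c.as_uint);
    printf("as IEEE-754 float:   %g\n", c.as_float);     /* 50.0 on IEEE machines */
    printf("as four bytes:       %02x %02x %02x %02x\n", /* order depends on the machine */
           c.as_bytes[0], c.as_bytes[1], c.as_bytes[2], c.as_bytes[3]);
    return 0;
}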
3.3 Enriching Newell's Theory of Levels
Towards the end of his paper Newell attempted to found his proposal of a knowledge level on Dennett's (1978) notion of the intentional stance, which in turn was based on Brentano's (1874) concept of intentionality. But there are significant differences from Dennett, and Newell called for closer analysis (1982:123). Newell's ontological claim for his levels, for example, sits uncomfortably with Dennett's notion of a stance, and Newell has more levels than Dennett has stances. It may be that Dooyeweerd provides a sounder philosophical foundation than would Dennett.
3.3.1 Similarities between Dooyeweerd and Newell
The reader might already have detected the similarity between Newell's levels and some of Dooyeweerd's aspects, as set out in the correspondence listed earlier.
3.3.2 Differences
There are also some differences between them, such as that Newell did not recognise what Dooyeweerd called anticipatory dependency. Also, some level-aspect correspondences are weaker than others. While the device level, concerned with physical materials, is obviously the physical aspect of the computer, and the knowledge level's concern with 'aboutness' matches the lingual aspect's 'signification', the correspondence between other levels and aspects exhibits a little tension.
3.3.3 Enriching Newell's Notion of Levels
With this foundation, we can first affirm Newell's notion of levels. While some, such as Searle, would question whether it is even valid for Newell to talk about knowledge level behaviour of the computer (physical causality, "does not process information") and symbol level behaviour of the human (biotic causality), Dooyeweerd affirms it is valid and meaningful for Newell to speak in such terms.
3.3.4 Newell's Theory of Knowledge
We noted earlier that while Newell's theory of levels is intuitive, his theory of knowledge exhibited problems. Can Dooyeweerd help address these problems?
3.3.5 Reflections
In short, Dooyeweerd fulfils Newell and what he seemed to be reaching for in his theory of levels. The reason he does so may be explained if we consider that what Newell was aiming at was what Dooyeweerd offers: a philosophical stance in which meaning rather than being or process is fundamental, in which deterministic and non-deterministic sit comfortably side by side rather than being torn asunder (as in the nature-freedom ground-motive), and in which the coherence of diversity is presupposed and needs no explanation.
3.4 Relating to Artificial Intelligence
Most who have addressed the artificial intelligence question of whether computers are like human beings may be seen to have presupposed that we can answer it by seeking some substance, process or type of causality that, in itself and on its own, can explain the difference or similarity -- that is, in terms of Immanence Philosophy. Humans exhibit behaviour or property X and computers exhibit Y, and then the question is to what extent and in what ways X = Y.
"The inner restlessness of meaning, as the mode of being of created reality, reveals itself in the whole temporal world. To seek a fixed point in the latter is to seek it in a 'fata morgana', a mirage, a supposed thing-reality, lacking meaning as the mode of being which every points beyond and above itself. There is indeed nothing in temporal reality in which our heart can rest, because this reality does not rest in itself."
3.x Systems Theory
====: S.T. was moved to D.4.7 15 December 2003. Systems theory has played a major part in providing frameworks for understanding computer systems. We will briefly compare and contrast it with Dooyeweerd's Theory of Entities, and suggest how it may be enriched by Dooyeweerd's theory.
F2.8 Conclusion
# Thus we have a framework in which we understand computers, information and programs to 'be' meaningful functioning in various aspects from the physical to the lingual, functioning as object in all except the physical aspect.
References
Alavi M, Leidner DE (2001) "Review: Knowledge management and knowledge management systems: conceptual foundations and research issues", MIS Quarterly 25(1):107-36.
Appendix: Von Neumann Machine
Most computers to this day have a Von Neumann architecture, in which the CPU has a set of registers (in the 68000, data registers and address registers) and operates on a single memory whose addressed cells hold both instructions and data. For example, in the 68000:
0010 011000 010001
means "copy the contents of a longword (32 bits) beginning with the memory cell whose address is held in address register 1, into data register 3",
1110 0001 1000 0011
means "shift all the 32 bits in data register 3 left by eight places",
0010 001010 000011
means "copy the bit pattern in data register 3 into the memory cell whose address is held in address register 1".
This page is part of an attempt to forge a framework to understand information systems using Herman Dooyeweerd's ideas, within The Dooyeweerd Pages, which explain, explore and discuss Dooyeweerd's interesting philosophy. Email questions or comments would be welcome.