Andrew Basden
Information Systems Institute, University of Salford, Salford, M5 4WT, U.K. A.Basden@salford.ac.uk
Copyright (c) Andrew Basden, 2001, all rights reserved
Newell's knowledge level is not always easy to understand, and most have used only part of his theory. Surprisingly, there seems to have been little discussion of the notion as such; people have merely accepted and used it. This paper undertakes a 'rational reconstruction' of Newell's theory to make it more accessible, identifying a set of 'key points' to which reference may be made. It then compares his suite of levels with others (finding it to be more comprehensive than most) and surveys the uses to which Newell's notion of levels has been put. The conclusion is that Allen Newell did the information systems communities a great and lasting service when he published The knowledge level.
Keywords: Knowledge level, Symbol level.
Ten years later a special issue of Artificial Intelligence contained his short Reflections on the Knowledge Level [RKL] but, sadly, Allen Newell did not live to see the subsequent growth in use of the concept during the 1990s. He died of cancer in July 1992 after a full and fruitful scientific career, as celebrated by the late Herbert Simon [88].
The knowledge level lives on and shows no sign of losing its influence twenty years later, because it seems to be what people had been looking for without knowing it. There is every possibility that it will last another twenty years. Allen Newell made the intuitive idea explicit, gave it a name, linked it to ideas that were already accepted, and presented a painstaking analysis of the new idea. With clarity, he has offered us a coherent and structured foundation for much scientific work in many areas of information systems.
However, the notion of the knowledge level itself has seldom been discussed or critiqued. It has just been welcomed, accepted and used. "Newell's principal surprise (and regret)," said Bobrow [16], "was that this way of looking at systems had never been worked out in any technical detail - a step he thought necessary to clarify the power of this distinction [between knowledge and symbol levels]." It is time to look at Newell's notion more critically. Despite its influence and coherence it does have problems that must be examined and it should be placed on a philosophical foundation to prepare it for the next twenty years.
In this paper, and the companion paper, hereafter referred to as [CP], we try to provide a celebratory critique of a great idea by a great man. One considers the twenty years since Newell's paper The Knowledge Level was published, the other the next twenty years. Both can be read independently if the reader desires.
In this paper we consider Newell's notion of levels, especially the knowledge level, in some detail, so as to understand why the theory has taken the shape it has. We identify its key points, identified in the text by 'kp', and the relationships between them. Appendix 1 summarizes them and shows pages in the original paper where they are mentioned. The reader might find it useful to refer to the appendix while reading.
In his paper, Allen Newell took the reader on a journey, developing his argument, exploring a few side turnings on the way. But that did not yield the greatest clarity (as Newell himself admitted ten years later [RKL]) because several major themes were woven together as the argument progressed. Shall we obtain a semblance of clarity by ignoring inconvenient bits? No! We will retain all the key points of Newell's theory and present a 'rational reconstruction' thereof that separates out the main themes and omits some side turnings. We consider how the key points are related to each other and to the work of others, and try to make them relevant to today's situation. Occasionally we go beyond Newell's own exposition, but where we do so this is made clear (e.g. as 'Deduced' in appendix 1). We will use Newell's examples and reasoning where he gave any but provide our own where he did not. The major source for our discussion is The Knowledge Level [KL], but his Reflections on the Knowledge Level [RKL] and Unified Theories of Cognition [UTC], ten years later, provide a little extra material.
The notion of levels did not start with Newell. It has always been important. Many suites of levels have been proposed and we compare some of the more recent ones with Newell's. We discover that Newell's suite is the most comprehensive in both scope and execution. Some suites are muddled and even not true levels, and our discussion demonstrates a critique that we can apply to future suites. We also discover the need for a sixth level that covers social or cultural meaning and propose one, the tacit level.
Then we survey how Newell's theory has been used over the last twenty years, showing a usefulness far greater than Newell himself anticipated. Major interest comes not only from the artificial intelligence community but also from the human factors and the business and management science communities. This survey indicates which key points have been important and includes writers' comments that might help us clarify, enrich or refine them.
In the companion paper we find that the main parts of Newell's theory stand. While the wide use of his idea speaks of its greatness, it is not until we get to the end of the second paper that we appreciate just how far-sighted Allen Newell was, and just how far he was able to discern the importance of issues that were to arise many years later outside the paradigm in which he was working.
This suggests that Newell's notion of the knowledge level, and of levels in general, is a significant one with potential for longevity. While Newell himself saw it as a proposal for predicting behaviour, it is perhaps more like a paradigmatic idea that can stimulate, legitimate and guide whole new research programmes.
(Newell's term, 'logic level', is unfortunately misleading since it refers not to symbolic logic but rather to that found in digital electronics. Moreover, it combines two sub-levels that had previously been thought of as distinct levels: logic circuit, whose medium is single bits, and register-transfer, whose medium is bit vectors. Therefore, from this point, we will refer to it as the bit level. Later, when we link Newell's levels to linguistics, we will rename device level as materials level and circuit level as component level.)
In [UTC] Newell suggested that characteristic response times increase by orders of magnitude through the sequence of levels: in human beings the equivalent of the bit level operates on milliseconds, the symbol level on seconds, the knowledge level on minutes, etc.
In [KL] Newell used the first four levels (derived from his earlier work, [13]) to discuss the characteristics of levels, then introduced the knowledge level, showing that it had some surprising characteristics. Here we include the knowledge level in our discussion from the start.
A level is a level of description (kp:descr), a distinct way of seeing or describing a computer system (or indeed any thing). It provides a set of concepts and vocabulary for discussing that system that includes [KL:95] "a medium that is to be processed, components that provide primitive processing, laws of composition that permit components to be assembled into systems, and laws of behavior that determine how system behavior depends on the component behavior and the structure of the system" (kp:sys, kp:med, kp:compn, kp:lawc, kp:lawb).
Different levels describe the same system, not different parts thereof (kp:same). They describe it in different yet equally valid ways (kp:valid) - e.g. "The Prospector system found molybdenum deposits" (KL) and "The Prospector system used probabilistic reasoning" (SL). "Neither of these .. definitions of a level is the more fundamental. It is essential that they both exist and agree." [KL:95] A description at a level is complete, in the sense of not leaving gaps that must be filled in by reference to descriptions from other levels (kp:compl).
What use are levels? Two types of description of a computer system particularly interest us: interpretation of what it is doing, by a user or observer of the system, and specification of what it must do, by its designers and developers. So at each level we can both predict behaviour of existing or specified systems (kp:pred) and design new systems (kp:des), and distinguishing the levels lends clarity to both these tasks. "Computer systems levels are realized by technologies" [KL:97], each different (kp:tgy). Thus, for example, a knowledge engineer would work at the knowledge level to design a knowledge based system, a programmer would work at the symbol level to design the knowledge representation software, a systems programmer would work at the bit level, maybe in assembler language, to create the lowest level routines for it and the operating system that runs it, an electronic engineer would design the hardware to run it on, and the materials technologist would design ways of obtaining purer silicon.
It is the relationship between the levels that enables implementation of a level by lower levels (kp:impln). It is one of dependency: lower levels are necessary to higher levels (kp:dep). Implementation is always in terms of lower levels and involves taking a description at one level (a specification) and deciding how to realize it using the medium etc. of the level below. For example, knowledge is implemented in (represented by) symbols, symbols are implemented in bit patterns in memory, bits are implemented by voltages and currents held by conductors and components, which are implemented in (manufactured from) physical materials. We can see that such 'implementation' takes a distinctly different form at each level. So if a system has a description at one level then it will always be possible to describe it at the next lower level and, through the sequence of levels, to realize it as a physical system.
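The implementation chain just described can be traced in a few lines of code. This is my own illustrative sketch, not Newell's: a knowledge-level description (a statement about the world) is implemented as a symbol structure, which in turn is implemented as bit patterns. The downward steps are routine; nothing at the bit level itself carries the knowledge-level meaning, which is assigned from above.

```python
# Sketch (illustrative, not from Newell): one datum traced down the levels.

knowledge = "Prospector found molybdenum"   # knowledge level: about the world

symbols = list(knowledge)                   # symbol level: a symbol structure

# bit level: each symbol realized as an 8-bit pattern
bits = [format(b, "08b") for b in knowledge.encode("ascii")]

# The bit pattern implementing the first symbol, 'P':
print(bits[0])  # 01010000
```

The reverse direction is not routine: given only `bits`, nothing determines whether it encodes this sentence, some other symbol structure, or no symbols at all, which anticipates the point made below that a description at a lower level does not imply one at a higher level.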
But the reverse is not always the case. "Computer systems levels," said Newell [KL:97], "are not simply levels of abstraction. That a computer has a description at a given level does not necessarily imply it has a description at higher levels." For example, not all electronics is digital, and a car engine computer probably has no knowledge level and possibly no symbol level. Thus a system might have a top level beyond which it cannot be described (kp:top).
"Within each level," said Newell [KL:95], "hierarchies are possible." So merely aggregating things at one level (e.g. bits into bit vectors, symbols into complex symbol structures) does not move us up a level (kp:agg). This was why he combined the logic circuit (sub-)level with the register transfer (sub-)level. Aggregation might in fact be involved as we ascend to the next level (e.g. to represent a patient in a medical database might need an aggregation of three integers and four strings) but something more is needed: added meaning.
"Each computer systems level is a specialization of the class of systems capable of being described at the next lower level." [KL:97; his italics]. Such specialization involves the describer of the system (be they the system's creator or user) in distinguishing things at one level that have meaning at the next higher level and assigning that meaning to them. In short hand, we can say that meaning is 'added' as we move up a level (kp:meaning). At each level, however, a different kind of specialization, or meaning, is required, and thus implementation takes a different form at each level. Though Newell did not discuss what these were, we can see that they are approximately as follows:
Several things follow from the irreducibility of levels. Levels cannot be explained in terms of each other. A description at any level can in principle be complete (kp:compl). Some components at one level, or their behaviour, might be invisible (i.e. not describable) at higher levels (kp:invis), such as power supply voltages (CL), checksums (BL), iteration variables (SL) and limits such as table sizes (SL). Random number generators rely on inter-level irreducibility. What seems to be an error at one level is often explainable at lower levels (kp:err), for example, a stack overflow can cause a variable inexplicably to have a wrong value.
A specially important consequence of irreducibility is that there is a many-to-many relationship between levels (kp:m-n): "Any instantiation of a level can be used to create any instantiation of the next lower level." [KL:95] Thus, for example (my examples, not Newell's), the symbol 'a' can be encoded by various different bit patterns - '01100001' in ASCII, '10000001' in EBCDIC, and '00111100 00000100 00111110 01100110 00011110' in one of the smaller bitmapped fonts my word processor uses. Conversely, a given bit pattern can encode many different symbols: '01100001' at the bit level can be the letter 'a' or the number 97.
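The many-to-many relation in this example can be checked directly. The sketch below uses the real ASCII encoding; the point is that the same bit pattern, read under different conventions, yields different symbol-level entities, and the choice between readings is made at the symbol level, not the bit level.

```python
# One symbol, one of its many possible bit-level encodings:
ascii_a = format(ord("a"), "08b")
assert ascii_a == "01100001"            # 'a' under ASCII

# One bit pattern, many possible symbol-level readings:
pattern = 0b01100001
as_char = chr(pattern)                  # the letter 'a'
as_number = pattern                     # the number 97
assert (as_char, as_number) == ("a", 97)
```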
This in turn gives considerable freedom in designing systems (kp:freedom). Especially, because "knowledge can be defined independent of the symbol level" [KL:99], we can choose between knowledge representation formalisms for a given body of knowledge. More fundamentally, as Newell points out, the irreducibility makes it possible in the first place to design at the knowledge level (in terms of user or domain requirements) before we have a complete - or indeed any - description at the symbol level. Conversely, Brooks [17] points out that we can create a design at the symbol level before we have details at the knowledge level (application). We can see, therefore, that this irreducibility of the levels ensures something that we take for granted - the ability to design both bespoke and generic software.
Lastly, because of their irreducibility, no one level has any a-priori claim to superiority over the others. Each level is important, in its distinct way (kp:imptnt). This enabled Newell to recognise the different research contribution of both Schank's [86] conceptual dependencies (knowledge level) and Berliner's [14] processing method for the game of Backgammon (symbol level) (a point echoed later when Chandrasekaran, Josephson and Benjamins [25] say "Theories in AI fall into two broad categories: mechanism theories and content theories"). This means the levels can guide research strategy (kp:resch). In system design it means all levels must be given adequate consideration, lest the system fail in one way or another. Some computer games, for example, astonished the market with the sophistication of their bit level design (graphics and sound) but had atrocious gameplay (knowledge level), so after the novelty had worn off they ceased to satisfy.
"They [levels] are not just a point of view that exists solely in the eye of the beholder. This reality comes from computer system levels being genuine specializations, rather than being just abstractions that can be applied uniformly." [KL:98]
"Nature has a say in whether a technology [and therefore a level] can exist." [KL:97]
"To repeat the final remark of the prior section: Computer system levels really exist, as much as anything exists. They are not just a point of view. Thus, to claim that the knowledge level exists is to make a scientific claim, which can range from dead wrong to slightly askew, in the manner of all scientific claims." [KL:99]
Though by means of a level we construct descriptions, the levels themselves are not products of human construction; the ways in which we might describe cannot themselves be constructed or imagined into existence. Though each level is a useful epistemological device, what levels 'exist' was, for Newell, an ontological fact. Newell was claiming that levels of computer architecture are exactly (some of the) ontological levels of reality.
The ontological claim protects the theory from perspectives that are radically incompatible with it and would denature it, often without anyone realising it. For example, Gaines [40] claims the knowledge level can emerge from autopoiesis, which in effect is monistic reduction that robs the knowledge level of its true importance, treating it as only a convenient label. Dennett [32] claimed that the intentional stance is in the eye of the beholder, but Newell recognised that he must disagree. Actual hostility can arise because ontological claims are out of fashion in today's information systems communities pervaded by post-rationalist paradigms, as exemplified by one of my colleagues who rejected Newell's notion of levels because "It's too essentialist for me!".
So we need to know whether Newell's ontological claim was important to his theory. That he gave such emphasis to this, and reiterated it ten years later [RKL], means it was important to him, but he does not say why. Curiously, though, the claim does not seem to have been discussed, nor even noticed: "no one has taken seriously - or even been intrigued with - the proposition that the knowledge level was not invented ..." [RKL:33]. This author noticed it the first time he read the paper and has been intrigued by it ever since, because much that is important hangs on it.
The most important consequence is that Newell's theory would not hold together without it. Philosophically, we cannot have plurality, irreducibility and relatedness (kp:pls,kp:irred,kp:rel) all together unless we make an ontological claim for each level. Without it, one or other would have to be denied, and, with them much that was shown above to follow from them.
Moreover, an ontological claim can be developed into a single principle that lies behind the levels (viz. meaning), which helps us understand why particular levels exist, and their relationships. This provides a clear, rather than muddled, taxonomy of levels. Without the claim, one person's levels might differ fundamentally from another's even if they have the same name. As Newell argued, it is not possible to think up arbitrarily many levels that nestle between the existing ones (kp:nointv). So we can distinguish between distinct technologies (kp:tgy) and specialisms in information systems, and thus can gain direction to research.
The claim provides a means of testing Newell's theory, allowing us to compile criteria for evaluating claims of new candidate levels. For example, in [RKL, UTC] Newell proposed a 'problem-space computational model' that lies between the symbol and knowledge levels, with "states, desired states and selection knowledge for operators." He assigned it to the symbol level, because "we know of no way to have a genuine system level between the symbol level and the knowledge level" [RKL:36], but one feels he would have liked it to be a different level. The matter requires debate. One possibility is that this is the boundary between the levels, and that each such boundary will exhibit such a phenomenon. Another possible explanation is discussed in the companion paper [CP].
The knowledge level has two main themes, the relationship between knowledge and symbols, which seems to have been the one most referred to in the citing papers, and the prediction of behaviour, which was Newell's main interest. In Newell's paper the two themes were intertwined as he developed his argument, partly because knowledge is intimately linked with behaviour, being seen only via behaviour, and partly because of his artificial intelligence perspective. In retrospect, however, we can see that some confusion arose when people began to assume different emphases, and thus it is useful to maintain a conceptual distinction between the two themes, as Newell himself found later [RKL].
"The knowledge cannot so easily be seen, only imagined as the result of interpretive processes operating on symbolic expressions." [KL:105-6]
Aamodt and Nygård [1] perhaps put it more clearly: "This makes the notion of interpretation central, since it is through an interpretation process that a syntactic structure is transformed into a semantic meaningful entity." Newell likened the knowledge level to Dennett's intentional stance [32] (kp:stance), but saw a crucial difference between them that is discussed in [CP].
For the knowledge level to work there must be both a describing and a described agent (kp:oo), which Newell called observing and observed agents; see Fig. 2. The description is made by the describing agent of the described agent. It is knowledge, so the describing agent must itself function in a manner describable at the knowledge level, i.e. interpreting the described system. The description is also implemented in a symbol system, so the describing agent must also have a symbol level description.
Knowledge is about the system's environment (kp:env). While a symbol, a bit pattern, a voltage, etc. are of the computer system itself, a piece of knowledge, held by those symbols, is of something outside. Knowledge, being about environment, is thus focused on the application domain (kp:appln) rather than the system itself. But this means knowledge is an extremely complex, highly diverse medium, so that while "given a specification at a level it is possible to construct, by routine means, a physical system that realizes that specification" [KL:97] at the knowledge level this is far from routine. Many have commented on the difficulties in knowledge acquisition [e.g. 99], which we can see as the knowledge level equivalent of programming (kp:kgacq).
(In the special case where the agent has knowledge of itself at any level, 'itself' is the environment. While an agent's lower level components, medium, etc. are invisible at the knowledge level, knowledge about them is visible.)
We can predict behaviour of a system at any level because each level has a law of behaviour (kp:behav,kp:pred). The behaviour of any non-trivial system described at any level is complex. At lower levels behaviour is determined by components responding to inputs and outputs so the law of behaviour is in terms of localized phenomena. Such components are simple and predictable in their behaviour (e.g. a transistor, a data register, a numeric object with Add and Multiply methods), so variety of system behaviour is explained by the way components are connected. But at the knowledge level each piece of knowledge, being about something in the world, is itself complex so variety comes from content rather than structure (kp:variety), and behaviour at the knowledge level is determined, not by local properties but by a global principle (kp:glob).
Newell made two claims for behaviour described at the knowledge level: rationality and, discussed later, non-determinacy. "Knowledge is intimately linked with rationality" [KL:100] which, to Newell, is not just about deductions but about goals (kp:goal). "To treat a system at the knowledge level is to treat it as having some knowledge and some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates." [KL:98] This is the global principle that determines behaviour at the knowledge level: the principle of rationality (kp:por). Newell defined the components of the knowledge level to be actions, goals and bodies of knowledge (kp:klcomp). Newell discussed three main parts in the principle of rationality [KL:102-3]:
But he recognised that it should be extended (kp:extn) to cater for null sets, mutual exclusivity, difficulty in telling whether an action will lead to a goal, uncertainty, risk, dependence on the actions of other agents and 'goal preferences'. He argued that logic should not be seen as a representation language (symbol level) but as an aid to analysis at the knowledge level (kp:logic).
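The main clause of the principle of rationality can be sketched as a few lines of code. This is my own hypothetical rendering, not Newell's formalism: knowledge is a set of links from actions to the outcomes the agent knows they produce, and the agent selects whatever actions its knowledge indicates will attain a goal.

```python
# A minimal sketch (hypothetical) of the principle of rationality:
# select any action the agent's knowledge indicates will attain a goal.

def select_actions(knowledge, goals):
    """knowledge: set of (action, outcome) pairs the agent knows about."""
    return {action for action, outcome in knowledge if outcome in goals}

knowledge = {("drill", "find_deposit"), ("survey", "map_area")}
goals = {"find_deposit"}
print(select_actions(knowledge, goals))  # {'drill'}
```

Even this toy version exhibits the extensions Newell listed: the selected set may be empty (null sets), may contain mutually exclusive actions, and says nothing about uncertainty, risk or the actions of other agents.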
Because of irreducibility we should be able to predict behaviour without knowing the symbol level of the agent (which is difficult for humans, animals and adaptive systems), if we know their knowledge, possible actions and goals. Newell argued that, in principle, we can know the observed agent's knowledge if we share its environment, because knowledge is always about environment (kp:env). We can know their actions by observation. We can know their goals either by asking them or by applying the principle of rationality to their past actions. This can work, claimed Newell, because both an agent's goals and their knowledge of environment are relatively stable. Problems with this proposal are discussed in [CP].
The knowledge level "requires both processes and data structures" [KL:105], so they merge (kp:proc). The sharp procedural-declarative distinction found at the symbol level disappears at the knowledge level [2, 57] and Clancey [31] emphasises this to move into new territory about knowledge acquisition. This means that while at lower levels the medium exists as states of components that remain passive until changed by the components [KL:97] - a voltage is the state of a conductor, a bit pattern is the state of a memory cell or register, a value is the state of an attribute - the medium at the knowledge level (viz. knowledge itself) is neither a state nor passive. Instead, "knowledge is to be characterized entirely functionally, in terms of what it does" [KL:105; Newell's emphasis] (kp:func).
But to say what knowledge an agent has we need to know what this generative process is. An obvious one is deductive logic: an agent "knows all that can be inferred from the conjunction of {L1}" (where L1 is "the set of logical expressions of which we are willing to say that the agent 'knows {L1}'") [KL:110]. Newell postulated that the knowledge of an agent is the complete logical closure of the knowledge actually represented within it (kp:clos). This is useful in that two systems with differently encoded knowledge can be said to know the same thing if their logical closures are identical [2].
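Logical closure, and the criterion from [2] that two differently encoded systems know the same thing, can be illustrated with a toy forward-chaining sketch. This is an assumed formalism, not Newell's: facts are atoms and rules are (premises, conclusion) pairs.

```python
# Sketch (assumed formalism): knowledge as the logical closure of stored facts.

def closure(facts, rules):
    """All facts derivable from the stored facts by forward chaining."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"a"}, "b"), ({"b"}, "c")]
kb1 = closure({"a"}, rules)           # stores only 'a' explicitly
kb2 = closure({"a", "b"}, rules)      # stores 'a' and 'b' explicitly
assert kb1 == kb2 == {"a", "b", "c"}  # identical closures: same knowledge
```

With richer rule languages the closure need not be finite, which is the point taken up next: the observer cannot enumerate it in advance.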
But it can also mean that the ideal knowledge level agent has infinite knowledge, which is of course impractical [35]. Or, rather, it should be treated as infinite because the observer cannot know in advance which inferences will be made. So Newell argued that knowledge cannot be captured in a finite structure (kp:inf) [KL:107].
a) Behaviour at lower levels is determinate (kp:lwrdet).
b) Behaviour at knowledge level is non-determinate (kp:nondet).
--------
c) Where is determinacy lost? How do we account for this?
There are several possible explanations that are discussed in [CP], such as the observed agent having genuine freedom in its behaviour, but Newell did not discuss these, possibly because of the rationalist presuppositions dominating artificial intelligence at the time. Instead, he claimed that the knowledge level is 'radically incomplete' (kp:ri) as a level of description. "That she [the princess] resolved it is clear, but that her behavior in doing so was describable at the knowledge level does not follow. ... The term radical is used to indicate that entire ranges of behavior may not be describable at the knowledge level, but only in terms [of] systems at a lower level (namely, the symbolic level)." [KL:104-5] Anderson [2] takes a similar line, filling in more detail.
Newell tied radical incompleteness closely to logical closure (kp:clos), using each to explain the other. He used kp:ri to explain why logical closure is not encountered in practice, and kp:clos to explain kp:ri: an observer cannot tell (even with a complete knowledge level description of the agent) what portion of the logical closure of the agent's knowledge has actually been generated, because the generative process depends on the precise symbol level inference algorithms involved, which are unknown at the knowledge level.
This is an ingenious proposal, though many have found it hard to understand. But, as we discuss in [CP], his reasoning seems weak, and kp:ri causes a number of serious problems. Not the least of these is that the knowledge level can no longer fulfil its complete duty as a level (kp:compl). Newell did not seem to think it caused any problems and in [UTC:118] he developed the idea into the notion of 'strength' of a level: "a level is weak if considerations from lower levels enter into determining the future course of behavior." However, very few of those who have referred to Newell's knowledge level have found this part of his theory useful.
In many cases 'level' is used to refer to a distinction within one level. One example is the common meta/object-'level' distinction. Though we might expect it to correlate with the knowledge/symbol level distinction, 'meta' usually keeps us at the same level, much as aggregation does: meta-knowledge is still knowledge, and thus at the knowledge level, meta-reasoning is still a symbol level processing mechanism even though it might be applied to reasoning mechanisms. Meta-learning merely improves reasoning mechanisms and increases efficiency of processing [28] so, as we discuss below, is invisible at the knowledge level.
Klir [58] presented the idea of epistemological levels in information: source, data, generative, structure and various meta levels, each defining a type of system. A source system comprises variables (uninstantiated), their potential states, and "some way of describing the meaning of both the variables and their states." This makes it span the symbol level and knowledge level. The data level is the set of instantiations for the source variables, the generative system, the set of invariant relationships amongst states of variables, and the structure level, the compositional relationships among variables themselves. These are just different components of the symbol level; see Fig. 3.
The Representational Redescription theory [55] offers three main levels, employed by [18] to discuss how, they believe, consciousness arises from neural activity. RR distinguishes Level-I, implicit representations such as found in connectionist models, from level-E1, explicit representations not accessible to consciousness or verbal report [18], and level-E2, explicit ones that are. (Level-E2 is then split further.) These are not Newellian levels because (a) they distinguish different values of certain attributes, such as explicitness or accessibility to consciousness, and (b) in RR if something is at level-E1 then it is not at level-I.
True Newellian levels all apply to the same thing with equal validity. They are not differentiated by the part-whole relationship, nor by values taken by an attribute, but by meaning and ways of describing.
Some thinkers have discerned three levels. Anderson [2] offers a suite of implementation level, algorithm level and knowledge level, which he uses to construct an elegant theory of the origins of knowledge and human learning, discussed below. Anderson makes reference to Newell's levels, and understands them well, often providing succinct explanations of issues that Newell left unclear and providing useful illustrations. He makes insightful comments, such as linking the merging of process and structure at the knowledge level (kp:proc) to the procedural-declarative controversy. He suggests that 'knowledge' is not restricted to the knowledge level "despite the term Newell gave it." Since the human being learns some operations at the algorithm or even implementation level, these too can be called knowledge, though of a different kind. This enables him to be more comfortable with radical incompleteness (kp:ri) and his treatment might ameliorate some of its problems. He stipulates that the algorithm level excludes exact details of the language in which algorithms are written while including generalized symbolic operations such as summation; Newell was silent on this. His suggestions can bring precision into the theory of levels.
Sathi, Fox, Greenberg [84] propose a suite of five layers:
But they cover only two of Newell's levels. The first three layers refer, mainly, to symbol level purposes (data storage, inference algorithms and structuring) while the semantic and domain layers refer to knowledge level purposes, to things outside the computer system (kp:env,kp:appln). However, there is a slight hint of knowledge level in the logical layer since, as well as processing, it is a different way of seeing something already described by the implementation layer (OAV triples) and Newell argues that logic is of the knowledge level (kp:logic).
Rasmussen's [80] suite of five levels does cover Newell's five, but he takes a rather different approach that is influenced by physical engineering phenomena.
While the first two correspond almost exactly with the materials level and component level, and the fifth with the knowledge level, the third and fourth each span several of Newell's levels: heat transfer is at the materials level while feedback is at the component level if seen as a physical phenomenon amongst components or at the bit level if seen in terms of signals. The third and fourth levels cannot be truly separated because feedback involves causality and causality must take account of feedback. They would seem to refer approximately to process and structure across several levels.
All the suites mentioned are useful, but none is as homogeneous or as elegant as Newell's. A suite of levels should have a single principle by which levels are distinguished and related; in the case of Newell's levels this is distinct types of meaning (kp:meaning). But some other suites distinguish their levels according to several principles: specificity-generality, aggregation, process-structure, etc. If different levels still need to refer to each other, as process and structure do, they are not distinct levels but merely different portions of the same level. Newell's suite is cleaner than others and of wider coverage, and thus can act as a 'gold standard', as shown in Fig. 3.
Nevertheless, Jennings' proposal provides a comprehensive model that could aid the science of agents, the engineering of agent-based systems and a basis for analysis and design. It is interesting to us in its attempt to extend Newell's suite of levels to tackle some social issues such as roles and obligations. Indeed, in [UTC] Newell himself mentioned the need for a social element, though he did not discuss it in any detail. In [21] a different approach is taken, not proposing a new level but rather asking what is needed for an individual agent to be social. Their Model Social Agent has full cultural-historical knowledge with, in effect, an emotional principle as its law of behaviour. What is interesting here is Newell's admission of some structure within knowledge and that the knowledge level agent is several steps before the emotional, suggesting a more radical extension to the principle of rationality than proposed above.
The parallel between the first five and Newell's levels is striking. Might they be the same suite of levels but for different media? Newell himself hints at such a possibility when he says, of an expression written on a blackboard, "there is .. no difficulty seeing this same expression residing in a definite set of memory cells in a computer. ... interpretive processes operating on symbolic expressions" [KL:105]. The close parallel between them is shown in Fig. 3.
Both acoustics and Newell's device level are concerned with physical materials and interactions - vibrations in gases and solids, for the former, and electric and magnetic fields for the latter. Because the word 'device' can be misleading to a computer scientist and has little meaning for a linguist, we might rename this level to 'materials level'. We can extend this level for written linguistic media to the materials of which e.g. paper and computer screens are made (cellulose, carbon, glass, phosphorescent substances, etc.).
Newell's circuit level is concerned with hardware components manufactured from such materials as silicon, plastic, copper, and activity such as voltages and currents. Phonetics is similarly concerned with 'components' that make, transmit, receive and process sound. For written linguistic media, this level sees paper and ink, CRT screens, etc. To generalise across all media we can call this the 'component level'. The cells and organs (aggregations of cells; kp:agg) of the human nervous system are visible at this level.
The remaining levels need no renaming as differences in media start to disappear. Whether digital signals carried by voltages, phonemes carried by sound, or pixels on screens and marks on paper, all are pure bits of information. Just as the single bit is aggregated at this level into registers etc., so phonemes aggregate into phoneme sequences, and pixels on screen and marks on paper aggregate into shapes, colours, textures and spatial arrangements. These things are all describable at the bit level.
At the symbol level, concerned with symbols and their syntactic structures, the differences have disappeared altogether. Semantics in linguistics is concerned with what the symbols mean or represent, which is the first concern of the knowledge level (kp:about). (Note that the way we sometimes use 'semantics' in artificial intelligence, as the activity that is linked with a particular symbol, is a symbol level concern.)
With the exception of their laws of behaviour, Newell's levels would seem to be almost identical to those of linguistics (and applicable to user interfaces), which means that findings in linguistics could contribute to the refining of Newell's levels. Conversely, Newell's levels might contribute to linguistics because they have laws of behaviour.
Pragmatics accounts for the way language is, or should be, used in practice. Grice [43] speaks about maxims that regulate the quantity, quality, relevance and manner of utterances. While, arguably, certain portions of what has become pragmatics could be brought under semantics, other portions cannot, such as assumptions about context [59], conditions for appropriateness or felicity of the utterance [56] or shared background knowledge [53]. "So far, context has hardly been studied within a knowledge level framework," remarked Öztürk and Aamodt [74]. There are signs that it should not be studied merely within that framework, but is a genuine specialization of the knowledge level. Habermas [46] said "the action orienting power of cultural values is at least as important for interactions as that of theories [explicit knowledge]."
What these assumptions, values, etc. have in common is tacitness: a special type of knowledge that we are usually less aware of as we act, speak or write than we are of semantic (knowledge level) meanings. So we might propose a sixth level, called the tacit level, above the knowledge level and equivalent to these portions of linguistic pragmatics. It seems a true level, a specialization of knowledge, with added meaning that is cultural or tacit connotation (kp:meaning), that describes the same thing (utterance in linguistics, system in computers) (kp:same). We can make the following initial proposal for the tacit level:
The tacit level anticipates a social element of language and knowledge and addresses some of the issues raised by Jennings. Whether the tacit level is distinct or just part of the knowledge level remains to be debated and tested. Usually we will treat the tacit level separately but sometimes just as part of the knowledge level.
From a brief perusal of the titles of 328 citations mentioned earlier, we can see a wide range of uses, which are in four main groups:
Though papers have been counted in only one category, many covered several topics, and some appear several times below.
From this we see that Newell's ideas have been referred to as much by researchers in human factors as in artificial intelligence, a shift alluded to by Clancey [31] when he said "the primary concern of knowledge engineering is modelling systems in the world, not replicating how people think." This is not surprising since Newell happened to address many key points important to human factors (e.g. kp:dist, kp:lvls, kp:des, kp:interp, kp:oo, kp:env) with theoretical rigour. It is surprising how few of these papers had as their primary topic the critique or refinement of the theory of levels itself. Some critique or refinement can be found elsewhere (e.g. within [35, 40, 54, 96]), but it seems as though most have accepted Newell's ideas almost uncritically as an axiom for their own research.
We first look at Newell's own comments on use of the notion of the knowledge level, next at the important, nearly invisible use mentioned above, and then at uses as identified above. Throughout, if the author's own uses of Newell's theory can usefully extend the discussion, they are included, and a few that have not been covered elsewhere are outlined in a separate sub-section.
Our review is indicative rather than exhaustive, covering only a small sample of the 328 papers, plus a few others the author knew about. The aim is to show the diversity of uses of Newell's theory, indicate which key points have been most important, explain how they have been employed, and make a few observations from this employment that might be useful when refining the theory. Many of the papers cited make significant contributions in their own field, but we largely ignore them because our interest lies only in use of Newell's theory.
"Nothing has changed my mind about the existence of the knowledge level or the nature of knowledge in the debate since the paper was written. After all, nothing has happened in the meantime to change the way the computational community uses the concept of knowledge; which is the grounding of the paper. I think the acceptance of the concept will rise very gradually. ..."
In that paper he reinforced his ontological claim (kp:ont), and reviewed how various communities had made use of the knowledge level.
"The aim of agent communication languages is to support coordination not at the symbol but at the knowledge level .. independently from implementation-related aspects." [41]
"Cognition can be studied as a disembodied process without solving the symbol grounding problem" [57]
"Comparisons should largely be performed on the knowledge level rather than on the symbol level or lower levels of the system." [47]
"Newell introduced the knowledge level as an implementation-independent level ..." [99]
"Newell advocates the modelling of knowledge at a level above its symbolic representation." [82]
"This separation of knowledge from its representation ... Therefore it makes sense to distinguish ..." [22]
As [81] puts it, "These are guidance and communication benefits." They cut across all usage groups and the majority of papers studied make no further explicit reference to Newell's ideas. Newell's theory is used as a justification of: an architecture for a system [15], a two-strand approach to topoi [34], an approach to testing KBSs [48], the ignoring of implementation issues in a model of contextual knowledge [73], an approach to knowledge reuse and Ripple Down Rules [81], an approach to verification and validation [83], an approach to impact of KBS [50], and much more. As we have seen, Newell himself used the distinction in this way to understand the contribution of different types of research. The notion of the knowledge level is so intuitive that several authors have used the term without explicitly referring to Newell's work [19, 39, 57, 61].
Newell's most useful contribution to the community as a whole seems to have been to introduce a distinct concept, make a strong ontological claim for it, and work out some of the details. Newell provided scientific grounds on which much research could tackle knowledge in a way that does not reduce it to psychology, sociology, logic or philosophy, but allows it to be discussed and explored as a distinct topic in its own right.
However, there are many other contributions of a more specific nature that we now look at.
One good example of an attempt to apply Newell's theory very explicitly to knowledge representation is by [1], whose major concern is to bridge the SL-KL gap, practically generating symbol level implementation from a knowledge level model. They develop a theoretical framework of data, information and knowledge on which to base their proposal, and then a language and architecture. They apply it to case based reasoning (interesting in view of Dietterich's [35] claim below that such is not describable at the knowledge level).
They propose what seems, from the symbol level, a conventional frame language, but with in-built knowledge level features, that explicitly recognise the external environment (kp:env), such as causality. Knowledge is an interpretation (kp:interp), but knowledge itself is involved in that very interpretation process, so they propose a recursive structure. Two issues must be addressed in the design of such languages: how to decide what knowledge level features should be included, since knowledge of the world is, as Newell pointed out, infinite (kp:inf), and how to maintain integrity and harmony for users of such languages.
They find that "Knowledge level modelling methodologies ... are in general weak in representing the actual specific domain models ..." so need strengthening. But we should not "necessarily assum[e] that a complete transformation is possible or even wanted." Other key points they value are kp:func, kp:freedom, but they distance themselves from "Newell's highly intentional and purpose-oriented way of describing a system" and as a result have trouble with the link between Newell's knowledge level and Winograd and Flores' [101] approach (especially kp:proc, kp:env).
Newell's theory gave impetus and validity to the 1990s research in formal ontologies, because ontologies describe knowledge itself, and the independence of the knowledge and symbol levels (kp:irred) enables agents that represent knowledge in different ways to share knowledge. Reference to the knowledge level in relation to ontologies is made by e.g. [71, 25]. "We use common ontologies to describe ontological commitments ..." says Gruber [44] "... The idea of ontological commitments is based on the Knowledge-Level perspective (Newell, 1982)." Behaviour is important too (kp:behav) because an ontology is more than a description, it is a commitment to behaviour that is consistent therewith. On this basis Gruber identifies criteria for ontology design and gives two warnings that are founded in Newell's ideas. 'Encoding bias' occurs when symbol level considerations affect the ontology, constraining symbol level choices and making knowledge sharing more difficult. Ontological commitment makes claims about the world being modelled, which can constrain at the knowledge level.
To overcome these constraints and bridge the SL-KL gap successfully, while retaining useful representational power, we must understand the nature of SL-KL relationships (kp:rel) in sufficient detail to make technical proposals. Basden [7] tried to do this by introducing the criterion of 'appropriateness' as a formalization of the intuitive notion of 'naturalness' of representation, related to affordance [42]. While the conventional criteria of sufficiency, efficiency and expressive power [64] are visible at the symbol level, appropriateness is visible at the knowledge level. Its proposal is that distinct aspects of knowledge of our world (conceptual, quantitative, spatial, kinematic, causal, etc.) can and should be reflected by different symbol level formalisms and facilities, rather than hoping object-orientation or logic programming will fit all. It can provide principles for multiple representations. Using an inappropriate formalism (e.g. logic programming for spatial knowledge) can lead to errors and other problems.
Van Heijst, Schreiber and Wielinga [97] discuss ontologies at some length. The distinction and relationship between knowledge and symbol level (kp:dist, kp:rel) is central to their work. They identify categories of ontologies and criteria by which to judge tools, describe a tool, and discuss the need for multiple representations. Of interest to us are three problems they mention. Ontologies become huge (because of kp:inf), which might mean knowledge level approaches are less useful than hoped. Agents can make dissimilar ontological commitments, suggesting that the knowledge level does not solve all our problems and a tacit level might be needed. Interaction between domain knowledge and problem solving strategies cannot be ignored, despite Newell's claim that process and structure merge at the knowledge level (kp:proc), so this key point might need rethinking.
In 'vertical' architecture design each level is implemented by a distinct module (or set of modules), each of which has programmed code that handles the components, medium, law of composition and law of behaviour of its level. A typical form is shown in Fig. 4, which depicts the author's experimental (unpublished) IRKit architecture, with module sets for each of the bit, symbol, knowledge and (planned) tacit levels. The separation of modules reflects kp:dist, and that the inter-module calls are always from higher to lower reflects kp:dep. The central knowledge base contains all the data structures processed by all modules and reflects the idea that each level describes the same total system (kp:same). A practical outworking of this architecture is the Istar KBS toolkit [10].
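A minimal sketch of such a 'vertical' arrangement might look as follows. All names here are hypothetical (this is not IRKit's actual code); the point is only to show one module per level, calls flowing strictly from higher to lower (kp:dep), and a single shared knowledge base holding the structures of every level (kp:same).

```python
import ast

class KnowledgeBase:
    """Central store holding the data structures of every level (kp:same)."""
    def __init__(self):
        self.store = {}

class BitModule:
    """Bit level: raw storage and retrieval of uninterpreted data."""
    def __init__(self, kb):
        self.kb = kb
    def write(self, key, bits):
        self.kb.store[key] = bits
    def read(self, key):
        return self.kb.store.get(key)

class SymbolModule:
    """Symbol level: encodes/decodes symbol structures via the bit level."""
    def __init__(self, bits):
        self.bits = bits              # calls go downward only (kp:dep)
    def put(self, name, structure):
        self.bits.write(name, repr(structure))
    def get(self, name):
        raw = self.bits.read(name)
        return ast.literal_eval(raw) if raw is not None else None

class KnowledgeModule:
    """Knowledge level: deals in what symbols are about, not their form."""
    def __init__(self, symbols):
        self.symbols = symbols        # again, a downward call only
    def assert_fact(self, subject, predicate, obj):
        facts = self.symbols.get('facts') or []
        facts.append((subject, predicate, obj))
        self.symbols.put('facts', facts)
    def knows(self, subject, predicate, obj):
        return (subject, predicate, obj) in (self.symbols.get('facts') or [])

kb = KnowledgeBase()
k = KnowledgeModule(SymbolModule(BitModule(kb)))
k.assert_fact('water', 'boils_at', 100)
print(k.knows('water', 'boils_at', 100))   # True
```

Note that the knowledge module never touches the bit module directly; each level sees only the level immediately below, while all levels describe the one knowledge base.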
A good 'vertical' example is the COMMET system [92], which has distinct portions devoted to each of the knowledge, symbol and bit levels. This is reflected in the user interface, in that on many of its screens the user is offered explicit access to each level. In contrast to the conventional wisdom which would hide the lower levels, this keeps the user aware of the importance of all levels, in line with kp:imptnt. Even the architecture of books can be designed around the levels: for example Stefik's [90] textbook has two main parts: Symbol Level and Knowledge Level.
An example of 'horizontal' influence on architecture is in [22], a user interface to a database whose symbol level data model is given (relational data model). That there is a many-to-many relationship between levels (kp:m-n) promises that, even with a deficient data model, it is possible to achieve good knowledge level design, and motivated the designers to devise user interaction in which both style and content are at the knowledge level. Though not having separate modules for each level, Newell's clear distinction between levels (kp:irred) motivated them to seek clear ways of bridging between levels without compromising the nature of any, in spite of real world complexities. They devised an architecture to support the interaction that has three modules: domain, transformation and data. A bonus of this design is flexibility, in that it allowed a second, symbol level, style of interface to be incorporated, to serve 'data users' for whom the symbol level can be meaningful.
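The three-module arrangement described in [22] can be sketched roughly as below. The domain vocabulary, column names and data are invented for illustration; the point is that the domain module speaks knowledge level terms, the data module speaks symbol level (relational) terms, and the transformation module bridges the many-to-many gap between them (kp:m-n).

```python
class DataModule:
    """Symbol level: a toy relational table as a list of row dicts."""
    def __init__(self, rows):
        self.rows = rows
    def select(self, column, value):
        return [r for r in self.rows if r[column] == value]

class TransformationModule:
    """Maps knowledge level vocabulary onto relational columns."""
    COLUMN_OF = {'lives in': 'city', 'works as': 'job'}
    def __init__(self, data):
        self.data = data
    def query(self, relation, value):
        column = self.COLUMN_OF[relation]
        return [r['name'] for r in self.data.select(column, value)]

class DomainModule:
    """Knowledge level: users ask about people, not about tables."""
    def __init__(self, transform):
        self.transform = transform
    def who(self, relation, value):
        return self.transform.query(relation, value)

data = DataModule([
    {'name': 'Ann', 'city': 'Salford', 'job': 'engineer'},
    {'name': 'Bob', 'city': 'Leeds',   'job': 'engineer'},
])
ui = DomainModule(TransformationModule(data))
print(ui.who('lives in', 'Salford'))   # ['Ann']
```

A second, symbol level interface for 'data users' would simply call `DataModule.select` directly, bypassing the transformation, which is the flexibility noted above.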
Progressing from such examples, one might envisage a 'levels-oriented architecture', in which the key points of Newell's theory are reflected more deliberately and fully in its design. To take just three examples,
Such principles stand in need of development but it may be that some of the problems with object-oriented design can be overcome by levels-oriented design.
Of the few papers that take Newell's ideas further, [15] describes a 'smart searcher' that operates at the knowledge level, in contrast to most search which is at the symbolic or subsymbolic levels. Knowledge level search employs expert knowledge to make it more efficient, and they conjecture that such searches will resemble human searching. (Of course, search pruning by knowledge has been commonplace for years in artificial intelligence but Newell did not claim any novelty for his theory; all he claimed was that it reflected what had been common practice concerning knowledge, and that his theory helps us understand it better.) They illustrate the diversity of knowledge types involved (global, visual, conceptual, tactical, etc.) and discuss a knowledge acquisition method for obtaining such knowledge and a tool to aid it. At the end they discuss the ways in which Newell's theory has influenced their work, the main one being the distinction of knowledge level from lower levels (kp:dist) but they have also used kp:descr, kp:freedom, kp:proc, kp:m-n, kp:rat, kp:goal, kp:nondet, kp:ri. Interestingly, the postulate of logical closure (kp:clos) seems unimportant in their work.
Since the knowledge level is concerned with content rather than structure, it is content, knowledge of the world (kp:env), rather than structure or mechanism, that is the central issue in knowledge level reasoning. "Could a crocodile run a steeplechase?" asked Levesque [63], as an example of the kind of question we have no difficulty in answering but which logic-based systems find hard. Our easy answer comes not from the processing but from the knowledge level content, namely we know something about crocodiles that is inconsistent with steeplechasing. This knowledge level of reasoning is extremely important in real life, for example in courts of law. If the logicist and search communities are to understand real-life reasoning, it would seem plausible that they must turn their attention to such knowledge level content as being just as important as symbol level reasoning processes.
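Levesque's point can be illustrated with a toy sketch (the facts below are invented for illustration): the requirements of the activity are checked directly against what we know of the animal, so a single piece of incompatible content settles the question without any search.

```python
# What we know of each animal (knowledge level content, not mechanism).
KNOWN = {
    'crocodile': {'has_legs': True, 'can_jump_fences': False},
    'horse':     {'has_legs': True, 'can_jump_fences': True},
}

# What the activity demands of a participant.
REQUIRES = {'steeplechase': ['has_legs', 'can_jump_fences']}

def could_do(animal, activity):
    """Answer by content: any known incompatible property settles it."""
    return all(KNOWN[animal].get(req, False) for req in REQUIRES[activity])

print(could_do('crocodile', 'steeplechase'))   # False
print(could_do('horse', 'steeplechase'))       # True
```

The 'reasoning' here is trivially simple; all the work is done by what is known, which is precisely the point being made above.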
[74] develops a model of context knowledge that makes case based reasoning more robust and able to adapt to changing environments. Though using the term 'knowledge', [27] actually discusses derivation of symbol level from bit level, using cluster analysis. [85] discusses human skill in signal interpretation and data fusion and how it might be encapsulated in genetic algorithms, but focuses mainly on bit and symbol levels.
In none of these papers did the authors make explicit use of Newell's theory in shaping their own ideas, but only referred to a key point thereof as justification for the approach they took. Interestingly, each used a different key point (kp:dist, kp:por, kp:comp respectively). It seems the knowledge recognition communities have yet to understand the potential of Newell's theories for their work.
A more in-depth treatment of learning at the knowledge level was given by Dietterich [35]. Learning at the symbol level is described as changes to symbol structures within the observed system; similarly learning at the knowledge level is "a positive change in its knowledge level description over time." (Negative changes, forgetting, were not discussed.) He discussed three types of machine learning, and provided a formalized account of each, defining a goal for each and relying heavily on Newell's postulate of logical closure (kp:clos). Dietterich found a surprising result: learning is not easily describable at the knowledge level.
Simple accumulation of facts and rules, such as occurs in data entry, can be described at the knowledge level because the logical closure of the stored facts and rules increases. Typified by the MRS system [83], Dietterich characterizes this as 'knowledge flow' from the environment and calls it DKLL, Deductive Knowledge Level Learning. What he calls NKLL, Non-Deductive Knowledge Level Learning, is exhibited by induction algorithms like AQ11 [68] that generate general beliefs from a collection of facts. Though the logical closure of the stored knowledge changes, it does so in ways that cannot be accounted for at the knowledge level because induction is not explainable on the basis of deduction (though [1] indicates how such a description might be possible). The third type of learning is when a system reflects on its own activity and learns heuristics that enhance that activity for the future, as exhibited by the LEX system [69]. Such reflection is deductive, so any heuristic discovered and stored is within the logical closure, and so the logical closure of its stored knowledge does not change. So this type of learning is completely invisible at the knowledge level, and Dietterich labels it SLL, Symbol Level Learning. [40] also recognises the possibility of some learning being invisible at the knowledge level.
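Dietterich's contrast between DKLL and SLL can be made concrete with a tiny sketch. The knowledge base below is hypothetical and grossly simplified (facts are atoms, rules are premise-conclusion pairs), but it shows why one kind of learning is visible at the knowledge level and the other is not: what matters is whether the deductive closure (kp:clos) of the stored knowledge changes.

```python
def closure(facts, rules):
    """Deductive closure: everything derivable from facts via the rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules = [('a', 'b'), ('b', 'c')]
facts = {'a'}
before = closure(facts, rules)            # {'a', 'b', 'c'}

# DKLL: accumulating a genuinely new fact from the environment enlarges
# the closure, so this learning is visible at the knowledge level.
facts.add('d')
assert closure(facts, rules) > before

# SLL: caching an already-derivable conclusion ('c' was in the closure)
# changes the stored symbol structures but leaves the closure unchanged,
# so this learning is invisible at the knowledge level.
facts.discard('d')
facts.add('c')
assert closure(facts, rules) == before
```

NKLL, by contrast, would change the closure in ways the rules themselves cannot account for, which is why Dietterich found it the hardest case to describe.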
Intuitively, learning is the increase or enhancing of knowledge, so we would expect the knowledge level to be very helpful in describing all kinds of learning. So it is disappointing to find this is not the case. As we discuss in the companion paper [CP], the problem lies in the postulate of logical closure (though Dietterich believes it to lie in lack of normative models of NKLL).
However, we might question Dietterich's approach. For example, if we see the induction algorithm to be part of what the system knows, and the collection of facts to be, instead, a collection of statements about facts, then this type of learning might be describable at the knowledge level. But a more fruitful approach might arise from building knowledge change into Newell's theory right from the start.
Gaines attempts a comprehensive knowledge level analysis of a society of agents, discussing how their individual knowledge can serve the interests of the organization. His approach is to see each agent as employing other agents as resources to fulfil its own goals, distributing tasks among them (a knowledge level decision). He discusses how collaborating agents might be trained, by each other and even while they are working, so that their competence might be enhanced, recognising Dietterich's [35] SLL and DKLL. He introduces the idea of granularity of knowledge as a factor in training, and shows how management of training might be simplified.
Gaines' ideas make a significant contribution to the debate about multiple agent systems, and can stimulate us to reconsider some of Newell's key points. For example, he focuses on the goals of the observing ('modeling') agent whereas Newell focuses on the goals of the observed agent. Though probably valid, this needs justification. But he directly denies Newell's ontological claim (kp:ont), arguing at length that the goals on which the knowledge level is founded emerge from autopoiesis as a "natural outcome", which itself emerges from chaotic behaviour, and thus is in danger of undermining the very theory he makes use of. His argument contains flaws, assuming as it does the knowledge level activities of observation, modelling and making analogies as his basis for the emergence of goals.
[41] raises what could be a major problem in knowledge level agents. On one hand, "asynchronous communication mechanisms, reliable message passing, and nonblocking primitives" etc. have no place in a knowledge level description, but real-life systems often need to adapt knowledgeably, as human beings do, to failures in such mechanisms (e.g. in seeking the best monetary exchange rate from banks, if some bank systems do not return a result, then it seems sensible to work with those available rather than failing). This suggests that, in some applications, the system might validly possess knowledge about its own lower levels - and though the lower level components and media are not themselves visible at the knowledge level, knowledge about them is, as mentioned earlier.
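The exchange rate example can be sketched as follows. The bank API here is entirely hypothetical; the point is that the agent holds knowledge *about* its lower level mechanisms (that a bank may time out) and uses it to proceed knowledgeably with whatever results do arrive, rather than failing outright.

```python
def best_rate(banks, timeout=1.0):
    """Query all banks; ignore those whose lower level mechanisms fail."""
    quotes = {}
    for name, query in banks.items():
        try:
            quotes[name] = query(timeout)
        except TimeoutError:
            # Knowledge about the lower levels: this bank is unreachable,
            # so work with the quotes we do have (cf. [41]).
            continue
    if not quotes:
        raise RuntimeError('no bank responded')
    return max(quotes.items(), key=lambda kv: kv[1])

def beta_query(timeout):
    """Simulated lower level failure: the message never comes back."""
    raise TimeoutError('beta did not respond')

banks = {
    'alpha': lambda t: 1.21,
    'beta':  beta_query,
    'gamma': lambda t: 1.24,
}
print(best_rate(banks))   # ('gamma', 1.24)
```

Crucially, the *components* of the messaging layer never appear in `best_rate`; only knowledge of their possible failure does, which is the distinction drawn above.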
Clancey's [30] heuristic classification method is a "methodology for analyzing problems, preparatory to building an expert system." It identifies the various types of inferences made (at the knowledge level) during problem solving: selection and construction of solutions, abstraction and heuristic association, and analysis and synthesis of problems. From this a method of analysing knowledge is proposed, which can also suggest the overall architecture of knowledge bases. Clancey's paper has a useful discussion on knowledge level analysis, and, though he tries to relate heuristic classification to Newell's principle of rationality and logical closure (kp:rat, kp:clos), it is plain that these do not fit comfortably and that his proposal does not need them. Many have referred to Clancey's work, notably the KADS team. Ramoni et al. [79] try to combine heuristic classification with three other knowledge level analysis methods: deep-shallow [26], problem-solving methods [66] and generic task [23].
Steels' [91] components of expertise recasts components of the knowledge level - goals, bodies of knowledge and actions - as tasks, models and methods, discussing them in depth but without assuming the principle of rationality (kp:por). Several recent writers have made use of his ideas, such as [36] when "discovering the components of expertise without thinking of computational aspects" in biology. This led them to define two knowledge level methods appropriate to the domain - Refine Taxonomic Tree and Propose and Revise - but they do not discuss how they identified these two. Steels' components are also employed by [73] to propose a knowledge level taxonomy of contextual knowledge, and a four-step method for analysis of context that is claimed to make knowledge based systems less brittle.
Attarwala and Basden [5] used a purely knowledge level analysis to improve the quality of knowledge acquisition, independent of acquisition techniques employed. They recognised the essentially subjective and context-dependent problem solving element (CPS) in the heuristics (experience) that often emerge from knowledge acquisition, and suggested it should be possible to separate such elements from general understanding (generally agreed and applicable knowledge that is a better foundation for a knowledge base) by asking four questions of any piece of knowledge, recursively:
This method operates solely at the knowledge level and has proved highly efficient and effective in knowledge analysis [11].
Andrews [3] discusses knowledge level analysis from a more pragmatic angle, verging on system development as a whole. She discussed the benefits of a knowledge level approach and suggested four steps: make explicit what knowledge is needed, discern how much is amenable to the symbol level methods to be employed (e.g. numeric for computational fluid dynamics), estimate the effort required to codify all the knowledge (recognising when new conceptual foundations need to be laid) and look at how the knowledge itself is used, so as to assess whether the proposed system is likely to be useful. This is a good example of integrating different levels (kp:imptnt) in service of wider issues of usefulness.
System development is where all levels must be integrated because all are important in their different ways (kp:imptnt). The various phases in the system development process often reflect emphasis on different levels: user requirements analysis is knowledge level and tacit level, specification moves towards the symbol level, knowledge representation is at the symbol level, implementation moves towards bit level and, sometimes, component level, then testing moves back to all levels. It is the distinction between levels and their technologies (kp:irred,kp:tgy) that enables this separation of phases. As [67] commented, Newell's work led to a more structured approach and improved project organization reminiscent of software engineering.
In the testing phase, errors can occur at any level, so all must be tested, but the approach differs for each level. Haouche-Gingins and Charlet [48] discuss knowledge level testing, seen as comparing a knowledge level description of actual system behaviour with the specification. Freed from the need to consider testing at the symbol level (kp:freedom), they identify two types of knowledge level specification - of the system and of anomalies - and propose the addition of validation knowledge to domain knowledge. Though testing deals with behaviour of the system, key points of the knowledge level that are to do with predicting behaviour (kp:pred, kp:rat, kp:por, kp:ri) are not relevant here because the issue is actual behaviour.
In contrast to structured, linear approaches to system development several workers [1, 15, 52] contend that systems should be developed iteratively, merging or interleaving the knowledge level and symbol level development phases, and should evolve over their lives. This is because real life knowledge-level models become so complex (kp:variety) that written specifications are no longer useful. However, the merged phases of iterative development can bring confusion.
This problem might be resolved by training and attitude change rather than by resorting to formal project structure. The attitude should be one that recognises the distinct importance of every level (kp:irred,kp:imptnt) yet holds them all in relationship (kp:rel) to achieve harmony. To achieve this, [11] proposes a methodology which combines linear and iterative development to abstract away from technological processes like specification and design towards desired properties of the technological artifact, such as trustability and usability. These are what really matter to make the delivered system useful, and are visible at the knowledge and tacit levels (kp:appln). This overcomes techno-centric tendencies and avoids being swamped by symbol level debates (e.g., from the 1980s, procedural versus declarative, rules versus PROLOG, backward versus forward chaining, etc.).
We can take Richards' [81] discussion as typical of useful knowledge level work in this area. From one symbol structure in a KBS different explanations might be generated for different purposes. Such flexibility shapes what the user sees as the tool's potential [101], and is made more feasible if the internal model is of general knowledge ('causal models', [62], 'understanding', [5]) rather than context-dependent problem-solving knowledge. Richards discusses both 'static' and 'dynamic' ways of generating explanation, illustrating that process and structure merge at the knowledge level (kp:proc). Static ways involve tracing either the structure (relationships) or the process (rule firings) of the knowledge base, while dynamic ways involve the user's action, e.g. in undertaking 'what-if' analysis. The tacit level issue of social meaning is briefly touched upon.
In the main, these papers do so simply to establish the scope of their discussion and to justify considering knowledge of the domain without having to consider implementation (kp:detail). Sadly, much of the potential of the knowledge level for applications is missed. There are several genuine research contributions that applications papers could make, based on Newell's theory:
"It does not matter," say Wielinga, et. al. [99], "whether the knowledge resides in the head of someone, is documented in a book, or is represented in an information system." So a knowledge level analysis can, in principle, encompass all knowledge in an organization, whether held in archives, databases, intelligent software agents or by personnel.
Knowledge management in organizations has become an important issue during the 1990s. It has, of course, close links with multiple agent systems, especially when we presuppose no difference between human and machine. Gaines' [40] paper discussed earlier has relevance for organizational knowledge, postulating that the knowledge level can emerge in a society of agents that model and manage each other as resources to fulfil their goals and manage each others' learning. Gaines showed that tasks can be distributed among agents successfully by knowledge level methods if certain conditions pertain (including the doubtful one that task difficulty is reasonably independent of agents). He discusses agent learning curves and shows how, by assigning a granularity to knowledge, the management of learning can be simplified. Though there might be severe flaws in his arguments, mentioned above, Gaines' treatment of organizational knowledge is far-reaching and wide-ranging, indicating that the notion of the knowledge level has powerful potential to tackle organizational knowledge.
Jennings [54] takes a different approach. As we have seen, he proposes a new, social, level, in order to tackle problems of an agent-based approach to organizational knowledge, especially unpredictability. Just as the knowledge level "stripped away implementation and application specific details to reveal the core of asocial problem solvers" so it might be possible to strip away details of individuals to reveal the core of organizational knowledge. As we have argued above, his social level is not a true Newellian level but it does offer us a way of seeing a whole organization as an information system; whether this is useful remains to be seen.
De Souza, Ying and Yang [33] do not speculate that organizations can be treated as systems in such a manner. Instead, they employ the knowledge level in a more direct way as a tool for modelling business processes. A knowledge level analysis can encompass dynamic and uncertain knowledge, enable a simple organization of information, utilize well-developed knowledge representation schemes to aid business processes, and allow models to be interactively changed so they remain alive over a long period. They discuss types of knowledge, such as precise, muddy and random. What they value about the knowledge level is first its independence from the symbol level (kp:irred) and the freedom it gives to ignore implementation aspects (kp:freedom). Radical incompleteness (kp:ri), they claim, suits organizational reality, though their notion of it might differ from Newell's.
Organizational knowledge is a dynamic research arena, in which ideas are even now being conceived and born, and it will be a long time before it is known which are the truly successful approaches. It is not unlikely that Newell's theory of levels will have some contribution to make, because of its recognition of intentionality and because of its diversity of levels of description.
Because knowledge is functioning rather than a commodity (kp:func) it can be researched and discussed without having to know whether, for example, it is held explicitly or innately [2]. Because Newell's knowledge level is intimately bound up with the environment (kp:env) it can help also guide research on human behaviour in the real world. Newell suggested that if we know approximately what a person is likely to know and their goals, then we can often make useful predictions by invoking the principle of rationality (kp:por).
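Newell's principle of rationality can be caricatured computationally. The following minimal sketch (all names and the dictionary representation are illustrative assumptions, not anything Newell specified) predicts an agent's action from what it knows and what it wants, and returns nothing when the knowledge level alone does not determine behaviour (kp:ri):

```python
# A hedged sketch of the principle of rationality (kp:por): if an
# agent knows that an action leads to one of its goals, predict that
# it selects that action. The dict-of-outcomes representation is an
# illustrative assumption.

def predict_action(knowledge, goals):
    """knowledge maps each action to the outcome the agent believes
    it yields; return an action the agent should rationally select."""
    for action, outcome in knowledge.items():
        if outcome in goals:
            return action
    # Radical incompleteness (kp:ri): the knowledge level alone may
    # not determine what the agent will do.
    return None

knowledge = {"take_umbrella": "stay_dry", "run": "get_wet"}
goals = {"stay_dry"}
print(predict_action(knowledge, goals))  # take_umbrella
```

Note that this is a prediction made by an observing agent (kp:oo), not a mechanism inside the observed agent; the symbol level realisation could be entirely different.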
There have been a number of attempts to discuss levels in psychology, such as Representational Redescription, discussed above, [55, 20]. There are also a number of direct references to Newell's work in psychology, such as [49, 77, 89]. Probably the most useful are those which have employed levels to propose models of human cognition.
Foremost among these is perhaps John Anderson's [2] Theory of the Origins of Human Knowledge. His main discussion is about how knowledge might be built up. He proposes three levels, knowledge, algorithm and implementation levels, which correlate well with Newell's knowledge, symbol and bit levels and enable him to come up with an elegant theory of various psychological functions involved in learning. His algorithm level is closely linked with human working memory, while his implementation level covers resource issues like size of working memory, speed of response, etc. His theory tackles the issue of innate knowledge, suggesting that this occurs at both knowledge level and implementation level, but not at the algorithm level. He uses the word 'knowledge' for any learned capability, at any level; following such usage might resolve some of the problems in Newell's knowledge level, as we discuss in [CP].
Some uses of the theory are small, such as to propose ways of escaping an infinite symbol level [47] or to construct a theoretical basis for distinguishing data, information and knowledge [1]. In many ways, the overall design of an architecture (above) is akin to developing a theory.
Debate is an important part of theory development, and Newell's levels can help to structure it, in a number of ways. This is well illustrated by the debate between Guarino [45] and Van Heijst, Schreiber and Wielinga [97, 98] about using ontologies in KBS development. Guarino believes the interaction problem is largely invisible at the knowledge level, but van Heijst et. al. argue that it is visible in practical KBS experience. In this they are using the knowledge level as a yardstick for classifying phenomena. Both parties, it should be noted, treat the knowledge level as something with which they wish to be associated, so the argument tended to be about who was more true to Newell's idea. However, recognising this tendency, Van Heijst et. al. clarified the debate with "our difference of opinion with Guarino originates from different views on the nature of the knowledge level. Guarino's view .. closely corresponds to Newell's original formulation ... in terms of its rational behaviour ... we use a much more pragmatic - and we believe more useful - interpretation of the knowledge level. In our view, knowledge level models are descriptions of the knowledge required to solve some problem, formulated in a language that does not restrict expressiveness because of efficiency considerations." (We can see here the two main themes of the knowledge level, 'aboutness' and behaviour.) This recognition clears the ground and allows further progress.
Newell's theory was also used in two other ways: to guide the generation of examples and counter-examples that helped progress the debate - an example of a visible interaction - and to exclude certain things from consideration - types of ontology (implementation-oriented ones) - and thus define the scope of debate. A possible fifth way Newell's theory could be used, to appeal to key points of Newell's theory as axioms in a deductive argument, was not exhibited in this debate. In this way we can see that Newell's theory can be useful in theoretical debate because of its intuitive appeal, its normative force, and the detail with which Newell worked it out.
We can see its use in development of (or at least justification for) a broad perspective in Clancey's [31] The knowledge level reinterpreted - modelling socio-technical systems. Stressing that knowledge is active interpretation (kp:interp,kp:func), Clancey does not so much reinterpret the knowledge level as stress what Newell himself said but most artificial intelligence workers had not noticed. It is as though Newell's ideas are a door into new territory which Clancey opens, pushes through, makes a wide exploration of the territory beyond, and then paints a broad perspective picture for us of what he found. "Several ideas interweave in this analysis," he says, in the conclusion of a paper that is well worth reading:
"... In many respects this research has just begun. ... knowledge engineering moves radically from its original concern in 'acquiring and representing expert knowledge' to the larger arena of social and interactional issues involved in collaboration and invention in everyday work. We shift from the idea that a glass box design is an inherent property of a device, to realize that transparency is relative to the observer's point of view, and this depends on cultural setting. We shift from the idea that computer models are equivalent to habits and skills; rather as representations they play a key role in reflection and hence learning new ways of seeing and behaving. We shift from the idea that goals, meaning and information are fixed entities that are inherent in a task, to helping people in their continual, everyday efforts to construct their mutual roles, contributions and identity. In all this, we see the role of knowledge engineering not as 'capturing knowledge' in a program that is delivered by technicians to users. Rather, we seek to develop tools that help people in a community, in their everyday practice of creating new understandings and capabilities, new forms of knowledge."
This author has found the integrative position useful in recognising that some issues occur across all levels. Consider efficiency. We usually take it to be of lower levels (component level: fast hardware, avoiding bus contention, etc.; bit level: good machine code, proper choice between linked lists versus tables, etc.; symbol level: well designed and appropriate algorithms). But Fensel and Straatman [38] argue that it has a knowledge level aspect too. For example, making assumptions can increase efficiency in problem solving, and deliberate assumptions are visible at the knowledge level with cultural ones, at the tacit level. Many issues are multi-level, including user interface (see Fig. 5), errors, data storage, and can be better understood if all levels are taken into account. The history of computing can also be seen as progress up the levels:
An integrative position can help in real-world system design and evaluation. In the general case, a system's functioning can be jeopardized by problems at any level. We have already commented on computer games that had good bit level but poor knowledge level; but the reverse - good knowledge level but poor bit level, e.g. slow response - can also seal a game's fate. Many websites have great decoration (bit level) but poor knowledge level content. We might state this as a principle:
Principle of Level Importance: In general, every level is important to the success or failure of an information system.
The principle applies to most types of information technology, and a multi-level analysis of technologies can be helpful.
For a general KBS, correct knowledge is the most important factor: knowledge level. The way the knowledge is represented is usually less crucial and bit level issues are usually less important still. The importance of the tacit level depends on whether users come from disparate cultural groups; Fensel and Benjamins [37] discuss assumptions during knowledge engineering and Öztürk [73] discusses context issues. Many KBSs are for a specialized group of users so it is less important. But if the KBS makes its knowledge available over the Internet, as a knowledge server [9], the tacit level becomes crucial because cultural assumptions of users are likely to vary widely. Bit level efficiency becomes important for a heavily-used server. Normally lower levels are unimportant for a KBS, but in one that controls industrial plant the component level can become important in terms of sensors and hardware devices, and at the materials level specially robust materials might be needed. The tacit level is probably unimportant for such systems.
For a public multimedia system that employs video walls, complex sound system, etc. most levels demand design effort. At the component level all monitors must be kept in colour balance, electrical contacts must be good and interference must be avoided. The memory bus, shared by the central processor, graphics hardware and sound hardware, must be fast. At the bit level lip synchronization must be rigorous and animation frames must be rendered fast enough to avoid flicker. This can mean reducing the detail of the 3D scene being rendered, which can have symbol level implications if visual symbols are not so easily seen. If the multimedia contains a message, then the visual/aural symbols must be well designed. At the knowledge level, the content must be crisp, clear and coherent for public information services - but perhaps not so for a political message! Artificially composed music must make sense musically [78]. At the tacit level humour should be appropriate to the culture in which it is shown, and cultural mistakes or insults should be avoided.
Principle of Technical Demand: The more levels that are important to a technology, the more demanding it is in project design.
The output from multi-level analysis can be used as a yardstick against which information systems can be evaluated. It helps ensure that important factors are not overlooked in evaluation. For example the success of the ELSIE KBS [12], that was used widely in the surveying profession in the late 1980s in budgeting for office developments, can be analysed in this way. It was a type of general KBS, in which all users were from the surveying profession. The cultural issues of the profession (tacit level) were built into the system on a continual basis because of the close involvement of the U.K.'s Royal Institution of Chartered Surveyors. The quality of knowledge (knowledge level) was high, owing to the 'four questions' of knowledge acquisition being employed to separate understanding from CPS. At the symbol level, the language employed (Savoir) was powerful and flexible, and at the bit level it was a very efficient piece of software. So all levels scored highly.
In teaching databases, for instance, the author found the approach useful to split his course into bit level (data storage mechanisms, indexing, security etc.), symbol level (data models and structuring, etc.) and knowledge level (content, consistency, etc. - and normalization). Students often have difficulty grasping normalization when taught conventionally, as a mere extension of database structure. This is because while relational database structure is a symbol level issue, correct normalization of a database is a knowledge level issue, dependent on the precise knowledge level meanings ('aboutness') given to the information by user and designers. Trying to understand a knowledge level issue in symbol level terms ends in confusion. The author now teaches what he calls 'intuitive normalization', in contrast to mathematically-founded normalization, and many students have found it easier to understand. This shift would not have been possible without Newell's notion of levels.
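The point can be made concrete. Normalization rests on functional dependencies, and the sketch below (a teaching illustration with invented table and attribute names, not part of the author's course material) shows that stored rows at the symbol level can only refute a dependency, never establish it; whether `customer -> city` really holds is a claim about what the data is 'about', which only users and designers can settle:

```python
# Functional dependencies, the basis of normalization, are knowledge
# level claims about meaning; sample data at the symbol level can
# refute them but never prove them.

def holds_in_sample(rows, lhs, rhs):
    """Return True if the dependency lhs -> rhs is not violated
    by these rows (it may still be false in the real world)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same lhs values map to different rhs values
    return True

orders = [
    {"order": 1, "customer": "Ann", "city": "Leeds"},
    {"order": 2, "customer": "Ann", "city": "Leeds"},
    {"order": 3, "customer": "Bob", "city": "York"},
]

# Consistent with customer -> city; but should a customer ever have
# two cities? That question belongs to the knowledge level.
print(holds_in_sample(orders, ["customer"], ["city"]))  # True
```

Deciding to decompose the table because of such a dependency is thus a knowledge level judgement, even though the decomposition itself is carried out at the symbol level.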
With regard to the first, the most important use has been the validation of our intuition that knowledge is distinct from symbols, and to be treated in its own right without reduction to psychology or any other science. This has stimulated and shaped the development of knowledge analysis, representation and ontologies. But this theme is also important when seeking a suite of levels. Newell's suite is clearly superior to most others in its coverage, the detail in which it has been worked out, and particularly in the ontological claim Newell made for his levels. This holds the whole suite together as a coherent plurality. But, apart from the author's own multi-level analysis that he has used in teaching, design and evaluation, few others seem to have espoused an integrative multi-level approach.
Newell's suite matches that of linguistics well, with the exception of a new tacit level that matches some issues in linguistic pragmatics. The proposal for the tacit level has been cursory, here, and more work needs to be carried out. It shows signs of being a genuine specialization of the knowledge level into a new level, but it might just be part of the knowledge level itself. Further work is needed to determine which it is, based on the philosophical underpinning that we discuss in [CP].
The theme of 'aboutness' has been important in knowledge representation, ontologies, explanation and the like. The emphasis on interpretation anticipated the interpretive perspectives that have arisen in information systems since Newell wrote his paper. But it seems there are avenues to explore. Newell did not attempt to explain 'aboutness' as such, and it may be that gaining an understanding thereof will stimulate and guide such exploration. The companion paper [CP] attempts such an understanding.
The theme of rational behaviour has been referred to by a number of workers in the artificial intelligence community, and also by those involved in organizational knowledge and multi-agent systems. Newell's emphasis on goals has given the notion of the knowledge level a focus that has stimulated technical design in a number of areas. But, as Newell found, the logicist community has still to recognise the benefits the knowledge level could give them.
One of the three main themes, behaviour, I had all but ignored. The theme of 'aboutness' I had valued intuitively but had little clear idea what Newell's own treatment of it was. The theme of irreducible levels I had found of immense value, and in fact had gone beyond Newell's ideas (e.g. adding the tacit level) to provide myself with a central plank for much of my teaching and research, but I felt, at the back of my mind, that I did not know exactly what Newell had said and not said about levels.
I discovered that Allen Newell's theory has been used in ways I had not expected - and ways that he himself probably did not anticipate. I discovered that his ideas hang together well, each part supporting the others without (much) contradiction. His theory of levels overshadows others because of its scope and comprehensive detail. The pieces about which I had been uneasy now fit into the whole picture. For some, my uneasiness has subsided, much like Chandrasekaran reported in [24]:
"I raised a number of questions about what I thought were inconsistencies in his proposed 'knowledge level.' He patiently explained the issues and I came to see that the 'inconsistencies' were actually subtleties that required a certain amount of aging in one's mind."
For others, the uneasiness has clarified into specific problems or points of disagreement, which I address in the companion paper [CP].
From the large number of times his paper The Knowledge Level [KL] has been cited, his ideas could perhaps be credited with being a major stimulant of, foundation for and motivation behind the areas of knowledge acquisition, ontologies and knowledge management. Exactly how they have been so is worth a separate study by historians, but Newell's paper has given us all intellectual permission to think of knowledge itself, as distinct from symbols, and not reduced to psychology, logic, sociology or philosophy. What Allen Newell did was to take a set of intuitive ideas awaiting explication, examine them, clarify them, work them out in detail and show how they all belong together as part of a larger scenario. There are still some gaps and problems in his ideas and, though he tried to supply it, little in the way of philosophical underpinning, and sorting these out is the function of the companion paper.
Allen Newell's theory of levels is rather like Darwin's theory of evolution. It is more a way of thinking than a set of laws. It is a perspective that can motivate and guide research in information systems and around which whole research programmes may be structured. It is of course too early to say how great its final impact will be, but it has lasted twenty years and is showing no signs of growing old. I believe that, with some refinement and philosophical underpinning as discussed in [CP], it can last at least another twenty years, and for some decades beyond that, as an important perspective on information systems.
Id. | Key Point | Source | Explanation |
---|---|---|---|
Concerned with Suite of Levels | |||
kp:int | Intuition. | [KL:90,92-3,116] | We have an intuition that knowledge is somehow different from the symbols it is expressed in, and this intuition is worth developing into a theory. |
kp:prac | Practice. | [KL:92] | The notion of levels derives primarily from practice in artificial intelligence. |
kp:theory | Theory. | [KL:122] | But it can be supported theoretically. |
kp:resch | Guiding Research. | [KL:117-120] | That knowledge and symbols are distinct clarifies contribution of research and can guide strategy. |
kp:scope | Scope of Theory. | [KL:106] | All things that can be said to have knowledge are describable by this theory; human and computer are alike at knowledge level. |
kp:pls | Levels. | [KL:96] | The levels form a pluralistic suite, not monistic or dualistic. |
kp:lvls | The levels .. | [KL:99] | .. are: materials (device), component (circuit), bit, symbol, knowledge and, possibly, tacit levels. |
kp:descr | Description. | [KL:96,97] | Each level gives us a different way of describing information systems. |
kp:valid | Validity. | [KL:95] | All descriptions of a thing are equally valid; each level has a role. |
kp:same | Object of Description. | [KL:95] | Each level describes the same thing, not different parts of it, though in different ways. |
kp:compl | Completeness. | [KL:96] | A description at any level should not need description from other levels to make sense. |
kp:des | Design. | [KL:97] | System design and evaluation can occur at each level. |
kp:behav | Behaviour. | [KL:95] | Because each level has a law of behaviour the theory of levels allows us to describe behaviour not just structure. |
kp:pred | Prediction. | [KL:95-6] | Because of the law of behaviour a description at any level allows us to predict behaviour. |
kp:ont | Ontological Claim. | [KL:97,98,99] | What levels we can describe at is given. We cannot think up new levels at will. |
kp:nointv | No Intervening Levels. | [KL:97] | So there are no intervening levels. |
About Each Level | |||
Each level has ... | |||
kp:sys | A system | | |
kp:med | Medium | | |
kp:compn | Components | | |
kp:lawc | Laws of composition | | |
kp:lawb | Laws of behaviour | | |
kp:rel | Relationships. | [KL:95-97] | Levels are related; no 'unbridgeable gap'. |
kp:dep | Dependence. | [KL:99] | Higher levels depend on lower, so a description at lower levels is always possible, in principle. |
kp:top | Top Level. | [KL:97] | But there might not be a description at higher levels; there might be a top level. |
kp:tgy | Technologies. | [KL:97] | Each level determines a distinct technology. |
kp:impln | Implementation. | [KL:95,99] | Each level is implemented in lower levels. |
kp:dist | Distinctness. | [KL:94-5] | The levels are distinct from each other. |
kp:irred | Irreducibility. | [KL:95-97,99] | A stronger form of kp:dist. The distinctness is an ontological irreducibility. |
kp:meaning | Meaning. | [KL:97] | Each level is reached by 'adding meaning' to the next lower level: a 'specialization' of the one below, by which the describer distinguishes what has meaning at the higher level and assigns that meaning. In particular, the next level cannot be reached by mere abstraction nor aggregation. |
kp:agg | Aggregation. | [KL:97] | Aggregation (hierarchies) occurs within each level, and does not take us to the next level on its own. |
kp:invis | Invisibility. | [KL:105] | Some things described at one level become invisible at next level, because they have no meaning there. |
kp:err | Errors. | [KL:97] | Errors at a level are explainable and valid in lower levels. |
kp:m-n | Many-to-Many. | [KL:95] | The relationship between levels is many-to-many. |
kp:detail | Lower Level Detail. | [KL:108] | We can work at the knowledge level without full knowledge of symbol level of agent; similarly for other levels. |
kp:freedom | Freedom. | [KL:108] | Hence we can work at KL when symbol level detail not known; we can design at the KL (application task) before representation strategy has been decided. |
kp:imptnt | All Important. | [Deduced] | All levels are important, and should be integrated in design and evaluation. |
Concerning the Knowledge Level Itself | |||
kp:about | 'Aboutness'. | [KL:114,123] | The link between symbol and knowledge Newell called 'aboutness'. But the knowledge level does not explain aboutness; it merely assumes it. |
kp:stance | Intentional Stance. | [KL:122] | Some levels link with Dennett's stances, especially the knowledge level with the intentional stance. |
kp:interp | Interpretation. | [KL:105] | Knowledge involves interpretation of symbols. |
kp:oo | Observing and Observed Agents. | [KL:106] | The knowledge level is only possible if there is an observing agent as well as an observed agent. The observing agent does the interpreting. |
kp:comp | Competence. | [KL:100] | "Knowledge is a competence-like notion, being a potential for generating action." |
kp:func | Knowledge as Functioning. | [KL:105] | Knowledge is functioning. "Knowledge is characterized functionally, in terms of what it does, not structurally .." |
kp:genv | Generative. | [KL:107-8] | An agent's knowledge is more than it has represented within it, and includes what is generated or inferred therefrom. |
kp:clos | Logical closure. | [KL:110] | The generative mechanism is deductive logic, so an agent's knowledge is the deductive closure of all it has represented. |
kp:proc | Process with Structure. | [KL:106,114] | Structure and process merge at the knowledge level. |
kp:env | Environment. | [KL:109-10] | Knowledge is about (things in) the environment, the world outside the system. A special case is when the environment is the system itself. |
kp:appln | Application. | [Deduced] | A major special part of the environment which the knowledge is about is the application. In most systems designed for use, the knowledge level is intimately bound up with the application of the computer system. |
kp:inf | Infinite. | [KL:107] | "Knowledge of world cannot be captured in a finite structure." |
kp:variety | Variety from Content. | [KL:101] | Variety at the knowledge level comes from the rich content of knowledge, rather than its structure. At lower levels it comes from the way relatively simple components are assembled to form the system. |
kp:kgacq | Knowledge Engineering. | [From KL:97,114] | Construction of physical implementation of a knowledge level description is no longer routine. |
kp:nondet | Non Determinacy. | [KL:104] | At knowledge level behaviour is not determinate. |
kp:nonpred | Non-predictable Behaviour. | [KL:104] | So knowledge level behaviour cannot be predicted precisely. |
kp:lwrdet | Determinate Lower Levels. | [KL:96] | Behaviour at lower levels is determinate; this gives us a problem. |
kp:glob | Global Principle. | [KL:102] | Behaviour at lower levels comes from the local responses of components to stimuli; behaviour at knowledge level comes from response to a global principle. |
kp:rat | Rationality. | [KL:100] | "Knowledge is intimately linked with rationality." |
kp:logic | Role of logic. | [KL:100,121-2] | The proper role of logic is not as a symbol level representation language, but as a knowledge level means for analysing the world. |
kp:por | Principle of Rationality. | [KL:102-105] | The law of behaviour at the knowledge level is the principle of rationality. |
kp:goal | Goals. | [KL:101] | Rational behaviour is directed by goals (a special type of knowledge). |
kp:klcomp | Knowledge level components. | [KL:100-1] | Therefore the knowledge level components are actions, bodies of knowledge and goals. |
kp:extn | PoR needs extending. | [KL:103] | The Principle of Rationality needs extending. |
kp:ri | Radical Incompleteness. | [KL:104,111] | The knowledge level is 'radically incomplete' as a level of description. Behaviour prediction often requires it to be augmented with lower level descriptions. |
[KL] Newell A., The knowledge level, Artificial Intelligence, 18 (1982) 87-127.
[RKL] Newell A., Reflections on the Knowledge Level, Artificial Intelligence, 59 (1993) 31-38.
[UTC] Newell A., Unified Theories of Cognition, Harvard University Press, Cambridge, MA, (1990).
[CP] Basden A., The knowledge level - a philosophical enrichment for the next twenty years, (2002).
[1] Aamodt A., Nygård M., Different roles and mutual dependencies of data, information, and knowledge - An AI perspective on their integration, Data and Knowledge Engineering, 16 (3) (1995) 191-222.
[2] Anderson J.R., A theory of the origins of human knowledge, Artificial Intelligence, 40 (1-3) (1989) 313-351.
[3] Andrews A.E., Progress and challenges in the application of artificial-intelligence to computational fluid-dynamics, AIAA Journal, 26 (1) (1988) 40-46.
[4] Armengol E., Plaza E., Explanation-based learning - a knowledge level analysis, Artificial Intelligence Review, 9 (1) (1995) 19-35.
[5] Attarwala F.T., Basden A., A methodology for constructing expert systems, R&D Management, 15 (2) (1985) 141-149.
[6] Basden A., On the application of expert systems, International Journal of Man-Machine Studies, 19 (1983) 461-477.
[7] Basden A., Appropriateness, in: Bramer M.A., Macintosh A.L, (eds.), Research and Development in Expert Systems X; Proc. Expert Systems 93, BHR Group, U.K, (1993) pp.315-328.
[8] Basden A., Three levels of benefit in expert systems, Expert Systems, 11 (2) (1994) 99-107.
[9] Basden A., Some technical and non-technical issues in implementing a knowledge server, Software - Practice and Experience, 30 (2000) 1127-1164.
[10] Basden A., Brown A.J., Istar - a tool for creative design of knowledge bases, Expert Systems, 13 (4) (1996) 259-276.
[11] Basden A., Watson I.D., Brandon P.S., Client Centred: an approach to developing knowledge based systems, Council for the Central Laboratory of the Research Councils, U.K, (1995).
[12] Brandon P.S., Basden A., Hamilton I., Stockley J., Expert Systems: Strategic Planning of Construction Projects, The Royal Institution of Chartered Surveyors, London, UK, (1988).
[13] Bell C.G., Newell A., Computer Structures: Readings and Examples, McGraw-Hill, New York, (1971).
[14] Berliner H.J., Backgammon computer program beats world champion, Artificial Intelligence, 14 (1980) 205-20.
[15] Beydoun G., Hoffman A., Incremental acquisition of search knowledge, International Journal of Human Computer Studies, 52 (3) (2000) 493-530.
[16] Bobrow D.G., Artificial intelligence in perspective: a retrospective on fifty volumes of the Artificial Intelligence Journal, Artificial Intelligence, 59 (1993) 5-20.
[17] Brooks H.M., Expert systems and intelligent information retrieval, Information Processing and Management, 23 (4) (1987) 367-382.
[18] Browne C., Evans R., Sales N., Aleksander I., Consciousness and neural cognizers: a review of some recent approaches, Neural Networks, 10 (7) (1997) 1303-1316.
[19] Brusoni V., Console L., Terenziani P., Dupré D.T., A spectrum of definitions for temporal model-based diagnosis, Artificial Intelligence, 102 (1) (1998) 39-79.
[20] Carassa A., Tirassa M., Representational Redescription and cognitive architectures, Behavioral and Brain Sciences, 17 (4) (1994) 711-712.
[21] Carley K., Newell A., The nature of the social agent, Journal of Mathematical Sociology, 19 (4) (1994) 221-62.
[22] Chan H.C., Goldstein R.C., User-database interaction at the knowledge level of abstraction, Information and Software Technology, 39 (10) (1997) 657-668.
[23] Chandrasekaran B., Towards a taxonomy of problem solving types, AI Magazine, 4 (1983) 9-17.
[24] Chandrasekaran B., Allen Newell - Expert Interview, IEEE Expert, June 1993 (1993) 5-12.
[25] Chandrasekaran B., Josephson J.R., Benjamins V.R., What are ontologies, and why do we need them?, IEEE Intell Syst Applications, 14 (1) (1999) 20-26.
[26] Chandrasekaran B., Mittal S., Deep versus compiled knowledge in diagnostic problem solving, Proc. Nat. Conf. Artificial Intelligence, (1982) pp.349-54.
[27] Chiu D.K.Y., Wong A.K.C., Synthesizing knowledge - a cluster-analysis approach using event covering, IEEE T. Systems, Man and Cybernetics, 16 (2) (1986) 251-259.
[28] Christodoulou E., Keravnou E.T., Metareasoning and meta-level learning in a hybrid knowledge-based architecture, Artificial Intelligence in Medicine, 14 (1998) 53-81.
[29] Chuang T-T., Yadav S.B., The development of an adaptive decision support system, Decision Support Systems, 24 (2) (1998) 73-87.
[30] Clancey W.J., Heuristic Classification, Artificial Intelligence, 27 (1985) 289-350.
[31] Clancey W.J., The knowledge level reinterpreted - modelling socio-technical systems, International Journal of Intelligent Systems, 8 (1993) 33-49.
[32] Dennett D.C., Brainstorms: Philosophical Essays on Mind and Psychology, Bradford, Montgomery, VT, (1978).
[33] De Souza R., Ying Z.Z., Yang L.C., Modelling business processes and enterprise activities at the knowledge level, Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM, 12 (1) (1998) 29-42.
[34] Dieng R., Corby O., Lapalut S., Acquisition and exploitation of gradual knowledge, International Journal of Human-Computer Studies, 42 (5) (1995) 465-499.
[35] Dietterich T.G., Learning at the Knowledge Level, Machine Learning, 1 (1986) 287-316.
[36] Domingo M., Sierra C., Knowledge level analysis of taxonomic domains, International Journal of Intelligent Systems, 12 (2) (1997) 105-135.
[37] Fensel D., Benjamins V.R., The role of assumptions in knowledge engineering, International Journal of Intelligent Systems, 13 (8) (1998) 715-47.
[38] Fensel D., Straatman R., The essence of problem-solving methods: making assumptions to gain efficiency, International Journal of Human-Computer Studies, 48 (2) (1998) 181-215.
[39] Fernández P.M., García-Serrano A.M., The role of knowledge-based technology in language applications development, Expert Systems with Applications, 19 (1) (2000) 31-44.
[40] Gaines B.R., Knowledge management in societies of intelligent adaptive agents, Journal of Intelligent Information Systems, 9 (3) (1997) 277-298.
[41] Gaspari M., Concurrency and knowledge-level communication in agent languages, Artificial Intelligence, 105 (1-2) (1998) 1-45.
[42] Gibson, J.J., The theory of affordances, in: Shaw R., Bransford J, (eds.), Perceiving, Acting and Knowing, Erlbaum, Hillsdale, NJ, (1977).
[43] Grice H.P., Logic and conversation, in: Cole P., Morgan J. (eds.) Syntax and Semantics Vol 3, (1975).
[44] Gruber T.R., Toward principles for the design of ontologies used for knowledge sharing, International Journal of Human-Computer Studies, 43 (5-6) (1995) 907-928.
[45] Guarino N., Understanding, building and using ontologies, International Journal of Human-Computer Studies, 46 (1997) 293-310.
[46] Habermas J., The Theory of Communicative Action; Volume One: Reason and the Rationalization of Society, tr. T. McCarthy, Polity Press, (1986).
[47] Hansson S.O., Knowledge-level analysis of belief base operations, Artificial Intelligence, 82 (1-2) (1996) 215-235.
[48] Haouche-Gingins C., Charlet J., A knowledge-level testing method, International Journal of Human-Computer Studies, 49 (1) (1998) 1-20.
[49] Hastie R., Problems for judgment and decision making, Annual Review of Psychology, 52 (2001) 653-683.
[50] Hendriks P.H., The organisational impact of knowledge-based systems: a knowledge perspective, Knowledge-based Systems, 12 (4) (1999) 159-169.
[51] Hintikka J., Knowledge and Belief, Cornell University Press, Ithaca, NY, (1962).
[52] Hori M., Yoshida T., Domain-oriented library of scheduling methods: design principle and real-life application, International Journal of Human-Computer Studies, 49 (4) (1998) 601-626.
[53] Jackendoff R., Semantic Interpretation in Generative Grammar, MIT Press, USA, (1972).
[54] Jennings N.R., On agent-based software engineering, Artificial Intelligence, 117 (2) (2000) 277-296.
[55] Karmiloff-Smith A., Beyond Modularity, MIT Press, (1992).
[56] Keenan E., Two kinds of presupposition in natural language, in: Fillmore C., Langendoen T, (eds.) Linguistic Semantics, (1971).
[57] Kirsch D., Foundations of AI: the big issues, Artificial Intelligence, 47 (1991) 3-30.
[58] Klir G.J., Identification of generative structures in empirical data, International Journal of General Systems, 3 (1976) 89-104.
[59] Lakoff G., Linguistics and natural logic, Synthese, 1 (2) (1970) 151-271.
[60] Landauer T.K., The Trouble with Computers: Usefulness, Usability and Productivity, Bradford Books, MIT Press, Cambridge, MA, (1996).
[61] Lanzola G., Gatti L., Falasconi S., Stefanelli M., A framework for building cooperative software agents in medical applications, Artificial Intelligence in Medicine, 16 (3) (1999) 223-249.
[62] Lee M., Compton P., From heuristic knowledge to causal, in: Yao X, (ed.) Explanations: Proc. of the Eighth Australian Joint Conference on Artificial Intelligence, 13-17 Nov, 1995, Canberra. Singapore (World Scientific) (1995) pp.83-90.
[63] Levesque H.J., Logic and the complexity of reasoning, Journal of Philosophical Logic, 17 (1988) 355-89.
[64] Levesque H.J., Brachman R.J., A Fundamental Tradeoff in Knowledge Representation and Reasoning (Revised Version), in: Brachman R.J., Levesque H.J, (eds.), Readings in Knowledge Representation, Morgan Kaufmann, Los Altos, California, (1985).
[65] McCarthy J., Hayes P.J., Some philosophical problems from the standpoint of artificial intelligence, in: Meltzer B., Michie D, (eds.) Machine Intelligence 4, Edinburgh University Press, Edinburgh, (1969).
[66] McDermott J., Preliminary steps towards a taxonomy of problem solving methods, in: Marcus S, (ed.) Automating Knowledge Acquisition for Expert Systems, Kluwer, Boston MA, (1988) pp.225-56.
[67] Menzies T.J., OO patterns: lessons from expert systems, Software - Practice and Experience, 27 (1997) 1457-78.
[68] Michalski R.S., On the quasi-minimal solution of the general covering problem, Proc. Fifth International Federation on Automatic Control, 27 (1969) 109-29.
[69] Mitchell T.M., Learning and problem solving, Proc. Eighth International Joint Conference on Artificial Intelligence, (Morgan-Kaufmann), (1983) pp.1139-51.
[70] Mitev N.N., More than a failure? The computerized reservation systems at French Railways, Information Technology and People, 9 (4) (1996) 8-19.
[71] Musen M.A., Fagan L.M., Combs D.M., Shortliffe E.H., Use of a domain model to drive an interactive knowledge-editing tool, International Journal of Human-Computer Studies, 51 (2) (1999) 479-495.
[72] Newell A., Physical symbol systems, Cognitive Science, 4 (1980) 135-183.
[73] Öztürk P., Towards a knowledge-level model of context and context use in diagnostic problems, Applied Intelligence, 10 (2-3) (1999) 123-37.
[74] Öztürk P., Aamodt A., A context model for knowledge-intensive case-based reasoning, International Journal of Human-Computer Studies, 48 (3) (1998) 331-355.
[75] Paraskevas P.A., Pantelakis I.S., Lekkas T.D., An advanced integrated expert system for wastewater treatment plants control, Knowledge-based Systems, 12 (7) (1999) 355-361.
[76] Pickering J., Attridge S., Viewpoints - metaphor and monsters - childrens storytelling, Research in the Teaching of English, 24 (4) (1990) 415-440.
[77] Pirolli P., Wilson M., A theory of the measurement of knowledge content, access, and learning, Psychological Review, 105 (1) (1998) 58-82.
[78] Ramalho G.L., Rolland P.Y., Ganascia J.G., An artificially intelligent jazz performer, Journal of New Music Research, 28 (2) (1999) 105-29.
[79] Ramoni M., Stefanelli M., Magnani L., Barosi G., An epistemological framework for medical knowledge-based systems, IEEE T. Systems, Man and Cybernetics, 22 (6) (1992) 1361-1375.
[80] Rasmussen J., Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, North Holland, New York, (1986).
[81] Richards D., Reuse of knowledge: a user-centred approach, International Journal of Human-Computer Studies, 52 (3) (2000) 553-579.
[82] Richards D., Compton P., An alternative verification and validation technique for an alternative knowledge representation and acquisition technique, Knowledge-based Systems, 12 (1-2) (1999) 55-73.
[83] Russell S., The Compleat Guide to MRS, Technical Report KSL-85-12, Department of Computer Science, Stanford University, CA, USA, (1985).
[84] Sathi A., Fox M.S., Greenberg M., Representation of activity knowledge for project management, IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7 (5) (1985) 531-552.
[85] Sawaragi T., Umemura J., Katai O., Iwai S., Fusing multiple data and knowledge sources for signal understanding by genetic algorithm, IEEE T Ind Electron, 43 (3) (1996) 411-421.
[86] Schank R.C., Conceptual Information Processing, North Holland, Amsterdam, (1975).
[87] Shahar Y., Cheng C., Model-based visualization of temporal abstractions, Computational Intelligence, 16 (2) (2000) 279-306.
[88] Simon H.A., Allen Newell: the entry into complex information processing, Artificial Intelligence, 59 (1993) 251-9.
[89] Stanovich K.E., West R.F., Individual differences in rational thought, Journal of Experimental Psychology - General, 127 (2) (1998) 161-188.
[90] Stefik M., Introduction to Knowledge Systems, Morgan Kaufmann, San Francisco, USA, (1995).
[91] Steels L., Components of expertise, AI Magazine, 11 (2) (1990) 28-49.
[92] Steels L., The componential framework and its role in reusability, in: David J-M., Krivine J-P., Simmons R, (eds.) Second Generation Expert Systems, Springer, (1995) pp.273-98.
[93] Swartout W., Moore J., Explanation in second generation ES, in: David J-M., Krivine J-P., Simmons R, (eds.) Second Generation expert Systems, Springer, Berlin, (1993) pp.211-31.
[94] Taylor M.M., Editorial: Perceptual control theory and its application, International Journal of Human-Computer Studies, 50 (6) (1999) 433-44.
[95] Valley K., Explanation in expert system shells - a tool for exploration and learning, Lecture Notes in Computer Science, 608 (1992) 601-614.
[96] Van de Velde W., Issues in knowledge level modelling, in: David J-M., Krivine J-P., Simmons R, (eds.) Second Generation Expert Systems, Springer-Verlag, Berlin, (1993) pp.211-32.
[97] van Heijst G., Schreiber A.T., Wielinga B.J., Using explicit ontologies in KBS development, International Journal of Human-Computer Studies, 46 (2-3) (1997) 183-292.
[98] van Heijst G., Schreiber A.T., Wielinga B.J., Roles are not classes: a reply to Nicola Guarino, International Journal of Human Computer Studies, 46 (2-3) (1997) 311-318.
[99] Wielinga B.J., Sandberg J., Schreiber G., Methods and techniques for knowledge management: what has knowledge engineering to offer?, Expert Systems with Applications, 13 (1) (1997) 73-84.
[100] WOS Web of Science Citation Search, http://wos.mimas.ac.uk/isicgi/CIW.cgi, search taken 4 May 2001.
[101] Winograd T., Flores F., Understanding Computers and Cognition, Addison-Wesley, (1986).