Texts on topics | cikon.de
Semantic Web technologies

Contents:
Formalizing (from HTML to Ontologies)
Grounding (from symbols to reality or vice versa)
Semantic Web R&D in Europe ... and elsewhere

Formalizing (from HTML to Ontologies)

The World Wide Web has become a major vehicle for distributing and delivering multimedia content (including video and audio). It is accessible from stationary and mobile client platforms. Until recently the main format used to represent content on the Web has been HTML, a presentational markup language. The way HTML tags and their attributes are used can be expressed to some extent as an SGML document type definition. HTML has undergone several revisions in order to accommodate, for instance, typed objects, including those that can be heard and watched and others that allow a certain degree of interaction with server-based processes. However, these revisions (including major steps such as the introduction of Cascading Style Sheets, Dynamic HTML and scripting languages) could not remedy the basic "flaw" of HTML: its focus on layout and its lack of mechanisms for describing the content of document components. This is a design-inherent characteristic, as HTML was primarily conceived with a view to supporting human-to-human (visual) communication, not to enabling effective and efficient information retrieval or to facilitating business processes on the Web.

Mechanisms for specifying in some sense the meaning of document components are indeed necessary for any attempt to "automate" the Web to succeed. Automating, for instance, the tasks of making a travel arrangement or of carrying out a business transaction via the Web would require the respective software to "know" precisely the formats and intended meanings of the various resources underlying such transactions. The same holds for personalisation and filtering software as well as for software that executes complex queries on Web content.
This kind of software is commonly known as agents, decorated with varying attributes and qualifications such as information, intelligent, autonomous, cooperative, adaptive, rational, mobile, etc. Web automation, a subgoal perhaps of business automation, may be the biggest motivation behind the initiative known as the Semantic Web. It is not quite clear who coined the expression, or when. A 1997 "Semantic Web paper" by Alexander Chislenko expands somewhat on the issues outlined above and refers to relevant activities going on since the mid-nineties. Yet most would agree that the idea of a Semantic Web has received its greatest push from the World Wide Web Consortium (W3C). An informal paper of September 1998 by Tim Berners-Lee, entitled "Semantic Web Road Map", as well as a more formal note on "Web Architecture: Describing and Exchanging Data" (June 1999), may be considered the seminal documents. They reflect on the role of several formalisms that at the time were either in various stages of discussion within the W3C or had already attained the status of a Recommendation. The most basic of these are XML (eXtensible Markup Language, February 1998) and RDF (Resource Description Framework - Model and Syntax Specification, February 1999), both claimed to be instrumental in achieving the Semantic Web.

XML is derived from SGML; the differences between the two are explained in W3C NOTE-sgml-xml-971215, "Comparison of SGML and XML". Like SGML, XML is, properly speaking, not a markup language but a formalism for defining the syntax of markup languages; such syntax definitions are called Document Type Definitions, or DTDs. XML is often hailed as the successor to HTML. Wrongly so, because HTML markup determines but one document type, whereas XML makes it possible to introduce an indefinite number of document types. Thus it makes sense to say that DTDs are specific XML applications.
HTML has in fact been redefined as one such XML application, called XHTML, and it will continue to be used for presentational purposes. There are other XML applications (DTDs), and there will be many more, with tags and attributes tailored to the specific needs of special-interest communities (e.g. aecXML for architecture, engineering and construction; XML/EDI for business-to-business communication) or given subject areas (e.g. MathML). (See also http://www.xml.com/) DTDs need not be designed with a view to visual representation. They may also serve to frame data to be communicated between software processes. XML-defined markup tags can be given names that describe the type of content they delimit. The visual representation of a document can be neatly decoupled from its description by associating a particular style sheet declaring, for instance, the way of arranging the different parts of that document on a computer screen or on an A4 sheet of paper. Style sheets are of course also documents whose syntax follows the rules of XML. Document layout is but one instance of a document transformation. Consequently, the style sheet language (XSL - eXtensible Stylesheet Language) actually provides a mechanism for translating documents from one representation into another (XSLT - XSL Transformations). These transformations need not be visual but can serve whatever purpose.

What do XML and its various ancillary formalisms (such as XML Namespaces, XPath, XLink, XPointer and XForms) contribute to the Semantic Web? Well, not much and yet a lot. XML is a purely syntactic device, a notation for formulating grammars, that's all. The XML specification does not say anything about how to define the meaning of DTDs. These meanings are entirely separate from the DTDs. Allegedly "meaningful" tags such as <AUTHOR> are meaningful to human readers only.
For a computer program processing a chunk of text marked <AUTHOR>, this tag is about as significant as <U17> or any other string of characters in angle brackets. Hence XML-defined DTDs provide at best structural or (to a human beholder) descriptive markup, not "semantic markup". The main value of XML in terms of semantics lies in the fact that members of special-interest communities can agree on a common interpretation of their DTD elements, for instance in terms of the operations a software process is supposed to carry out on them, or in terms of how to map these elements to attributes of a given database. They can agree on relations between content portions delimited by given markup elements, and possibly on constraints these relations ought to satisfy. However, we repeat: none of these semantic agreements (often based on specific domain knowledge) are part of the DTDs used to mark up the documents of interest.

This has been remedied somewhat by the creation of another XML-based language for declaring document types: XML Schema, which is more powerful and flexible than the DTD formalism. While DTDs, as introduced in the XML specification, are not themselves XML documents, instances of XML Schema are. They can therefore be processed by XML tools. And while only ten types are possible in DTDs, XML Schema knows several dozen, provides a fairly sophisticated type composition facility (allowing, for instance, constraints on values) and facilitates the reuse of component definitions (and it does so in conjunction with the XML namespace conventions, namespaces being identified by URIs). XML schemata can convey precise semantic information only inasmuch as type declarations do, but not more. (See e.g. Roger L. Costello's XML Schema Tutorial for an easily digestible introduction.)
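The insignificance of tag names to a processor is easy to demonstrate. The following Python sketch (the two documents are invented for illustration) parses a fragment once with the tag <AUTHOR> and once with that tag renamed to <U17>, and shows that the resulting parse structures are identical:

```python
# Demonstration that, to an XML processor, a "meaningful" tag such
# as <AUTHOR> is just an opaque string: renaming it to <U17> changes
# nothing about how the document is parsed. The documents are made up.
import xml.etree.ElementTree as ET

doc_a = "<DOC><AUTHOR>Jane Doe</AUTHOR></DOC>"
doc_b = "<DOC><U17>Jane Doe</U17></DOC>"

def structure(xml_text):
    """Return the parse structure with the tag names abstracted away:
    for each element, its number of children and its text content."""
    root = ET.fromstring(xml_text)
    return [(len(elem), elem.text) for elem in root.iter()]

# Identical structure and content; only the opaque tag names differ.
print(structure(doc_a) == structure(doc_b))  # True
```

Any "meaning" of AUTHOR thus has to come from an agreement outside the document and its DTD, typically encoded in the program that processes it.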
Semantic agreements may be tacitly assumed (as for everyday natural language), informal (as in most printed dictionaries), semi-formal (as in many markup and programming language definitions) or strictly formal. Strict formality would indeed be necessary for the Semantic Web, where software agents are supposed to do jobs that people would normally do under much less stringent formality requirements. (Well, this is of course true only as long as software agents are not capable of behaving exactly like humans. See also "What the Semantic Web can represent" and – by implication – what it cannot.) Informal, semi-formal and strictly formal semantic agreements are usually laid down in documents as well. Documents have syntactic structure. And here, of course, XML comes in again, as a means to define the syntax of documents that contain semantic assertions. So much for another – important – contribution of XML to the Semantic Web.

The main questions now are: what kinds of semantic agreement are needed and feasible, and what should the underlying formal models be like? The question of how to use XML (or whether to use XML at all) for defining the syntactic structure of such agreements is (we repeat) important for interoperability at the syntactic level but secondary from a "logical" point of view. As has already been indicated, semantic agreements can be of varying degrees of depth and complexity. Simple assertions about a document, such as its title, author or subject, can for instance be expressed as attributes of the <META> tag in HTML. This approach has become a de facto standard known as the Dublin Core (DC). It is used to provide catalogue-type information on Web pages, and a growing number of search engines are exploiting that information.
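As a sketch of how software can exploit such catalogue information, the following Python snippet extracts Dublin Core attributes from the <META> tags of a made-up HTML page. The "DC." prefix follows the common convention for embedding DC elements in HTML; the page and its values are invented for illustration:

```python
# Extract Dublin Core (DC) metadata from HTML <META> tags.
# The page below is a fabricated example; real pages embed DC
# elements in the same name/content attribute pattern.
from html.parser import HTMLParser

PAGE = """
<html><head>
<meta name="DC.Title" content="Semantic Web technologies">
<meta name="DC.Creator" content="Jane Doe">
<meta name="DC.Subject" content="XML, RDF, ontologies">
</head><body>...</body></html>
"""

class DCExtractor(HTMLParser):
    """Collect the name/content pairs of Dublin Core <meta> elements."""
    def __init__(self):
        super().__init__()
        self.dc = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            name = a.get("name", "")
            if name.startswith("DC."):
                self.dc[name] = a.get("content")

parser = DCExtractor()
parser.feed(PAGE)
print(parser.dc["DC.Title"])  # Semantic Web technologies
```

A search engine doing no more than this already gets structured catalogue data that it could never reliably recover from the page body alone.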
Another early attempt at formalizing assertions about Web resources is PICS, the Platform for Internet Content Selection, a reaction perhaps to the Communications Decency Act, the failed initiative of US legislators to impose some regulation on what may be put on the Internet and what may not. Like DC elements, PICS labels may appear as attributes of the HTML <META> tag. Basically, both make factual statements concerning certain properties of documents or Web resources. Such statements are commonly known as metadata: information that is not necessarily apparent from the document or resource itself (in the case of HTML-encoded pages: not included or not clearly discernible in the body part of these pages). Metadata is in fact the term that has been adopted for talking about semantic agreements on Web resources. They are the realm of RDF (Resource Description Framework; see above). The W3C RDF Recommendation posits a metadata model and an XML-based syntax for actually writing "metadata documents". The RDF data model consists of three object types:

Resources: A resource may be an entire Web page, a part of a Web page, a whole collection of pages, or an object that is not directly accessible via the Web, e.g. a printed book. Resources are always named by URIs.

Properties: A property is a specific aspect, characteristic, attribute or relation used to describe a resource.

Statements: A specific resource together with a named property plus the value of that property for that resource is an RDF statement. A statement can be graphically represented as a directed labeled graph (DLG) consisting of two nodes (the resource and a property value) and one arc (the property). Alternatively, a statement can be read as a sentence consisting of subject (the resource), predicate (the property) and object (the property value).

Property values may themselves be resources, and statements can be made about statements (i.e. statements can also be considered resources).
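The data model can be mirrored directly in a few lines of Python: a statement is a (subject, predicate, object) triple, and a statement may itself serve as the subject of further statements. The URIs and property names below are invented for illustration; only the 'rdf:'-prefixed names follow the standard RDF vocabulary:

```python
# The RDF data model sketched as Python data: statements are
# (subject, predicate, object) triples. A statement, viewed as a
# resource, can itself be described by further statements
# (reification). URIs and the 'ex:' names are illustrative.

# A plain statement: "the page has creator 'Jane Doe'".
stmt = ("http://example.org/page", "dc:creator", "Jane Doe")

# The statement as a resource, described by its 'standard'
# properties subject, predicate, object and type.
reified = [
    (stmt, "rdf:subject",   stmt[0]),
    (stmt, "rdf:predicate", stmt[1]),
    (stmt, "rdf:object",    stmt[2]),
    (stmt, "rdf:type",      "rdf:Statement"),
]

# A statement about the statement: who asserted it.
about_stmt = (stmt, "ex:assertedBy", "http://example.org/librarian")

for subject, predicate, obj in reified + [about_stmt]:
    print(predicate, "->", obj)
```

Since triples can take other triples as subjects (or objects), nesting of this kind can be iterated, which is exactly what allows the arbitrarily complex structures discussed next.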
In theory this can lead to arbitrarily complex structures (DLGs). Hence, resource description documents (metadata) consist of (resource, property, value) triples, and it does not require much imagination to invent an XML syntax (document type) for marking up such documents, using the XML namespace conventions. RDF introduces a number of additional concepts, though (and their syntactic representations), refining the basic model. The 'standard' properties of a statement when viewed as a resource, for instance, are its subject, predicate, object and type; the value of the latter is 'rdf:Statement', an object in the standard RDF namespace (indicated by the prefix 'rdf'). Unordered and ordered lists of resources can be formulated, as well as the fact that the value of a given property may be one of several possible values. The respective container constructs are called bag, sequence and alternative. Furthermore, suitable attributes are defined for the various RDF elements. (Here the word 'standard' refers to everything defined by the RDF Model and Syntax document.)

The RDF Model and Syntax document does not include any indication of how to define the namespaces (sometimes also referred to as vocabularies) underlying specific resource descriptions, or how to give (some sort of) meaning to names of resources and properties. This is where RDF Schema (or RDFS, currently a W3C Candidate Recommendation, released in March 2000) comes in. RDFS proposes a vocabulary for talking about and for constructing RDF vocabularies. Its basic word is class. The biggest class, the Universe of Discourse, so to speak, is called Resource. Everything (well, almost) one can talk about in RDFS is a member of that class, or a resource, for short (note the lowercase "r"). The other 'standard' classes are "Class", "Property" and "ConstraintProperty", the latter being a subclass of Property (and all of them being subclasses of Resource).
(In this paragraph, of course, 'standard' refers to concepts defined by the RDF Schema document.) RDFS itself is built - in RDF manner - around a number of 'standard' resources, the names of which also belong to the 'standard' vocabulary. (Actually, two of these 'standard' names, type and property, have been carried over from the RDF namespace.) These resources are in one of the 'standard' classes: they are either of type Class or of type Property. 'type' is in fact one of the basic properties (in the sense of RDF) of these RDFS resources, the other one being 'subClassOf'. Furthermore, RDFS resources of type Property are subject to the 'standard' constraint properties domain and range, the values of which are classes.

Ontologies and ontology languages

We may now ask: what do RDF and RDFS contribute to the Semantic Web? The answer is: a lot, but not yet enough! While RDF and RDFS together constitute candidate tools (or the basic machinery) for making meaningful statements about resources, they do not tell us anything at all about what statements we should make. And this is where ontologies and ontology languages (i.e. formal languages in which to formulate ontologies) come in. (This is perhaps analogous to a situation familiar to every computer programmer: while the hardware and operating system of a computer provide the basic tools, concepts, etc., for implementing meaningful computations, they do not give the programmer a clue as to what exactly she should do with them. What to do is stated in a different language: a design or programming language, often application-specific. The programming language is a layer on top of these basic tools.)

Now: what is an ontology? Briefly, an ontology names objects and states known facts, relations, rules, etc., pertinent to a given domain of interest. (For instance gardening: what plants to grow when and where; which plants are compatible; what insecticides to use on which plants; how to avoid the use of insecticides, etc.)
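The gardening example can be made concrete in a toy sketch: a handful of facts plus one rule, from which a program derives new facts. All the names, the facts and the rule itself are invented for illustration:

```python
# A toy sketch of an ontology as facts plus a rule. The domain
# knowledge here (plants, compatibility) is entirely made up.

# Facts as (subject, relation, object) triples.
facts = {
    ("tomato", "is_a",            "plant"),
    ("basil",  "is_a",            "plant"),
    ("basil",  "compatible_with", "tomato"),
}

def infer_symmetric(facts, relation):
    """Rule: the given relation is symmetric, so for every known
    (a, relation, b) add the fact (b, relation, a)."""
    inferred = {(o, r, s) for (s, r, o) in facts if r == relation}
    return facts | inferred

enriched = infer_symmetric(facts, "compatible_with")

# The derived fact was not stated anywhere; it follows from the rule.
print(("tomato", "compatible_with", "basil") in enriched)  # True
```

However small, this exhibits the essential step: knowledge encoded as facts and rules lets a program produce statements that were never explicitly asserted.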
This is (at least part of) what is commonly referred to as knowledge. If we are knowledgeable about some domain, we can draw conclusions from statements made about objects of that domain and make inferences, possibly taking into account knowledge about some other domain and thereby creating new knowledge. Ontologies (and ontology research) have been around for some time (and certainly since long before RDF, XML or even the Web were invented). They have formed part of the standard repertoire of Artificial Intelligence (formal reasoning) and its subdomain Knowledge Engineering. And they have been around under different names. Axiomatic mathematical theories, for instance, are ontologies par excellence.

Why do ontologies have the potential to bring us a big step closer to the Semantic Web? Simply because software agents that are supposed to work largely autonomously on the Web (and this is what we understand the Semantic Web to be all about) can do so only if they can tap the kind of knowledge represented, for instance, by ontologies. And they can do so only if that knowledge can be suitably encoded as Web metadata. By the same token this applies to the task of making searches more effective and efficient: only if a search engine has some knowledge (in the above indicated formal sense) of the domain it searches can its performance go beyond simple keyword-finding capabilities. Ontologies are indeed likely to be the vehicles for reaching a further milestone on Tim Berners-Lee's Semantic Web Road Map, a milestone he described as follows:

"The next layer, then is the logical layer. We need ways of writing logic into documents to allow such things as, for example, rules (for) the deduction of one type of document from a document of another type; the checking of (a) document against a set of rules of self-consistency; and the resolution of a query by conversion from terms unknown into terms known. Given that we have quotation in the language already, the next layer is predicate logic (not, and, etc) and the next layer quantification (for all x, y(x))."

Grounding (from symbols to reality or vice versa)