Date: Thu, 19 Mar 1992 01:56:01 +0100
From: Kai Rannenberg
Message-Id: <199203190056.AA10895@mail.cs.tu-berlin.de>
To: risks@csl.sri.com
Subject: ITSEC, Statement of Observations - the full text

        Statement of Observations concerning the
   Information Technology Security Evaluation Criteria (ITSEC) V1.2

   Data Protection and Data Security Task Force of the
   German Society for Informatics
   (Praesidiumsarbeitskreis Datenschutz und Datensicherung
   der Gesellschaft fuer Informatik)

   Feb. 24, 1992

The German Society for Informatics (GI), with its more than 18,000
members, is the largest German association of professionals working in
information technology (IT). As its highest board concerned with data
protection and data security, we want to bring the following
observations and recommendations to the attention of the Commission of
the European Communities as the sponsor of ITSEC, of politicians, of
all professionals working in the field of informatics - especially
those engaged in the further standardization of "Evaluation Criteria
for IT Security" within ISO/IEC JTC1/SC27 - and, last but not least,
of society at large.

The Information Technology Security Evaluation Criteria (ITSEC) are
intended to have a significant influence on security issues, primarily
in a technical sense. If they gain this influence then, due to the
socio-technological nature of IT systems, they will also have a
significant influence on organizational structures. Because of the
pervasive nature of IT, the everyday life of every individual living
in the European Community will be affected. This causes significant
changes in social structures. If IT systems are insecure, society at
large is at risk.
If they are structured in an Orwellian way, they are a threat to
democracy. Therefore, the points summarized in the following are of
utmost concern to us.

Historical Perspective

On April 17th, 1991, one of us participated in the ITSEC Version 1.1
(V1.1) Workshop. Some of the answers given there by representatives of
the group, i.e. the four countries which developed ITSEC, were
anything but convincing to us, in particular concerning our
observations 1.1, 2.1, and 6.1. The same is true for ITSEC V1.2 and
its accompanying response to key comments raised by reviewers.

Even now, a truly high-quality review of ITSEC is impossible, since
reviewers lack the information contained in ITSEM as well as
experience with the many different functionality classes still to be
defined. Therefore, it is impossible to decide impartially at the
moment whether the answer to our observation 2.1 - that this
incompleteness can be solved by "defining a functionality class using
exclusion functions" - is appropriate or not.

ITSEC V1.2 retains most of the deep problems observed in V1.1, just as
V1.1 retained most of the deep problems observed in V1.0. Not much
prophecy is therefore required to predict that V2 (and the emerging
ISO standard) will retain most of the known deep problems if everybody
simply carries on as before. Until now, no alternative approach has
been worked out. Therefore, the CEC should sponsor the development of
alternative proposals and the gathering of evaluation experience for
Version 2 of ITSEC.

Observations

1. Title and Scope

1.1 Version 1.2 of the ITSEC is the best and most general set of
criteria we have today. But it falls far short of the scope suggested
by its title and its scope section - and at least that scope will be
needed in a future information society. Therefore, either the title
and the scope section have to be narrowed, or much more and much
broader work is required.

Version 1.2 of the ITSEC is even less a set of security evaluation
criteria than Version 1.0 was: it defines a framework for formulating
security evaluation criteria.
In a certain sense this is more, in another sense less, than bare
security evaluation criteria. In any case, it does not help to define
evaluation criteria in such a way that the ITSEC title becomes correct
(in this respect), as was done in Jonathan Wood's presentation at the
ITSEC V1.1 Workshop. The TCSEC [Trusted Computer Systems Evaluation
Criteria, DOD 5200.28-STD] have been with us for many years, and there
is absolutely no need to use the term "evaluation criteria" with a
different meaning than it has there.

The ITSEC do not address decentrally managed IT systems, i.e.
connected IT systems with multiple administrations having potentially
conflicting interests. We admit that this is a difficult problem. But
it is very misleading, and a disservice to society, to completely
ignore a very urgent problem that falls within the scope suggested by
the title and scope section. The reference to non-repudiation under
the heading Data Exchange is far from sufficient and raises problems
of systematics; see 2.1 below.

An appropriate title for Version 1.2 of ITSEC might be:

   A Framework of Security Evaluation Criteria for
   Hierarchically Managed Information Technology Systems

1.2 The ITSEC do not cover the problem of different kinds of potential
attackers, i.e. not only users, but also system designers,
manufacturers, and operators; the designers, manufacturers, and
operators of the design tools; and so forth. Moreover, in real life,
different components or parts of a system (or product) have to be
characterized by qualitatively and quantitatively different aspects of
security, and are hence subject to different types of attacks and
attackers. In particular, risks caused by the operator of the system
are not considered - or are no longer considered - in ITSEC V1.2.

1.3 Definition of Integrity

We propose the following definition (changes in italics, here marked
with asterisks):

   integrity - prevention of *undetected* unauthorized modification
   of information.

Justification: In distributed systems, you cannot prevent unauthorized
modification of data, e.g.
in transit on a network, but you can detect unauthorized modification
by cryptographic means. Additionally, the proposed change in the
definition of integrity provides for a cleaner separation of integrity
and availability. A third justification is that, with the proposed
change, integrity corresponds to the well-established notion of
partial correctness in program verification, and integrity and
availability together correspond to the well-established notion of
total correctness.

2. Functionality

2.1 The classes (Generic Headings) for the security enforcing
functions are incomplete and unsystematic. As already noted in Andreas
Pfitzmann's statement of observations concerning ITSEC V1.0, essential
duals of the given classes are missing for some services. E.g. for
some services, Identification and Authentication are wanted, while for
other services you need the corresponding duals Anonymity and
Pseudonymity, cf. [David Chaum: Security without Identification:
Transaction Systems to make Big Brother Obsolete; Communications of
the ACM 28/10 (1985) 1030-1044]. The same is true of Audit and its
dual, Freeness from Observability in "private" domains. The argument
given at the ITSEC V1.1 Workshop - that the group of four left out
these duals because they feared confusion - is rather an argument for
the contrary: if these duals are essential (on which there was
unanimous consent at the ITSEC V1.1 Workshop), the problem has to be
treated by the very readers of the ITSEC whose confusion is feared.
And their task will surely be more difficult (and their "confusion"
greater) if the ITSEC do not introduce and define these duals!

The description of access control has to reflect that the best (and,
at least against strong attackers in today's systems, the only) way to
keep information confidential in a provable way is to prevent it from
being gathered in the first place.
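A brief aside on the integrity definition proposed in 1.3 above:
detection (rather than prevention) of modification is exactly what a
message authentication code provides. A minimal sketch using Python's
standard hmac module - the key and the messages are purely
illustrative, not part of the original statement:

```python
import hmac
import hashlib

def tag(key: bytes, message: bytes) -> bytes:
    """Compute a MAC so that any later modification is detectable."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared secret key"              # illustrative only; real keys must be random
msg = b"patient record: AIDS = no"
mac = tag(key, msg)

# Receiver side: modification in transit cannot be *prevented*,
# but it *is* detected - matching the proposed integrity definition.
tampered = b"patient record: AIDS = yes"
print(hmac.compare_digest(tag(key, msg), mac))       # unmodified: True
print(hmac.compare_digest(tag(key, tampered), mac))  # modified: False
```

The point of the sketch is only that "prevention of *undetected*
modification" is achievable where plain "prevention of modification"
is not.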
Surprisingly, in theory nearly any application can be realized that
way, and in practice many essential applications can, e.g.
communication networks, payment systems, authorization systems, and
value exchange systems, cf. [David Chaum: Security without
Identification: Transaction Systems to make Big Brother Obsolete;
Communications of the ACM 28/10 (1985) 1030-1044], [Andreas
Pfitzmann, Birgit Pfitzmann, Michael Waidner: ISDN-MIXes -
Untraceable Communication with very small Bandwidth Overhead; Proc.
IFIP/Sec'91, May 1991, Brighton, North-Holland, Amsterdam 1991,
245-258], and [Holger Buerk, Andreas Pfitzmann: Value Exchange
Systems Enabling Security and Unobservability; Computers & Security
9/8 (1990) 715-721].

Data Exchange is not a security enforcing function at the same level
of abstraction as the other seven function classes given; e.g., there
is no class Data Storage. Assigning such important aspects as
non-repudiation to this eighth function class makes the unsystematic
approach even worse.

2.2 The 10 example functionality classes give very poor guidance. The
possibility of defining additional functionality classes does not
solve the following problem: without guidance from the ITSEC
themselves, users and customers will not be able to specify their
security needs. The 10 example functionality classes therefore give
the false impression that every relevant security problem is covered.

2.3 Neither in Chapters 0-6 of the ITSEC nor even in the proposed
functionality classes are any limits imposed on the bandwidth of
covert channels. Compared with the Orange Book and [Criteria for the
Evaluation of Trustworthiness of Information Technology (IT) Systems,
ISBN 3-88784-200-6; German Information Security Agency, 1989], this
is a large step backward. The highest security class has to restrict
the combined bandwidth of all covert channels to below
4 * 10**-12 bit/s.

Justification: Imagine we have a file of people including the
attribute "AIDS, yes or no".
This bit of information has to be kept secret for the lifetime of the
patient, e.g. 80 years. It might be acceptable that the file leaks
0.01 bit during this timespan. This yields an acceptable bandwidth of
0.01/80 bit/year = 4 * 10**-12 bit/s.

The very least is that functionality classes dealing with higher
assurance of confidentiality require an upper bound on the bandwidth
of covert channels to be stated. It is then up to the accreditor or
procurer of a system or product to decide whether the possible
bandwidth is acceptable or not. If the ITSEC do not require this
bandwidth to be evaluated for, e.g., products, many procurers will
have to do it themselves, which is clearly a multiplication of effort.

2.4 The incomplete classification of functions and functionality
hinders the adequate classification and certification of products and
systems which guarantee Anonymity, Pseudonymity, and Freeness from
Observability. In their current state, the ITSEC give no help at all
for the evaluation and certification of systems which work without
the need for (or enforcement of) gathering unnecessary person-related
information. Examples of such systems are referenced in 2.1 and known
under the following buzzwords: untraceable communication (networks
without user observability); untraceable payment (digital payment
systems without user observability); authorization without
identification; value exchange systems without user observability.

3. Assurance - Effectiveness

3.1 The discrimination between strengths of mechanisms in only three
classes (basic, medium, high) is very poor and not adequate. There
must be more classes; see e.g. [Criteria for the Evaluation of
Trustworthiness of Information Technology (IT) Systems, ISBN
3-88784-200-6; German Information Security Agency, 1989, p. 15f] for
more classes and good definitions. But even beyond those, there
should be at least one additional class, which we call "unbreakable".
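As a numeric cross-check of the covert-channel bound derived in 2.3
above (a small Python sketch; the 80-year lifetime and the 0.01-bit
leakage budget are the figures assumed in that justification):

```python
# Acceptable covert-channel bandwidth for the AIDS-file example in 2.3:
# at most 0.01 bit may leak over an 80-year secrecy lifetime.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year, about 3.156e7 s

leakage_budget_bits = 0.01
lifetime_years = 80

bandwidth = leakage_budget_bits / (lifetime_years * SECONDS_PER_YEAR)
print(f"{bandwidth:.1e} bit/s")   # about 4e-12 bit/s, as stated in the text
```

The result confirms the 4 * 10**-12 bit/s figure used for the
proposed highest security class.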
You get it from the class "virtually unbreakable" by omitting
"according to the present state of the art", i.e. by making the
definition time-independent. Examples of members of the class
"unbreakable" would be the one-time pad, cf. [Claude E. Shannon:
Communication Theory of Secrecy Systems; The Bell System Technical
Journal 28/4 (1949) 656-715], or information-theoretic authentication
codes, cf. [Gustavus J. Simmons: A Survey of Information
Authentication; Proceedings of the IEEE 76/5 (1988) 603-620].

Having a highest rating which is just "beyond normal practicality" is
ridiculous! "Beyond normal practicality" is a trivial condition,
suited only to characterize "basic" and not at all "high".
Subjectivity of the rating is no argument for having very few
classes: with only very few classes you introduce relatively large
rounding errors, which is even worse. So define many classes, and
where the rating is subjective, give one class as the expectation and
two further classes as upper and lower limits. We think confidence
intervals have a good tradition in engineering!

3.2 The rating of cryptographic mechanisms has to be international
and as objective and reproducible as possible. This requires that all
mechanisms be published in full detail. Secret mechanisms are not
suitable for certified secure systems.

4. Assurance - Correctness

4.1 As in [Criteria for the Evaluation of Trustworthiness of
Information Technology (IT) Systems, ISBN 3-88784-200-6; German
Information Security Agency, 1989, p. 53], it should be stated
clearly that evaluation levels beyond E6 are possible, very
desirable, and might be defined in the future. One reason for this is
the existence of transitive Trojan horses, cf. the Turing Award
lecture [Ken Thompson: Reflections on Trusting Trust; Communications
of the ACM 27/8 (1984) 761-763]. Examples of evaluation levels beyond
E6 are:

Level E7: Verified design history of all tools used to develop the
TOE.
Level E8: Verified design history of all tools used to develop the
tools used to develop the TOE.

And so forth. The ultimate level would require that all tools used
(a recursive definition!) have a verified design history, i.e. some
form of secure bootstrap is needed in generating these tools. The
subversion of tools is a special risk of IT compared to other
technologies. It is not covered in the ITSEC, and should at least be
mentioned in the definitions of "Developer Security" (p. 113, 6.27)
and "Development Environment" (p. 113, 6.28).

4.2 If, as suggested in E4.2, a formal security policy for all
aspects of security has to exist for levels E4 and above, then, in
our opinion, this would be a major (at least short-term) weakness of
the ITSEC, since at the moment only 3 evaluation levels would then be
possible for nearly all relevant TOEs. If the requirement is merely
imprecisely formulated, we would assess this as a minor issue.

5. Post-Evaluation Problems

Evaluations and certifications of products and systems do not stay
valid forever, and evaluated products and systems are going to be
connected to one another.

5.1 Re-rating and re-evaluation have to be harmonized
internationally. Reference to national certification bodies, or the
statement "beyond the scope" (p. 13), is a poor excuse for not
addressing this issue.

5.2 What shall happen if someone discovers (and possibly publishes) a
flaw allowing a break of security in a certified TOE? Does the
certificate become invalid upon discovery of the flaw? How are all
users of the TOE notified?

5.3 Rating of TOEs consisting of evaluated components: what
functionality class and evaluation level should be assigned to, e.g.,
systems consisting of evaluated products? Is a complete, a partial,
or no re-evaluation required in this case?

6. Development and Discussion of the ITSEC

The organization of the ITSEC's development and discussion must be
improved.
6.1 For public acceptance of the ITSEC and a constructive scientific
discussion, it is not enough to publish the criteria (or, more
precisely, their framework) themselves. It is necessary to publish a
detailed rationale, i.e. all options and suggestions, the reasons for
the selection among the options, etc. At a scientific conference on
dependable computer systems organized by the GI (Gesellschaft fuer
Informatik e.V., the leading German scientific organization in the
area of informatics) in March 1991 at Darmstadt, this was asked for
unanimously. "ITSEC Revision: Addressing the Main Issues - A response
to the key comments raised by reviewers" is a first step. Others
should follow very soon.

6.2 The comparison between V1.1 and V1.2 shows that critical points
are neither discussed nor clarified, but simply left out in the
course of the document's development.

-------------------------------------------
Kai Rannenberg
Technische Universitaet Berlin
Informatics Department

E-Mail: kara@cs.tu-berlin.de    | Snail Mail:
Phone:  (+49 30) 314-73499      | Sekretariat FR 5-10
Fax:    (+49 30) 314-24891      | Franklinstr. 28/29
                                | D-W-1000 Berlin 10, Germany