Network Working Group                                          B. Fraser
Request for Comments: 2196                                        Editor
FYI: 8                                                           SEI/CMU
Obsoletes: 1244                                           September 1997
Category: Informational

                          Site Security Handbook

Status of this Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Abstract

This handbook is a guide to developing computer security policies and procedures for sites that have systems on the Internet. The purpose of this handbook is to provide practical guidance to administrators trying to secure their information and services. The subjects covered include policy content and formation, a broad range of technical system and network security topics, and security incident response.

Table of Contents

1.   Introduction
1.1  Purpose of this Work
1.2  Audience
1.3  Definitions
1.4  Related Work
1.5  Basic Approach
1.6  Risk Assessment
2.   Security Policies
2.1  What is a Security Policy and Why Have One?
2.2  What Makes a Good Security Policy?
2.3  Keeping the Policy Flexible
3.   Architecture
3.1  Objectives
3.2  Network and Service Configuration
3.3  Firewalls
4.   Security Services and Procedures
4.1  Authentication
4.2  Confidentiality
4.3  Integrity
4.4  Authorization
4.5  Access
4.6  Auditing
4.7  Securing Backups
5.   Security Incident Handling
5.1  Preparing and Planning for Incident Handling
5.2  Notification and Points of Contact
5.3  Identifying an Incident
5.4  Handling an Incident
5.5  Aftermath of an Incident
5.6  Responsibilities
6.   Ongoing Activities
7.   Tools and Locations
8.   Mailing Lists and Other Resources
9.   References

1. Introduction

This document provides guidance to system and network administrators on how to address security issues within the Internet community. It builds on the foundation provided in RFC 1244 and is the collective work of a number of contributing authors.
Those authors include: Jules P. Aronson (aronson@nlm.nih.gov), Nevil Brownlee (n.brownlee@auckland.ac.nz), Frank Byrum (byrum@norfolk.infi.net), Joao Nuno Ferreira (ferreira@rccn.net), Barbara Fraser (byf@cert.org), Steve Glass (glass@ftp.com), Erik Guttman (erik.guttman@eng.sun.com), Tom Killalea (tomk@nwnet.net), Klaus-Peter Kossakowski (kossakowski@cert.dfn.de), Lorna Leone (lorna@staff.singnet.com.sg), Edward P. Lewis (Edward.P.Lewis.1@gsfc.nasa.gov), Gary Malkin (gmalkin@xylogics.com), Russ Mundy (mundy@tis.com), Philip J. Nesser (pjnesser@martigny.ai.mit.edu), and Michael S. Ramsey (msr@interpath.net).

In addition to the principal writers, a number of reviewers provided valuable comments. Those reviewers include: Eric Luiijf (luiijf@fel.tno.nl), Marijke Kaat (marijke.kaat@sec.nl), Ray Plzak (plzak@nic.mil) and Han Pronk (h.m.pronk@vka.nl).

A special thank you goes to Joyce Reynolds, ISI, and Paul Holbrook, CICnet, for their vision, leadership, and effort in the creation of the first version of this handbook. It is the working group's sincere hope that this version will be as helpful to the community as the earlier one was.

1.1 Purpose of This Work

This handbook is a guide to setting computer security policies and procedures for sites that have systems on the Internet (however, the information provided should also be useful to sites not yet connected to the Internet). This guide lists issues and factors that a site must consider when setting its own policies. It makes a number of recommendations and provides discussions of relevant areas.

This guide is only a framework for setting security policies and procedures. In order to have an effective set of policies and procedures, a site will have to make many decisions, gain agreement, and then communicate and implement these policies.

1.2 Audience

The audience for this document consists of system and network administrators, and decision makers (typically "middle management") at sites. For brevity, we will use the term "administrator" throughout this document to refer to system and network administrators.

This document is not directed at programmers or those trying to create secure programs or systems. The focus of this document is on the policies and procedures that need to be in place to support the technical security features that a site may be implementing.

The primary audience for this work is sites that are members of the Internet community. However, this document should be useful to any site that allows communication with other sites. As a general guide to security policies, this document may also be useful to sites with isolated systems.

1.3 Definitions

For the purposes of this guide, a "site" is any organization that owns computers or network-related resources. These resources may include host computers that users use, routers, terminal servers, PCs or other devices that have access to the Internet. A site may be an end user of Internet services or a service provider such as a mid-level network. However, most of the focus of this guide is on those end users of Internet services.

We assume that the site has the ability to set policies and procedures for itself with the concurrence and support from those who actually own the resources. It will be assumed that sites that are parts of larger organizations will know when they need to consult, collaborate with, or take recommendations from the larger entity.
The "Internet" is a collection of thousands of networks linked by a common set of technical protocols which make it possible for users of any one of the networks to communicate with, or use the services located on, any of the other networks (FYI4, RFC 1594).

The term "administrator" is used to cover all those people who are responsible for the day-to-day operation of system and network resources. This may be a number of individuals or an organization.

The term "security administrator" is used to cover all those people who are responsible for the security of information and information technology. At some sites this function may be combined with administrator (above); at others, this will be a separate position.

The term "decision maker" refers to those people at a site who set or approve policy. These are often (but not always) the people who own the resources.

1.4 Related Work

The Site Security Handbook Working Group is working on a User's Guide to Internet Security. It will provide practical guidance to end users to help them protect their information and the resources they use.

1.5 Basic Approach

This guide is written to provide basic guidance in developing a security plan for your site. One generally accepted approach to follow is suggested by Fites, et al. [Fites 1989] and includes the following steps:

(1) Identify what you are trying to protect.
(2) Determine what you are trying to protect it from.
(3) Determine how likely the threats are.
(4) Implement measures which will protect your assets in a cost-effective manner.
(5) Review the process continuously and make improvements each time a weakness is found.

Most of this document is focused on item 4 above, but the other steps cannot be avoided if an effective plan is to be established at your site. One old truism in security is that the cost of protecting yourself against a threat should be less than the cost of recovering if the threat were to strike you. Remember that cost in this context includes losses expressed in real currency, reputation, trustworthiness, and other less obvious measures. Without reasonable knowledge of what you are protecting and what the likely threats are, following this rule could be difficult.

1.6 Risk Assessment

1.6.1 General Discussion

One of the most important reasons for creating a computer security policy is to ensure that efforts spent on security yield cost-effective benefits. Although this may seem obvious, it is possible to be misled about where the effort is needed. As an example, there is a great deal of publicity about intruders on computer systems; yet most surveys of computer security show that, for most organizations, the actual loss from "insiders" is much greater.

Risk analysis involves determining what you need to protect, what you need to protect it from, and how to protect it. It is the process of examining all of your risks, then ranking those risks by level of severity. This process involves making cost-effective decisions on what you want to protect. As mentioned above, you should probably not spend more to protect something than it is actually worth.

A full treatment of risk analysis is outside the scope of this document. [Fites 1989] and [Pfleeger 1989] provide introductions to this topic.
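To make the ranking and cost comparison concrete, the following sketch (not part of the original handbook; the assets, likelihoods, and dollar figures are invented for illustration) orders a handful of hypothetical risks by expected yearly loss and flags whether a proposed protection costs less than the loss it is meant to prevent, in the spirit of the truism from section 1.5.

   # Illustrative sketch only: assets, threats, likelihoods, and costs
   # below are invented examples, not recommendations.

   risks = [
       # (asset, threat, yearly likelihood, cost to recover, cost to protect)
       ("payroll database",  "insider disclosure", 0.30, 250_000, 20_000),
       ("public WWW server", "defacement",         0.50,  15_000,  5_000),
       ("backbone router",   "denial of service",  0.20,  40_000, 30_000),
   ]

   def exposure(likelihood, recovery_cost):
       # Expected yearly loss if no additional protection is added.
       return likelihood * recovery_cost

   # Rank risks by severity (highest expected loss first).
   for asset, threat, p, recover, protect in sorted(
           risks, key=lambda r: exposure(r[2], r[3]), reverse=True):
       worthwhile = protect < exposure(p, recover)
       print(f"{asset:18} {threat:18} exposure={exposure(p, recover):>9,.0f} "
             f"protect={protect:>7,} worthwhile={worthwhile}")

A real risk analysis uses far richer inputs than this, but the ordering and cost comparison steps are the same.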
However, there are two elements of a risk analysis that will be briefly covered in the next two sections:

(1) Identifying the assets
(2) Identifying the threats

For each asset, the basic goals of security are availability, confidentiality, and integrity. Each threat should be examined with an eye to how the threat could affect these areas.

1.6.2 Identifying the Assets

One step in a risk analysis is to identify all the things that need to be protected. Some things are obvious, like valuable proprietary information, intellectual property, and all the various pieces of hardware; but some are overlooked, such as the people who actually use the systems. The essential point is to list all things that could be affected by a security problem.

One list of categories is suggested by Pfleeger [Pfleeger 1989]; this list is adapted from that source:

(1) Hardware: CPUs, boards, keyboards, terminals, workstations, personal computers, printers, disk drives, communication lines, terminal servers, routers.

(2) Software: source programs, object programs, utilities, diagnostic programs, operating systems, communication programs.

(3) Data: during execution, stored on-line, archived off-line, backups, audit logs, databases, in transit over communication media.

(4) People: users, administrators, hardware maintainers.

(5) Documentation: on programs, hardware, systems, local administrative procedures.

(6) Supplies: paper, forms, ribbons, magnetic media.

1.6.3 Identifying the Threats

Once the assets requiring protection are identified, it is necessary to identify threats to those assets. The threats can then be examined to determine what potential for loss exists. It helps to consider from what threats you are trying to protect your assets. The following are classic threats that should be considered. Depending on your site, there will be more specific threats that should be identified and addressed.

(1) Unauthorized access to resources and/or information
(2) Unintended and/or unauthorized disclosure of information
(3) Denial of service

2. Security Policies

Throughout this document there will be many references to policies. Often these references will include recommendations for specific policies. Rather than repeat guidance on how to create and communicate such a policy, the reader should apply the advice presented in this chapter when developing any policy recommended later in this book.

2.1 What is a Security Policy and Why Have One?

The security-related decisions you make, or fail to make, as administrator largely determine how secure or insecure your network is, how much functionality your network offers, and how easy your network is to use. However, you cannot make good decisions about security without first determining what your security goals are. Until you determine what your security goals are, you cannot make effective use of any collection of security tools because you simply will not know what to check for and what restrictions to impose.

For example, your goals will probably be very different from the goals of a product vendor. Vendors are trying to make configuration and operation of their products as simple as possible, which implies that the default configurations will often be as open (i.e., insecure) as possible.
While this does make it easier to install new products, it also leaves access to those systems, and other systems through them, open to any user who wanders by.

Your goals will be largely determined by the following key tradeoffs:

(1) services offered versus security provided - Each service offered to users carries its own security risks. For some services the risk outweighs the benefit of the service and the administrator may choose to eliminate the service rather than try to secure it.

(2) ease of use versus security - The easiest system to use would allow access to any user and require no passwords; that is, there would be no security. Requiring passwords makes the system a little less convenient, but more secure. Requiring device-generated one-time passwords makes the system even more difficult to use, but much more secure.

(3) cost of security versus risk of loss - There are many different costs to security: monetary (i.e., the cost of purchasing security hardware and software like firewalls and one-time password generators), performance (i.e., encryption and decryption take time), and ease of use (as mentioned above). There are also many levels of risk: loss of privacy (i.e., the reading of information by unauthorized individuals), loss of data (i.e., the corruption or erasure of information), and the loss of service (e.g., the filling of data storage space, usage of computational resources, and denial of network access). Each type of cost must be weighed against each type of loss.

Your goals should be communicated to all users, operations staff, and managers through a set of security rules, called a "security policy." We are using this term, rather than the narrower "computer security policy", since the scope includes all types of information technology and the information stored and manipulated by the technology.

2.1.1 Definition of a Security Policy

A security policy is a formal statement of the rules by which people who are given access to an organization's technology and information assets must abide.

2.1.2 Purposes of a Security Policy

The main purpose of a security policy is to inform users, staff and managers of their obligations for protecting technology and information assets. The policy should specify the mechanisms through which these requirements can be met. Another purpose is to provide a baseline from which to acquire, configure and audit computer systems and networks for compliance with the policy. Therefore an attempt to use a set of security tools in the absence of at least an implied security policy is meaningless.

An Appropriate Use Policy (AUP) may also be part of a security policy. It should spell out what users shall and shall not do on the various components of the system, including the type of traffic allowed on the networks. The AUP should be as explicit as possible to avoid ambiguity or misunderstanding. For example, an AUP might list any prohibited USENET newsgroups. (Note: Appropriate Use Policy is referred to as Acceptable Use Policy by some sites.)

2.1.3 Who Should be Involved When Forming Policy?

In order for a security policy to be appropriate and effective, it needs to have the acceptance and support of all levels of employees within the organization. It is especially important that corporate management fully support the security policy process; otherwise there is little chance that the resulting policies will have the intended impact.
The following is a list of individuals who should be involved in the creation and review of security policy documents:

(1) site security administrator
(2) information technology technical staff (e.g., staff from computing center)
(3) administrators of large user groups within the organization (e.g., business divisions, computer science department within a university, etc.)
(4) security incident response team
(5) representatives of the user groups affected by the security policy
(6) responsible management
(7) legal counsel (if appropriate)

The list above is representative of many organizations, but is not necessarily comprehensive. The idea is to bring in representation from key stakeholders, management who have budget and policy authority, technical staff who know what can and cannot be supported, and legal counsel who know the legal ramifications of various policy choices. In some organizations, it may be appropriate to include EDP audit personnel. Involving this group is important if resulting policy statements are to reach the broadest possible acceptance. It is also relevant to mention that the role of legal counsel will vary from country to country.

2.2 What Makes a Good Security Policy?

The characteristics of a good security policy are:

(1) It must be implementable through system administration procedures, publishing of acceptable use guidelines, or other appropriate methods.

(2) It must be enforceable with security tools, where appropriate, and with sanctions, where actual prevention is not technically feasible.

(3) It must clearly define the areas of responsibility for the users, administrators, and management.

The components of a good security policy include:

(1) Computer Technology Purchasing Guidelines which specify required, or preferred, security features. These should supplement existing purchasing policies and guidelines.

(2) A Privacy Policy which defines reasonable expectations of privacy regarding such issues as monitoring of electronic mail, logging of keystrokes, and access to users' files.

(3) An Access Policy which defines access rights and privileges to protect assets from loss or disclosure by specifying acceptable use guidelines for users, operations staff, and management. It should provide guidelines for external connections, data communications, connecting devices to a network, and adding new software to systems. It should also specify any required notification messages (e.g., connect messages should provide warnings about authorized usage and line monitoring, and not simply say "Welcome").

(4) An Accountability Policy which defines the responsibilities of users, operations staff, and management. It should specify an audit capability, and provide incident handling guidelines (i.e., what to do and who to contact if a possible intrusion is detected).

(5) An Authentication Policy which establishes trust through an effective password policy, and by setting guidelines for remote location authentication and the use of authentication devices (e.g., one-time passwords and the devices that generate them).

(6) An Availability statement which sets users' expectations for the availability of resources. It should address redundancy and recovery issues, as well as specify operating hours and maintenance down-time periods. It should also include contact information for reporting system and network failures.
(7) An Information Technology System & Network Maintenance Policy which describes how both internal and external maintenance people are allowed to handle and access technology. One important topic to be addressed here is whether remote maintenance is allowed and how such access is controlled. Another area for consideration here is outsourcing and how it is managed.

(8) A Violations Reporting Policy that indicates which types of violations (e.g., privacy and security, internal and external) must be reported and to whom the reports are made. A non-threatening atmosphere and the possibility of anonymous reporting will result in a greater probability that a violation will be reported if it is detected.

(9) Supporting Information which provides users, staff, and management with contact information for each type of policy violation; guidelines on how to handle outside queries about a security incident, or information which may be considered confidential or proprietary; and cross-references to security procedures and related information, such as company policies and governmental laws and regulations.

There may be regulatory requirements that affect some aspects of your security policy (e.g., line monitoring). The creators of the security policy should consider seeking legal assistance in the creation of the policy. At a minimum, the policy should be reviewed by legal counsel.

Once your security policy has been established it should be clearly communicated to users, staff, and management. Having all personnel sign a statement indicating that they have read, understood, and agreed to abide by the policy is an important part of the process. Finally, your policy should be reviewed on a regular basis to see if it is successfully supporting your security needs.

2.3 Keeping the Policy Flexible

In order for a security policy to be viable for the long term, it requires a lot of flexibility based upon an architectural security concept. A security policy should be (largely) independent from specific hardware and software situations (as specific systems tend to be replaced or moved overnight). The mechanisms for updating the policy should be clearly spelled out. This includes the process, the people involved, and the people who must sign off on the changes.

It is also important to recognize that there are exceptions to every rule. Whenever possible, the policy should spell out what exceptions to the general policy exist. For example, under what conditions is a system administrator allowed to go through a user's files? Also, there may be some cases when multiple users will have access to the same userid. For example, on systems with a "root" user, multiple system administrators may know the password and use the root account.

Another consideration is called the "Garbage Truck Syndrome." This refers to what would happen to a site if a key person was suddenly unavailable for his/her job function (e.g., was suddenly ill or left the company unexpectedly). While the greatest security resides in the minimum dissemination of information, the risk of losing critical information increases when that information is not shared. It is important to determine what the proper balance is for your site.

3. Architecture

3.1 Objectives

3.1.1 Completely Defined Security Plans

All sites should define a comprehensive security plan.
This plan should be at a higher level than the specific policies discussed in chapter 2, and it should be crafted as a framework of broad guidelines into which specific policies will fit. It is important to have this framework in place so that individual policies can be consistent with the overall site security architecture. For example, having a strong policy with regard to Internet access and having weak restrictions on modem usage is inconsistent with an overall philosophy of strong security restrictions on external access.

A security plan should define: the list of network services that will be provided; which areas of the organization will provide the services; who will have access to those services; how access will be provided; who will administer those services; etc.

The plan should also address how incidents will be handled. Chapter 5 provides an in-depth discussion of this topic, but it is important for each site to define classes of incidents and corresponding responses. For example, sites with firewalls should set a threshold on the number of attempts made to foil the firewall before triggering a response. Escalation levels should be defined for both attacks and responses. Sites without firewalls will have to determine whether a single attempt to connect to a host constitutes an incident, and whether a systematic scan of systems does.

For sites connected to the Internet, the rampant media magnification of Internet-related security incidents can overshadow a (potentially) more serious internal security problem. Likewise, companies who have never been connected to the Internet may have strong, well defined, internal policies but fail to adequately address an external connection policy.
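As an illustration of the incident classes and thresholds described above, the sketch below (a hypothetical example, not part of the handbook; the level names, counts, and addresses are invented) counts blocked connection attempts per source and maps each count to an escalation level that a site's plan might define.

   from collections import Counter

   # Hypothetical escalation levels; each site defines its own classes of
   # incidents and corresponding responses.
   LEVELS = [
       (100, "level 3: block source and notify incident response team"),
       (20,  "level 2: notify on-call administrator"),
       (1,   "level 1: log only"),
   ]

   def escalation(attempts):
       # Return the response defined for the given number of attempts.
       for threshold, response in LEVELS:
           if attempts >= threshold:
               return response
       return "no action"

   # Example input: source addresses of blocked connection attempts,
   # e.g. parsed from firewall or router logs.
   blocked = ["192.0.2.7"] * 150 + ["203.0.113.9"] * 25 + ["198.51.100.3"]

   for source, count in Counter(blocked).items():
       print(source, count, "->", escalation(count))

The important point is not the particular numbers but that the thresholds and responses are decided in advance, as part of the plan, rather than improvised during an incident.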
3.1.2 Separation of Services

There are many services which a site may wish to provide for its users, some of which may be external. There are a variety of security reasons to attempt to isolate services onto dedicated host computers. There are also performance reasons in most cases, but a detailed discussion is beyond the scope of this document.

The services which a site may provide will, in most cases, have different levels of access needs and models of trust. Services which are essential to the security or smooth operation of a site would be better off being placed on a dedicated machine with very limited access (see Section 3.1.3, "deny all" model), rather than on a machine that provides a service (or services) which has traditionally been less secure, or requires greater accessibility by users who may accidentally compromise security.

It is also important to distinguish between hosts which operate within different models of trust (e.g., all the hosts inside of a firewall and any host on an exposed network).

Some of the services which should be examined for potential separation are outlined in section 3.2.3. It is important to remember that security is only as strong as the weakest link in the chain. Several of the most publicized penetrations in recent years have been through the exploitation of vulnerabilities in electronic mail systems. The intruders were not trying to steal electronic mail, but they used the vulnerability in that service to gain access to other systems.

If possible, each service should be running on a different machine whose only duty is to provide a specific service. This helps to isolate intruders and limit potential harm.

3.1.3 Deny all/Allow all

There are two diametrically opposed underlying philosophies which can be adopted when defining a security plan. Both alternatives are legitimate models to adopt, and the choice between them will depend on the site and its needs for security.

The first option is to turn off all services and then selectively enable services on a case by case basis as they are needed. This can be done at the host or network level as appropriate. This model, which will hereafter be referred to as the "deny all" model, is generally more secure than the other model described in the next paragraph. More work is required to successfully implement a "deny all" configuration, as well as a better understanding of services. Allowing only known services provides for a better analysis of a particular service/protocol and the design of a security mechanism suited to the security level of the site.

The other model, which will hereafter be referred to as the "allow all" model, is much easier to implement, but is generally less secure than the "deny all" model. Simply turn on all services, usually the default at the host level, and allow all protocols to travel across network boundaries, usually the default at the router level. As security holes become apparent, they are restricted or patched at either the host or network level.

Each of these models can be applied to different portions of the site, depending on functionality requirements, administrative control, site policy, etc. For example, the policy may be to use the "allow all" model when setting up workstations for general use, but adopt a "deny all" model when setting up information servers, like an email hub. Likewise, an "allow all" policy may be adopted for traffic between LANs internal to the site, but a "deny all" policy can be adopted between the site and the Internet.

Be careful when mixing philosophies as in the examples above. Many sites adopt the theory of a hard "crunchy" shell and a soft "squishy" middle. They are willing to pay the cost of security for their external traffic and require strong security measures, but are unwilling or unable to provide similar protections internally. This works fine as long as the outer defenses are never breached and the internal users can be trusted. Once the outer shell (firewall) is breached, subverting the internal network is trivial.

3.1.4 Identify Real Needs for Services

There is a large variety of services which may be provided, both internally and on the Internet at large. Managing security is, in many ways, managing access to services internal to the site and managing how internal users access information at remote sites.

Services tend to rush like waves over the Internet. Over the years many sites have established anonymous FTP servers, Gopher servers, WAIS servers, WWW servers, etc. as they became popular, but not particularly needed, at all sites. Evaluate all new services that are established with a skeptical attitude to determine if they are actually needed or just the current fad sweeping the Internet.

Bear in mind that security complexity can grow exponentially with the number of services provided. Filtering routers need to be modified to support the new protocols. Some protocols are inherently difficult to filter safely (e.g., RPC and UDP services), thus providing more openings to the internal network. Services provided on the same machine can interact in catastrophic ways. For example, allowing anonymous FTP on the same machine as the WWW server may allow an intruder to place a file in the anonymous FTP area and cause the HTTP server to execute it.
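To illustrate the "deny all" model of section 3.1.3 in the context of the filtering just discussed, the following sketch (illustrative only; the enabled services and address ranges are invented, and real filtering belongs in routers and firewalls rather than application code) shows the essential decision: traffic is rejected unless an explicit rule enables it. The "allow all" model is the same test with the default inverted.

   from ipaddress import ip_address, ip_network

   # Hypothetical policy: only these (network, port) pairs are enabled.
   # Everything else is denied by default.
   ALLOWED = [
       (ip_network("203.0.113.0/24"), 25),  # mail to the mail hub subnet
       (ip_network("203.0.113.0/24"), 80),  # WWW to the information servers
       (ip_network("192.0.2.0/28"),   22),  # remote administration subnet
   ]

   def permitted(dst, port):
       # Return True only if an explicit rule enables this traffic.
       dst = ip_address(dst)
       return any(dst in net and port == allowed_port
                  for net, allowed_port in ALLOWED)

   # Under "deny all", unlisted services are rejected:
   print(permitted("203.0.113.10", 80))  # True, explicitly enabled
   print(permitted("203.0.113.10", 69))  # False, TFTP was never enabled

Each new service a site decides it really needs becomes one more explicit entry, which is exactly the extra work, and the extra understanding of the service, that the "deny all" model demands.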
3.2 Network and Service Configuration

3.2.1 Protecting the Infrastructure

Many network administrators go to great lengths to protect the hosts on their networks. Few administrators make any effort to protect the networks themselves. There is some rationale to this. For example, it is far easier to protect a host than a network. Also, intruders are likely to be after data on the hosts; damaging the network would not serve their purposes. That said, there are still reasons to protect the networks. For example, an intruder might divert network traffic through an outside host in order to examine the data (i.e., to search for passwords). Also, infrastructure includes more than the networks and the routers which interconnect them. Infrastructure also includes network management (e.g., SNMP), services (e.g., DNS, NFS, NTP, WWW), and security (i.e., user authentication and access restrictions).

The infrastructure also needs protection against human error. When an administrator misconfigures a host, that host may offer degraded service. This only affects users who require that host and, unless that host is a primary server, the number of affected users will therefore be limited. However, if a router is misconfigured, all users who require the network will be affected. Obviously, this is a far larger number of users than those depending on any one host.

3.2.2 Protecting the Network

There are several problems to which networks are vulnerable. The classic problem is a "denial of service" attack. In this case, the network is brought to a state in which it can no longer carry legitimate users' data. There are two common ways this can be done: by attacking the routers and by flooding the network with extraneous traffic. Please note that the term "router" in this section is used as an example of a larger class of active network interconnection components that also includes components like firewalls, proxy servers, etc.

An attack on the router is designed to cause it to stop forwarding packets, or to forward them improperly. The former case may be due to a misconfiguration, the injection of a spurious routing update, or a "flood attack" (i.e., the router is bombarded with unroutable packets, causing its performance to degrade). A flood attack on a network is similar to a flood attack on a router, except that the flood packets are usually broadcast. An ideal flood attack would be the injection of a single packet which exploits some known flaw in the network nodes and causes them to retransmit the packet, or generate error packets, each of which is picked up and repeated by another host. A well chosen attack packet can even generate an exponential explosion of transmissions.

Another classic problem is "spoofing." In this case, spurious routing updates are sent to one or more routers causing them to misroute packets. This differs from a denial of service attack only in the purpose behind the spurious route. In denial of service, the object is to make the router unusable; a state which will be quickly detected by network users. In spoofing, the spurious route will cause packets to be routed to a host from which an intruder may monitor the data in the packets. These packets are then re-routed to their correct destinations. However, the intruder may or may not have altered the contents of the packets.

The solution to most of these problems is to protect the routing update packets sent by the routing protocols in use (e.g., RIP-2, OSPF). There are three levels of protection: clear-text password, cryptographic checksum, and encryption. Passwords offer only minimal protection against intruders who do not have direct access to the physical networks. Passwords also offer some protection against misconfigured routers (i.e., routers which, out of the box, attempt to route packets). The advantage of passwords is that they have a very low overhead, in both bandwidth and CPU consumption. Checksums protect against the injection of spurious packets, even if the intruder has direct access to the physical network. Combined with a sequence number, or other unique identifier, a checksum can also protect against "replay" attacks, wherein an old (but valid at the time) routing update is retransmitted by either an intruder or a misbehaving router. The most security is provided by complete encryption of sequenced, or uniquely identified, routing updates. This prevents an intruder from determining the topology of the network. The disadvantage to encryption is the overhead involved in processing the updates. RIP-2 (RFC 1723) and OSPF (RFC 1583) both support clear-text passwords in their base design specifications. In addition, there are extensions to each base protocol to support keyed MD5 authentication.
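As a rough sketch of the keyed-checksum protection described above (this shows only the general idea, using an HMAC-MD5 construction over invented data; it is not the actual RIP-2 or OSPF authentication format), the sender covers each update and its sequence number with a digest keyed by a shared secret, and the receiver rejects updates whose digest does not verify or whose sequence number has already been seen.

   import hmac, hashlib

   SHARED_KEY = b"example-routing-key"   # hypothetical shared secret

   def sign_update(sequence, payload):
       # The digest covers the sequence number and the update contents.
       msg = sequence.to_bytes(4, "big") + payload
       return hmac.new(SHARED_KEY, msg, hashlib.md5).digest()

   def accept_update(sequence, payload, digest, last_seen_sequence):
       # Reject replays (old sequence numbers) and forged or altered updates.
       if sequence <= last_seen_sequence:
           return False
       expected = sign_update(sequence, payload)
       return hmac.compare_digest(expected, digest)

   update = b"route 198.51.100.0/24 metric 2"
   tag = sign_update(7, update)
   print(accept_update(7, update, tag, last_seen_sequence=6))  # True
   print(accept_update(7, update, tag, last_seen_sequence=7))  # False, replay

Note that a keyed digest of this kind authenticates the update but does not hide it; only full encryption, as described above, keeps an eavesdropper from learning the network topology.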
Unfortunately, there is no adequate protection against a flooding attack, or a misbehaving host or router which is flooding the network. Fortunately, this type of attack is obvious when it occurs and can usually be terminated relatively simply.

3.2.3 Protecting the Services

There are many types of services and each has its own security requirements. These requirements will vary based on the intended use of the service. For example, a service which should only be usable within a site (e.g., NFS) may require different protection mechanisms than a service provided for external use. It may be sufficient to protect the internal server from external access. However, a WWW server, which provides a home page intended for viewing by users anywhere on the Internet, requires built-in protection. That is, the service/protocol/server must provide whatever security may be required to prevent unauthorized access and modification of the Web database.

Internal services (i.e., services meant to be used only by users within a site) and external services (i.e., services deliberately made available to users outside a site) will, in general, have protection requirements which differ as previously described. It is therefore wise to isolate the internal services to one set of server host computers and the external services to another set of server host computers. That is, internal and external servers should not be co-located on the same host computer. In fact, many sites go so far as to have one set of subnets (or even different networks) which are accessible from the outside and another set which may be accessed only within the site. Of course, there is usually a firewall which connects these partitions. Great care must be taken to ensure that such a firewall is operating properly.
There is increasing interest in using intranets to connect different parts of an organization (e.g., divisions of a company). While this document generally differentiates between external and internal (public and private), sites using intranets should be aware that they will need to consider three levels of separation and take appropriate actions when designing and offering services. A service offered to an intranet would be neither public, nor as completely private as a service to a single organizational subunit. Therefore, the service would need its own supporting system, separated from both external and internal services and networks.

One form of external service deserves some special consideration, and that is anonymous, or guest, access. This may be either anonymous FTP or guest (unauthenticated) login. It is extremely important to ensure that anonymous FTP servers and guest login userids are carefully isolated from any hosts and file systems from which outside users should be kept. Another area to which special attention must be paid concerns anonymous, writable access. A site may be legally responsible for the content of publicly available information, so careful monitoring of the information deposited by anonymous users is advised.

Now we shall consider some of the most popular services: name service, password/key service, authentication/proxy service, electronic mail, WWW, file transfer, and NFS. Since these are the most frequently used services, they are the most obvious points of attack. Also, a successful attack on one of these services can produce disaster all out of proportion to the innocence of the basic service.

3.2.3.1 Name Servers (DNS and NIS(+))

The Internet uses the Domain Name System (DNS) to perform address resolution for host and network names. The Network Information Service (NIS) and NIS+ are not used on the global Internet, but are subject to the same risks as a DNS server. Name-to-address resolution is critical to the secure operation of any network. An attacker who can successfully control or impersonate a DNS server can re-route traffic to subvert security protections. For example, routine traffic can be diverted to a compromised system to be monitored; or, users can be tricked into providing authentication secrets. An organization should create well known, protected sites to act as secondary name servers and protect their DNS masters from denial of service attacks using filtering routers.

Traditionally, DNS has had no security capabilities. In particular, the information returned from a query could not be checked for modification or verified that it had come from the name server in question. Work has been done to incorporate digital signatures into the protocol which, when deployed, will allow the integrity of the information to be cryptographically verified (see RFC 2065).

3.2.3.2 Password/Key Servers (NIS(+) and KDC)

Password and key servers generally protect their vital information (i.e., the passwords and keys) with encryption algorithms. However, even a one-way encrypted password can be determined by a dictionary attack (wherein common words are encrypted to see if they match the stored encryption). It is therefore necessary to ensure that these servers are not accessible by hosts which do not plan to use them for the service, and even those hosts should only be able to access the service itself (i.e., general services, such as Telnet and FTP, should not be allowed by anyone other than administrators).
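The dictionary attack mentioned above requires nothing more than the ability to obtain the stored value and guess, which is why access to these servers must be restricted. The sketch below (illustrative only; real systems use salted, deliberately slow password hashing, and attackers use far larger word lists) shows the basic loop: hash candidate words and compare each against the stored hash.

   import hashlib

   def one_way(word, salt):
       # Stand-in for whatever one-way function the server uses.
       return hashlib.sha256(salt + word.encode()).hexdigest()

   # A stored entry an intruder has somehow obtained.
   salt = b"xz"
   stored_hash = one_way("autumn", salt)

   # The attack: try common words until one produces the same hash.
   dictionary = ["password", "letmein", "autumn", "wombat"]
   for guess in dictionary:
       if one_way(guess, salt) == stored_hash:
           print("password recovered:", guess)
           break

Because the attack runs entirely offline once the stored values are obtained, limiting who can read them (and choosing passwords that do not appear in dictionaries) matters as much as the strength of the one-way function.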
3.2.3.3 Authentication/Proxy Servers (SOCKS, FWTK)

A proxy server provides a number of security enhancements. It allows sites to concentrate services through a specific host to allow monitoring, hiding of internal structure, etc. This funnelling of services creates an attractive target for a potential intruder. The type of protection required for a proxy server depends greatly on the proxy protocol in use and the services being proxied. The general rule of limiting access only to those hosts which need the services, and limiting access by those hosts to only those services, is a good starting point.

3.2.3.4 Electronic Mail

Electronic mail (email) systems have long been a source for intruder break-ins because email protocols are among the oldest and most widely deployed services. Also, by its very nature, an email server requires access to the outside world; most email servers accept input from any source. An email server generally consists of two parts: a receiving/sending agent and a processing agent. Since email is delivered to all users, and is usually private, the processing agent typically requires system (root) privileges to deliver the mail. Most email implementations perform both portions of the service, which means the receiving agent also has system privileges. This opens several security holes which this document will not describe. There are some implementations available which allow a separation of the two agents. Such implementations are generally considered more secure, but still require careful installation to avoid creating a security problem.

3.2.3.5 World Wide Web (WWW)

The Web is growing in popularity exponentially because of its ease of use and the powerful ability to concentrate information services. Most WWW servers accept some type of direction and action from the persons accessing their services. The most common example is taking a request from a remote user and passing the provided information to a program running on the server to process the request. Some of these programs are not written with security in mind and can create security holes. If a Web server is available to the Internet community, it is especially important that confidential information not be co-located on the same host as that server. In fact, it is recommended that the server have a dedicated host which is not "trusted" by other internal hosts.

Many sites may want to co-locate FTP service with their WWW service. But this should only occur for anon-ftp servers that only provide information (ftp-get). Anon-ftp puts, in combination with WWW, might be dangerous (e.g., they could result in modifications to the information your site is publishing to the web) and in themselves make the security considerations for each service different.

3.2.3.6 File Transfer