              Texas A&M Network Security Package Overview
                              7/1/93

                           Dave Safford
                           Doug Schales
                            Dave Hess

ABSTRACT:

Last August, Texas A&M University UNIX computers came under extensive
attack from a coordinated group of internet crackers.  This package of 
security tools represents the results of over seven months of development
and testing of the software we have been using to protect our estimated
twelve thousand internet-connected devices.  This package includes
three coordinated sets of tools: "drawbridge", an exceptionally powerful
bridging filter package; "tiger", a set of convenient yet thorough
machine checking programs; and "netlog", a set of intrusion detection
network monitoring programs.  While these programs have undergone
extensive testing and modification in use here, we consider this to
be a beta test release, as they have not had external review, and
the documentation is still very preliminary.

A BRIEF HISTORY OF THE INCIDENTS:
        On Tuesday 25 August 1992, we were notified by Ohio State that a
specific TAMU machine was being used to attack one of their computers
over internet.  The local machine turned out to be a Sun workstation in
a faculty member's office.  Unfortunately, this faculty member was out
of town for a week, so rather than trying to convince the department
head to let us in without the faculty member present, we decided to monitor
network connections to the workstation, and if necessary, disconnect
the machine from the net electronically.  This decision to monitor the
machine's sessions rather than immediately securing it turned out to be
very fortunate, as this monitoring gave us a wealth of information
about the intruders and their methods.
        Our initial monitoring tools were very simple, but as the 
significance of what we were seeing became apparent, we rapidly
improved the tools to the point that we were able to watch the
intruder's entire session in real time, keystroke by keystroke.  This
monitoring led to the discovery that several outside intruders were
involved, and that many other local machines had been compromised. One
local machine had even been set up as a cracker bulletin board machine,
that the crackers would use to contact each other and discuss
techniques and progress!
        By Thursday 27 August we felt we had enough information about 
which machines had been compromised, and how they had been broken into,
that we could effectively clean them up. In addition, the severity of
the modifications the intruders were making, particularly on the
bulletin board machine, made it imperative to stop the intrusions, so
we contacted the respective system managers, arranged to shut down
all machines, and scheduled the system cleanup for the next day.
        On Friday 28 August, we worked on the known affected machines,
closing the security holes that had been used to break in, and brought
them all back up on the network.
        On Saturday 29 August, we received an emergency call from one
of the system managers, saying that the intruders had broken back into
the cracker bulletin board machine.  Concerned about the integrity of
their research data, they asked for their machines to be physically
disconnected from the rest of the network.
        On Monday 31 August, we analyzed the logs of the new break-in,
and determined that 1) the crackers were much more sophisticated than
we had been led to believe, and 2) many more local machines and user
accounts had been compromised than we initially knew. We even found a
file containing hundreds of captured passwords.  It appeared that there
were actually two levels of crackers: the high level was the more
sophisticated and had a thorough knowledge of the technology; the low
level were the "foot soldiers" who merely used the supplied cracking
programs with little understanding of how they worked. Our initial
response had been based on watching the latter, less capable crackers,
and was insufficient to handle the more sophisticated ones.
        After much deliberation, we decided that the only way to
protect the computers on campus was to block certain key incoming
network protocols, re-enabling them for local machines on a case-by-case
basis as each machine was cleaned up and secured. The problem
was that if the crackers had access to even one insecure local
machine, it could be used as a base for further attacks, so we had to
assume all machines had been compromised unless proven otherwise.
        The recommendation to filter incoming traffic was presented
to the Associate Provost for Computing on Monday afternoon, and
approved.  We bought or borrowed the necessary equipment for the filter
and monitor machines late that afternoon, and worked all night on the
design and coding of the filter. Particular effort was made in the
design to achieve the necessary security with a minimum of impact on
local users.  The filter was completed and installed by 5PM Tuesday 1
September.
        On Thursday 3 September, our monitor logs showed an obviously
automated attack by ftp which was sequentially probing every machine on
campus. Here again we decided to monitor this attack, as we were unsure
what it could accomplish.  This decision to observe, rather than
immediately block, turned out to be very fortunate.
        Shortly after midnight on Friday we were notified by CERT that
another site was monitoring their machines, and had noticed that the
crackers had broken back into our machines.  We immediately went in and
analyzed our logs, and determined that the crackers had used ftp to
install a program that allowed them to tunnel past our filter's blocks.
In addition, the crackers were now installing some very sophisticated
trap doors and trojan programs throughout a large number of machines,
apparently to ensure that they were never locked out again.  These
system modifications were particularly nasty in that the intruders went
to great pains to conceal them, including patching the modified binaries
to match the original timestamps and checksums.  At this point we
completely redesigned the filter to keep the crackers out, and
installed the new version by 5AM Saturday.  The new version, while
providing much greater security, also was unfortunately more visible to
valid users.
        The good news was that the new filter interrupted the new
sophisticated crackers while they were working (apparently they did
not anticipate so rapid a response), and we discovered that one machine
still had over 40 megabytes of the crackers' tools.  This captured
material provided us the most detailed picture yet of the
crackers' methods.
        Since the new filter was installed, we have observed no
successful intrusion attacks against our machines, despite continued
logging of probes and continued attempts. (We did log one incident of
a local student trying to attack an off campus machine, but that is
another topic.)  Our efforts since then have centered on three areas:
improving the filter (largely for ease of use and throughput),
improving the monitoring tools (to reduce manpower requirements), and
developing a program to help local system managers check their machines
for proper security configuration.

RESPONSE OVERVIEW:

        Our response to the intrusion incidents has three major thrusts:  
filtering, monitoring, and cleaning.  Our first line of defense is the
bridging filter package drawbridge, which is used to filter all packets
to or from the internet.  Drawbridge allows us to control internet
access on a machine-by-machine and port-by-port basis at full ethernet
bandwidth.  While other firewall configurations are known to be
stronger, drawbridge provides a level of compromise between security
and availability more acceptable to the university environment, and
provides much needed flexibility and throughput for our large scale
network.
        Realizing that drawbridge was a compromise between convenience
and security, we developed a set of monitoring tools to look for
intrusions that might be attempting to circumvent the filter.  These
tools monitor our internet link continuously, checking for unusual
connections, or patterns of connections, and for a wide range of
specific intrusion signatures.  Finally, we developed tiger scripts, a
tool for helping system administrators check the configuration and
security status of their individual machines.

        The following diagram shows an overview of the filter and
monitor implementation.  In traditional secure gateways, a filter and
secure bastion host are used, and all traffic to or from internet is
forced through them.  This typically means that users need proxy
clients, such as for telnet and ftp, so that they do not all have to
log on to the bastion host for external access.
In our case, the filter allows arbitrary protocol filtering on a host
by host basis, so that each department can set up its own authorized hosts,
with their own service configurations (subject to the campus-wide minimum
standards).  This provides a reasonable level of both security and
flexibility for educational and research requirements. For a host to
be enabled at all beyond the default incoming permissions for mail, 
it must pass the very thorough tiger scripts, as described later.
The monitor node is placed outside the filter so that it can record
connection attempts which are blocked by the filter. This placement
has been crucial to recognizing intrusion attacks, but does place the
monitor itself at risk.  To minimize this risk, we placed both the
filter and monitor in a controlled access machine room, and configured
the monitor to accept no network requests other than validated
monitor requests. The filter is similarly programmed to respond only to
secure filter update requests.
 
                 \
                  \/\ T1 to internet
                     \
                  ---------
                  |WAN    |
                  |Router |
                  |(cisco)|
                  ---------
                     |     -----------
                     |     |Secure   |
            ethernet |-----|Monitor  | "netlog"
                     |     |(Sun4/60)|
                     |     -----------
                     |
                 ---------
                 |Filter |
                 |Bridge | "drawbridge"
                 |(486PC)|  
                 ---------
                     |
                     | ethernet to campus
                    \/
----------------------------------------------
      |                            |
  ---------                    ---------
  |       |      secured       |       |
  |       |     ... ... ....   |       | (checked with "tiger scripts")
  |       |       hosts        |       |
  ---------                    ---------

FILTER: (drawbridge)

        Our first line of defense is the bridging filter running our
drawbridge program.  We initially looked at using the filtering built
into our WAN routers (cisco), but determined that our requirements,
particularly the need to support potentially different filtering for
each of the roughly 12,000 machines in our class B network, were too
complex for the router.  In addition, we needed something that could handle
full ethernet bandwidth, was itself very secure, and which could be 
implemented VERY rapidly.  D. Brent Chapman presented an interesting 
analysis of the limitations of current filter implementations in his
"Network (In)Security Through IP Packet Filtering", Proceedings of the
Third UNIX Security Symposium, September 1992.  Our drawbridge program,
along with its supporting filter specification language and compiler,
addresses his critical recommendations with respect to both functionality
and ease of specification.
        We decided on a PC with two SMC 8013 (AKA Western Digital) 
ethernet cards, and based our first software implementation on
pcbridge, by Vance Morrison.  This initial version was soon rewritten
from scratch in Turbo C++ to make the addition of needed features
somewhat easier.  The current filter design provides "allow"-based
filtering per host with separate incoming and outgoing permissions.
(Allow-based filtering prohibits all connections except those that are
explicitly allowed.)
        For both performance and configuration management, the filter
tables are created on a support workstation, based on a powerful
filter configuration language, and then securely transferred to the 
filter machine, either at boot time, or dynamically during operation.
The support machine does all the hard work of parsing the configuration
file, looking up addresses, and building the tables, so that the filter
itself need only perform simple O(1) table lookups at run time. Updating
the tables dynamically is made secure with DES authentication.
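
        As an illustration of the run-time side of this design (the
structure names, table layout, and policy below are ours for exposition
only, and are NOT drawbridge's actual implementation), the per-host
permissions can be pictured as two arrays indexed by the 16-bit host
part of the class B address, one per direction, so that the forwarding
decision is a constant-time array reference and bit test:

    /*
     * Illustrative sketch only -- not the actual drawbridge code or
     * table format.  One small permission bitmap per campus host,
     * indexed by the low 16 bits of the IP address, with separate
     * incoming and outgoing tables.
     */
    #define HOSTS      65536U     /* 2^16 hosts in a class B network     */
    #define PORT_BITS  128        /* track only the low well-known ports */

    typedef struct {
        unsigned char allow[PORT_BITS / 8];  /* one bit per tracked port */
    } perm_t;

    static perm_t in_table[HOSTS];   /* incoming (from internet) permissions */
    static perm_t out_table[HOSTS];  /* outgoing (to internet) permissions   */

    /*
     * Return nonzero if the packet may be forwarded.  'local' is the low
     * 16 bits of the campus address in the packet, 'dport' the destination
     * port, and 'incoming' is nonzero for packets arriving from the
     * internet side.  A real filter must also pass return traffic for
     * connections opened from the inside (e.g. TCP segments with the ACK
     * bit set); that detail is omitted here.
     */
    static int allow_packet(unsigned local, unsigned dport, int incoming)
    {
        const perm_t *t = incoming ? &in_table[local] : &out_table[local];

        if (dport >= PORT_BITS)    /* untracked high port:            */
            return !incoming;      /* deny incoming, allow outgoing   */
        return (t->allow[dport / 8] >> (dport % 8)) & 1;
    }
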
        Our current default configuration allows any outgoing connection,
but allows in only smtp (mail).  Several campus and departmental servers
have been checked and set up as hosts which are able to receive incoming
telnet, ftp, nntp, and gopher requests.
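
        In terms of the sketch above, that default amounts to loading the
tables roughly as follows.  Again, this is purely illustrative: the host
number and the helper function are made up, and the real tables are
compiled from the filter configuration language on the support
workstation rather than hard coded.

    #include <string.h>

    /* helper for the sketch: set one port bit in a permission entry */
    static void set_port(perm_t *p, unsigned port)
    {
        p->allow[port / 8] |= (unsigned char)(1u << (port % 8));
    }

    static void load_default_tables(void)
    {
        static const unsigned server = 0x0105;  /* hypothetical host .1.5 */
        static const unsigned server_ports[] = { 21, 23, 70, 119 };
        unsigned h, i;

        for (h = 0; h < HOSTS; h++) {
            memset(out_table[h].allow, 0xff, sizeof out_table[h].allow);
            memset(in_table[h].allow, 0x00, sizeof in_table[h].allow);
            set_port(&in_table[h], 25);     /* incoming: smtp (mail) only */
        }
        /* a checked server additionally gets ftp, telnet, gopher, nntp */
        for (i = 0; i < sizeof server_ports / sizeof server_ports[0]; i++)
            set_port(&in_table[server], server_ports[i]);
    }
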

MONITOR: 

        The goal of monitoring is to record security related network 
events, by which intrusion attempts can be detected and tracked.  This
is a very difficult problem in general.  The communication data rates
make this problem somewhat like trying to take a sip of water from a
fire hose; our campus has some 30 terabytes of internal data transfer
per day, and our internet connection is on the order of ten gigabytes
per day, with an average of 50,000 individual connections during that
period.  Clearly we have to be both very selective and flexible in what
we monitor, and we need automated tools for reviewing even these
resultant logs.  Another problem is that of monitor placement.  It is
important that monitors be placed so that critical segments can be
observed, and so that the monitors themselves are secure. 
        Our solution includes the programs "tcplogger", "udplogger", 
"etherscan", and some associated support programs.  The tcp and udp
loggers basically log a one line summary for all connection attempts.
The associated analysis programs report on suspicious connections or
patterns of connections.  In addition, these logs have been very useful
in analyzing details of security events after the fact.  The etherscan
program goes much further, actually scanning all packets and their
contents, looking for a specific set of intrusion signatures, such as
root login attempts from off campus.  The details of the intrusion
signatures, as well as the etherscan program itself, will not be
discussed here, as they contain sensitive information and
techniques that we do not want widely distributed.
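
        As a rough illustration of the kind of one-line connection summary
the tcp logger produces, the sketch below captures TCP connection attempts
(SYN without ACK) and prints a timestamped source/destination line.  It is
NOT the netlog source: it uses the modern libpcap interface rather than
whatever capture mechanism netlog uses, and the interface name "eth0" and
the output format are our own assumptions.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <pcap.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>
    #include <arpa/inet.h>

    /* print one summary line per captured connection-initiating segment */
    static void handler(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes)
    {
        const struct ip *ip = (const struct ip *)(bytes + 14); /* ethernet hdr */
        const struct tcphdr *tcp =
            (const struct tcphdr *)((const u_char *)ip + ip->ip_hl * 4);
        char ts[32], src[16], dst[16];
        time_t t = h->ts.tv_sec;

        (void)user;
        strftime(ts, sizeof(ts), "%m/%d %H:%M:%S", localtime(&t));
        strcpy(src, inet_ntoa(ip->ip_src));
        strcpy(dst, inet_ntoa(ip->ip_dst));
        printf("%s tcp %s:%u -> %s:%u\n", ts,
               src, (unsigned)ntohs(tcp->th_sport),
               dst, (unsigned)ntohs(tcp->th_dport));
    }

    int main(void)
    {
        char err[PCAP_ERRBUF_SIZE];
        struct bpf_program prog;
        pcap_t *p = pcap_open_live("eth0", 128, 1, 1000, err); /* site specific */

        if (p == NULL) {
            fprintf(stderr, "pcap: %s\n", err);
            return 1;
        }
        /* only connection-initiating segments: SYN set, ACK clear */
        if (pcap_compile(p, &prog,
                         "(tcp[tcpflags] & (tcp-syn|tcp-ack)) == tcp-syn",
                         0, PCAP_NETMASK_UNKNOWN) == -1 ||
            pcap_setfilter(p, &prog) == -1) {
            fprintf(stderr, "pcap filter: %s\n", pcap_geterr(p));
            return 1;
        }
        pcap_loop(p, -1, handler, NULL);
        return 0;
    }
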

MACHINE CLEANUP: (tiger scripts)

        Our third major thrust has been the development of an automated
tool for checking a given machine for signs of intrusion, and for
network security related configuration items. The resultant tool is
actually a set of Bourne shell scripts (for high portability).  The
major goal of the tiger scripts is to provide a simple way to check a
machine prior to allowing it any greater level of internet connectivity.
For ease of use, the tiger scripts label all output with an error
classification:

        --FAIL--  The problem that was found was extremely serious.
        --WARN--  The problem that was found may be serious, but will
                  require human inspection.
        --INFO--  A possible problem was found, or a change in
                  configuration is being suggested.
        --ERROR-- A test was not able to be performed for some reason.

The checking performed covers a wide range of items, including items
identified in CERT announcements, and items observed in the recent 
intrusions.  The scripts use Xerox's cryptographic checksum programs
both to check for modified system binaries (possible trap doors/trojans)
and to verify the presence of required security-related patches.
        Currently the tiger scripts have been configured for SunOS
4.1.x, Solaris 2.0, SVR4, Nextstep 3.0, and UNICOS 7.0 releases.  
The programs are largely table driven for ease of porting, and we are
working on ports to Ultrix, AIX, SGI, etc.

POLICIES:

        The policies and procedures need to provide both security and
flexibility.  Our resultant decision was to filter incoming traffic
other than mail to all machines, and then to allow case-by-case requests
for authorized host status, based on successful demonstration of basic
security configuration with the tiger scripts. In special cases, we
have had requests to allow incoming connections to special servers
which are not easily checked, such as embedded robot controllers.
In these cases, we have allowed the connections, but implemented
special monitors on these services.
        Long term policy questions which remain unanswered include
how to handle updates in response to critical CERT announcements,
and how to handle OS updates. Obviously we need some way to coordinate
both periodic and quick response host reviews.  Our filter
configuration language does support machine classes, so we could do
something such as "disable ftp to all SunOS 4.1.1 machines" in response
to a CERT announcement of a respective problem, but it would be nice
to have a mechanism to communicate such announcements to the respective
managers BEFORE cutting off access.  The problem on a large campus
is maintaining a contact list for a large number of machines, given
the high rate of turnover in student managers. In addition, the 
information in our filter configuration file may rapidly become
outdated as managers update their machines' hardware and software.
Our current plan is to require annual security checks with the current
tiger scripts, enforced by the threat of loss of authorized host status.
In the case of aperiodic security events or announcements, we will
attempt to evaluate the time criticality of responding, and require
appropriate event specific checking.  As the tiger scripts are so
easy to run, we anticipate that this requirement will not be a
significant burden to system managers.
        A recent case in point was the announced security problem
with the wuarchive anonymous ftp code.  In this case, we knew exactly
which machines had ftp authorized, and contacted the respective managers
immediately. They all updated their software so rapidly that it was not
necessary to block access, and the limited number of authorized machines
avoided the need for an immediate tiger update.

AVAILABILITY:

        Drawbridge, tiger, and all monitoring tools other than etherscan
are now available via anonymous ftp in sc.tamu.edu:pub/security/TAMU.
Due to export restrictions, the DES routines used in drawbridge have
been put in a separate tar file, and are readable only by US sites.
Other sites should have no problem either running the filter without
encryption, or dropping in their own favorite encryption package.

        The distribution of etherscan has been hotly debated within
our group.  One argument is that etherscan should be freely released,
as the crackers already have equivalent knowledge and tools (they do),
and restrictions would only hurt valid administrators. The counter 
argument is that free availability of the intrusion signatures would 
enable the crackers to design better intrusions, and the availability 
of sources would provide novice crackers a significant help.  Our 
resultant compromise will be to provide copies to NIC registered
site contacts, given an official request on respective letterhead.
Requests should be sent to:
        Dr. Dave Safford
        Director, Supercomputer Center
        Texas A&M University
        MS 3363
        College Station, TX 77843-3363
 

CONCLUSIONS:

        In response to a significant series of intrusions from internet,
we have developed a set of policies and tools for filtering, monitoring
and checking.  Each of these three areas has proved critical: the
filtering for its ability to protect machines from attack; the
monitoring because it has yielded significant information about the
intruders and their methods and augments the filter; and the checking
tools for their ability to automate the task of checking and cleaning a
large number of machines.