Path: bloom-beacon.mit.edu!senator-bedfellow.mit.edu!faqserv
From: mpb@mailserver.aixssc.uk.ibm.com (Paul Blackburn)
Newsgroups: alt.filesystems.afs,alt.answers,news.answers
Subject: AFS distributed filesystem FAQ
Supersedes: <afs-faq_764764458@rtfm.mit.edu>
Followup-To: alt.filesystems.afs
Date: 29 Apr 1994 21:56:56 GMT
Organization: AIX Systems Support Centre, IBM UK
Lines: 1860
Approved: news-answers-request@MIT.Edu
Expires: 12 Jun 1994 21:51:38 GMT
Message-ID: <afs-faq_767656298@rtfm.mit.edu>
Reply-To: mpb@acm.org (AFS FAQ comments address)
NNTP-Posting-Host: bloom-picayune.mit.edu
Summary: Introduction to AFS with pointers to further information
X-Last-Updated: 1994/04/29
Originator: faqserv@bloom-picayune.MIT.EDU
Xref: bloom-beacon.mit.edu alt.filesystems.afs:403 alt.answers:2615 news.answers:18783

Archive-name: afs-faq
Version: 1.77
Last-modified: 1108 GMT Friday 29th April 1994

AFS frequently asked questions
______________________________________________________________________________

   This posting contains answers to frequently asked questions about AFS.
   Your comments and contributions are welcome (email: mpb@acm.org)

   Most newsreaders can skip from topic to topic with control-G.

______________________________________________________________________________
Subject: Table of contents:

   0  Preamble
      0.01  Purpose and audience
      0.02  Acknowledgements
      0.03  Disclaimer
      0.04  Release Notes
      0.05  Quote

   1  General
      1.01  What is AFS?
      1.02  Who supplies AFS?
      1.03  What is /afs?
      1.04  What is an AFS cell?
      1.05  What are the benefits of using AFS?
            1.05.a  Cache Manager
            1.05.b  Location independence
            1.05.c  Scalability
            1.05.d  Improved security
            1.05.e  Single systems image (SSI)
            1.05.f  Replicated AFS volumes
            1.05.g  Improved robustness to server crash
            1.05.h  "Easy to use" networking
            1.05.i  Communications protocol
            1.05.j  Improved system management capability
      1.06  Which systems is AFS available for?
      1.07  What does "ls /afs" display in the Internet AFS filetree?
      1.08  Why does AFS use Kerberos authentication?

   2  Using AFS
      2.01  What are the differences between AFS and a unix filesystem?
      2.02  What is an AFS protection group?
      2.03  What are the AFS defined protection groups?
      2.04  What is an AFS access control list (ACL)?
      2.05  What are the AFS access rights?
      2.06  What is pagsh?
      2.07  Why use a PAG?
      2.08  How can I tell if I have a PAG?
      2.09  Can I still run cron jobs with AFS?
      2.10  How much disk space does a 1 byte file occupy in AFS?
      2.11  Is it possible to specify a user who is external
            to the current AFS cell on an ACL?
      2.12  Are there any problems printing files in /afs?

   3  AFS administration
      3.01  Is there a version of xdm available with AFS authentication?
      3.02  Is there a version of xlock available with AFS authentication?
      3.03  How does AFS compare with NFS?
      3.04  Given that AFS data is location independent, how does
            an AFS client determine which server houses the data
            its user is attempting to access?
      3.05  Which protocols does AFS use?
      3.06  Are setuid programs executable across AFS cell boundaries?
      3.07  How does AFS maintain consistency on read-write files?
      3.08  How can I run daemons with tokens that do not expire?
      3.09  Can I check my user's passwords for security purposes?
      3.10  Is there a way to automatically balance disk usage across
            fileservers?
      3.11  Can I shutdown an AFS fileserver without affecting users?
      3.12  How can I set up mail delivery to users with $HOMEs in AFS?
      3.13  Should I replicate a ReadOnly volume on the same partition
            and server as the ReadWrite volume?
      3.14  Should I start AFS before NFS in /etc/inittab?
      3.15  Will AFS run on a multi-homed fileserver?
      3.16  Can I replicate my user's home directory AFS volumes?
      3.17  Which TCP/IP ports and protocols do I need to enable
            in order to operate AFS through my Internet firewall?
      3.18  What is the Andrew Benchmark?
      3.19  Is there a version of HP VUE login with AFS authentication?

   4  Getting more information
      4.01  Is there an anonymous FTP site with AFS information?
      4.02  Which USENET newsgroups discuss AFS?
      4.03  Where can I get training in AFS?

   5  About the AFS faq
      5.01  How can I get a copy of the AFS faq?
      5.02  How can I get my question (and answer) into the AFS faq?
      5.03  How can I access the AFS faq via the World Wide Web?

   6  Bibliography
______________________________________________________________________________

Subject: 0  Preamble

Subject: 0.01  Purpose and audience

   The aim of this compilation is to provide information about AFS including:

      + A brief introduction
      + Answers to some often asked questions
      + Pointers to further information

   Definitive and detailed information on AFS is provided in Transarc's
   AFS manuals ([23], [24], [25]).

   The intended audience ranges from people who know little of the subject
   and want to know more to those who have experience with AFS and wish
   to share useful information by contributing to the faq.

Subject: 0.02  Acknowledgements

   The information presented here has been gleaned from many sources.
   Some material has been directly contributed by people listed below.

   I would like to thank the following for contributing:

        Pierette VanRyzin (Transarc)
        Lyle Seaman (Transarc)
        Joseph Jackson (Transarc)
        Dan Lovinger (Carnegie Mellon University)
        Lucien Van Elsen (IBM)
        Jim Rees (University of Michigan)
        Derrick J. Brashear (Carnegie Mellon University)
        Hans-Werner Paulsen (MPI fuer Astrophysik, Garching)

   Thanks also to indirect contributors:

        Ken Paquette (IBM)
        Lance Pickup (IBM)
        Lisa Chavez (IBM)
        Dawn E. Johnson (Transarc)

Subject: 0.03  Disclaimer

   I make no representation about the suitability of this
   information for any purpose.

   While every effort is made to keep the information in
   this document accurate and current, it is provided "as is"
   with no warranty expressed or implied.

Subject: 0.04  Release Notes

   This faq compilation contains material used with permission of
   Transarc Corporation. Permission to copy is given provided any
   copyright notices and acknowledgements are retained.

   Column 1 is used to indicate changes from the last issue:

      N = new item
      U = updated item

Subject: 0.05  Quote

   "'Tis true; there's magic in the web of it;"         Othello, Act 3 Scene 4
                                              --William Shakespeare (1564-1616)
______________________________________________________________________________
Subject: 1  General

Subject: 1.01  What is AFS?

   AFS is a distributed filesystem that enables co-operating hosts
   (clients and servers) to efficiently share filesystem resources
   across both local area and wide area networks.

   AFS is marketed, maintained, and extended by Transarc Corporation.
 
   AFS is based on a distributed file system originally developed
   at the Information Technology Center at Carnegie-Mellon University
   that was called the "Andrew File System".

   "Andrew" was the name of the research project at CMU - honouring the
   founders of the University.  Once Transarc was formed and AFS became a
   product, the "Andrew" was dropped to indicate that AFS had gone beyond
   the Andrew research project and had become a supported, product quality
   filesystem. However, there were a number of existing cells that rooted
   their filesystem as /afs. At the time, changing the root of the filesystem
   was a non-trivial undertaking. So, to save the early AFS sites from having
   to rename their filesystem, AFS remained as the name and filesystem root.

Subject: 1.02  Who supplies AFS?

        Transarc Corporation          phone: +1 (412) 338-4400
        The Gulf Tower
        707 Grant Street              fax:   +1 (412) 338-4404
        Pittsburgh
        PA 15219                      email: information@transarc.com
        United States of America             afs-sales@transarc.com

Subject: 1.03  What is /afs?

   The root of the AFS filetree is /afs. If you execute "ls /afs" you will
   see directories that correspond to AFS cells (see below). These cells
   may be local (on same LAN) or remote (eg halfway around the world).

   With AFS you can access all the filesystem space under /afs with commands
   you already use (eg: cd, cp, rm, and so on) provided you have been granted
   permission (see AFS ACL below).
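
   For example, assuming the ACLs permit it, copying a file between two
   (hypothetical) cells is just an ordinary unix copy:

      elmer@toontown $ cd /afs/ny.acme.com/user/elmer
      elmer@toontown $ cp plans/memo.txt /afs/sf.acme.com/user/daffy/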

Subject: 1.04  What is an AFS cell?

   An AFS cell is a collection of servers grouped together administratively
   and presenting a single, cohesive filesystem.  Typically, an AFS cell is
   a set of hosts that use the same Internet domain name. 

   Normally, a variation of the domain name is used as the AFS cell name.

   Users log into AFS client workstations which request information and files
   from the cell's servers on behalf of the users.

Subject: 1.05  What are the benefits of using AFS?

   The main strengths of AFS are its:
 
      + caching facility
      + security features
      + simplicity of addressing
      + scalability
      + communications protocol

   Here are some of the advantages of using AFS in more detail:

Subject: 1.05.a  Cache Manager

   AFS client machines run a Cache Manager process. The Cache Manager
   maintains information about the identities of the users logged into
   the machine, finds and requests data on their behalf, and keeps chunks
   of retrieved files on local disk.

   The effect of this is that as soon as a remote file is accessed,
   a chunk of that file gets copied to local disk. Subsequent accesses
   (warm reads) are almost as fast as reads from local disk and
   considerably faster than a cold read (across the network).

   Local caching also significantly reduces the amount of network traffic,
   improving performance when a cold read is necessary.

Subject: 1.05.b  Location independence

   Unlike NFS, which makes use of /etc/filesystems (on a client) to map
   (mount) between a local directory name and a remote filesystem, AFS
   does its mapping (filename to location) at the server. This has the
   tremendous advantage of making the served filespace location independent.

   Location independence means that a user does not need to know which
   fileserver holds the file, the user only needs to know the pathname
   of a file. Of course, the user does need to know the name of the
   AFS cell to which the file belongs. Use of the AFS cellname as the
   second part of the pathname (eg: /afs/$AFSCELL/somefile) is helpful
   to distinguish between file namespaces of the local and non-local
   AFS cells.

   To understand why such location independence is useful, consider
   having 20 clients and two servers. Let's say you had to move
   a filesystem "/home" from server a to server b.

   Using NFS, you would have to change the /etc/filesystems file on 20
   clients and take "/home" off-line while you moved it between servers.

   With AFS, you simply move the AFS volume(s) which constitute "/home"
   between the servers. You do this "on-line" while users are actively
   using files in "/home" with no disruption to their work.

   (Actually, the AFS equivalent of "/home" would be /afs/$AFSCELL/home
   where $AFSCELL is the AFS cellname.)
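
   For example, moving a single (hypothetical) volume needs one command,
   issued by an administrator, and no changes on any client:

      $ vos move home.elmer serverA /vicepa serverB /vicepb

   (AFS fileserver disk partitions are named /vicepa, /vicepb, and so on.)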

Subject: 1.05.c  Scalability

   With location independence comes scalability. An architectural goal
   of the AFS designers was a client/server ratio of 200:1, which has
   been successfully exceeded at some sites.
 
   Transarc do not recommend customers use the 200:1 ratio. A more
   cautious value of 50:1 is expected to be practical in most cases.
   It is certainly possible to work with a ratio somewhere between
   these two values. Exactly what value depends on many factors including:
   number of AFS files, size of AFS files, rate at which changes are made,
   rate at which files are being accessed, speed of the server's processor,
   I/O rates, and network bandwidth.

   AFS cells can range from the small (1 server/client) to the massive
   (with tens of servers and thousands of clients).
 
   Cells can be dynamic: it is simple to add new fileservers or clients
   and grow the computing resources to meet new user requirements.

Subject: 1.05.d  Improved security

   Firstly, AFS makes use of Kerberos to authenticate users.
   This improves security for several reasons:

     + passwords do not pass across the network in plaintext

     + encrypted passwords no longer need to be visible

          You don't have to use NIS, aka yellow pages, to distribute
          /etc/passwd - thus "ypcat passwd" can be eliminated.

          If you do choose to use NIS, you can replace the password
          field with "X" so the encrypted password is not visible.
          (These issues are discussed in detail in [25]).

     + AFS uses mutual authentication - both the service provider
       and service requester prove their identities

   Secondly, AFS uses access control lists (ACLs) to enable users to
   restrict access to their own directories.

Subject: 1.05.e  Single systems image (SSI)

   Establishing the same view of filestore from each client and server
   in a network of systems (that comprise an AFS cell) is an order of
   magnitude simpler with AFS than it is with, say, NFS.

   This is useful to do because it enables users to move from workstation
   to workstation and still have the same view of filestore. It also
   simplifies part of the systems management workload.

   In addition, because AFS works well over wide area networks the SSI
   is also accessible remotely.

   As an example, consider a company with two widespread divisions
   (and two AFS cells): ny.acme.com and sf.acme.com. Mr Fudd, based
   in the New York office, is visiting the San Francisco office.

   Mr. Fudd can then use any AFS client workstation in the San Francisco
   office that he can log into (an unprivileged guest account would suffice).
   He could authenticate himself to the ny.acme.com cell and securely access
   his New York filespace.

   For example:
 
       The following shows a guest in the sf.acme.com AFS cell:
       {1} obtaining a PAG with pagsh command (see 2.06)
        {2} using the klog command to authenticate into the ny.acme.com AFS cell
       {3} making a HOME away from home
       {4} invoking a homely .profile
 
       guest@toontown.sf.acme.com $ /usr/afsws/etc/pagsh            # {1}
       $ /usr/afsws/bin/klog -cell ny.acme.com -principal elmer     # {2}
       Password:
       $ HOME=/afs/ny.acme.com/user/elmer; export HOME              # {3}
       $ cd
       $ .  .profile                                                # {4}
       you have new mail
       guest@toontown $

   It is not necessary for the San Francisco sys admin to give Mr. Fudd
   an AFS account in the sf.acme.com cell.  Mr. Fudd only needs to be
   able to log into an AFS client that:
      1) is on the same network as his cell and
      2) resides in a cell in which his ny.acme.com cell is mounted
         (as would certainly be the case in a company with two cells).

Subject: 1.05.f  Replicated AFS volumes

   AFS files are stored in structures called Volumes.  These volumes
   reside on the disks of the AFS file server machines.  Volumes containing
   frequently accessed data can be read-only replicated on several servers.

   Cache Managers (on users' client workstations) will make use of replicated
   volumes to balance load.  If a client is accessing data from one replica
   and that copy becomes unavailable due to server or network problems, AFS
   will automatically start accessing the same data from a different replica.

   An AFS client workstation will access the closest volume copy.
   By placing replicated volumes on servers closer to clients (eg on the
   same physical LAN) access to those resources is improved and network
   traffic is reduced.
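
   For example, an administrator might replicate a volume as follows
   (server, partition, and volume names are hypothetical):

      $ vos addsite serverB /vicepb root.software   # define a new RO site
      $ vos release root.software                   # propagate the RW data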

Subject: 1.05.g  Improved robustness to server crash

   The Cache Manager maintains local copies of remotely accessed files.
 
   This is accomplished in the cache by breaking files into chunks
   of up to 64k (default chunk size). So, for a large file, there may be
   several chunks in the cache but a small file will occupy a single chunk
   (which will be only as big as is needed).
 
   A "working set" of files that have been accessed on the client is
   established locally in the client's cache (copied from fileserver(s)).
 
   If a fileserver crashes, the client's locally cached file copies are usable.
 
   Also, if the AFS configuration has included replicated read-only volumes 
   then alternate fileservers can satisfy requests for files from those
   volumes.

Subject: 1.05.h  "Easy to use" networking

   Accessing remote file resources via the network becomes much simpler
   when using AFS. Users have much less to worry about: want to move
   a file from a remote site? Just copy it to a different part of /afs.

   Once you have wide-area AFS in place, you don't have to keep local
   copies of files. Let AFS fetch and cache those files when you need them.

Subject: 1.05.i  Communications protocol

   The AFS communications protocol is optimized for Wide Area Networks.
   It retransmits only the single bad packet in a batch of packets,
   and it allows the number of unacknowledged packets to be higher
   than in other protocols (see [4]).

Subject: 1.05.j  Improved system management capability

   Systems administrators are able to make configuration changes
   from any client in the AFS cell (it is not necessary to login
   to a fileserver).
 
   With AFS it is simple to effect changes without having to take
   systems off-line.
 
   Example:
 
   A department (with its own AFS cell) was relocated to another office.
   The cell had several fileservers and many clients.
   How could they move their systems without causing disruption? 
 
   First, the network infrastructure was established to the new location.
   The AFS volumes on one fileserver were migrated to the other fileservers.
   The "freed up" fileserver was moved to the new office and connected
   to the network.
 
   A second fileserver was "freed up" by moving its AFS volumes across
   the network to the first fileserver at the new office. The second
   fileserver was then moved.
   
   This process was repeated until all the fileservers were moved.
 
   All this happened with users on client workstations continuing
   to use the cell's filespace. Unless a user saw a fileserver
   being physically moved (s)he would have no way to tell the change
   had taken place.
 
   Finally, the AFS clients were moved - this was noticed!

Subject: 1.06  Which systems is AFS available for?

   AFS runs on systems from: HP, Next, DEC, IBM and SUN.

   Transarc customers have done ports to Crays, and the 3090, but all
   are based on some flavour of unix.  Some customers have done work to
   make AFS data available to PCs and Macs, although they are using
   something similar to the AFS/NFS translator (a system that enables
   "NFS only" clients to NFS mount the AFS filetree /afs).

   The following list (current at time of writing) is for AFS version 3.3
   (check with Transarc for the most up-to-date list).

   This information can also be found on grand.central.org:
 
      via AFS: /afs/grand.central.org/pub/afsps/doc/SUPPORTED_SYSTEMS.afs.*
      via FTP: grand.central.org:/pub/afsps/doc/SUPPORTED_SYSTEMS.afs.*

   System-name     CPU and Operating System

   hp300_ux90      Hewlett Packard 9000 Series 300/400 running HP-UX 9.0
   hp700_ux90      Hewlett Packard 9000 Series 700 running HP-UX 9.0
   hp800_ux90      Hewlett Packard 9000 Series 800 running HP-UX 9.0
   next_mach20     NeXT (68030 or 68040 systems) running NeXT OS Rel 2.0,2.1
   next_mach30     NeXT (68030 or 68040 systems) running NeXT OS Rel 3.0
   pmax_ul43       DECstation 2100, 3100 or 5000 (single processor) 
                   running Ultrix 4.3
   rs_aix32        IBM RS/6000 running AIX 3.2, 3.2.1 and 3.2.2
   rt_aix221       IBM RT/PC running AIX 2.2.1
   rt_aos4         IBM-RT/PC running AOS Release 4
   sgi_51          Silicon Graphics running IRIX 5.1
   sun3_411        Sun 3 (68020 systems) running Sun OS 4.1.1
   sun3x_411       Sun 3 (68030 systems) running Sun OS 4.1.1
   sun4_411        Sun 4 (EXCEPT SparcStations) running Sun OS 4.1.1, 4.1.2 or 
                   4.1.3
   sun4_52         Sun 4 (EXCEPT SparcStations) running Solaris 2.2
   sun4c_411       Sun SparcStations running Sun OS 4.1.1, 4.1.2 or 4.1.3
   sun4c_52        Sun SparcStations running Solaris 2.2
   sun4m_412       Sun SparcServer 600MP running Sun OS 4.1.2/Solaris 1.0, 
                   4.1.3/Solaris 2.1
   sun4m_52        Sun SparcServer 600MP running Solaris 2.2
   vax_ul43        VAX systems running Ultrix 4.3 (single processor).

   There are also ports of AFS done by customers available from Transarc
   on an "as is" unsupported basis.
 
   More information on this can be found on grand.central.org:
 
      via AFS: /afs/grand.central.org/pub/afs-contrib/bin/README
      via FTP: grand.central.org:/pub/afs-contrib/bin/README
 
   These ports of AFS client code include:
 
      HP (Apollo) Domain OS - by Jim Rees at the University of Michigan.
      sun386i - by Derek Atkins and Chris Provenzano at MIT.

Subject: 1.07  What does "ls /afs" display in the Internet AFS filetree?

   Essentially this displays the AFS cells that co-operate in the
   Internet AFS filetree.

   Note that the output of this will depend on the cell you do it from;
   a given cell may not have all the publicly advertised cells available,
   and it may have some cells that aren't advertised outside of the given site.

   The definitive source for this information is:

           /afs/transarc.com/service/etc/CellServDB.export

   The list of cell names it contains is reproduced below:
   uni-freiburg.de         #Albert-Ludwigs-Universitat Freiburg
   anl.gov                 #Argonne National Laboratory
   bcc.ac.uk               #Bloomsbury Computing Consortium
   bstars.com              #Boeing Aerospace and Electronics/STARS
   bu.edu                  #Boston University
   cs.brown.edu            #Brown University Department of Computer Science
   ciesin.org              #CIESIN
   cards.com               #Cards - Electronic Warfare Associates
   cmu.edu                 #Carnegie Mellon University
   andrew.cmu.edu          #Carnegie Mellon University - Campus
   ce.cmu.edu              #Carnegie Mellon University - Civil Eng. Dept.
   club.cc.cmu.edu         #Carnegie Mellon University Computer Club
   cs.cmu.edu              #Carnegie Mellon University - School of Comp. Sci.
   ece.cmu.edu             #Carnegie Mellon University - Elec. Comp. Eng. Dept.
   me.cmu.edu              #Carnegie Mellon University - Mechanical Engineering
   others.chalmers.se      #Chalmers University of Technology - General users
   cs.cornell.edu          #Cornell University Computer Science Department
   graphics.cornell.edu    #Cornell University Program of Computer Graphics
   theory.cornell.edu      #Cornell University Theory Center
   msc.cornell.edu         #Cornell University Materials Science Center
   pegasus.cranfield.ac.uk #Cranfield Institute of Technology
   grand.central.org       #DARPA Central File Server at Transarc
   hrzone.th-darmstadt.de  #TH-Darmstadt
   kiewit.dartmouth.edu    #Dartmouth College, Kiewit
   northstar.dartmouth.edu #Dartmouth College, Project Northstar
   es.net                  #Energy Sciences Net
   cern.ch                 #European Laboratory for Particle Physics, Geneva
   fnal.gov                #Fermi National Accelerator Laboratory
   jrc.flinders.edu.au     #Flinders School of Info. Sci. and Tech. - Australia
   hepafs1.hep.net         #FNAL HEPNET cell 1
   pub.nsa.hp.com          #HP Cupertino
   palo_alto.hpl.hp.com    #HP Palo Alto
   ctp.se.ibm.com          #IBM/4C, Chalmers, Sweden
   ibm.uk                  #IBM UK, AIX Systems Support Centre
   inel.gov                #Idaho National Engineering Lab
   iastate.edu             #Iowa State University
   ipp-garching.mpg.de     #Institut fuer Plasmaphysik
   sfc.keio.ac.jp          #Keio University, Japan
   cc.keio.ac.jp           #Keio University, Fac. of Sci. & Tech. Computing Ctr
   lrz-muenchen.de         #Leibniz-Rechenzentrum Muenchen Germany
   athena.mit.edu          #MIT/Athena cell
   rel-eng.athena.mit.edu  #MIT/Athena Release Engineering (primary sources)
   net.mit.edu             #MIT/Network Group cell
   sipb.mit.edu            #MIT/SIPB cell
   media-lab.mit.edu       #MIT/Media Lab cell
   mtxinu.com              #mt Xinu Incorporated
   nada.kth.se             #Royal Institute of Technology, NADA
   nce_ctc                 #National Computational Env. - Cornell Theory Center
   nce                     #National Computing Environment  - wide area cell
   nce_psc                 #National Computing Environment (Metacenter)
   nersc.gov               #National Energy Research Supercomputer Center
   alw.nih.gov             #National Institutes of Health
   test.alw.nih.gov        #National Institutes of Health (test cell)
   cmf.nrl.navy.mil        #Naval Research Lab
   ncat.edu                #North Carolina Agricultural and Technical State U.
   nsf-centers.edu         #NSF Supercomputing Centers
   ctd.ornl.gov            #Computing and Telecommunications Div ORNL
   ri.osf.org              #OSF Research Institute
   gr.osf.org              #OSF Research Institute, Grenoble
   vfl.paramax.com         #Paramax (Unisys) Paoli Research Center
   stars.reston.unisys.com #Paramax (Unisys) - Reston, Va.
   psc.edu                 #PSC (Pittsburgh Supercomputing Center)
   rwcp.or.jp              #Real World Computer Partnership(rwcp)
   rhrk.uni-kl.de          #Rechenzentrum University of Kaiserslautern
   rus.uni-stuttgart.de    #Rechenzentrum University of Stuttgart
   ihf.uni-stuttgart.de    #University of Stuttgart, Ins. fuer Hochfrequenz-Tec
   rpi.edu                 #Rensselaer Polytechnic Institute
   rose-hulman.edu         #Rose-Hulman Institute of Technology
   dsg.stanford.edu        #Stanford Univ. - Comp. Sci. - Distributed Systems
   ir.stanford.edu         #Stanford University
   slac.stanford.edu       #Stanford Linear Accelerator Center
   stars.com               #STARS Technology Center - Ballston, Va.
   ssc.gov                 #Superconducting Supercollider Lab
   ethz.ch                 #Swiss Federal Inst. of Tech. - Zurich, Switzerland
   telos.com               #Telos Systems Group - Chantilly, Va.
   titech.ac.jp            #Tokyo Institute of Technology
   transarc.com            #Transarc Corporation
   cs.arizona.edu          #University of Arizona - Computer Science Dept.
   ece.ucdavis.edu         #Univ California - Davis campus
   spc.uchicago.edu        #University of Chicago - Social Sciences
   rrz.uni-koeln.de        #University of Cologne -  Reg Comp Center
   urz.uni-heidelberg.de   #Universitaet Heidelberg
   uni-hohenheim.de        #University of Hohenheim
   ncsa.uiuc.edu           #University of Illinois
   wam.umd.edu             #University of Maryland Network WAM Project
   umich.edu               #University of Michigan - Campus
   sph.umich.edu           #University of Michigan -- School of Public
   citi.umich.edu          #University of Michigan - IFS Development
   dmsv.med.umich.edu      #University of Michigan - DMSV
   lsa.umich.edu           #University of Michigan - LSA College
   math.lsa.umich.edu      #University of Michigan - Math Cell
   cs.unc.edu              #University of North Carolina at Chapel Hill
   nd.edu                  #University of Notre Dame
   pitt.edu                #University of Pittsburgh
   rus-cip.uni-stuttgart.de #RUS Cip-Pool,Rechenzentrum University of Stuttgart
   mathematik.uni-stuttgart.de #University of Stuttgart, Math Dept.
   isi.edu                 #University of Southern California/ISI
   cs.utah.edu             #University of Utah Computer Science Dept
   cs.washington.edu       #University of Washington Comp Sci Department
   cs.wisc.edu             #University of Wisconsin-Madison, Computer S
   ucop.edu                #University of California Office of the President

   This shows different and widespread organizations making use
   of the Internet AFS filetree.

   Note that it is also possible to use AFS "behind the firewall"
   within the confines of your organization's network - you don't have
   to participate in the Internet AFS filetree.

   Indeed, there are lots of benefits of using AFS on a local area network
   without using the WAN capabilities.

Subject: 1.08  Why does AFS use Kerberos authentication?

   It improves security.

   Kerberos uses the idea of a trusted third party to prove identification.
   This is a bit like using a letter of introduction or quoting a referee
   who will vouch for you.

   When a user authenticates using the klog command (s)he is prompted
   for a password. If the password is accepted the Kerberos
   Authentication Server (KAS) provides the user with an encrypted token
   (containing a "ticket granting ticket").

   From that point on, it is the encrypted token that is used to prove
   the user's identity. These tokens have a limited lifetime (typically
   a day) and are useless when expired.

   In AFS, it is possible to authenticate into multiple AFS cells.
   A summary of the current set of tokens held can be displayed
   by using the "tokens" command.

   For example:
      elmer@toontown $ tokens
 
      Tokens held by the Cache Manager:
 
      User's (AFS ID 9997) tokens for afs@ny.acme.com [Expires Sep 15 06:50]
      User's (AFS ID 5391) tokens for afs@sf.acme.com [Expires Sep 15 06:48]
         --End of list--

   Kerberos improves security because a user's password need only be
   entered once (at klog time).

   AFS uses Kerberos to do complex mutual authentication which means that
   both the service requester and the service provider have to prove their
   identities before a service is granted.

   Transarc's implementation of Kerberos is slightly different from
   MIT Kerberos V4 but AFS can work with either version.
 
   For more detail on this and other Kerberos issues see the faq
   for Kerberos (posted to news.answers and comp.protocols.kerberos) [28].
   (Also, see [15], [16], [26], [27])

Subject: 2  Using AFS

Subject: 2.01  What are the differences between AFS and a unix filesystem?

   Essentially, from a user's point of view, there is little difference
   between AFS and local unix filestore. Nearly all the commands normally
   used to access local files can be used to access files in /afs.

   In the following set of sections, I have attempted to "target"
   each section to an appropriate type of user by including to the
   right of each section heading one of: User, Programmer, SysAdmin.
   
   Here is a summary of the differences:

   Authentication:                                         [ User ]
 
      Before a user can access protected AFS files (s)he needs to become
      authenticated to AFS using the klog command (Kerberos login) to get
      a Kerberos "ticket granting ticket" (called a token from here on).
 
      Without a token, an unauthenticated user is given the AFS identity
      "system:anyuser" and as such is only able to access files in directories
      that have ACLs granting system:anyuser access.
 
      Many systems have the klog function built into the system login program.
       So a user might not even be aware that they gain a token on logging in.
      If you use a system where you have to issue the klog command after
      login then you should run the pagsh command first (see below).
 
      AFS provides access control lists to give more precise control
      to users wishing to protect their files (see AFS ACL below).
 
   File permissions:                                       [ User ]
 
      Unix mode bits for group and other are ignored.
      The mode bits for the file owner don't work the way they used to.
 
      Users should protect their AFS files with (directory) ACLs only.
      Just use mode bits to make a file executable.
 
   Data protection with AFS ACLs:                          [ User ]
 
      Some versions of unix (eg IBM's AIX version 3) allow ACLs on
       local files. In AFS, ACLs protect directories and, used with
       AFS protection groups (see below), provide a finer granularity
       of protection than can be achieved with basic unix file permissions.
      (AFS ACLs are described in more detail below.)
 
   Protection groups:                                      [ User ]
 
      Users can create and maintain their own protection groups in AFS -
      as opposed to unix where only sys admins can manage protection groups.

   Hard links:                                             [ User ]
 
      In AFS, hard links (eg: ln old new) are only valid within a directory.
      This is because AFS ACLs protect directories (not individual files)
      and allowing hard links that span directories would subvert ACL
      protection.
 
      Symbolic links work in AFS because they reference a pathname and
      not an i-node directly. (Hard links reference an i-node directly.)
 
   Changing file protection by moving a file:              [ User ]
 
       Moving a file to a different directory will change the protection
       of the file if the ACL on the new directory is different from the
       ACL on the original directory.
   
   chown and chgrp:                                        [ User ]
 
      Only members of the AFS group "system:administrators" can use these
      commands on files in /afs.
 
   Save on close:                                          [ Programmer ]
 
      AFS Cache Manager does not send file modifications to a file server
      until the close() or fsync() system call.
 
      write() system calls only update the local cache copy on the client.
 
       Note the difference in the semantics of writing a file:
      
      local unix file: writes update the file "immediately"
      AFS file:        local cached copy updated "immediately" but
                       the server copy is only updated when the file
                       is closed or fsync'ed.

      It is important to understand that most applications (eg: vi, emacs,
      frame, interleaf, wingz, etc) issue the close() system call when
      the user chooses/issues the "save" command in the application.
 
      Users are not required to exit the application to "save" their
      changes back to the server.
 
   byte-range file locking:                                [ Programmer ]
 
      AFS does not support byte-range locking within a file,
      although lockf() and fcntl() calls will return 0 (success).
      The first time a byte-range lock is attempted, AFS will display:
 
      "afs: byte-range lock/unlock ignored; make sure no one else
       else is running this program."
 
   whole file locking:                                     [ Programmer ]
 
      AFS does support advisory locking an entire file with flock().
      Processes on the same client workstation that attempt to lock
      a file obey the proper locking semantics.
 
      Processes on different AFS clients requesting a lock on the same
      file would get EWOULDBLOCK returned.
 
   character and block special files:                      [ SysAdmin ]
 
      AFS does not support character and block special files.
      The mknod command does not create either character or block
      special files in /afs.

   AFS version of fsck:                                    [ SysAdmin ]
 
      On an AFS server, the partitions containing served files are NOT
      unix filesystems and standard fsck *must* not be used - use the AFS
      version instead.

Subject: 2.02  What is an AFS protection group?

   A named list of users.
 
   Group names are used in AFS ACLs to identify lists of users with
   particular access permissions.
 
   In AFS, users can create and maintain their own protection groups.
   This is different to unix where only the system administrator can
   manage /etc/group.
 
   AFS groups are stored in the protection database on fileserver(s)
   and managed by using the "pts" command.
 
   An AFS group typically has the format:
 
       owner-id:group-name
 
   By default, only the owner of a group can change its members.
 
   It is possible to have both users and IP addresses as members
   of an AFS group. By using an IP address like this you can specify
   all the users from the host with that IP address.
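
   For example, a user might create and populate a group as follows
   (names are hypothetical and output is abbreviated):

      daffy@toontown $ pts creategroup daffy:friends
      group daffy:friends has id -286
      daffy@toontown $ pts adduser -user elmer -group daffy:friends
      daffy@toontown $ pts createuser -name 192.135.1.1
      daffy@toontown $ pts adduser -user 192.135.1.1 -group daffy:friends
      daffy@toontown $ pts membership daffy:friends
      Members of daffy:friends (id: -286) are:
        elmer
        192.135.1.1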

Subject: 2.03  What are the AFS defined protection groups?

   system:anyuser

       Everyone who has access to an AFS client in any cell that is
       on the same network as your cell.

   system:authuser

       Everyone who has access to an AFS client in any cell that is
       on the same network as your cell *and* has valid tokens for
       your cell (ie has been authenticated in your cell).

   system:administrators

       Users who have privileges to execute some but not all
       system administrator commands.

Subject: 2.04  What is an AFS access control list (ACL)?

   There is an ACL for every directory in AFS. The ACL specifies
   protection at the directory level (not file level) by listing
   permissions of users and/or groups to a directory. There is a
   maximum of 20 entries on an ACL.

   For example:
 
   An AFS ACL is displayed by using the "fs" command as shown below:
 
      tweety@toontown $ fs listacl .
      Access list for . is
      Normal rights:
        fac:coords rlidwka
        system:anyuser rl
 
   This ACL shows that members of the AFS protection group "fac:coords"
   have full access rights to the current directory and "system:anyuser"
   has only read and lookup rights.
 
   The members of "fac:coords" can be determined by accessing the
   protection group database using the "pts" command as shown below:
 
      tweety@toontown $ pts membership fac:coords
      Members of fac:coords (id: -1577) are:
        sylvester
        roadrunner
        yosemite.sam

Subject: 2.05  What are the AFS access rights?

   In AFS, there are seven access rights that may be set or not set:

   lookup          l       Permission to examine the ACL and traverse the
                           directory (needed with most other access rights).
                           Permission to look up filenames in a directory.
   read            r       View the contents of files in the directory
   insert          i       Add new files or sub-directories
   write           w       Modify file contents, use "chmod"
   delete          d       Remove file(s) in directory
   lock            k       Permission for programs to "flock" files
                           in the directory
   administer      a       Ability to change the ACL

   There are short-hand forms:

   read            rl      read and lookup
   write           rlidwk  all rights except administer
   all             rlidwka
   none                    removes all rights
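
   ACLs are set with the "fs setacl" command. For example, to give the
   (hypothetical) group fac:coords all rights except administer on the
   current directory and to give everyone read access:

      tweety@toontown $ fs setacl . fac:coords write system:anyuser read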

Subject: 2.06  What is pagsh?

   A command to get a new shell with a process authentication group (PAG).
 
   This is normally used if your system does not use the AFS version of login.
   It is used to get a PAG prior to running klog.
 
   The PAG uniquely identifies the user to the Cache Manager.
   Without a PAG the Cache Manager uses the unix UID to identify a user.

Subject: 2.07  Why use a PAG?

   There are two reasons:
 
   a) Child processes inherit the PAG and the Kerberos token so they are AFS
      authenticated.
 
   b) For security: if you don't have a PAG then the Cache Manager identifies
      you by unix UID. Another user with root access to the client could
      su to you and therefore use your token.

Subject: 2.08  How can I tell if I have a PAG?

   You can tell if you have a PAG by typing "groups". A PAG is indicated
   by the appearance of two integers in the list of groups.

   For example:
      sylvester@toontown $ groups
      33536 32533 staff catz

Subject: 2.09  Can I still run cron jobs with AFS?

   Yes, but remember that in order to fully access files in AFS you have
   to be AFS authenticated. If your cron job doesn't klog then it only
   gets system:anyuser access.
 
   The klog command has a "-pipe" option which will read a password from
   stdin. IF (yes, that's a big if :-) you are prepared to store your
   password in a local (non-AFS) file then you might use the following:
 
      a) create a script to get your AFS token:
 
         #!/bin/sh -
         #
         # NAME      afsgt
         # PURPOSE   Get AFS token by using password stored in a file.
         #           Only need this to run cron jobs that need
         #           authenticated AFS access
         usage() {
                 echo "Usage: afsgt passwordfile" >&2
         }
         if [ -z "${1}" ]; then
                 echo "afsgt error: need name of password file" >&2
                 usage
                 exit 1
         fi
         /usr/afsws/bin/klog -pipe < ${1}
 
      b) Store your password in a local (non-AFS) file that only you
         have access to (perhaps: /home/$USER/.p).
 
         Make sure that this file is mode 600 and also be sure that
         you trust whoever has root access on this system and whoever
         has access to backup tapes! Also, don't forget to change this
         file if you change your AFS password.
 
      c) In your crontab file, run afsgt before whatever task you run.
 
         0 6 * * * /usr/local/bin/afsgt /home/$USER/.p; $HOME/bin/6AMdaily

Subject: 2.10  How much disk space does a 1 byte file occupy in AFS?

   One kilobyte.
 
   Other filesystems allocate different file block sizes.
   For example, IBM's AIX version 3 journaled file system (JFS)
   uses 4K blocks (exception: 2K for the 160MB disk drive).
 
   Such blocksize differences lead to variations on the amount of
   disk space required to store files. Copying a directory from AFS
   to AIX JFS would require more space in JFS because of the block
   fragmentation.
 
   Example:
 
   a) Create a one byte file in AFS and use "ls -s" to show how many
      kilobytes it occupies:
 
         ariel@atlantica $ echo z >/afs/dsea/tmp/one.byte.file
         ariel@atlantica $ ls -s /afs/dsea/tmp/one.byte.file
            1 /afs/dsea/tmp/one.byte.file
 
   b) Create same file in local filesystem (AIX JFS):
 
         ariel@atlantica $ echo z >/tmp/one.byte.file
         ariel@atlantica $ ls -s /tmp/one.byte.file
            4 /tmp/one.byte.file

Subject: 2.11  Is it possible to specify a user who is external
               to the current AFS cell on an ACL?

   No. You cannot reference a particular user from another AFS cell.
 
   You can specify an IP address on the ACL; this means any and all
   users from the host with that IP address.
 
   Another solution to this problem is to give the external user an
   "authentication-only" account in your AFS cell. This means that
   (s)he can klog (but has no home directory) in your cell.

   Cross-realm authentication (where co-operating cells are able to
    specify remote users as "user@remote.cell" on an ACL) is an *unsupported*
   feature of AFS 3.3a. That means that Transarc doesn't promise
   to make it work for you, nor keep it running in future releases.

Subject: 2.12  Are there any problems printing files in /afs?

   The issue of printing in AFS is almost always the same: what do you
   send to the printing daemon?  Do you send it the bytes you want to
   print or do you just send the file name containing those bytes?  If
   you send it a file name, you have to be sure that the printing daemon
   can read it.  Most daemons run with no AFS tokens, so can't access
   directories unless they are open for system:anyuser read access.
   Often, printing commands (lpr, lp, enq) have an option that allows
   for both modes of operation, though the default behavior varies from
   system to system.  If you're interested in making your daemons
   authenticate to AFS, check out the example scripts in AFS-Contrib:
 
     /afs/grand.central.org/pub/afs-contrib/tools/reauth-example
 
   Another common problem is setuid printing commands.  For instance, the
   "enq" command runs as root, daemon, or some such user.  If you aren't
   using the AFS login and simply issue "klog" to get tokens, those
   tokens are associated with your uid.  When setuid programs run, they
   lose access to your token and often can't read the file name given as
   an argument.  The solution in this case is to use "pagsh" before
   "klog" so that your tokens are transferred to subprocesses
   automatically by group membership.  This works even if the uid
   changes, as for setuid programs.
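
   For example, with BSD lpr (option letters vary from system to system -
   check your local manual page) the default is to copy the bytes into
   the spool area at submission time, so the daemon never needs AFS
   access, while "-s" spools a symbolic link, so the daemon must be able
   to read the file in /afs later:

      elmer@toontown $ lpr memo.txt     # bytes copied; daemon needs no token
      elmer@toontown $ lpr -s big.ps    # symlink; daemon must read /afs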

Subject: 3  AFS administration

Subject: 3.01  Is there a version of xdm available with AFS authentication?

   Yes, grand.central.org:pub/afs-contrib/tools/xdm/*

Subject: 3.02  Is there a version of xlock available with AFS authentication?

   Yes, grand.central.org:pub/afs-contrib/tools/xlock/*

Subject: 3.03  How does AFS compare with NFS?

                                AFS                          NFS
   File Access           Common name space from     Different file names from
                         all workstations           different workstations

   File Location         Automatic tracking by      Mountpoints to files set by
   Tracking              file system processes      administrators and users
                         and databases

   Performance           Client caching to reduce   No local disk caching;
                         network load; callbacks    limited cache consistency
                         to maintain cache consis-
                         tency

   Andrew Benchmark      Average time of 210        Average time of 280
   (5 phases, 8 clients) seconds/client             seconds/client

   Scaling capabilities  Maintains performance in   Best in small to mid-size
                         small and very large       installations
                         installations

                         Excellent performance on   Best in local-area
                         wide-area configuration    configurations

   Security              Kerberos mutual authen-    Security based on
                         tication                   unencrypted user ID's

                         Access control lists on    No access control lists
                         directories for user and
                         group access

   Availability          Replicates read-mostly     No replication
                         data and AFS system
                         information

   Backup Operation      No system downtime with    Standard UNIX backup system
                         specially developed AFS
                         Backup System

   Reconfiguration       By volumes (groups of      Per-file movement
                         files)

                         No user impact; files      Users lose access to files
                         remain accessible during   and filenames change
                         moves, and file names do   (mountpoints need to be
                         not change                 reset)

   System Management     Most tasks performed from  Frequently involves telnet
                         any workstation            to other workstations

   Autonomous            Autonomous administrative  File servers and clients
   Architecture          units called cells, in
                         addition to file servers
                         and clients

                         No trust required between  No security distinctions
                         cells                      between sites

   [ source: grand.central.org:pub/afsps/doc/afs-nfs.comparison ]

Subject: 3.04  Given that AFS data is location independent, how does
               an AFS client determine which server houses the data
               its user is attempting to access?

    The Volume Location Database (VLDB) is stored on AFS Database 
    Servers and is ideally replicated across 3 or more Database Server
    machines.  Replication of the Database ensures high availability
    and load balances the requests for the data.  The VLDB maintains 
    information regarding the current physical location of all volume 
    data (files and directories) in the cell, including the IP address
    of the FileServer, and the name of the disk partition the data is 
    stored on.
 
    A list of a cell's Database Servers is stored on the local disk of
    each AFS Client machine as: /usr/vice/etc/CellServDB
 
    The Database Servers also house the Kerberos Authentication
    Database (encrypted user and server passwords), the Protection
    Database (user UID and protection group information) and the 
    Backup Database (used by System Administrators to backup AFS file 
    data to tape).
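
    You can ask the Cache Manager which fileserver currently houses a
    given file with the "fs whereis" command. For example (pathname and
    hostname are hypothetical):

       elmer@toontown $ fs whereis /afs/ny.acme.com/user/elmer
       File /afs/ny.acme.com/user/elmer is on host fs1.ny.acme.com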

Subject: 3.05  Which protocols does AFS use?

   AFS may be thought of as a collection of protocols and software
   processes, nested one on top of the other. The constant interaction
   between and within these levels makes AFS a very sophisticated software
   system.

   At the lowest level is the UDP protocol, which is part of TCP/IP. UDP
   is the connection to the actual network wire. The next protocol level is
   the  remote procedure call (RPC).  In general, RPCs allow the developer
   to build applications using the client/server model, hiding the
   underlying networking mechanisms. AFS uses Rx, an RPC protocol developed
   specifically for AFS during its development phase at Carnegie Mellon
   University.

   Above the RPC is a series of server processes and interfaces that all
   use Rx for communication between machines. Fileserver, volserver,
   upserver, upclient, and bosserver are server processes that export RPC
   interfaces to allow their user interface commands to request actions and
   get information. For example, a bos status <machine name> command will
   examine the bos server process on the indicated file server machine.

   Database servers use ubik, a replicated database mechanism which is
   implemented using RPC. Ubik guarantees that the copies of AFS databases
   of multiple server machines remain consistent. It provides an
   application programming interface (API) for database reads and writes,
   and uses RPCs to keep the database synchronized. The database server
   processes, vlserver, kaserver, and ptserver, reside above ubik. These
   processes export an RPC interface which allows  user commands to control
   their operation.  For instance, the pts command is used to communicate
   with the ptserver, while the command klog  uses the kaserver's RPC
   interface.

   Some application programs are quite complex, and draw on RPC interfaces
   for communication with an assortment of processes. Scout utilizes the
   RPC interface to file server processes to display and monitor the status
   of file servers. The uss command interfaces with  kaserver, ptserver,
   volserver and vlserver to create new user accounts.

   The Cache Manager also exports an RPC interface. This interface is used
   principally by file server machines to break callbacks.  It can also be
   used to obtain Cache Manager status information.  The program cmdebug
   shows the status of a Cache Manager using this interface.

   For additional information, Section 1.5 of the AFS System
   Administrator's Guide and the April 1990 Cache Update contain more
   information on ubik. Udebug information and short descriptions of all
   debugging tools were included in the January 1991 Cache Update. Future
   issues will discuss other debugging tools in more detail.

   [ source: grand.central.org:pub/cache.update/apr91 ]
   [ Copyright  1991 Transarc Corporation ]

Subject: 3.06  Are setuid programs executable across AFS cell boundaries?

   By default, the setuid bit is ignored but the program may be run
   (without setuid privilege).

   It is possible to configure an AFS client to honour the setuid bit
   (see: "fs setcell". Use with care!).

   NB: making a program setuid (or setgid) in AFS does *not* mean
   that the program will get AFS permissions of a user or group.
   To become AFS authenticated, you have to klog.  If you are not
   authenticated, AFS treats you as "system:anyuser".
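
   For example (cell names are hypothetical; this must be run as root
   on each AFS client, and it weakens that client's security):

      toontown # fs setcell -cell ny.acme.com -suid
      toontown # fs setcell -cell sf.acme.com -nosuid

   Here setuid bits are honoured for programs from the local cell only
   and ignored for the remote cell.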

Subject: 3.07  How does AFS maintain consistency on read-write files?

   AFS uses a mechanism called "callback".

   Callback is a promise from the fileserver that the cache version
   of a file/directory is up-to-date. It is established by the fileserver
   with the caching of a file.

   When a file is modified the fileserver breaks the callback.  When the
   user accesses the file again the Cache Manager fetches a new copy 
   if the callback has been broken.

   The following paragraphs describe AFS callback mechanism in more detail:
 
   If I open() fileA and start reading, and you then open() fileA,
   write() a change ***and close() or fsync()*** the file to get your
   changes back to the server - at the time the server accepts and writes
   your changes to the appropriate location on the server disk, the
   server also breaks callbacks to all clients to which it issued a copy
   of fileA.
 
   So my client receives a message to break the callback on fileA, which
   it dutifully does.  But my application (editor, spreadsheet, whatever
   I'm using to read fileA) is still running, and doesn't really care
   that the callback has been broken.

   When something causes the application to read() more of the file
   the read() system call executes AFS cache manager code via the VFS switch,
   which does check the callback and therefore gets new copies of the data.
 
   Of course, the application may not re-read data that it has already read,
   but that would also be the case if you were both using the same host.
   So, for both AFS and local files, I may not see your changes.

   Now if I exit the application and start it again, or if the
   application does another open() on the file, then I will see the
   changes you've made.  
 
   This information tends to cause tremendous heartache and discontent
   - but unnecessarily so.  People imagine rampant synchronization problems. 
   In practice this rarely happens and in those rare instances, the data in
   question is typically not critical enough to cause real problems or 
   crashing and burning of applications.  Over the past 8 years we've found
   that the synchronization algorithm has been more than adequate in practice
   - but people still like to worry!

   The source of worry is that, if I make changes to a file from my
   workstation, your workstation is not guaranteed to be notified until I
   close or fsync the file, at which point AFS guarantees that your
   workstation will be notified.  This is a significant departure from NFS,
   in which no guarantees are provided.
 
   Partially because of the worry factor and largely because of Posix,
   this will change in DFS.  DFS synchronization semantics are identical
   to local file system synchronization.
 
   [ DFS is the Distributed File System which is part of the Distributed ]
   [ Computing Environment (DCE).                                        ]

Subject: 3.08  How can I run daemons with tokens that do not expire?

   It is not a good idea to run with tokens that do not expire because
   this would weaken one of the security features of Kerberos.
 
   A better approach is to re-authenticate just before the token expires.
 
   There are two examples of this that have been contributed to
   grand.central.org. The first is "reauth":
 
   via AFS: /afs/grand.central.org/pub/afs-contrib/tools/reauth*
   via FTP: grand.central.org:/pub/afs-contrib/tools/reauth*

   The second is "lat":
 
   via AFS: /afs/grand.central.org/pub/afs-contrib/pointers\
                                /UMich-lat-authenticated-batch-jobs
   via FTP: grand.central.org:/pub/afs-contrib/pointers\
                                /UMich-lat-authenticated-batch-jobs
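
   As an illustration only, a wrapper started under pagsh alongside the
   daemon (so both share the same PAG) could renew the token periodically,
   reusing the afsgt script from question 2.09 (the paths and renewal
   interval here are assumptions):

      #!/bin/sh -
      # NAME      reauth-loop (illustrative)
      # PURPOSE   Keep a PAG authenticated by renewing its AFS token
      #           well before it expires (tokens typically last a day).
      while :
      do
              /usr/local/bin/afsgt /home/$USER/.p
              sleep 72000     # renew every 20 hours
      done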

Subject: 3.09  Can I check my user's passwords for security purposes?

   Yes. Alec Muffett's Crack tool (at version 4.1f) has been converted
   to work on the Transarc kaserver database. This modified Crack
   (AFS Crack) is available via anonymous ftp from:
 
      export.acs.cmu.edu:/pub/crack.tar.Z
 
   and is known to work on: pmax_* sun4*_* hp700_* rs_aix* next_*
 
   It uses the file /usr/afs/db/kaserver.DB0, which is the database on
   the kaserver machine that contains the encrypted passwords. As a bonus,
   AFS Crack is usually two to three orders of magnitude faster than the
   standard Crack since there is no concept of salting in a Kerberos database.
 
   In a normal UNIX /etc/passwd file, each password may have been encrypted
   under any of 4096 (2^12) different salts of the crypt(3) algorithm, so
   for a large number of users it is easy to see that a potentially large
   (up to 4095) number of separate encryptions of each word checked has
   been avoided.
 
   Author & Contact: Dan Lovinger, del+@cmu.edu
 
   Note: AFS Crack does not work on MIT Kerberos databases.
         The author is willing to give general guidance to someone interested
         in doing the (probably minimal) amount of work needed to port it to
         MIT Kerberos. The author does not have access to an MIT Kerberos
         server to do this.

Subject: 3.10  Is there a way to automatically balance disk usage across
               fileservers?

   Yes. There is a tool, balance, which does exactly this.
   It can be retrieved via anonymous ftp from:
 
      export.acs.cmu.edu:/pub/balance.tar.Z
 
   It is possible to write arbitrary balancing algorithms for this tool.
   The default set of "agents" provided with the current version of
   balance works by usage, number of volumes, and activity per week;
   the last of these currently requires a source patch to the AFS volserver.
   Balance is highly configurable.
 
   Author: Dan Lovinger, del+@cmu.edu

Subject: 3.11  Can I shutdown an AFS fileserver without affecting users?

   Yes, this is an example of the flexibility you have in managing AFS.
 
   Before attempting to shut down an AFS fileserver, you have to arrange
   for any services it provides to be taken over by another AFS
   fileserver:
 
   1) Move all AFS volumes to another fileserver. (Check you have the space!)
      This can be done "live" while users are actively using files
      in those volumes with no detrimental effects. (A sketch of this
      step appears at the end of this answer.)
 
   2) Make sure that critical services have been replicated on one
      (or more) other fileserver(s). Such services include:
 
         kaserver  - Kerberos Authentication server
         vlserver  - Volume Location server
         ptserver  - Protection server
         buserver  - Backup server
 
      It is simple to test this before the real shutdown by issuing:
 
         bos shutdown $server $service
 
      where: $server is the name of the server to be shutdown
        and  $service is one (or all) of: kaserver vlserver ptserver buserver
 
   Other points to bear in mind:
 
   + "vos remove" any RO volumes on the server to be shutdown.
     Create corresponding RO volumes on the 2nd fileserver after moving the RW.
     There are two reasons for this:
 
     1) An RO on the same partition ("cheap replica") requires less space
        than a full-copy RO.
 
     2) Because AFS always accesses RO volumes in preference to RW,
        traffic will be directed to the RO and therefore quiesce the load
        on the fileserver to be shutdown.
 
   + If the system to be shut down has the lowest IP address, there may be
     a brief delay in authenticating because of the timeout experienced
     before a second kaserver is contacted.
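
   A sketch of step 1 above, with hypothetical server and partition names
   (the awk pattern assumes the usual "vos listvol" output format, where
   ReadWrite volumes are flagged "RW"; check yours first):

      # Move every ReadWrite volume from partition /vicepa on "old-fs"
      # to partition /vicepa on "new-fs".
      for vol in `vos listvol old-fs a | awk '/ RW / {print $1}'`; do
         vos move $vol old-fs a new-fs a
      done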

Subject: 3.12  How can I set up mail delivery to users with $HOMEs in AFS?

   There are many ways to do this. Here, only two methods are considered:
 
   Method 1: deliver into local filestore
 
   This is the simplest to implement. Set up your mail delivery to
   append mail to /var/spool/mail/$USER on one mailserver host.
   The mailserver is an AFS client so users draw their mail out of
   local filestore into their AFS $HOME (eg: inc).

   Note that if you expect your (AFS unauthenticated) mail delivery program
   to be able to process .forward files in AFS $HOMEs then you need to
   add "system:anyuser rl" to the ACL of each $HOME.
 
   The advantages are:
 
      + Simple to implement and maintain.
      + No need to authenticate into AFS.
 
   The drawbacks are:
 
      - It doesn't scale very well.
      - Users have to login to the mailserver to access their new mail.
      - Probably less secure than having your mailbox in AFS.
      - System administrator has to manage space in /var/spool/mail.
 
   Method 2: deliver into AFS
 
   This takes a little more setting up than the first method.
 
   First, you must have your mail delivery daemon AFS authenticated
   (probably as "postman"). The reauth example on grand.central.org
   shows how a daemon can renew its token. You will also need to arrange
   for the daemon to klog soon after boot time (see the -pipe option).
 
   Second, you need to set up the ACLs so that "postman" has lookup rights
   down to the user's $HOME and "lik" on $HOME/Mail.
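
   For one user, assuming a hypothetical cell "mycell" and an existing
   "postman" entry in the protection database (the higher-level
   directories typically already grant system:anyuser lookup):

      # postman needs lookup on the $HOME directory itself ...
      fs setacl /afs/mycell/user/fred postman l
      # ... and lookup, insert and lock on the mail directory.
      fs setacl /afs/mycell/user/fred/Mail postman lik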
 
   Advantages:
 
      + Scales better than first method.
      + Delivers to user's $HOME in AFS giving location independence.
      + Probably more secure than first method.
      + User responsible for space used by mail.
 
   Disadvantages:
 
      - More complicated to set up.
      - Need to correctly set ACLs down to $HOME/Mail for every user.
      - Probably need to store postman's password in a file so that
        the mail delivery daemon can klog after boot time.
        This may be OK if the daemon runs on a relatively secure host.

Subject: 3.13  Should I replicate a ReadOnly volume on the same partition
               and server as the ReadWrite volume?
 
   Yes, absolutely! It improves the robustness of your served volumes.
 
   If ReadOnly volumes exist (note use of term *exist* rather than
   *are available*), Cache Managers will *never* utilize the ReadWrite
   version of the volume. The only way to access the RW volume is via
   the "dot" path (or by special mounting).
 
   This means if *all* RO copies are on dead servers, are offline, are
   behind a network partition, etc, then clients will not be able to get
   the data, even if the RW version of the volume is healthy, on a healthy
   server and in a healthy network.
 
   However, you are *very* strongly encouraged to keep one RO copy of a
   volume on the *same server and partition* as the RW. There are two
   reasons for this:
 
   1) The RO that is on the same server and partition as the RW is a clone
      (just a copy of the header - not a full copy of each file).
      It is therefore very small, but provides access to the same set of files
      that all other (full copy) ReadOnly volumes do.
      Transarc trainers refer to this as the "cheap replica".
 
   2) To prevent the frustration that occurs when all your ROs are
      unavailable while a perfectly healthy RW is accessible but not used.

      If you keep a "cheap replica" then, by definition, if the RW is
      available one of the ROs is also available, and clients will use
      that site.
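
   A sketch of creating the "cheap replica", assuming a replicated volume
   named proj.tools whose RW lives on server fs1, partition /vicepa
   (hypothetical names):

      vos addsite fs1 a proj.tools   # define an RO site on the same
                                     # server/partition as the RW
      vos release proj.tools         # create (or update) the RO clone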

Subject: 3.14  Should I start AFS before NFS in /etc/inittab?

   Yes, it is possible to run both AFS and NFS on the same system but
   you should start AFS first.
 
   In IBM's AIX 3.2, your /etc/inittab would contain:
 
      rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS daemons
      rcnfs:2:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS daemons
 
   With AIX, you need to load NFS kernel extensions before the AFS KEs
   in /etc/rc.afs like this:
 
      #!/bin/sh -
      # example /etc/rc.afs for an AFS fileserver running AIX 3.2
      #
      echo "Installing NFS kernel extensions (for AFS+NFS)"
      /etc/gfsinstall -a /usr/lib/drivers/nfs.ext
      echo "Installing AFS kernel extensions..."
      D=/usr/afs/bin/dkload
      ${D}/cfgexport -a ${D}/export.ext    # load the export kernel extension
      ${D}/cfgafs    -a ${D}/afs.ext       # load the AFS kernel extension
      /usr/afs/bin/bosserver &             # start the AFS Basic OverSeer server

Subject: 3.15  Will AFS run on a multi-homed fileserver?

   (multi-homed = host has more than one network interface.)
 
   Yes, it will. However, AFS was designed for hosts with a single IP address.
   There can be problems if you have one host name being resolved to several
   IP addresses.
 
   Transarc suggest designating unique hostnames for each network interface.
   For example, a host called "spot" has two tokenring and one ethernet
   interfaces: spot-tr0, spot-tr1, spot-en0.
   Then, select which interface will be used for AFS and use that hostname
   in the CellServDB file (eg: spot-tr0).
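
   For example, a (hypothetical) CellServDB entry naming the chosen
   interface:

      >mycell.com                #My example cell
      192.0.2.1                  #spot-tr0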

   You also have to remember to use the AFS interface name with any AFS
   commands that require a server name (eg: vos listvol spot-tr0).
 
   There is a more detailed discussion of this in the August 1993 issue
   of "Cache Update" (see: /afs/grand.central.org/pub/cache.update/aug93).

   The simplest way of dealing with this is to make your AFS fileservers
   single-homed (eg: use only one network interface).


Subject: 3.16  Can I replicate my user's home directory AFS volumes?

   No.
 
   Users with $HOMEs in /afs normally have an AFS ReadWrite volume
   mounted in their home directory.
 
   You can replicate a RW volume but only as a ReadOnly volume
   and there can only be one instance of a ReadWrite volume.
 
   In theory, you could have RO copies of a user's RW volume
   on a second server but in practice this won't work for the
   following reasons:
 
   a) AFS has a built-in bias to always access the RO copy of a RW volume.
      So the user would have a ReadOnly $HOME, which is not too useful!
 
   b) Even if a) were not true, you would have to arrange frequent
      synchronisation of the RO copy with the RW volume (for example:
      "vos release user.fred; fs checkv") and this would have to be
      done for all such user volumes.
 
   c) Presumably, the idea of replicating is to recover the $HOME
      in the event of a server crash. Even if a) and b) were not
      problems consider what you might have to do to recover a $HOME:
 
      1) Create a new RW volume for the user on the second server
         (perhaps named "user.fred.2").
 
      2) Now, where do you mount it?
 
         The existing mountpoint cannot be used because it already has
         the ReadOnly copy of the original volume mounted there.
 
         Let's choose: /afs/MyCell/user/fred.2
 
      3) Copy data from the RO of the original into the new RW volume
         user.fred.2
 
      4) Change the user's entry in the password file for the new $HOME:
         /afs/MyCell/user/fred.2
 
      You would have to attempt steps 1 to 4 for every user who had
      their RW volume on the crashed server. By the time you had done
      all of this, the crashed server would probably have rebooted.
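
       For completeness, steps 1 to 3 for a single user might look like
       this (all names hypothetical; dot files would need extra care):

          vos create second-server a user.fred.2           # step 1
          fs mkmount /afs/MyCell/user/fred.2 user.fred.2   # step 2
          cp -pr /afs/MyCell/user/fred/* \
                 /afs/MyCell/user/fred.2                   # step 3

       Step 4 means editing the user's password file entry by hand.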
 
      The bottom line is: you cannot replicate $HOMEs across servers.

Subject: 3.17  Which TCP/IP ports and protocols do I need to enable
               in order to operate AFS through my Internet firewall?

   Assuming you have already taken care of nameserving, you may wish to
   use an Internet timeserver for Network Time Protocol:
 
      ntp             123/udp
 
   A list of NTP servers is available via anonymous FTP from:
 
      louie.udel.edu:/pub/ntp/doc/clock.txt
 
   For a "minimal" AFS service which does not allow inbound or outbound klog:
 
      fileserver      7000/udp 
      cachemanager    7001/udp
      ptserver        7002/udp
      vlserver        7003/udp
      kaserver        7004/udp
      volserver       7005/udp
      reserved        7006/udp
      bosserver       7007/udp
 
   (Ports in the 7020-7029 range are used by the AFS backup system,
    and won't be needed by external clients performing simple file accesses.)
 
   Additionally, for "klog" to work through the firewall you need to
   allow inbound and outbound UDP on ports >1024 (probably 1024<port<2048
   would suffice, depending on the number of simultaneous klogs).

Subject: 3.18  What is the Andrew Benchmark?

   "It is a script that operates on a collection of files constituting
   an application program. The operations are intended to represent typical
   actions of an average user. The input to the benchmark is a source tree
   of about 70 files. The files total about 200 KB in size. The benchmark
   consists of five distinct phases:
 
     I MakeDir - Construct a target subtree that is identical to the
                 source subtree.
    II Copy    - Copy every file from the source subtree to the target subtree.
   III ScanDir - Traverse the target subtree and examine the status
                 of every file in it.
    IV ReadAll - Scan every byte of every file in the target subtree.
     V Make    - Compile and link all files in the target subtree."
   
   [ source: ]
   [ grand.central.org:pub/afs-contrib/doc/benchmark/Andrew.Benchmark.ps ]
   [ /afs/grand.central.org/pub/afs-contrib/doc/benchmark/Andrew.Benchmark.ps ]

Subject: 3.19  Is there a version of HP VUE login with AFS authentication?

   Yes, the availability of this is described in:
 
      /afs/transarc.com/public/afs-contrib/pointers/HP-VUElogin.txt
   
Subject: 4  Getting more information

Subject: 4.01  Is there an anonymous FTP site with AFS information?

   Yes, it is: grand.central.org [192.54.226.100].

   A brief summary of contents:
 
   Directory                    Contents
 
   pub/cache-update             AFS user group newsletters
   pub/afs-contrib              Contributed tools and documents
   pub/afsps/doc                release notes, SUPPORTED_SYSTEMS.afs.*
   pub/afsug                    AFS user group (see README for detail)
   darpa/doc/afs/specs/progint  AFS programming interface docs

   grand.central.org also participates in the Internet AFS filetree.

Subject: 4.02  Which USENET newsgroups discuss AFS?

   alt.filesystems.afs and, occasionally, comp.unix.admin

Subject: 4.03  Where can I get training in AFS?

   Transarc provide user and administrator courses.
   These can be held at the customer site or at Transarc's offices.

   Transarc's education coordinator may be contacted by:
 
      telephone: +1 412 338 4363    email: education@transarc.com

Subject: 5  About the AFS faq

   This compilation is dedicated to those who inspire through good humour,
   enthusiasm, wit and wisdom.

Subject: 5.01  How can I get a copy of the AFS faq?

   There are several ways:
 
   The best way to access the AFS faq is via AFS, so that you see the
   latest version. If you take a copy via FTP or email, your copy
   can only be a snapshot of this changing file.

   via AFS: /afs/grand.central.org/pub/afs-contrib/doc/faq/afs.faq
            /afs/ibm.uk/public/doc/afs.faq
            /afs/aixssc.uk.ibm.com/public/doc/afs.faq
 
   via FTP: grand.central.org:/pub/afs-contrib/doc/faq/afs.faq
            rtfm.mit.edu:/pub/usenet/news.answers/afs-faq
            ftp.aixssc.uk.ibm.com:/pub/src/afs.faq

   via World Wide Web:
            http://www.cis.ohio-state.edu/hypertext/faq/usenet/afs-faq/faq.html
 
   via email:
            mail -s afs.faq auto-send@mailserver.aixssc.uk.ibm.com </dev/null
 
   via USENET news:
 
            From time to time this faq will be posted to the USENET newsgroups:
               alt.filesystems.afs alt.answers news.answers
 
   If you have no luck with any of the above methods, send email to
   mpb@acm.org (Paul Blackburn) and I will be pleased to send you a
   copy of the AFS faq.

Subject: 5.02  How can I get my question (and answer) into the AFS faq?

   Comments and contributions are welcome, please send to: mpb@acm.org
 
   I am looking for reviewers to help me check the material here, please
   let me know if you would like to help.

Subject: 5.03  How can I access the AFS faq via the World Wide Web?

   To access the World Wide Web you need either your own browser
   or telnet access to a WWW server.

   WWW browsers exist for most machines. Here's a list of some browsers:
 
      Name     System/requirements           Available from (among others)
      ====     ===================           ==============
      Mosaic   X windows, MS-Windows, Mac    ftp.ncsa.uiuc.edu  /Web
      lynx     vt100                         ftp.wustl.edu /packages/www/lynx
 
   From your own browser, OPEN or GO to the following document:
 
      http://www.cis.ohio-state.edu/hypertext/faq/usenet/afs-faq/faq.html
 
   It is much better to run your own browser but if this is not possible
   there are several WWW servers accessible via telnet:
 
   +  telnet info.cern.ch
      then type:
         go http://www.cis.ohio-state.edu/hypertext/faq/usenet/afs-faq/faq.html
 
   +  telnet www.njit.edu                 (login: www)
      then type:
         g
         http://www.cis.ohio-state.edu/hypertext/faq/usenet/afs-faq/faq.html
 
   +  telnet  ukanaix.cc.ukans.edu        (login: www) needs vt100
      then type:
         ghttp://www.cis.ohio-state.edu/hypertext/faq/usenet/afs-faq/faq.html

Subject: 6  Bibliography

   If documentation is available via anonymous FTP it is indicated
   in square brackets like:
 
    [ athena-dist.mit.edu:pub/kerberos/doc/usenix.PS ]
 
    where: athena-dist.mit.edu is the anonymous FTP site and
           pub/kerberos/doc/usenix.PS is the filename

   Similarly, for those who have appropriate access, documents available
   via AFS are shown with the format:
  
   [ /afs/........ ]
 
   [1] John H Howard, Michael L Kazar, Sherri G Menees, David A Nichols,
       M Satyanarayanan, Robert N Sidebotham, Michael J West
       "Scale and Performance in a Distributed File System",
       ACM Transactions on Computer Systems, Vol. 6, No. 1, Feb 1988, pp 51-81.
 
   [2] Michael L Kazar,
       "Synchronisation and Caching Issues in the Andrew File System",
       USENIX Proceedings, Dallas, TX, Winter 1988
 
   [3] Alfred Z Spector, Michael L Kazar,
       "Uniting File Systems", UNIX Review, March 1989
 
   [4] Johna Till Johnson,
       "Distributed File System brings LAN Technology to WANs",
       Data Communications, November 1990, pp 66-67.
 
   [5] Michael Padovano, PADCOM Associates,
       "AFS widens your horizons in distributed computing",
       Systems Integration, March 1991
 
   [6] Steve Lammert,
       "The AFS 3.0 Backup System", LISA IV Conference Proceedings,
       Colorado Springs, Colorado, October 1990.
 
   [7] Michael L Kazar, Bruce W Leverett, Owen T Anderson,
       Vasilis Apostolides, Beth A Bottos, Sailesh Chutani,
       Craig F Everhart, W Anthony Mason, Shu-Tsui Tu, Edward R Zayas,
       "DEcorum File System Architectural Overview",
        USENIX Conference Proceedings, Anaheim, California, Summer 1990.
 
   [8] "AFS Drives DCE Selection", Digital Desktop, Vol 1 No 6 Sept 1990.
 
   [9] James J Kistler, M Satyanarayanan,
       "Disconnected Operation in the Coda Filesystem",
        CMU School of Computer Science technical report, CMU-CS-91-166,
       26th July 1991.
 
  [10] Puneet Kumar, M Satyanarayanan,
       "Log-based Directory Resolution in the Coda File System",
       CMU School of Computer Science internal document, 2 July 1991.
 
  [11] Edward R Zayas,
       "Administrative Cells: Proposal for Cooperative Andrew File Systems",
       Information Technology Center internal document,
       Carnegie-Mellon University, 25th June 1987
 
  [12] Ed Zayas, Craig Everhart,
       "Design and Specification of the Cellular Andrew Environment",
       Information Technology Center, Carnegie-Mellon University,
       CMU-ITC-070, 2 August 1988
 
  [13] Kazar, Michael L, Information Technology Center,
       Carnegie-Mellon University,
       "Ubik - A library for Managing Ubiquitous Data", 
       ITCID, Pittsburgh, PA, 1988
 
  [14] Kazar, Michael L, Information Technology Center,
       Carnegie-Mellon University,
       "Quorum Completion", ITCID, Pittsburgh, PA, 1988
 
  [15] SP Miller, BC Neuman, JI Schiller, JH Saltzer,
       "Kerberos Authentication and Authorization System",
        Project Athena Technical Plan, Section E.2.1, MIT, December 1987.
       [ athena-dist.mit.edu:pub/kerberos/doc/techplan.PS ]
       [ athena-dist.mit.edu:pub/kerberos/doc/techplan.txt ]
       [ /afs/watson.ibm.com/projects/agora/papers/kerberos/techplan.PS ]
 
  [16] Bill Bryant,
       "Designing an Authentication System: a Dialogue in Four Scenes",
       Project Athena internal document, MIT, draft of 8th February 1988
       [ athena-dist.mit.edu:pub/kerberos/doc/dialog.PS ]
       [ athena-dist.mit.edu:pub/kerberos/doc/dialog.mss ]
       [ /afs/watson.ibm.com/projects/agora/papers/kerberos/dialog.PS ]
 
  [17] Edward R Zayas,
       "AFS-3 Programmer's Reference: Architectural Overview",
       Transarc Corporation, FS-00-D160, September 1991
       [ grand.central.org:darpa/doc/afs/specs/progint/archov/archov-doc.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/afs/archov-doc.ps ]
 
  [18] "AFS Programmer's Reference: Authentication Server Interface",
       Transarc Corporation, 12th April 1993
       [ grand.central.org:darpa/doc/afs/specs/progint/arsv/asrv-ispec.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/afs/asrv-ispec.ps ]
 
  [19] Edward R Zayas,
       "AFS-3 Programmer's Reference: BOS Server Interface",
       Transarc Corporation, FS-00-D161, 28th August 1991
       [ grand.central.org:darpa/doc/afs/specs/progint/bsrv/bsrv-spec.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/afs/bsrv-spec.ps ]
 
  [20] Edward R Zayas,
       "AFS-3 Programmer's Reference: File Server/Cache Manager Interface",
       Transarc Corporation, FS-00-D162, 20th August 1991
       [ grand.central.org:darpa/doc/afs/specs/progint/fscm/fscm-ispec.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/afs/fscm-ispec.ps ]
 
  [21] Edward R Zayas,
       "AFS-3 Programmer's Reference:
              Specification for the Rx Remote Procedure Call Facility",
       Transarc Corporation, FS-00-D164, 28th August 1991
       [ grand.central.org:darpa/doc/afs/specs/progint/rx/rx-spec.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/afs/rx-spec.ps ]
 
  [22] Edward R Zayas,
       "AFS-3 Programmer's Reference:
              Volume Server/Volume Location Server Interface",
       Transarc Corporation, FS-00-D165, 29th August 1991
       [ grand.central.org:darpa/doc/afs/specs/progint/vvl/vvl-spec.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/afs/vvl-spec.ps ]

  [23] "AFS User Guide",
        Transarc Corporation, FS-D200-00.08.3

  [24] "AFS Commands Reference Manual",
        Transarc Corporation, FS-D200-00.11.3

  [25] "AFS Systems Administrators Guide",
        Transarc Corporation, FS-D200-00.10.3

  [26] Steven M. Bellovin, Michael Merritt
       "Limitations of the Kerberos Authentication System",
       Computer Communications Review, October 1990, Vol 20 #5, pp. 119-132
       [ research.att.com:/dist/internet_security/kerblimit.usenix.ps ]
       [ /afs/watson.ibm.com/projects/agora/papers/kerberos/limitations.PS ]

  [27] Jennifer G. Steiner, Clifford Neuman, Jeffrey I. Schiller
       "Kerberos: An Authentication Service for Open Network Systems"
       [ athena-dist.mit.edu:/pub/kerberos/doc/usenix.PS ]
       [ athena-dist.mit.edu:/pub/kerberos/doc/usenix.txt ]
 
  [28] Barry Jaspan
       "Kerberos Users' Frequently Asked Questions"
       [ rtfm.mit.edu:/pub/usenet/news.answers/kerberos-faq/user ]
 
  [29] P. Honeyman, L.B. Huston, M.T. Stolarchuk
       "Hijacking AFS"
       [ ftp.sage.usenix.org:/pub/usenix/winter92/hijacking-afs.ps.Z ]
 
  [30] R.N. Sidebotham
       "Rx: Extended Remote Procedure Call"
       Proceedings of the Nationwide File System Workshop
       Information Technology Center, Carnegie Mellon University,
       (August 1988)
        
  [31] R.N. Sidebotham
       "Volumes: The Andrew File System Data Structuring Primitive"
       Technical Report CMU-ITC-053, Information Technology Center,
       Carnegie Mellon University, (August 1986)