       ---------------------------------------------------------------- 
 
                     HARD DISKS - THE ESSENTIAL ACCESSORY 

       ---------------------------------------------------------------- 

       A simple observation: the first accessory any computer user 
       should buy is a hard drive. On a dollar for dollar basis, 
       nothing speeds up processing and expands convenience like a 
       hard drive. The bad news? The substantial storage capacity of 
       a hard drive contains the seeds of data catastrophe if you 
       don't understand how to CAREFULLY maintain it. Some reference 
       information pertaining to larger desktop hard drives as well as 
       smaller laptop drives has been retained since drives in both 
       computers are similar in function although different in form and 
       size.

       Many computer operations tend to slow down at the critical 
       bottleneck of information transfer from computer memory (RAM) to 
       disk. The faster the transfer, the faster the program operates. 
       Nine times out of ten it is the bottleneck formed when 
       information flows to or from a disk that makes you and your 
       program wait. This is where a hard drive really shines - speed. 

       Given the best possible treatment, a hard drive should last from 
       eight to fifteen years. Drive manufacturers typically suggest 
       30,000 to 70,000 hours of routine life for a hard drive before 
       failure. If you kept your PC on for a 40 hour work week for 50 
       weeks - you could expect about 15 years of service for a drive 
       rated at 30,000 hours. Some hard drive users even suggest 
       leaving the drive on continuously or alternatively turning it on 
       in the morning and off at night to minimize motor and bearing 
       wear since it is the starting shock which wears most heavily on 
       a drive. However, given marginal treatment or abuse, you can 
       expect about fifteen minutes of service followed by a $250 
       repair bill. Obviously a little information about hard drives 
       and their care can't hurt. 
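
       The arithmetic behind that service estimate is easy to check. 
       A quick back-of-the-envelope calculation, shown here in Python 
       purely for illustration, using the figures quoted above: 

           # Rough service life for a drive rated at 30,000 hours,
           # assuming the 40 hour week and 50 week year quoted above.
           rated_hours = 30000
           hours_per_year = 40 * 50              # 2,000 hours per year
           print(rated_hours / hours_per_year)   # -> 15.0 years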

       ---------------------------------------------------------------- 
 
                TECHNOLOGY 101 - BOOT CAMP FOR HARD DRIVE USERS 

       ---------------------------------------------------------------- 

       What is a hard drive? If you have worked with a floppy disk you 
       already understand something about hard drives. Basically the 
       hard drive unit is a sealed chamber (sealed against dust and 
       dirt) which contains rapidly spinning single or multiple stacked 
       platters. The platter(s) are similar to a floppy disk in that 
       they store information magnetically - data can be erased and 
       rewritten as needed. The trick is, however, that the storage 
       capability is immense on a hard drive. 

       A floppy typically holds about one third of a million computer 
       characters (360,000 or 360K bytes). A hard drive can commonly 
       hold 20 to 40 million (or more!) bytes or computer characters. 
       In addition, the hard drive motor spins the magnetic platter 
       quickly so that information is transferred rapidly rather than 
       at the tedious rate of a leisurely spinning floppy. A small 
       read/write head hovers and moves above the hard drive magnetic 
       platter much like a phonograph needle above a record. The 
       difference is that the read/write head of the hard drive rides 
       slightly above the platter on a thin cushion of air. In the 
       floppy drive mechanism, the read/write head is in direct contact 
       with the floppy. All hard drives are installed in two parts: the 
       drive (a box containing the disk and read/write head) and the 
       controller, a circuit board which may be integrated into the 
       drive or installed as a separate board. The hard drive stores the 
       information. The controller assumes the role of a high speed 
       "translator/traffic cop" to help the hard drive move its massive 
       amount of information smoothly. 

       Back to the magnetic platter for a moment. The read/write heads 
       are mounted on a moveable arm and each position of the head 
       above the platter defines a circular TRACK just like the track 
       of a phonograph record. As the arm changes positions, different 
       circular tracks are traced magnetically upon the surface of the 
       platter. Most hard drives have several read/write heads which 
       service both the top and bottom of each platter. A set of tracks 
       on different platters define a vertical CYLINDER somewhat like 
       the surface of a tin can whose top and bottom are missing. Large 
       hard drives can have six or more platters and therefore 12 or 
       more sides for information storage. Each track is also divided 
       into equal-sized portions of data called SECTORS, something 
       like slices of the outer edge of a circle. Finally, the 
       complete collection of tracks, sectors and cylinders defines 
       the entire VOLUME of the hard disk. 
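
       Multiplying the pieces of that geometry together gives the 
       drive's capacity. A small sketch in Python; the geometry below 
       (615 cylinders, 4 heads, 17 sectors per track, 512 bytes per 
       sector) is an assumed example typical of a 20 megabyte class 
       drive, not a figure taken from this tutorial: 

           # Capacity = cylinders x heads x sectors per track x bytes
           # per sector. The numbers are an assumed example geometry.
           cylinders, heads, sectors_per_track, sector_bytes = 615, 4, 17, 512
           capacity = cylinders * heads * sectors_per_track * sector_bytes
           print(capacity)                   # about 21.4 million bytes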

       Each piece of data has an address which tells the read/write 
       heads where to move to locate that specific piece of 
       information. If you tell the read/write heads to move to and 
       hover over a specific track, sooner or later your data will pass 
       beneath it. Since you can move the heads directly to a given 
       track quickly, the early nomenclature for a hard drive was the 
       DASD or DIRECT ACCESS STORAGE DEVICE. 

       Movement of the read/write head arm takes a little time. For 
       this reason an ACCESS TIME is associated with hard drives and 
       stated in advertising and specification sheets. Generally this 
       time is stated as the AVERAGE ACCESS TIME and is frequently in 
       the thousandths of seconds or millisecond range which is fast 
       indeed. The old IBM XT class machines featured access times 
       around 85 milliseconds, with the AT class machines featuring 
       access times around 40 milliseconds. Newer hard drives post 
       times in the 15 to 28 millisecond range. Remember, the faster 
       you can move the read/write heads, the faster you can get to 
       your data. 

       The AVERAGE WAIT TIME is a less frequently discussed number but 
       equally interesting. Once the read/write head is positioned over 
       the track holding your data, the system must wait for the 
       correct sector to pass beneath. Obviously, the average wait time 
       is one half the time it takes for a full rotation of the 
       platter. This figure is rarely given in advertisements, is 
       usually comparable for most drives of the same type, and is 
       generally much shorter than the access time. Speed matters to a 
       hard drive! The average wait time can be found if you dig 
       through the specification sheet or write to the manufacturer. 
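
       Since the average wait is half a rotation, you can estimate it 
       from the spindle speed alone. A small sketch in Python, 
       assuming a 3,600 RPM spindle - a common figure for drives of 
       this era, though not one quoted in this tutorial: 

           # Average wait (rotational latency) is half of one revolution.
           rpm = 3600                        # assumed spindle speed
           full_rotation_ms = 60000 / rpm    # about 16.7 ms per revolution
           average_wait_ms = full_rotation_ms / 2
           print(round(average_wait_ms, 1))  # -> 8.3 ms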

       An extension of this logic brings us to consider the INTERLEAVE 
       FACTOR for a disk. Generally a hard drive reads and writes 
       information in sectors of the same, repeatable size such as 512 
       bytes. However programs and data files are usually much bigger 
       than this and obviously must be scattered onto many sectors. The 
       problem is that the disk rotation is much too fast for a large 
       file to be written in perfectly contiguous sectors on the same 
       track. If you tried to read or write a track one sector right 
       after the next, the central processing unit chip or CPU could 
       not keep up with the flow of data. 

       The solution is to place sectors to be read in ALTERNATING 
       fashion which gives the CPU time to digest the data. Thus if a 
       circular track on the platter had 8 sectors you might number and 
       read them in this order: 1,5,2,6,3,7,4,8. This way the CPU has a 
       "breather" in between each sector read. The number of rotations 
       it takes the heads to read ALL tracks in succession is the 
       INTERLEAVE FACTOR. Slow CPU chips can force a disk to use an 
       interleave factor of 3 or even 4. A faster processor might be 
       able to handle a disk interleave of 1:2 (such as 80286 processor 
       chips) or even 1:1 (such as 80386 processor chips.) It is 
       possible to low level format a disk and change its interleave 
       factor; but if the CPU cannot keep up, the adjustment is 
       worthless. To a processor operating in millionths of a second, 
       waiting for a hard drive which operates in thousandths of a 
       second, or a floppy drive which operates in tenths of a second 
       or longer, is wasted time. The obvious point is that when using 
       a hard drive you need to organize files for minimum time delays 
       for the processor. 
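
       The 1,5,2,6,3,7,4,8 ordering above is what you get by stepping 
       around the track and skipping slots that are already taken. A 
       minimal sketch in Python of that numbering scheme - an 
       illustration of the idea, not the routine any particular 
       formatting program uses: 

           # Lay logical sector numbers onto physical slots for a given
           # interleave factor. With 8 sectors and a factor of 2 this
           # reproduces the 1,5,2,6,3,7,4,8 ordering described above.
           def interleave_layout(sectors, factor):
               layout = [None] * sectors
               slot = 0
               for logical in range(1, sectors + 1):
                   while layout[slot] is not None:   # skip occupied slots
                       slot = (slot + 1) % sectors
                   layout[slot] = logical
                   slot = (slot + factor) % sectors
               return layout

           print(interleave_layout(8, 2))   # -> [1, 5, 2, 6, 3, 7, 4, 8]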

       The outermost track on a disk always begins with the boot 
       record which loads the main portions of DOS into the machine. 
       Following this 
       is the file allocation table or FAT which we discussed in 
       earlier tutorials. The FAT maintains data in CLUSTERS which, for 
       an XT class machine are 4096 bytes. On the AT class machine the 
       cluster size is 2048 bytes which is much more efficient and less 
       wasteful of disk space. Following the FAT are the sectors for 
       the root directory of the hard drive. Each directory entry is 32 
       bytes in length. Curiously, and to our good advantage, unused 
       entries in the directory have a unique first character byte. 
       When a file is deleted through DOS, ONLY that first character 
       is reset. 

       Fortunately this allows various utility programs to attempt to 
       recover the deleted file since ONLY the directory data is 
       altered but NOT the file itself. However, as time goes on and 
       additional files are added to the disk, the original file is 
       overwritten by new information. This is why you need to act 
       immediately if you discover you have accidentally deleted a 
       file. An advantage to the use of the FAT is that files do not 
       have to be given a fixed amount of space on a disk - they can 
       use as many or few clusters as needed. The downside is that the 
       file pieces can be scattered wildly over the surface of the disk 
       in a non contiguous fashion which only the FAT can track. This 
       means more read/write head motion and more wasted time as far as 
       the CPU and the performance of your program is concerned. 
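
       As an aside on the cluster sizes mentioned above, the space 
       cost of rounding every file up to whole clusters is easy to 
       estimate. A rough sketch in Python; the 5,000 byte file size is 
       an arbitrary example, not a figure from this tutorial: 

           # Slack = space left unused in the last cluster a file occupies.
           import math

           def slack(file_bytes, cluster_bytes):
               clusters = math.ceil(file_bytes / cluster_bytes)
               return clusters * cluster_bytes - file_bytes

           print(slack(5000, 4096))   # -> 3192 bytes wasted (XT cluster)
           print(slack(5000, 2048))   # -> 1144 bytes wasted (AT cluster)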

       Additionally, if you have many deleted files within the 
       directory, DOS must search tediously through each one from top 
       to bottom of the directory to find a match for the file you are 
       trying to locate. Obviously, then, programs and data of high use 
       should have their directory entries located near the top of the 
       directory to speed the search. Each time the read/write head 
       moves takes time: searching the directory and finding the pieces 
       of the scattered file all take movement of the read/write arm. 
       There are several ways to unfragment files and boost disk 
       performance, and we'll talk about those techniques in a bit. 

       ---------------------------------------------------------------- 
 
               HARD DISKS - STRATEGIES FOR TURBOCHARGED RESULTS 

       ---------------------------------------------------------------- 

       Before we examine methods for improving hard drive performance, 
       several simple "care and feeding" precautions should be 
       mentioned. 

       Hard drives are touchy if mistreated! Once brought up to speed, 
       a hard drive should never be bumped or moved. The read/write 
       head (similar to the phonograph needle resting on a record) will 
       smash or chip into the surface of the spinning hard drive 
       platter and take your data with it. Either the head or the 
       magnetically coated platter can be permanently damaged. Allow 
       the hard drive to come to a complete stop before moving the 
       computer. 
       
       In addition always use a "parking" software package to move the 
       read/write head to the safety zone before turning off the 
       computer. A parking program usually accompanies most computers 
       which have hard drives installed or can be obtained from 
       commercial or shareware sources. A few drives automatically park 
       the heads when turned off but this tends to be a rare feature 
       seen mostly on high priced hard drives. 

       Always maintain copies of data and programs outside the hard 
       drive by "backing up" onto a floppy or tape. How often should 
       you back up your files? Daily if you use the computer to produce 
       many changes to important documents. Weekly backup is probably a 
       bare minimum considered reasonable for occasional computer 
       users. Other computer users maintain vital data on floppies or 
       other backup systems and use the hard drive to store only 
       programs and applications such as a spreadsheet or database. 
       Backups are 
       a good idea even for floppy disk systems which have no hard 
       drive. 
       
       Make two copies of every file regardless of whether you have a 
       hard drive or not. Some shareware and commercial utilities ease 
       the backup chore by copying to a floppy only those files which 
       have been changed or updated since the last backup was 
       performed. They ignore files which have not changed and thus do 
       not require copying again. This can save a lot of time when 
       backing up valuable files from your hard drive to a floppy for 
       safekeeping. 
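
       The idea behind those utilities can be sketched in a few lines. 
       A hedged example in Python which copies only the files changed 
       since the last backup; the paths and the time stamp file are 
       hypothetical, and real DOS backup utilities typically rely on 
       the file's archive attribute rather than a time stamp: 

           # Copy only files changed since the last recorded backup.
           import os, shutil

           SOURCE = r"C:\DATA"                     # hypothetical paths
           TARGET = r"A:\BACKUP"
           STAMP  = os.path.join(TARGET, "LASTBACK.TXT")

           last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0

           for name in os.listdir(SOURCE):
               src = os.path.join(SOURCE, name)
               if os.path.isfile(src) and os.path.getmtime(src) > last_run:
                   shutil.copy2(src, os.path.join(TARGET, name))

           open(STAMP, "w").close()        # note the time of this run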
              
       Hard drives should periodically be reorganized (files 
       unfragmented) to ensure speedy retrieval and access to data. 
       Inexpensive or free software programs known as "disk file 
       unfragmenters" do this job nicely. As disk files are created and 
       deleted, blank spaces and unused sectors begin to build up. 
       
       Gradually files are broken into pieces and scattered over the 
       many tracks and sectors of the disk. This happens to both 
       floppies and hard drives, but is especially annoying on hard 
       drives because of the dramatic increase in time it takes to load 
       a program or data file. The file allocation table is the 
       culprit, since all data is packed away in the first and 
       handiest sectors on the drive which the FAT can find. 

       The FAT allows files to be fragmented down to the cluster level. 
       One way to unfragment a disk is to copy all of the files off to 
       floppies and then recopy them back to the hard drive - a tedious 
       nuisance at best. You would do this with the DOS XCOPY or COPY 
       commands but not DISKCOPY since this would retain the tracks and 
       their fragmentation as you first found them. 
       
       Defragmenting programs perform this task without requiring 
       removal of the files from the hard drive. They perform their 
       magic by moving around the clusters of a scattered file in such 
       a way as to reassemble it into contiguous pieces again. Some 
       customization is permitted with the more sophisticated 
       "defragmenting" programs. For example, subdirectory files can be 
       relocated after the root or below a different subdirectory or, 
       in another example, high use files might be placed higher in the 
       directory listing for faster disk access. 
       
       The first time a defragmenting program is run may require 
       several hours if a hard drive is large and badly fractured with 
       scattered files and clusters. It is a good idea to backup all 
       essential files prior to "defragging" just in case there is a 
       power failure during a long "defrag". Subsequent runs of the 
       "defragger" take only a few minutes or so since the heavy work 
       was done earlier. Essentially, "defragging" the hard drive is a 
       maintenance function which should be done regularly, perhaps 
       weekly. It is not a substitute for caching, RAMDISKS, or 
       buffers. 
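
       To see why fragmentation costs time, compare the head travel 
       needed to read a scattered file with the travel needed once its 
       clusters sit side by side. A toy illustration in Python with 
       made-up cluster numbers; a real defragmenter moves clusters on 
       the disk rather than sorting a list, so this only illustrates 
       the size of the saving: 

           # Total head travel while chasing a file's cluster chain.
           def head_travel(chain):
               return sum(abs(b - a) for a, b in zip(chain, chain[1:]))

           fragmented = [212, 7, 390, 45, 391, 8]   # scattered clusters
           contiguous = sorted(fragmented)          # laid side by side

           print(head_travel(fragmented))           # -> 1662 cluster widths
           print(head_travel(contiguous))           # -> 384 cluster widths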

       Yet another possible avenue to improve disk performance is that 
       of changing the disk interleave factor which we will discuss a 
       bit later in this tutorial. By way of brief introduction: the 
       disk interleave indicates how many revolutions of the magnetic 
       platter are required to read all the sectors of data from the 
       spinning track. A ratio of 1:1 means all data can be read 
       sequentially, one sector of data after another. 

       There is some overhead time required for the read/write head to 
       zip to the FAT area of the disk (if it is not in a cache or 
       buffer) to determine location of the next sector along the disk 
       track. 
       
       For example, five clusters of data on a track might require four 
       trips back to the FAT track to find the cluster addresses even 
       on a completely defragmented disk. We will talk more about 
       clusters and defragmenting a bit later in this tutorial. 
       
       Nevertheless, depending on the speed of your central processor 
       or CPU, using a program which tests and alters the interleave 
       factor, IF THIS CAN BE DONE, may yield better performance. Most 
       interleave adjustment software first performs a test to 
       determine the current interleave, the possible changes and of 
       course how much performance time might be gained. A few of these 
       packages can alter the interleave with the files in place but 
       you should backup truly essential files before starting the 
       process. Interleave factor adjustments are determined mainly by 
       the CPU speed, NOT the disk speed. Thus a fast AT or 80386 
       equipped machine will more likely be able to take advantage of 
       an interleave adjustment. 

       Tinkering with a hard drive for optimum results might best be 
       divided into two categories: DISK SUBSTITUTION and DISK 
       ALTERATION. DOS allows two clever ways of substituting RAM 
       memory for disk memory. 

       In the first, using BUFFERS, the small CONFIG.SYS file on your 
       hard drive is modified to contain a buffers statement. A sample 
       might be: BUFFERS=20. A DOS buffer is an area of RAM memory 
       capable of holding a 512 byte mirror image of a disk sector. 
       This allows DOS to quickly search the buffer area for frequently 
       used data instead of the slower disk. In the older XT class 
       machine, if you did not specify a buffer size, DOS defaulted to 
       2 buffers while later versions of DOS default to about 10 
       buffers. Most users settle on about 20 buffers but you can 
       specify up to 99 with current releases of DOS. But you don't get 
       something for nothing. If you used the full 99 buffers 
       available, you would soak up roughly 50K of your main RAM 
       memory (99 x 512 bytes)! The downside of using buffers is that 
       more is not necessarily better. 

       Unfortunately, DOS searches the buffer area of RAM sequentially 
       rather than logically so if DOS requires data which is in the 
       buffer area, it will search each 512 byte area in sequence from 
       top to bottom even though the data it needs may be at the end of 
       the buffer. Logically, then, there is an optimum number of 
       buffers - too many used with a small program and you can slow 
       things down, not enough and DOS will be forced to go out to the 
       disk to retrieve what it needs. If you rarely use the same data 
       within a program twice but load lots of different programs and 
       data, a large number of buffers won't help. However if you need 
       frequent access to a certain data file or portion of that file, 
       buffers will help. Portions of the FAT are kept within the 
       buffers area, so dropping your buffers to zero has the damaging 
       effect that DOS must always go to the disk to read the FAT which 
       isn't helpful either. 
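
       Two quick checks on the trade-off just described, sketched in 
       Python. The memory figure assumes 512 data bytes per buffer and 
       ignores the small header DOS attaches to each one; the 
       sequential scan mirrors the top-to-bottom search described 
       above: 

           # Memory cost of a large BUFFERS= setting.
           buffers = 99
           print(buffers * 512 / 1024)        # -> about 49.5K of RAM

           # DOS scans the buffers one after another, so a lookup can
           # touch every buffer before giving up and reading the disk.
           def find_sector(cached_sectors, wanted):
               for position, sector in enumerate(cached_sectors):
                   if sector == wanted:
                       return position        # found after a linear scan
               return None                    # not cached; go to the disk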

       Another way of substituting RAM memory for disk memory involves 
       using a RAMDISK. The idea is to create in RAM memory an entire 
       disk or a small portion of a disk. This works like magic on many 
       machines since the reading of tracks and sectors takes place at 
       the high speed of RAM memory rather than the mechanically 
       limited speed of the read/write heads on a floppy or hard drive. 

       But be careful. Three areas of difficulty can arise. First you 
       must remember to take the data from a floppy or hard drive and 
       move it into the RAMDISK. Many people do this automatically from 
       within an AUTOEXEC.BAT file or may have several floppies, each 
       with a different RAMDISK configuration depending on the task at 
       hand. Copying data to the RAMDISK usually moves along briskly. 
       Secondly you must sacrifice a large area of memory for the 
       RAMDISK which can no longer be used by your main program. Users 
       of computers with extended or expanded memory usually choose to 
       put their RAMDISK in the extended or expanded memory area of RAM 
       so that precious main memory is not lost. Still, a small RAMDISK 
       can soak up 64K of RAM memory, and one or two MEG RAMDISKS are 
       common for many users. The third and most serious problem when 
       using RAMDISKS is that they are volatile - switch off the 
       machine or experience a power failure, and your data is lost 
       forever! Rather than residing safely on a magnetic disk, the 
       data is "floating" in RAM memory and should be - MUST BE! - 
       written to a disk before the machine is powered down. 

       Many applications fly with a RAMDISK. Users of word processors 
       find that moving the spelling checker and thesaurus to the 
       RAMDISK speeds up things considerably since these are used 
       heavily in a random manner. Spreadsheet users find that reading 
       and writing short data files to RAMDISKS is a boon. Programs 
       which use overlay files or temporary files as well as 
       programming compilers benefit from RAMDISK use. Batch files 
       which are disk intensive as well as small utilities really 
       sprint when placed on a RAMDISK. Basically, any program file 
       which is frequently used and loaded/unloaded repeatedly to a 
       disk during normal computer operation is an excellent candidate 
       for RAMDISK placement. DOS includes a RAMDISK driver which is 
       loaded by placing the statement DEVICE=VDISK.SYS (or 
       DEVICE=RAMDRIVE.SYS if you are using MS-DOS) in your CONFIG.SYS 
       file. 
       Your DOS manual details the specifics such as stating the size 
       of RAMDISK and giving it a drive letter. You must still copy 
       your target files into the RAMDISK and place it in the search 
       path (with the PATH=  command) as we mentioned in a previous 
       tutorial. And the RAMDISK should always be the first drive 
       letter mentioned in the path command so that DOS searches it 
       first for optimum results. 
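
       The daily routine described above - load the RAMDISK at 
       startup, save everything back before power-off - is normally 
       scripted in AUTOEXEC.BAT, but the logic is simple enough to 
       sketch. A hedged illustration in Python; the drive letter, 
       directory, and file names are hypothetical: 

           # Copy working files into the RAMDISK, and copy results back
           # out before shutdown, since RAM contents vanish at power-off.
           import os, shutil

           RAMDISK    = "D:\\"               # assumed RAMDISK drive letter
           HARD_DISK  = "C:\\WP"             # assumed source directory
           WORK_FILES = ["SPELL.DIC", "REPORT.DOC"]

           def load_ramdisk():
               for name in WORK_FILES:
                   shutil.copy2(os.path.join(HARD_DISK, name),
                                os.path.join(RAMDISK, name))

           def save_before_shutdown():
               for name in WORK_FILES:
                   shutil.copy2(os.path.join(RAMDISK, name),
                                os.path.join(HARD_DISK, name))

           # load_ramdisk() at startup; save_before_shutdown() before
           # the machine is powered down.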

       Yet another area of investigation is that of CACHE software. 
       Essentially a CACHE is an extension of the buffers idea we 
       discussed earlier. But the twist is that the CACHE is searched 
       intelligently by a searching algorithm within the CACHE software 
       rather than from top to bottom as with the more typical DOS 
       buffer search system. Disk CACHE software can be obtained as 
       either commercial software or shareware. As with a RAMDISK, the 
       CACHE requires a chunk of RAM memory to operate. This can be 
       extended memory, expanded memory or main RAM memory. Some 
       manufacturers include a CACHE program with the software package 
       or DOS disk. A CACHE is a sophisticated type of RAMDISK, in a 
       rough sense. 

       CACHE software allocates a large area of memory for storage of 
       frequently used disk data. This data is updated by an 
       intelligent CACHE search algorithm in an attempt to "guess" 
       which tracks of a disk you might read or need next. The CACHE 
       also stores the most frequently used disk data and attempts to 
       remove less frequently used data. Whenever DOS requests disk 
       data, the CACHE software first tries to fill the order from data 
       currently stashed in the CACHE which prevents a slower disk 
       search. 
       
       When data is written from the program to the CACHE, first a disk 
       write is done to prevent data loss in case of power failure and 
       then the data is stashed in the CACHE in case it is needed 
       again. Usually the hard drive data is the target of the CACHE 
       activity, but a floppy disk could also be cached. All CACHE 
       software allows you to allocate the size of the CACHE as well as 
       the drive or drives to be cached. And some even allow you to 
       specify exact files or data to be cached. The key is that high 
       use data lives in RAM memory which keeps tedious disk access 
       times low. In general, if your computer has a megabyte or more 
       of memory and a speedy processor such as an 80286 or 80386, a 
       CACHE, a RAMDISK, or both will improve performance. 
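
       A cache of this kind boils down to two rules: keep the most 
       recently used sectors in memory, and write every change through 
       to the disk immediately so a power failure loses nothing. A 
       minimal sketch in Python; the read_from_disk and write_to_disk 
       routines stand in for real disk I/O and are assumptions, not 
       part of any actual CACHE package: 

           # Least-recently-used sector cache with write-through.
           from collections import OrderedDict

           class SectorCache:
               def __init__(self, capacity, read_from_disk, write_to_disk):
                   self.data = OrderedDict()      # sector number -> 512 bytes
                   self.capacity = capacity
                   self.read_from_disk = read_from_disk
                   self.write_to_disk = write_to_disk

               def read(self, sector):
                   if sector in self.data:
                       self.data.move_to_end(sector)   # recently used
                       return self.data[sector]
                   block = self.read_from_disk(sector) # not cached: hit disk
                   self._stash(sector, block)
                   return block

               def write(self, sector, block):
                   self.write_to_disk(sector, block)   # write through first
                   self._stash(sector, block)          # then keep a copy

               def _stash(self, sector, block):
                   self.data[sector] = block
                   self.data.move_to_end(sector)
                   if len(self.data) > self.capacity:
                       self.data.popitem(last=False)   # evict oldest entry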

       As we leave hard disk boot camp, let's finally look at hard 
       drive formatting processes. Two basic formatting operations are 
       of concern: physical formatting or low level formatting and 
       logical or high level formatting. When you use the format 
       program on a floppy disk both low level and high level 
       formatting are accomplished. On a hard disk, the FORMAT command 
       performs only the logical or high level formatting; low level 
       formatting is usually done to the drive before shipment. As an 
       aside, the FDISK command of DOS has little to do with either 
       type of formatting, but is a method of partitioning or arranging 
       the data onto the hard drive tracks. Each disk platter is 
       separated into circular concentric tracks where data is stored 
       as we saw earlier. During physical formatting the tracks are 
       divided into individual sectors. High level formatting then 
       groups those sectors into clusters and orders the space for the 
       exclusive use of DOS, making it a bit more analogous to the 
       formatting of a floppy disk. 

       Some software programs of use to hard drive owners: 

       The following two programs perform low level formatting and 
       simple diagnostic routines on a hard drive: 

       Disk Manager and CheckIt 

       Data recovery and "unerasing" programs also containing 
       diagnostic routines are: 

       PC Tools Deluxe, Norton Utilities, Mace Utilities 

       Extensive diagnostic and maintenance/data repair functions as 
       well as interleave alteration and head parking are offered by: 

       SpinRite II, Optune, Disk Technician 

       Shareware programs with unerase functions include: 

       Bakers Dozen 

       Shareware programs with defragmentation capabilities include: 

       SST and PACKDISK. 

       Tutorial finished. Be sure to order your FOUR BONUS DISKS which 
       expand this software package with vital tools, updates and 
       additional tutorial material for laptop users! Send $20.00 to 
       Seattle Scientific Photography, Department LAP, PO Box 1506, 
       Mercer Island, WA 98040. Bonus disks shipped promptly! Some 
       portions of this software package use sections from the larger 
       PC-Learn tutorial system which you will also receive with your 
       order. Modifications, custom program versions, site and LAN 
       licenses of this package for business or corporate use are 
       possible, contact the author. This software is shareware - an 
       honor system which means TRY BEFORE YOU BUY. Press escape key to 
       return to menu.