[ What follows is a thread trace of the Magpie ARCHIVERS sub-board. The
programmers behind PKARC, DWC and ZOO talked shop with some of the beta-
testers and users of these file compression/archival utilities. The text
was captured on June 30th, 1987. ]
From PATRICK BENNETT Msg #6058 *ARCHIVERS* (Rcvd)
To STEVE MANES Sat Dec 20, 1986 11:54am (0:11)
I definitely put my vote in for ZOO...
ZOO (now) has more going for it, and (will) have even MORE going for it. As
for your statement that a (Vanilla PC) ZOO would be excellent... Seeing as
the source is written in (quite) portable C code, porting to other
environments is a fairly simple task. The Unix version should be running
anytime now... One thing that is nice about ZOO (for bbs's especially) is
its Z format files. By adding a z to the extraction parameter, ZOO will
extract the named files into Z files, with a z being placed in the middle of
the file extension. These Z files retain ALL attributes from the archive --
date, time, size, etc. But they are Still compressed... So, for example, you
could set up a utility that would let a user download a single file or multiple
files from an archive, but still retain their compression! Plus, with a Z file
you can easily and QUICKLY move files from one arc to another. VERY QUICK,
since no compression must be done... ZOO has an incredible amount going for
it... I say, YES!
From BILLY ARNELL Msg #6065 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Sat Dec 20, 1986 1:19pm (0:06)
Regarding ZOO utility:
There had been many rumors about it being a problematic format. There were a
ton of messages on boards all over saying that ZOO had some serious problems. Do
you know about this and what all the fuss was about?
Further, since ARC is "still" the standard, adding ZOO'd files might cause
further problems for the moment. There aren't ZOO utilities for other
machines >yet<.
From PATRICK BENNETT Msg #6075 *ARCHIVERS* (Rcvd)
To BILLY ARNELL Sat Dec 20, 1986 5:48pm (0:09)
<ACK> B.S......
The messages that have been spreading around are quite pathetic... Most
started from a certain person in Michigan.... All quite ridiculous. Adding ZOO
files should cause absolutely no problem.... Conflicting? Sure... But
remember the LBR files? You can still find them! But I wouldn't have called
it a 'problem.' 'There aren't ZOO utilities for other machines yet'... Right,
but there WILL be! Conversion to other machines and operating systems couldn't
be easier! ARC's structure is Far too limiting to provide any sort of
flexibility. I could go on, and on... But I would just suggest that you
download it, give it a fair look, and be the judge yourself.... I'll upload a
document written by the author describing ZOO and its future later today...
But enough from me... Why don't you want to switch to ZOO?
From PHIL KATZ Msg #7110 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Mon Jan 12, 1987 8:18pm (0:08)
Patrick,
You state that ZOO "offers so much more" than the arc format. While it is
true that Rahul has stated many neat things which *could be* but have not yet
been added to ZOO, these same features could also be added to ARC files, in a
completely upward compatible manner.
For example, PKARC allows comments to be added to an archive, completely
transparent to ARC, ARCE, older versions of PKXARC, etc. File paths and other
features could be added to ARC files as well, without requiring anyone to
convert the zillions of existing ARC files, while still being compatible with
older ARC programs.
>Phil>
From PATRICK BENNETT Msg #7145 *ARCHIVERS* (Rcvd)
To PHIL KATZ Tue Jan 13, 1987 12:35pm (0:05)
'PKARC allows comments to be added to an archive, completely transparent to
ARC, ARCE, older versions of PKXARC etc...' Yeah, sure... Transparent! But
they are erased! Because they are transparent...
Besides the blab, nice to see you here, Phil!
From PHIL KATZ Msg #7209 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Wed Jan 14, 1987 8:12pm (0:07)
Erased ARC comments
Patrick,
Only ARC 5.12 (and probably 5.20) and Buerg's ARCA will erase archive
comments. ARC will do this only when modifying the archive, not when
extracting it. Similarly, since PKXARC and ARCE are extract programs, they
don't erase the comments either.
The point was that archive comments can be put into an archive without
affecting any extract program. It is unfortunate that Thom did not have the
foresight to include comments in the original ARC format. Of course, it is
very easy to say this in hindsight . . .
>Phil>
From BILLY ARNELL Msg #6064 *ARCHIVERS* (Rcvd)
To STEVE MANES Sat Dec 20, 1986 1:18pm (0:05)
Regarding ARCS et al ...
Most machines have IBM-compatible ARC programs now. This includes the Atari
ST, the AMIGA, and so on. At least those mentioned are 100% compatible with
each other.
What machines would you cater to that you think don't have such utilities?
I think leaving ARC'd and SQZ'd files should be ok.
From PATRICK BENNETT Msg #6076 *ARCHIVERS* (Rcvd)
To BILLY ARNELL Sat Dec 20, 1986 5:53pm (0:06)
But as you remember, these are separately created ARC programs...
What if the Real ARC were suddenly to add a new method of compression? Or
change the arc structure? Remember how many times new versions of arc were
released that were <incompatible> with earlier versions? Take a look at ZOO,
read the docs, etc... And post what you think.. I can't argue forever... I
know a bit, but not everything about it... The author of ZOO should be on here
soon....
From BILLY ARNELL Msg #6097 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Sun Dec 21, 1986 1:05am (0:06)
Pat,
If you think ZOO is that good, and have used it successfully, I have no
reason not to believe you -- and -- will probably start taking a serious look
at it.
One reason I personally like ARC files is that I have 6 computers, of
which 3 can use each other's ARC'd files. No ZOO conversions are yet
available, and that makes a diff to me.
Either way, I will check it out for sure.
From ROGOL DOMEDEFORS Msg #6091 *ARCHIVERS* (Rcvd)
To BILLY ARNELL Sun Dec 21, 1986 12:41am (0:13)
Lots of machines. All Unix and Xenix machines, for example....
.....that don't have some version of ARC ported to them. I have code for ARC,
but making it work even only to decompress files is not only a pain, but a job
requiring a bit too much for your typical small Xenix or Unix machine owner.
For parochial IBM stuff, which encompasses just about all of the files usually
made available in ARC format, it's surely no big deal; the problem comes only
when some IBM-fanatic encodes a general-interest >text< file in ARC. Then
it's a pain.
If you still want an example of a machine lacking such utilities, >my< machine
is one. I have SEA's Unix-Arc source, and it's full of errors, or at least
full of errors unless one assumes that only on an IBM machine is the code
supposed to run.
Other examples are the Sun, the Prime, the Vax...
From Steve's standpoint the idea is to conserve disk space, which is limited.
Also from Steve's standpoint, the idea is to support callers who use all sorts
of hardware to connect with him, not just the parochial models. I think his
idea is an excellent one, even though I expect eventually to hack the buggy
SEA stuff so as to have local de-ARCing capability.
From FHABER Msg #6137 *ARCHIVERS* (Rcvd)
To ROGOL DOMEDEFORS Sun Dec 21, 1986 2:18pm (0:04)
Therefore I again propose that text files be SQueezed, at most (possibly not
even that). There's a Huffman program for every computer I know of.
From JESSE LEVINE Msg #6144 *ARCHIVERS* (Rcvd)
To FHABER Sun Dec 21, 1986 5:39pm (0:07)
There should be NO objection if...
...we were to set up a standard whereby....
all programs that are not in zoo format (or arc - but let's say zoo for now
and make Pat happy) are automatically zoo'd on upload. Then, if you have zoo
you d/l in zoo format -- if you don't, you download without zoo and the file is
expanded on d/l.
I too am resistant to zoo, and like arc and trust it; but this should work
fine. I'd do the same with arc if Steve wants -- whatever. The new and nifty
thing would be automatic compression for storage and automatic expansion for
those with non-ibm computers. -j
From PATRICK BENNETT Msg #6150 *ARCHIVERS* (Rcvd)
To JESSE LEVINE Sun Dec 21, 1986 8:12pm (0:03)
(Ugh, Grunt) Me, Pat, Happy...
From RAHUL DHESI Msg #6647 *ARCHIVERS* (Rcvd)
To ROGOL DOMEDEFORS Wed Jan 1, 1986 10:47pm (0:11)
Zoo works on UNIX!
Rogol, I have just uploaded to this BBS the source code in portable C for Zoo.
It is known to compile and run under 4.3BSD, Microport UNIX, System V, and
Xenix, on several different machines. My goal is to have a SINGLE
distribution that will compile on EVERY machine. Any improvements in
compression technique will be immediately available on every machine. Plus,
the next major release will allow for 255-character filenames and directory
names and much other stuff. It is all described in the documentation
accompanying the source.
I also have available for uploading when I get a chance, the executable Zoo
for the following machines: VAX-11/785 running 4.3BSD; AT&T UNIX PX running
System V Release 3; AT&T 3B2 running System V Release 2.1; Compaq 286 running
Microport UNIX System V Release 2.1.
Next on the list are implementations for the Amiga, Macintosh, and CP/M
machines. It takes time, but it's happening -- right now. Look for it.
From ROGOL DOMEDEFORS Msg #6650 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Jan 2, 1986 1:14am (0:12)
So I see....
.....I've downloaded the code, although not yet tried to compile it.
It seems to me, with admittedly limited experience, that most funnied-up files
on or for *nix systems are various combinations of 'ar', 'shar', and 'sq'.
ARC seems to be pretty much limited to the IBM-PC world, and to other machines
that want to speak to the IBM-PC world. I've found that most of the files
I've ever downloaded from other systems, or tried to download, were either
pure text files that hadn't been funnied-up, or were code files for stuff that
I didn't really want for myself but wanted more for other people, like umodem
for my own BBS. It may be that there isn't a lot of demand for a different or
better archiving-squeezing system in the *nix world. But there'll still
undoubtedly be those who want the ability to work with the IBM-PC one. I was
impressed with the overall description of ZOO that Patrick B. previously
uploaded here, but of course the sole criterion for any conventional archiving
package is whether the community accepts it.
BTW, allow me to congratulate you on your taste in personal initials.
From JOHN COWAN Msg #6852 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Tue Jan 6, 1987 4:44pm (0:04)
Real Soon Now,
i.e. within a few months, there will be a Sun system here, so expect a Sun
Unix version then. I'm >real< interested in the ZOO design and will be
downloading the stuff as soon as possible.
From RAHUL DHESI Msg #6908 *ARCHIVERS* (Rcvd)
To JOHN COWAN Wed Jan 7, 1987 10:01pm (0:07)
Great!! Looking forward to Zoo on the Sun machines!!
A note of interest: A while ago my Department was contemplating buying a new
computer system. They had a choice of accepting a used VAX-11/785 or buying a
new machine. I tried hard to convince them to consider a Sun network or
possibly an Encore. Didn't work, but I did spend some time studying Sun
documentation. I was impressed by its class. So long as you are running
4.2BSD (isn't that what Sun uses, along with some graphics stuff added to it?)
compiling and running Zoo should be easy.
From JOHN COWAN Msg #6991 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sat Jan 10, 1987 12:05am (0:07)
4.3bsd, I hope.
I work for the Financial Systems Dept. of Merrill Lynch & Co. (the holding
company, not the brokerage firm) in a semi-R&D capacity. We've got: a lot of
Xerox Star workstations and appropriate file servers, Metaphor w/s and file
servers, IBM ATs on the Ethernet, a Microvax (to arrive) running Mt. Xinu
4.3bsd, a Vax 8550 running VMS (argh), and the Sun. The point of getting the
Sun is that Metaphor w/s development is done there, as they both use 68K
processors.
From RAHUL DHESI Msg #7053 *ARCHIVERS* (Rcvd)
To JOHN COWAN Sun Jan 11, 1987 6:44am (0:03)
`argh' is exactly the right response to VMS
From ROGOL DOMEDEFORS Msg #6948 *ARCHIVERS* (Rcvd)
To JOHN COWAN Thu Jan 8, 1987 9:40am (0:05)
Just out of curiosity, why all the interest?....
.....I intend to get Zoo working on my machine, but my sole reason for doing
so is to fit in with systems like this one, where files must be crunched in
one way or another to conserve space.
From STEVE MANES Msg #6108 *ARCHIVERS* (Rcvd)
To BILLY ARNELL Sun Dec 21, 1986 2:59am (0:06)
Well, I would still continue to support ARC....
... although the ZOO description really does seem to beat the pants off ARC
for BBS archiving. Incidentally, the author of ZOO is a user here and a prof
of CompSci at Univ of Indiana (I think).
I've unARCed Patrick's Attached file to you so you can read the descrip on
line. Use FIND Attached and the filename "zooplan1.txt" to find that message
and then E)xec Download with ASCII protocol to read it, if you wish.
From JIM FREUND Msg #6073 *ARCHIVERS* (Rcvd)
To STEVE MANES Sat Dec 20, 1986 4:59pm (0:03)
From an ol' time Atarian...
Would this affect text files?
From STEVE MANES Msg #6112 *ARCHIVERS* (Rcvd)
To JIM FREUND Sun Dec 21, 1986 3:18am (0:03)
Lotta points made above your message.
Would which point affect text files?
From JIM FREUND Msg #6121 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Dec 21, 1986 4:54am (0:04)
Lemme rephrase that...
Will this squeezing affect text files, and if so, what can those of us 8-bit
users without an ARC or ZOO utility do?
From STEVE MANES Msg #6123 *ARCHIVERS* (Rcvd)
To JIM FREUND Sun Dec 21, 1986 5:11am (0:05)
An ARCed file, as it is now, is probably inaccessible to you on an Atari.
That's one of the problems I want to fix with this "window" into ARC. I'd like
to be able to unARC files before sending them as a user option.
With that, then I can safely ARC everything that's uploaded.
From THOM HENDERSON Msg #7176 *ARCHIVERS* (Rcvd)
To STEVE MANES Wed Jan 14, 1987 1:01am (0:03)
There's an Atari ST version of ARC now. I think we have a copy.
From STEVE MANES Msg #7186 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Wed Jan 14, 1987 5:38am (0:03)
I've got a few Atari users here.
Would it be possible to upload a copy of ARC for it?
From BILLY ARNELL Msg #7196 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Wed Jan 14, 1987 9:17am (0:03)
There IS an ST ARC version, I use it all the time . . .
From PATRICK BENNETT Msg #7198 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Wed Jan 14, 1987 11:42am (0:05)
Yeah but Thom... What if you were to say, add a new compression method to ARC
now... What would happen to all the ARC's out there for other machines, ST,
Amiga, Mac, etc... Would the same lengthy conversion to other machines still
be involved?
From THOM HENDERSON Msg #7326 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Sat Jan 17, 1987 1:30am (0:05)
What lengthy? Compression/decompression only involves two modules.
All of the changes would be in easily locatable areas. I can't see as it
would be THAT hard.
And anyway, this is more or less a side issue, as the same argument would
apply to ANY program.
From PATRICK BENNETT Msg #7337 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Sat Jan 17, 1987 12:22pm (0:06)
Not true... I was speaking of the lengthy process involved in converting ARC
to other machines... Just look how long it has taken for other machines to
get ARC programs?! ZOO is written in quite portable C code, and conversion to
other machines is a trivial task compared to what ARC aficionados would have
to go through... Yes, you're right, the same argument would apply to Any
program that wasn't written with portability in mind, I would agree....
From THOM HENDERSON Msg #7947 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Sat Jan 31, 1987 5:56pm (0:09)
I don't care if you have portability in mind or not.
The people who wrote the operating systems didn't! Some things just flat out
work differently, and there ain't much you can do about it.
For example, the ONLY problem with porting ARC to UNIX (I am told) is in
figuring out what to do with ends of lines. They are handled differently BY
THE OPERATING SYSTEMS, and anyone porting stuff from one to the other has got
to keep that in mind.
So the UNIX version of ARC has a switch to tell it "when adding this file to
that archive, translate newlines to what MSDOS uses" and vice versa.
Anybody porting any program between dissimilar enough machines is going to
have these sorts of problems, and there is not a heck of a lot the program
author can do about it.
So how long before we get ZOO on the Commodore 64?
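[ Illustration: a minimal C sketch of the newline translation Thom describes
for the UNIX port. The helper name is made up; this is not ARC's actual
code, just the usual way the LF vs. CR/LF difference gets handled. ]

    #include <stdio.h>

    /* Copy src to dst, expanding UNIX '\n' into the CR/LF pairs MS-DOS
       expects.  Going the other way you would simply drop each '\r'. */
    void unix_to_dos(FILE *src, FILE *dst)
    {
        int c;
        while ((c = getc(src)) != EOF) {
            if (c == '\n')
                putc('\r', dst);    /* insert carriage return first */
            putc(c, dst);
        }
    }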
From THOM HENDERSON Msg #7950 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Sat Jan 31, 1987 6:24pm (0:04)
By the way. . .
Just out of curiosity, how much experience do you have with porting programs
from one operating system to another?
From RAHUL DHESI Msg #6725 *ARCHIVERS*
To MANAGEMENT Sat Jan 3, 1987 5:09pm (0:04)
Congratulations on the ARC & Zoo windows
Steve -- good show! This is the first BBS to offer both ARC and Zoo windows.
It's marvellous.
From THOM HENDERSON Msg #6938 *ARCHIVERS* (Rcvd)
To STEVE MANES Thu Jan 8, 1987 4:30am (0:06)
ARC is on quite a few machines, including UNIX
We have versions now for UNIX, Commodore 64s, Amigas, Atari ST's, and the
Tandy 2000, at least. People are even working on porting it to the HP 2000
and IBM VM/370.
As for this business of extracting a file without uncompressing it, you guys
never heard of MARC? It does exactly that. It's not hard to do, either.
From STEVE MANES Msg #6939 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Thu Jan 8, 1987 6:32am (0:18)
Never heard of MARC.
The argument in favor of ZOO has, admittedly, been one-sided although I think
ZOO does have some features I like that are unsupported in ARC, like embedded
comments and greater arc/dearchiving speed. This is no slighting of the
capabilities of ARC because, after all, ARC predates ZOO and has been a most
reliable and positive utility. Its success and exceptional service are
measured by the scarcity of LBR and SQ files on the boards now. ARC also
brought a semblance of order to, at least, the BBS download subculture, and
because of its wide userbase and its conservation of host resources it is
probably the single biggest reason why there are so many excellent download
boards now.
ARC has nothing to apologize for; if it wasn't such an excellent utility there
wouldn't have been a developed market for enhancements to it.
BTW, for people who don't know, Thom is the author of ARC. So we have the
authors of both ZOO and ARC represented here.
I know you're involved in the SEAdog interface (FidoMail) now but are there
any planned enhancements to the ARC program itself? ZOO does have impressive
specs, but I think it's also realistic to presume that the majority
of users of either program are not overly concerned about the 8 or 9% decrease
in compressed file size shown in the comparisons I've seen favoring ZOO against ARC.
The "average" large ARC file seems to be in the 200+k area, which means a
diskspace savings in the neighborhood of 18k, tops. Nothing to get hysterical
about.
Currently, ARC is being challenged by PKXARC. Its appeal is its speed over
ARC, although I understand there are serious problems with the latest
revision. ZOO is appealing for similar reasons, although it is totally
incompatible with existing ARC files. Are there any plans for a competitive
answer to either of them?
From THOM HENDERSON Msg #7040 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Jan 11, 1987 1:34am (0:09)
MARC comes on the ARC disk
So do lots of other things. MARC is a fast archive extractor/merger. It lets
you do things like:
MARC <target> <source> [<filespecs>]
If <target> does not exist, then it is created (thus the extraction business).
For example, if you had an archive named JUNKYARD.ARC, and you wanted to make
a new archive called WASTE.ARC which contains the file WASTE.TXT from
JUNKYARD.ARC, a very fast way to do it is:
MARC waste junkyard waste.txt
No compression/decompression takes place, so it is very fast.
We really shouldn't call it the ARC disk, I suppose. It contains all sorts of
goodies. Including FAKEY (allows automating program responses in a batch
file), TASK (asks a yes or no question, with a time limit), CHMOD (our own
version, lets you be selective), and several other items.
From THOM HENDERSON Msg #7041 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Jan 11, 1987 2:01am (0:40)
ARC vs. ZOO (from the other side)
First of all, let me say that if something better comes along, all well and
good. That's how the state of the art advances, after all. Having gotten
that out of the way. . .
Most of the arguments don't particularly impress me. Taking the points I can
remember:
1) ZOO runs faster than ARC; This is implementation dependent. Granted that
our implementation isn't the greatest at the moment, we've been going more for
portability than speed. We have plans to increase the speed of ARC
significantly in the future. Meanwhile, faster implementations DO exist, and
compare well with ZOO from what I hear.
2) ZOO is more portable; Well, maybe. It's all well and good to talk about
the potential for porting ZOO, but ARC has already been ported to CP/M, C64,
Atari ST, Tandy 1000, and UNIX. It's even now being ported to IBM mainframes
and HP 3000's. Speaking as a person who has ported code many times, there's a
bit of a gap between theory and practice. Get ZOO ported to as many machines
as ARC already runs on, then we'll talk.
3) ZOO will be backwards compatible forever; Personally, I find this one a
bit hard to swallow. Oh, it could be true, but only at the expense of
severely limiting where it can go. ARC is upwards compatible, meaning that
the most recent version will always work, regardless of what version was used
to create the archive it is working on. This gives us tremendous flexibility
in its development.
4) ARC changes too fast, and it's too hard to keep up with it; This is a
holdover from ARC's early development. Yeah, when ARC was pretty new and just
starting to reach a wide audience, we did come out with a few new versions a
bit too quickly, I suppose. Still, those rapid releases were primarily bug
fixes. What were we supposed to do? Not support our software? Other people
(including the guy who reviewed ARC for PC Week) realized what was going on,
and labelled it "good program support." You can't please everybody, I guess.
Meanwhile, ARC has been stable at version 5.12 for close to a year now. This
is changing too quickly? A side point: The same document that said ARC
changes too rapidly also promised new versions of ZOO in the very near future.
NOT A CRITICISM! New software ALWAYS hits a cycle of rapid changes. You
just don't always see it.
5) The ARC code is buggy; Oh really? ARC 5.12 has a grand total of ONE known
bug. If you do a verbose listing, a file that was last modified between noon
and 1PM will be incorrectly displayed as AM instead of PM. In other words, a
file last modified at 12:30 in the afternoon will be reported as last modified
at 12:30 in the morning. This is ONLY in the report produced by the V
command. The file gets the right time when it is extracted, and the Update
and Freshen commands work properly. You can see, I think, why we have not
been in a tremendous rush to fix this bug.
6) ARC archive listings take too long; ZOO, it seems, has the ability to use
an archive rearranged in such a way as to allow a very fast listing of the
contents. No means appears available to actually rearrange things to do it,
but one of these days you might be able to. Does ARC really list an archive's
contents that slowly? It's certainly faster than I can read it, and darned
close to the BIOS limit on how fast you can shove text to the screen. Is this
really a point?
7) When ARC deletes an entry, it actually gets rid of the data; No argument
here. One of the reasons we wrote ARC was because LU did NOT do this, and we
didn't like it.
8) ARC only keeps the most recent version of each file, while ZOO keeps
multiple past versions as well; How many people want to do this? Sounds
great if you want a revision control system, not so hot if you're trying to
save space.
9) Users don't know what to use, what with ARC, LU, USQ, etc.; This was true
before ARC became popular. Is it still true? If anything, the logic of this
one sounds like a good case against ZOO, not in favor of it.
There were other points too, but I don't remember them now. Here are a few
points of my own:
a) ARC is a professional product, backed by a company with three years of
experience doing these things. We support what we sell.
b) SEA has a phone number listed in the book. You can call us if you have any
problems.
c) ARC is an established standard at this point. Not to say that new
standards won't evolve, but they should be a little more clearly superior, I'd
think. (I always reserve the right to be wrong.)
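[ Illustration: point 5 above describes files stamped between noon and 1 PM
listing as AM. A common way such a bug arises is testing hour > 12 instead
of hour >= 12 when picking the a/p suffix; the sketch below is illustrative
C, not ARC's actual source. ]

    #include <stdio.h>

    /* hour is 0..23, min is 0..59 */
    void show_time_buggy(int hour, int min)
    {
        char ap = (hour > 12) ? 'p' : 'a';   /* bug: hour 12 falls to 'a' */
        int  h  = hour % 12;
        if (h == 0)
            h = 12;
        printf("%2d:%02d%c\n", h, min, ap);  /* 12:30 prints as "12:30a" */
    }

    void show_time_fixed(int hour, int min)
    {
        char ap = (hour >= 12) ? 'p' : 'a';  /* noon and later are 'p' */
        int  h  = hour % 12;
        if (h == 0)
            h = 12;
        printf("%2d:%02d%c\n", h, min, ap);
    }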
From STEVE MANES Msg #7046 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Sun Jan 11, 1987 2:56am (0:15)
Thanks for the details, Thom.
Not having been a file-serving sysop until now and not being a lounge lizard
on the download boards my experience with all file archivers is pretty
novice-division. Until recently, I've not paid much attention to crunching
files for my own use... just de-arcing stuff people gave me. But I'm getting
more into the habit of doing so.
The new 5.20 seems like the great equalizer then. Faster operation is all
I've really been concerned about and you do have to admit that the present ARC
is slower than some of its recent competition.
While I have encountered files that refused to be de-ARCed with SEA's ARC and
which were reported to me as being ARCed correctly with the same program, the
files may have been damaged in the interim. The bugs I think you may be
referring to are regarding the source, ARC500SC.ARC. Rogol mentioned that it
was full of bugs and I, too, had problems compiling it even after tweaking the
code for my then-current compiler, Lattice 2.n.
I sympathize with the problems of the many early updates to ARC. Magpie goes
through daily code changes... at least three major bug fixes a week. Perhaps
ARC was released a bit prematurely, but I also suspect that even if I
cleared Magpie of all known creatures on my machines, and Jesse's, Magpie
would still encounter a few hundred more on other hardware.
The remaining points I'll leave for Rahul to address. This could be an
interesting debate!
From RAHUL DHESI Msg #7057 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Jan 11, 1987 7:28am (0:19)
Good to hear from Thom Henderson
I feel that Thom feels that my ZOOPLAN document was an attack on ARC. In fact,
I wrote that mostly to defend Zoo after Bob Mahoney circulated the ZOOBAD
series of articles.
Portability: This has different meanings to different people. When I say a
program is portable, by that I mean roughly this: If you have a compiler for
the language the program is written in, you should be able to implement it on
your system in about two evenings. Much more than that, and it's not a very
portable program. Zoo hasn't achieved exactly that degree of portability, but
it's much closer to it than the ARC source that is currently in circulation.
The first port of ARC to a different system probably took about a year. After
two months, Zoo already works on about five different systems, and just
yesterday, we got it to do everything but pack archives on the Amiga.
Performance: The portable Zoo doesn't perform as well as the MS-DOS-specific
Zoo. But the MS-DOS-specific Zoo performs much faster than ARC. And, unlike
ARC, Zoo always detects a full disk.
`ZOO will be backwards compatible forever': I didn't exactly say that. But
yes, that's one of my objectives. The next major release of portable Zoo (in
debugging stages) will support 255-character filenames and 255-character
directory names. It will preserve the local timezone of each file. It will
allow for storage of the data and resource forks of the Macintosh, and several
different formats for text files. If it weren't downwards compatible with
current versions of Zoo, it would be a great inconvenience. Therefore it will
be downwards compatible, all the way to Zoo 1.00.
There's nothing wrong with people using ARC, except that it is tied to the
MS-DOS world (e.g. filenames restricted to 11 characters) and there are a lot
of users out there who would like to use the full syntax of their own machine.
Zoo is about to allow that.
Looking forward to ARC 5.20.
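[ Illustration: "always detects a full disk" comes down to checking every
write for success instead of assuming it worked. A minimal C sketch with a
made-up helper name; not Zoo's actual code. ]

    #include <stdio.h>
    #include <stdlib.h>

    /* Write a block and bail out if the write (disk full?) fails. */
    void write_block(FILE *f, const char *buf, size_t len)
    {
        if (fwrite(buf, 1, len, f) != len) {
            fprintf(stderr, "write error -- disk full?\n");
            exit(1);
        }
    }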
From STEVE MANES Msg #7063 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Jan 11, 1987 12:25pm (0:08)
Question:
Re: ARC's 12-char filename limitation (including the '.').... I realize how
limiting this can be for a system that allows very long filenames but, at
the same time, it's just a limitation. A text file, FILENAME.EXT, compressed
into READTHIS.ARC would still uncompress on either an MS-DOS machine or Very
Long Filename machine. However, how would zoo handle this ARC file on an
MS-DOS machine if the compressed text file had a filename greater than 128
chars, which is the maximum length imposed by DOS upon any single argument on
the command line (for redirection necessary to rename the internal filename to
DOS convention)?
From RAHUL DHESI Msg #7079 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Jan 11, 1987 11:06pm (0:11)
Handling very long filenames under MS-DOS
The extended directory structure currently being debugged contains fields for
long filename and directory name. Under MSDOS, the long filename field is
just ignored by the unarchiving program. Under other systems permitting the
long filename, the long filename field will be used if present; otherwise the
standard 11-character filename will be used during extraction. In other
words, all Zoo archives contain the 11-character filenames. In addition, the
long filename is added if the archiver supports it. Downward compatibility is
maintained by keeping the first so many bytes of the directory of each
archived file constant, and that's all that Zoo version 1.00 looks at. A type
field in the directory identifies its extended structure to higher versions of
Zoo.
The same technique prevents Zoo 1.00 from being confused by attached comments
-- it knows to ignore certain fields that are used for maintaining comments.
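[ Illustration: an in-memory sketch of the kind of directory entry Rahul
describes -- a fixed part that version 1.00 understands, plus a typed,
length-prefixed variable part that old readers simply skip. Field names and
sizes are invented for illustration; this is not Zoo's actual on-disk
layout, which is read field by field rather than as a C struct. ]

    struct dir_entry {
        char           short_name[13];  /* 11-char MS-DOS name + NUL      */
        long           compressed_size;
        long           original_size;
        unsigned short date, time;
        unsigned char  var_type;        /* identifies the extended layout */
        unsigned short var_len;         /* bytes of variable part to skip */
        /* variable part follows: long filename, directory name, comment.
           An old reader seeks past var_len bytes and loses nothing. */
    };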
From PHIL KATZ Msg #7112 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Mon Jan 12, 1987 8:40pm (0:04)
SO?
Rahul,
The same extended or long file names could be added to ARC files just as
easily, you know. By the way Rahul, hi.
>Phil>
From RAHUL DHESI Msg #7117 *ARCHIVERS* (Rcvd)
To PHIL KATZ Mon Jan 12, 1987 9:48pm (0:07)
Phil, Phil, Phil
Oh Phil, Phil, Phil! When will you realize that ANY file format can be changed
to add ANY new field, so long as downwards compatibility isn't needed? Your
Squashing is causing quite a bit of controversy for that reason -- everybody
must now revise his ARC extraction program. The same thing will happen if you
extend the ARC format to add long filenames.
Zoo will do it without sacrificing downward compatibility!
And a hearty hi to you too! I can't get on Exec-PC any more until Bob comes
back.
From DEAN COOPER Msg #7291 *ARCHIVERS* (Rcvd)
To STEVE MANES Fri Jan 16, 1987 8:56am (0:13)
Only ONE bug found in ARC 5.12????
Well, I just couldn't let that one slip by in reading this thread... I
spent a lot of time testing ARC to see exactly how it works so that my DWC
archiver could be compatible at least in the User interface... I came across
numerous bugs, although none were major. Have you ever tried converting a file
that was not encrypted into one that is? ARC will apply the password to both
the extract and the add halves of the convert operation. My archiver does this correctly.
Have you ever really played with the wildcard expansion??? It falls apart
in some cases that DOS would handle fine. DOS is poor enough that you could
at least emulate its ability...
This may not be considered a bug, but it drives me crazy... That is, that
you test for the existence of a file when extracting by opening the file for
read only... This causes a program called File Facility to find the file in
another directory even though I'm not extracting there. Please, test by
opening for read/write or something similar...
I also have written in my notes some bug regarding redirecting output
when using the "p" command although I can't remember exactly what that one
was...
Well that's all I can remember right now but I do know I ran in to quite
a few...
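[ Illustration: the existence test Dean asks for, sketched in C. Opening
the target for update ("r+") in the extraction directory fails unless the
file is really there, so -- following Dean's suggestion -- a resident
path-search utility that redirects read-only opens won't report a copy from
some other directory. Hypothetical helper; not ARC's or DWC's code. ]

    #include <stdio.h>

    /* Return 1 if name exists (and is writable) where we are extracting. */
    int file_exists_rw(const char *name)
    {
        FILE *f = fopen(name, "r+b");   /* never creates the file */
        if (f == NULL)
            return 0;
        fclose(f);
        return 1;
    }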
From FHABER Msg #7300 *ARCHIVERS* (Rcvd)
To DEAN COOPER Fri Jan 16, 1987 12:28pm (0:08)
The fact that the open gets fooled by a path enhancer (Dpath, FilePath)
annoys me, too.
My personal bete noire: no one has a flexible ARCTYPE or equivalent with
bidirectional scroll. I use compressed files for document storage, and I
still use .LBR files for this, because a CP/M program, TYPE109, offers
wildcards and bidirectional scrolling for exploration purposes (I still keep
a Baby Blue in this machine).
Actually, I think the modern compressors have missed some of the nice things
in the ancient K&P ARCHIVE for text files. If one wants to keep a bunch of
ASCII documents together under one filename, and search them conveniently,
everything I've seen is lacking.
From DEAN COOPER Msg #7302 *ARCHIVERS* (Rcvd)
To FHABER Fri Jan 16, 1987 1:35pm (0:13)
Archivers and other features plus other stuff...
Say, I've never seen these other programs.... I'll have to take a look and
see what features would be nice to add to DWC... I personally like lots of
features in an archiver....
This is for Rahul: say, I saw around here somewhere that someone said you're a
professor... Is this true??? Give us some more details if you don't mind, as
in this BBS world we never know just who we're talking to. How much of your
time do you spend on ZOO?? Do you work on it strictly in your spare time???
How about you Phil, what do you do??? I happen to work for Pansophic
Systems, Inc.... Just got acquired by them... I am developing a Human Interface
with pull-down menus, dialogs, etc., for a high-end graphics program on an
AT... Previously, I worked at SONY and developed an entire windowing
environment like Microsoft Windows before they came out with theirs. That
product, unfortunately, never got off the ground...
For everybody else out there... take a look at my archiver... I have
uploaded it here, including the source code, for anybody interested.... Check
it out and tell me what you think... I would greatly appreciate any
comment....
Dean W. Cooper
From PATRICK BENNETT Msg #7307 *ARCHIVERS* (Rcvd)
To DEAN COOPER Fri Jan 16, 1987 3:59pm (0:04)
Sure, be glad to! Am going to d/l it before I leave...
Personally I think all of you guys (Vern, Phil, Thom, Dean, Rahul) should get
together and create a new standard....
From DEAN COOPER Msg #7414 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Mon Jan 19, 1987 7:36am (0:04)
New Standard??
Sure, I've said elsewhere that I would be glad to combine my program with
the others to create a new standard.... After all, I don't have much to
lose, as my program is currently not at all well known...
Dean
From PATRICK BENNETT Msg #7419 *ARCHIVERS* (Rcvd)
To DEAN COOPER Mon Jan 19, 1987 11:47am (0:04)
I agree... I think the only person who actually may have something to lose is
Thom 'cause of SEA... But I see nothing wrong with you, Phil, and Rahul
getting together...
From JESSE LEVINE Msg #7314 *ARCHIVERS* (Rcvd)
To DEAN COOPER Fri Jan 16, 1987 7:28pm (0:06)
Dean, two pieces of info...
.....first there's another piece of this ongoing debate on Magpie's sister
board AtPal at 718 238 7855. You may wanna' check in there.
Second, I design user interface for Citibank, and would very much like your
contribution to the User Interface discussion I just decided to start on
AtPal. Would love to discuss your ideas and impressions. -j
From DEAN COOPER Msg #7415 *ARCHIVERS* (Rcvd)
To JESSE LEVINE Mon Jan 19, 1987 7:38am (0:03)
AtPal / User Interface discussion
Sure, I'll drop by real soon and see what's going on over there...
From FHABER Msg #7713 *ARCHIVERS* (Rcvd)
To FHABER Mon Jan 26, 1987 11:19pm (0:03)
See #7667 in FILES for a solution to the ARCTYPE problem. The Neatness
Watchbird has been at work.
From RAHUL DHESI Msg #9701 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sun Mar 1, 1987 11:03pm (0:06)
Hey Dean! I think I found some more bugs in DWC!
1. When invoked as
dwc a dwc /bin/*.*
it lists each file in /bin but complains that it can't find it.
Just saying
dwc a dwc /bin
works correctly and each file in /bin does get added.
2. There seems to be no way of finding out what pathname was saved for the
files, although it can restore the pathname.
From RAHUL DHESI Msg #7248 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Thu Jan 15, 1987 5:34am (0:41)
Further examination of ARC vs Zoo
There is more than one issue at stake. Some issues are:
1. ARC.EXE vs ZOO.EXE. 2. The ARC archive format vs the Zoo archive format.
ISSUE 1. Let's divide this further:
1.1. Performance. 1.2. Portability of ARC vs portability of Zoo.
ISSUE 1.1. In the MS-DOS world at least, ARC's performance is a nonissue,
since nobody uses ARC.EXE any more. The real competition to Zoo is Phil
Katz's utilities.
ISSUE 1.2. ARC has been implemented on a number of other machines. But ARC's
advantage here is only temporary because its source code is highly nonportable
and must be extensively modified for each new system.
Zoo source code is highly portable. I consider it a goal that it be possible
to implement Zoo on a new machine in about one working day. I'm very close to
achieving that goal. ARC is very, very far away from that goal.
ISSUE 2. Let's divide this further:
2.1. The ARC directory format vs the Zoo directory format. 2.2. The ARC
compression algorithm vs the Zoo compression algorithm.
ISSUE 2.1. The Zoo directory format permits additional information to be
added to the archive while maintaining full downward and upward compatibility.
The only way in which enhancements can be made to the ARC format is by either
appending new information to the archive (as PKARC appends comments), or by
making the archive incompatible with earlier archive utilities. Appending
comments to the archive is not trouble-free: If ARC.EXE manipulates an
archive to which PKARC added comments, all comments are lost without warning.
(And the comments added by PKARC are very limited in size.)
The instant that the ARC directory format is modified, all existing ARC
utilities become obsolete. Since ARC was independently implemented on every
different machine, all implementations must be independently revised. By
contrast, when the Zoo directory format is revised, it still works with all
existing versions of Zoo, all the way back to version 1.00. And since all
versions of Zoo are compiled from the same source code, revisions are
immediately reflected on each supported machine.
The Zoo directory format has numerous advantages: (a) Detailed comments may
be added. (b) Zoo can tell the user precisely which version is needed to
fully manipulate an archive. (c) When adding a file, the user can opt to save
any replaced file, or pack the archive and recover the space. (d) Unlimited
expansion of the archive format is possible without making old versions of Zoo
obsolete. (e) Redundant information makes repair utilities possible. (f) Long
filenames and pathnames are possible.
That some of these things have not been implemented is not a valid criticism
of the archive format. The point is that the Zoo archive format permits these
enhancements and the ARC archive format does not.
ISSUE 2.2. Zoo does not use the several different compression techniques used
by ARC archives. Yet on the average Zoo gives better compression than ARC
does. If new compression techniques are developed, Zoo will be able to take
advantage of them much more easily than ARC. This is again because the same
source code will simply need to be recompiled on each machine. Any
compression enhancements in Zoo will be immediately available on each machine.
But if the compression algorithm in ARC archives is enhanced, it needs to be
separately implemented on each machine. For example, a month after Phil Katz
introduced squashing, it is supported only on MS-DOS machines.
UPWARD COMPATIBILITY. Upward compatibility is trivial. If you have both
ZOO.EXE and PKXARC.COM on your disk, you are fully upward compatible with ARC
archives. If you have LUE.COM and ZOO.EXE and PKXARC.COM on your disk, you
are fully upward compatible with the ARC, LBR, and ZOO formats. And if you
have ALUSQ.COM and LUE.COM and ZOO.EXE and PKXARC.COM on your disk, you are
fully upward compatible with squeezed files as well as ARC, LBR, and ZOO
formats.
DOWNWARD COMPATIBILITY. Downward compatibility is NOT trivial. It can be
achieved only if version 1.00 of the program was written with the future in
mind. This is true of Zoo and is true of no other archive program. Barring
changes in the compression algorithm, Zoo 1.00 will be able to extract files
from any Zoo archive. If there is a change in the compression algorithm, Zoo
1.0 will still be able to give a directory listing of the contents of the
archive, and tell the user precisely which version of Zoo is needed to extract
a specific file. No other archive format permits this.
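[ Illustration: the downward-compatibility behaviour described above -- an
old reader can still list entries that use a newer compression method and
report which version is needed instead of failing. A C sketch with invented
field names, not Zoo's actual structures. ]

    #include <stdio.h>

    #define MY_MAX_METHOD 1        /* highest method this old version knows */

    struct entry {
        char name[14];
        int  method;               /* compression method used for the file  */
        int  version_needed;       /* e.g. 150 means version 1.50           */
    };

    void list_entry(const struct entry *e)
    {
        printf("%-14s", e->name);
        if (e->method > MY_MAX_METHOD)
            printf("  (needs version %d.%02d or later to extract)",
                   e->version_needed / 100, e->version_needed % 100);
        putchar('\n');
    }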
From PATRICK BENNETT Msg #7258 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Jan 15, 1987 11:45am (0:03)
Good Answer, Good Answer... 'Ruff
From DEAN COOPER Msg #7292 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Fri Jan 16, 1987 9:08am (0:06)
Upward compatibility/ Archive file format
Rahul, good to talk to you here... You say no other archiver has an upward
compatible file format... Well, mine almost has all that you mention... Now,
I'll just have to finish it off... That's why my current release is a
prototype... when version 1.00 comes out, there will be no more incompatible
changes unless absolutely necessary...
Dean
From THOM HENDERSON Msg #7327 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sat Jan 17, 1987 1:39am (0:11)
Thank you for expressing your opinions.
I can see that we have very different viewpoints on many things.
I agree that backwards compatibility is important. I don't quite see it as
the be-all and end-all of programming, but it certainly is important to try to
maintain backwards compatibility. I don't quite see how you can predict
everything that you're ever going to want to do, but that's another issue.
Meanwhile, you keep making all these statements about how ZOO can grow and how
ZOO will someday do this or that, and how ZOO will never cause this or that
sort of problem. You may well be right. I wouldn't know. I gather that you
come from an academic environment, so perhaps you know more about these things
than I do. I come from a business/commercial environment, so I tend to look
at things a bit differently, perhaps.
I will be interested in seeing if you can still say these things after your
program has been out in the real world for a year or two.
From RAHUL DHESI Msg #7348 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Sat Jan 17, 1987 4:15pm (0:05)
Interesting phrases
I note your use of interesting phrases such as "be-all and end-all", "this or
that", "academic environment", etc. The one thing you have absolutely failed
to do is refute even a single one of my claims. What my background is, or
yours, is utterly irrelevant to this discussion.
From THOM HENDERSON Msg #7949 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sat Jan 31, 1987 6:18pm (0:16)
You have some problem?
I merely pointed out that you and I have different ideas regarding the
relative importance of different things. Is this not true? I speculate that
these differences in viewpoint may stem from differing backgrounds. One might
suppose there are benefits to either of our backgrounds. Some will feel that you are
in a better position to judge due to your academic and theoretical studies.
Others may feel that I am in a better position to judge due to my practical
and commercial activities. Most will probably not see it as relevant.
I also pointed out that ARC has proven itself in the way that it has already
been accepted and has spread so widely, and that ZOO has not yet done this.
And a side comment and observation: One cannot plan for every eventuality, no
matter how hard one tries. It's easy for you to state now that ZOO will
always be backwards compatible. You may find it less easy at some future
date.
PLEASE NOTE the use of the word "may" in the previous sentence. Perhaps you
have indeed so fully allowed for every eventuality that you will never have to
face the difficult choice of adding something at the cost of backwards
compatibility. For your sake, I hope you have, as it is a painful choice to
have to make (as I know full well).
Here's another problem I hope you never face: We released the sources for
ARC, as you released the sources for ZOO. We now have a problem in that
someone else entirely has written an "ARC clone" that incorporates a change
which is not backwards compatible. So you see, one doesn't always have much
control over these things. I hope you never have to decide what to do in that
sort of situation.
From THOM HENDERSON Msg #7042 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Jan 11, 1987 2:06am (0:09)
Yes, there's a new ARC coming
We should be releasing version 5.20 later this month or early next month. New
features include faster compression and smaller archives (a little, at least).
Also, 5.20 will be fully backwards compatible as far as 5.00 (as indicated by
the units digit). This means that versions as old as 5.00 will be able to
read archives created by 5.20.
Let's see, what else. . . The Run command will allow you to pass arguments to
the program being run. I don't remember what all else at the moment. It's
mostly performance improvements. Oh, yeah, we fixed the one known bug. Also
we're spiffing up the packaging a lot. We'll be going to a "standard" 5-1/2
by 8-1/2 inch folded/stapled manual, probably with a vinyl binder.
From STEVE MANES Msg #7047 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Sun Jan 11, 1987 2:59am (0:04)
You mentioned one current ARC feature here not supported in ZOO.
I've never used it but, as I said, I'm not an ARC-wize person. The Run
command seems like a powerful feature.
Rahul: any plans of supporting this?
From RICHARD CLARK Msg #7122 *ARCHIVERS* (Rcvd)
To STEVE MANES Mon Jan 12, 1987 11:27pm (0:06)
Just my two cents but I've pretty much converted my BBS to Zoo and I like the
speed and ease of use Zoo offers. With a 12K decompression file to help
decompress zooed files, no users have complained.
I haven't seen this one either but what is the current cost of Arc compared to
the cost of Zoo? Last time I looked, Arc was user-supported with a
contribution of $35 appreciated. The version of Zoo that I have asks for
nothing. Have things changed?
From DEAN COOPER Msg #7254 *ARCHIVERS* (Rcvd)
To THOM HENDERSON Thu Jan 15, 1987 9:43am (0:08)
ARC-Clone War
Egad... This BBS sure isn't easy to get around in... I hope I'm doing
this right.... Anyway, let's not forget the other kid on the block... namely
ME!! I happen to have an MS-DOS archiver too... called DWC... It's full
featured (has several more features than ARC), compresses better than ZOO,
ARC, or PKARC, and is fast (as fast as ZOO now and soon to be as fast as PKARC).
However, it's incompatible with all the others (I'm so nice to complicate
everything...). Once I figure this BBS out, I'll upload my archiver so we can
have more competition around here....
Dean W. Cooper
From STEVE MANES Msg #7264 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Jan 15, 1987 4:47pm (0:06)
Welcome, Dean!
Vern Buerg was on last night and downloaded the ARC/ZOO discussion so I
hope to hear from him soon too.
I've bumped up your privs so you can take some time figuring the system out.
Hit,
3 GM
to go to the Tutorial and familiarize yourself with the system. To get to
files, type,
2 GM
All the "defined boards" here lie in the Msg# 0-50 region but may also be
accessed by the child menus or by Change Discussion.
From RICHARD CLARK Msg #7346 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sat Jan 17, 1987 3:53pm (0:05)
Arc - Zoo - SQ - DWC
First you have to get your util to run on Suns, 3084s, Amigas, IBM PCs,
Atari STs, and any other machine you can think of. I would like to see what
your utility does. What compression algorithm do you use to get smaller
files? What language do you use to save time?
From DEAN COOPER Msg #7416 *ARCHIVERS* (Rcvd)
To RICHARD CLARK Mon Jan 19, 1987 7:56am (0:18)
DWC - A little info...
Get to run on those other machines, you say?? Well, I must tell you the
history of how I got into this whole mess... One day back in September I was
thinking to myself, "Gee, I bet I could come up with a better compression
algorithm..." I happened to come across some code by Kent Williams showing
old-style Lempel-Ziv compression and thought it would be nice to convert this
stuff into nice modular code, so that the compression function would do its
work through an input function and an output function and you could switch
between compressing files and compressing memory or what have you... To
demonstrate how my modular compression function worked I decided to write a
little front end... and what better front end would there be than a little
ARC-like program. This turned out to be so simple that I decided to flesh out
the front end to match 95% of ARC's functionality. This took about a week.
Well, it so happened that my program was faster than ARC, although
incompatible, as I had never seen their source code. So I thought, "Gee,
somebody out there might like an improved ARC..." I got onto a few boards,
and what did I find?? ARCE, ARCA, PKARC, PKXARC, and ZOO. One thing led to
another, and before I knew it I was spending countless hours making my program
faster and compressing smaller... Now, if there were a good chance that my
program would catch on, then I might try to port it around... but right now,
I'm just seeing if anybody is that interested.
Moreover, in my zeal to make my program faster, I just happened to have
thrown out some of the modularity and portability.
Currently, however, my program is totally 100% Microsoft C, which does
make it a little easier to port... Well, even if nobody else ever uses my
program, I know I will, seeing as I like it a lot more than any of the other
ones...
Dean
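[ Illustration: the modular arrangement Dean describes -- the compressor
pulls bytes through one function pointer and pushes bytes through another,
so the same routine can work on files, memory, or anything else. The
"compressor" below just copies bytes to keep the sketch short; a real one
would run Lempel-Ziv over the same two callbacks. Names are made up; this
is not DWC's code. ]

    #include <stdio.h>

    typedef int  (*get_byte_fn)(void *src);          /* 0..255, or EOF */
    typedef void (*put_byte_fn)(void *dst, int c);

    /* Pass every byte from the reader to the writer. */
    static void crunch(get_byte_fn get, void *src, put_byte_fn put, void *dst)
    {
        int c;
        while ((c = get(src)) != EOF)
            put(dst, c);
    }

    /* File-based adapters; memory-based ones would look just the same. */
    static int  file_get(void *src)        { return getc((FILE *)src); }
    static void file_put(void *dst, int c) { putc(c, (FILE *)dst); }

    int main(void)
    {
        FILE *in  = fopen("in.dat",  "rb");
        FILE *out = fopen("out.dat", "wb");
        if (in != NULL && out != NULL)
            crunch(file_get, in, file_put, out);
        if (in  != NULL) fclose(in);
        if (out != NULL) fclose(out);
        return 0;
    }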
From RICHARD CLARK Msg #7591 *ARCHIVERS* (Rcvd)
To DEAN COOPER Fri Jan 23, 1987 10:07pm (0:05)
Newer Compression code
I would certainly like to run your program, and at this point I think someone
should get going on an article comparing compression programs for Byte magazine!
There doesn't seem to be any money in the compression biz so it seems folks
do this to outdo themselves. It's great to see such a rush for speed and
tight code!
From ROGOL DOMEDEFORS Msg #7622 *ARCHIVERS* (Rcvd)
To RICHARD CLARK Sat Jan 24, 1987 3:39pm (0:05)
The March issue of Doctor Dobb's Journal...
.....will have data compression as a theme. It will include a survey article
on ARC-type utilities. You might also be interested in a book called <Data
Compression> by Gilbert Held, published by John Wiley and Sons, which I
haven't read myself but which I've seen highly recommended.
From DEAN COOPER Msg #7738 *ARCHIVERS* (Rcvd)
To ROGOL DOMEDEFORS Tue Jan 27, 1987 8:12am (0:04)
They probably missed DWC...
It's probably too late to get DWC into that article with all the lead time
that is required... And few people know of my archiver... Too bad... But I'll
read it anyway...
Dean
From RICHARD CLARK Msg #8741 *ARCHIVERS* (Rcvd)
To ROGOL DOMEDEFORS Sun Feb 15, 1987 4:48pm (0:04)
J et al.
Thanks, I'll pick up the Gilbert book at B&N. I think my J sub just ran out.
I'll look for it on newsstands.
From DEAN COOPER Msg #7737 *ARCHIVERS* (Rcvd)
To RICHARD CLARK Tue Jan 27, 1987 8:10am (0:04)
So true... No money here...
Yes, I long ago gave up the hope for any money... My program can be
considered absolutely FREE... I'm just trying for a little name recognition
now... A review would be great!!
Dean
From PHIL KATZ Msg #7429 *ARCHIVERS* (Rcvd)
To DEAN COOPER Mon Jan 19, 1987 8:58pm (0:03)
As Fast as PKARC??
Dean, I am anxiously waiting!
>Phil>
From DEAN COOPER Msg #7450 *ARCHIVERS* (Rcvd)
To PHIL KATZ Tue Jan 20, 1987 7:36am (0:11)
Sure thing!!
Now come on, Phil, you can't possibly think you have the corner on speed...
It'll just take me a little time, that's all... Currently, I've been taking a
break from working on it, doing a little reading instead... But I guess it's
about time to get back to work so I can at least put an end to this incessant
hype I hear about your program being faster than any others... At least you
dropped the bit about compressing better than any others... But don't get
paranoid now, this is just friendly competition...
Say, I hadn't seen any magazines for the longest time until last night, when I
saw this ad in BYTE for a program called SQZ!... It is apparently a memory-
resident program that just compresses spreadsheet files, but it claims it gets
up to 95% compression... I've never seen a spreadsheet file so I don't know
how that compares to Lempel-Ziv... Any ideas?? They say they use some type of
compression based on image compression... Do you know what they're talking
about??
Dean
From STEVE MANES Msg #7460 *ARCHIVERS* (Rcvd)
To DEAN COOPER Tue Jan 20, 1987 4:09pm (0:12)
Funny you should mention "SQZ!"
I was just about to leave something about it. For those unaware of SQZ! (it's
new), it's a memory-resident file squeezing utility which automatically
uncompresses and compresses files when they are called from DOS. It ONLY
works on datafiles used by Lotus 1-2-3, Symphony, Note-It, Reflex, Q&A,
Sideways and Cambridge Spreadsheet Analyst. The program's review (in CIS's
<Online Today> magazine) suggested that it was not a general-purpose file
squeezer. It appears to work by tokenizing common elements in the above
programs to achieve file compression of up to 97% (in a test, a 75,659 byte
Lotus worksheet compressed to just 2,313 bytes). In addition, there is a
significant increase in load and save speed using SQZ!.
SQZ! doesn't appear to be a TSR but a loader and disk i/o environment for the
source program. To use SQZ! with Lotus 1-2-3, for instance, you type "sqz
lotus" and SQZ! takes care of loading 1-2-3 and then removing itself when your
quit out of Lotus. I would prefer that more TSR's take this approach.
Turner Hall Publishing
10201 Torre Ave.
Cupertino, CA 95014
408-253-9607
$79.95, not copy protected.
Requires 30k RAM
From BILLY ARNELL Msg #7481 *ARCHIVERS* (Rcvd)
To STEVE MANES Wed Jan 21, 1987 12:08am (0:03)
SQZ! is quite good, and, yes, I wish more TSRs worked that way!
From PATRICK BENNETT Msg #7489 *ARCHIVERS* (Rcvd)
To STEVE MANES Wed Jan 21, 1987 11:50am (0:03)
Isn't that how HAL is run? 'hal lotus' How would you use SQZ! then?
From JACK Msg #7634 *ARCHIVERS* (Rcvd)
To STEVE MANES Sat Jan 24, 1987 8:02pm (0:10)
It works!
I don't use it on my system, but there is a department where we have it
installed and they are pleased with it. It is great for huge spreadsheets but
it's important to remember a few things. I've been told that if you load a
small spreadsheet using sqz! it actually takes >longer< to load than a
non-squeezed file. Remember, too, that even if you can get a very large file
on a disk by squeezing it, it's not going to have any effect on how it sits in
memory; in other words, if you ain't got an Above Board there are still limits
to how big a spreadsheet you can create. People get misled by 1-2-3's 8,000+
rows and 256 columns (or whatever it is). I don't know the exact max's, but
depending on how you've arranged the data in the worksheet you can only use
maybe 500 lines down and 52 columns across with 640K. (I'm judging by a
worksheet I had a look at the other day.) I keep track of things by checking
/Worksheet Status frequently.
From JOHN COWAN Msg #7496 *ARCHIVERS* (Rcvd)
To STEVE MANES Wed Jan 21, 1987 5:37pm (0:07)
The point about "downward compatibility"
is that, when enhancements are made to the format, old versions of the archive
program will continue to run against the new format without damaging it in
random, unforeseeable ways. PK(X)ARC comments are simply stripped by ARC due
to the lack of downward compatibility. The provision in ZOO for multiple
directory-entry types prevents this from happening, by FORMALIZING CHANGE.
ARC format is rigid, and when changed there is no way to announce to old
programs that a new format is being employed.
From THOM HENDERSON Msg #7948 *ARCHIVERS* (Rcvd)
To JOHN COWAN Sat Jan 31, 1987 6:05pm (0:04)
ARC has no way to let old versions know that there are new features?
I gather that you were not using ARC already when ARC 5.0 came out, so you
never saw the "You need a new version of ARC" message.
From DEAN COOPER Msg #7563 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Fri Jan 23, 1987 7:35am (0:12)
Rahul, read your zoofrm.doc....
Rahul, I read your format Doc last night and will probably add two things from
that to my format... Namely, the tags put in front of the file data and a
size field for the variable part of the directory, which in my case means storing the
size of data appended onto the end of the file data. This way I won't have the
problem that ARC has with Phil's comments... An old archiver reading a newer
DWC archive will simply copy the stuff it doesn't understand on through and
leave it alone. With the tag, I can do what you do (Do you do it currently?)
which is scan a corrupted file for pieces that can be recovered (very nice).
All the other stuff for portability to other systems I'll leave to you. I'll
stick to the MS-DOS market, seeing it pretty big... Thanks for making the
documentation available...
Say, I glanced at ARC's Lempel-Ziv implementation and couldn't help noticing how
primitive it looked... that is, in comparison to how much I improved mine...
They seemed to have mostly just copied the code and, from one comment, didn't
even seem to know exactly how it worked... But they were first and I guess
that's all that matters in this market...
Dean
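[ The "scan a corrupted file for pieces that can be recovered" trick Dean
mentions relies on each member being preceded by a known tag. Here is only a
minimal sketch, with an invented four-byte tag (not the marker any real
archiver uses): slide through the damaged file byte by byte and report every
offset where the tag reappears, so extraction can resynchronize there. ]

    /* resync.c -- sketch of tag-based recovery; the TAG value is invented,
     * not the marker any real archiver uses. */
    #include <stdio.h>

    static const unsigned char TAG[4] = { 0xDC, 0xA7, 0xC4, 0xFD };

    int main(int argc, char **argv)
    {
        FILE *f;
        unsigned char win[4] = { 0, 0, 0, 0 };
        long pos = 0;
        int c;

        if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL) {
            fprintf(stderr, "usage: resync damaged-archive\n");
            return 1;
        }
        while ((c = getc(f)) != EOF) {
            /* keep a sliding 4-byte window of the most recent input */
            win[0] = win[1]; win[1] = win[2]; win[2] = win[3];
            win[3] = (unsigned char)c;
            pos++;
            if (pos >= 4 &&
                win[0] == TAG[0] && win[1] == TAG[1] &&
                win[2] == TAG[2] && win[3] == TAG[3])
                printf("possible member header at offset %ld\n", pos - 4);
        }
        fclose(f);
        return 0;
    }
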
From RAHUL DHESI Msg #7611 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sat Jan 24, 1987 11:08am (0:07)
ARC wasn't first with LZ
ARC was the first, however, to combine the merits of the CP/M LU utilities,
the UNIX ar utility, and the LZCOMP/LZDCMP utilities from Kent Williams and
the UNIX compress utility. ARC was there in the right place at the right
time.
By the way, wouldn't it be nice if we could all work together here? Why don't
you just add better performance to the Zoo format? It's well-documented, has
been adopted by a number of BBS operators, is very portable (Amiga version out
already), and already has some stuff that you are only now thinking to add to
DWC.
From DEAN COOPER Msg #7735 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Tue Jan 27, 1987 7:23am (0:06)
Will have to think about it....
Well, I don't know Rahul... I sort of like my archiver even if nobody wants
to use it... But I'll think about it... The stuff that I need to add isn't
really that much stuff... and anyway, I think I might want to start working on
something else...
Say, I'm writing a little article on Lempel-Ziv compression... I try to
explain it for people just interested in how it works... I'll make it public
domain once I'm done...
Dean
From RAHUL DHESI Msg #7816 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Jan 29, 1987 6:13am (0:03)
Will look forward to the article!
But I still think you should join in the Zoo thing.
From DEAN COOPER Msg #7827 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Jan 29, 1987 7:57am (0:05)
Here's the first part of the article in a three part series...
I just couldn't help myself with putting it in a DWC archive... I hope you
still have my archiver... Please feel free to edit it. I plan to add it to
my distribution file... The article is being put in an in-house company
newsletter... nothing big.
From JOHN COWAN Msg #7845 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Jan 29, 1987 5:55pm (0:04)
Ahem!
Us non-DOS types would still like to read your article, Dean. Please upload
it in either ARC or ZOO so we can have Magpie take it apart.
From STEVE MANES Msg #7848 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Jan 29, 1987 7:22pm (0:03)
HooHaw... a "hint".
Okay.. I'll work on adding DWC to the Show Attached window tonight. Stand by
all...
From RAHUL DHESI Msg #7903 *ARCHIVERS* (Rcvd)
To DEAN COOPER Fri Jan 30, 1987 8:42pm (0:04)
Good article!
I was able to read it through Steve's archive window, but the downloaded
archive could not be extracted with DWC.EXE prototype A2. It said it was an
invalid archive.
Looking forward to more articles.
From DEAN COOPER Msg #8015 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Mon Feb 2, 1987 7:58am (0:05)
Invalid archive?? Oh, boy...
I'll try downloading it myself.... But what method did you use to download
with... I've had a little problem with the extra stuff that XMODEM tacks onto
the end of a file... But that should be taken care of...
Dean
From BRUCE GOLDMAN Msg #7604 *ARCHIVERS* (Rcvd)
To STEVE MANES Sat Jan 24, 1987 5:49am (0:08)
When the going gets tough, - support for PKARC 2.0
It seems that PKARC 2.0 is the first pseudo-compatible ARCer. There are
ARCers like ZOO and DWC that claim no compatibility on one side, and ARC, ARCA
and PKARC (/oc) which claim complete compatibility on the other. Now PKARC
allows for the option of full compatibility or possible incompatibility
depending on the switch setting. However, in my own personal benchmark, PKARC
in compatible mode beats the pants off of ARC and ARCA, and in non-compatible
mode leaves ZOO and DWC in the dust.
Why then do Katz's utilities seem to have brought condemnation on him? I have
attached a short text file called YESPKARC.ARC for you to
examine.
From RAHUL DHESI Msg #7613 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sat Jan 24, 1987 11:14am (0:09)
But PKARC doesn't do very much.
It can't run on ANYTHING except an MS-DOS machine, to begin with. Is your
horizon so narrow that you think nothing else exists?
PKARC also forces the user to conform to MS-DOS syntax. There goes any chance
of PKARC ever running on any machine on which a user wants to use the native
filename syntax of the machine.
PKARC doesn't support directory names. How do you transfer a hierarchy of
files without tedious manual manipulation after dearchiving?
PKARC can't improve without confusing thousands of users of ARC format
archives on many systems -- because the moment PKARC changes its format, it
leaves everybody EXCEPT the MS-DOS user high and dry.
It's fast -- but it's fast only on an MS-DOS machine. How fast is it on other
systems? It doesn't run at all on other systems.
From WILLIAM QUAN Msg #7673 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Jan 25, 1987 5:27pm (0:09)
As for non-MS-DOS support ...
Granted as fact that the world doesn't revolve around MS-DOS machines (I'm an
IBM mainframer originally, and I've seen lots of DEC people here), but I think
that the vast majority of MS-DOS users couldn't care less about having a
compression program that works well, or at all, on other machines. Certainly
there are critical applications where a machine-to-machine compatible program
would be necessary; I don't question that. But I think criticism of non-MS-DOS
support is really only valid in some circles, and not justified in (many/most,
I believe) others.
I am not throwing support behind ARC, PKARC, ZOO, etc., etc., here. Just
wanted to insert a counterpoint to your valid point about the non-MS-DOS
support. Some people would find it meaningless, some definitely would find it
significant.
From BRUCE GOLDMAN Msg #7722 *ARCHIVERS* (Rcvd)
To WILLIAM QUAN Tue Jan 27, 1987 2:20am (0:08)
Just a quickie...
I echo your thoughts that most single machine users really don't care about
what is out there for other machines. I have TRS-80 equipment and ATARI
equipment along with my PC. When I use each one, I look at them as separate
and really don't have any need to relate one to the other, save ASCII
transfers for text or machine modification of basic programs.
ZOO's major point, portability, is probably most important for BBSes that have
DLs for a number of machines. If I want to transfer something from my TRS-80
to my PC, I can go null modem, and it is just as quick as ARCing and then
Dearcing.
From (n/a) Msg #8737 *ARCHIVERS*
To WILLIAM QUAN Sun Feb 15, 1987 4:15pm (0:07)
Compatibility IS needed for other machines
I know of at least 90 Amiga BBS's run on MS-DOS machines each with an average
of 200 Amiga users (my board has 400 active users). These boards depend on
Arc for MS-DOS *and* arc for the Amiga. I think this is a small sampling as
there are other BBS's on various machines faced with the same problems. To
say that a majority of MS-DOS users have no concern with Arc vs. Others is
absurd. With the most active users of Arc being BBS operators, I would think
there is a strong case for compatibility. Rich
From STEVE MANES Msg #8813 *ARCHIVERS*
To (n/a) Mon Feb 16, 1987 4:36pm (0:07)
Problem is that ARC, itself, isn't 100% compatible WITH itself.
It hasn't been since PKARC introduced Squashed files as a default option in
2.0. Just log on to a BBS with a lot of IBM users on a machine unsupported by
PKARC and I'll wager that at least a third of all
ARC files uploaded since 2.0 was released will be as inaccessible to you as if
they were compressed under ZOO or DWC.
As much as I don't like pointing fingers, this really isn't the fault of ARC
but of Phil Katz and his decision to continue using the .ARC extension for
compressed files that are anything BUT ARC-compatible.
From BRUCE GOLDMAN Msg #8849 *ARCHIVERS* (Rcvd)
To STEVE MANES Tue Feb 17, 1987 4:54am (0:09)
Steve, you keep hitting on incompatibility
To say PKARC 2.0 is no longer compatible with ARC is in fact ENTIRELY WRONG.
Using PKXARC 3.4, you can unarc all ARCs!
It is like saying a black and white TV is incompatible with a color TV.
Simply because a B&W TV isn't capable of showing color, should all color TVs
only show B&W to be compatible? Nope, color TVs are upwardly mobile: they can
show both color and B&W.
Agreed, you can pick apart the analogy by saying B&W TVs can still show color
programs in B&W, while ARC can't de-arc squashed files.
But since PKXARC 3.4 is readily available and has proved to be superior to ARC,
it must be considered to be the real ARC!
ZOO may be better, but how are you going to convince the Americans to throw
away their TVs because the Japanese system is superior?
From STEVE MANES Msg #8855 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Tue Feb 17, 1987 6:58am (0:18)
Bad analogy.
PKARC has a problem. I know there's a problem and it seems that some others
who are writing patches to prevent PKARC from writing non-ARC compatible files
also think there's a problem. Many sysops who've posted bulletins directing
users NOT to upload PKARC 2.0 compressed files without a .PKA or some such
extension also think there's a problem. Users of machines unsupported by
PK[X]ARC also think there's a problem. So there must be a problem.
The problem: "compatibility" in this particular arena means
"interchangability". If Phil Katz had opted to make his squashing
algorithm... which really is a dubious improvement... an explicit option we
wouldn't see all this uproar over PKARC 2.0. By default, PKARC creates files
that are incompatible with anything but PK[X]ARC. As nice as his product is,
he didn't create the "standard" and shouldn't be screwing with it. People see
".ARC" and think of it as a generic protocol. And, fact is, for those many
machines that PKARC doesn't support, ARC still remains a standard. Suppose
some Macintosh author wrote a program that compressed files into some
proprietary format unsupported by either ARC or PKARC but which you had
unwittingly downloaded because it had ".ARC" on its tail. Wouldn't you feel a
little cheated?
Whether you choose to agree or not, "ARC" is a folklore "standard" that
predates PKARC. The pseudo-ARC files output by PKARC-Squash remind me of the
hundreds of "Ray's Pizza" parlors around the city.
Lissun, I like PKARC, I use PKARC, and will probably continue to use PKARC
(until, at least, Thom's 5.20 is released). It's a good product. But it even
bothers >me< that I can't tell one ARC file from the next. Before I upload
any ARC file to a BBS now I generally unpack it and repack with the /oc/ot
option for safety. However, I've moved most of my files over to ZOO now.
Rahul's program just feels more "solid" to me.
From BRUCE GOLDMAN Msg #9095 *ARCHIVERS* (Rcvd)
To STEVE MANES Sat Feb 21, 1987 4:54am (0:06)
but perhaps your reality is an analogy.
You seem to be saying (if my understanding is correct) that if PKARC 2.0 is
not compatible with ARC, then you might as well go to ZOO. There is an irony
here: it's as if, because one and one point one aren't the same, it's time to
go to a new mathematical system.
I tend to think the PROBLEM is just an overblown occurrence, inflated by
overzealous supporters of staying true to the original ARC.
From STEVE MANES Msg #9106 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sat Feb 21, 1987 12:17pm (0:24)
Er, the analogy is lost on me.
Okay, one last time, with feeling, and then I'm going to retire from this
debate. I have no qualms with PKARC 2.0's compression scheme. That is
for the authors of these various archivers to argue over. PKARC could write
files >backwards< and I couldn't care less. It's a fine program, runs well
and is generally quite reliable. If I had a beef about PKARC I wouldn't be
running it in an archive window here.
As I've stated more times than I care to count, my issue with PKARC is its
naming convention for its squashed files. That was a mistake, an oversight, a
deception, whatever you want to call it. Before I was hip to PKARC 2.0, which
was only recently, I must have spent a few hours downloading .ARC files from
BBSes which failed to uncompress. In fact, the reason I went with PKARC was
because I thought SEA's ARC was buggy for failing to extract these files.
Such turns out not to be the case. Those "buggy" ARC files are long since
deleted and I wasted a few hours of online time because PKARC decided to make
incompatibility a default in the program.
This argument is not about what is/is not the right way to go or who is the
inheritor of the ARC standard. It's not about who's faster or makes smaller
files. It's about confusing the users of these programs and creating a
general distrust of the ARC format, especially among those groups of people
who own machines unsupported by PKARC. If I happened to stumble upon an
improvement to, say, the Xmodem protocol and implemented it >as< the default
Xmodem here, don't you think people would gripe about that? Sure, I could
say, "Hey.. my algorithm gives you X% better throughput than Christiansen
protocol!" but that's meaningless to those who don't have the compatible
software to use it! Chuck Forsberg's Ymodem (Xmodem-1k) protocol will happily
receive Xmodem CRC so it could be said that Ymodem is "Xmodem-compatible".
But when it sends, it would most likely crash most Xmodem programs. But, so
what? Ymodem is the superior protocol and by your judgment that gives it
license to call itself "Xmodem".
That is my entire issue against PKARC. Whether formally declared or not,
we're all dependent upon certain "standards" and, wittingly or not, take them
for granted. Rather than play with these standards covertly, authors with a
better idea should attempt to create parallel alternatives, just as Rahul and
Dean have done, and not play fast and loose with existing standards at the
expense of confusing the users of these products.
From RAHUL DHESI Msg #8900 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Tue Feb 17, 1987 8:16pm (0:13)
Bruce, I tend to agree with you about PKARC.
Change must inevitably come, and Phil did what he could. Despite the fact
that I'm directly competing with him in this nasty business of "which archiver
is better" I tend to sympathise with him (though I think using the same
extension might not have been a good idea).
PKARC is AS compatible with ARC as ARC is with itself!! Good heavens, doesn't
anybody remember ARC 1.0, and 2.0, and 3.0, and 4.0, and all the iterations in
between? Always upward-compatible, seldom downward compatible! Phil's merely
carrying on a long tradition. People complained when ARC went from 4.5 to
5.0, and they are complaining now because it's gone another step in the exact
same manner.
The only little hitch is that Phil didn't release the specs for squashing
until the pressure got to him. That was a mistake! He should have released
specs and possibly sample code well in ADVANCE of the release of the Squashing
PKARC, to let the BBS windows authors get prepared.
But then again, Thom Henderson never released specs in advance, so perhaps
once again Phil was merely carrying on the tradition.
All this aside, I observe: Zoo users know which program will extract their
archives (a very simple rule: any version of Zoo, Booz, Looz, or Ooz will
do). ARC users don't.
From BRUCE GOLDMAN Msg #9099 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sat Feb 21, 1987 5:04am (0:05)
But arc users do!
You close by saying that ZOO users know which program will extract their
archives but "ARC users don't." This is untrue, as PKARC 3.4 unarcs ALL arcs.
I got my PC Clone in May, 1986, when ARC 5.12 was already released so I am not
familiar with the trials and tribulations of going from SQZ to LBR to ARC 1.0
etc.
From RAHUL DHESI Msg #9112 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sat Feb 21, 1987 3:43pm (0:03)
PKXARC unarcs ALL arcs but....
...only on MS-DOS machines!
From BRUCE GOLDMAN Msg #9174 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Feb 22, 1987 4:08am (0:07)
But that is enough.
For 99% of all transactions, a BBS user prefers to get software that is
written for his machine. Why would I want to download a program for the
VIC-20 from a BBS onto my PC?
For the same reason people knock the compatibility standard of archivers, why
not knock the compatibility of machines and let them all use MS-DOS (or is
that let them eat cake...)?
As I recall, ZOO works on the Amiga and the PC, with promise of CP/M and others.
(Correct me if I am wrong.)
I certainly don't need an archiver that's compatible with an Amiga.
From JOE ZITT Msg #9182 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Feb 22, 1987 11:39am (0:04)
As I've said elsewhere...
my company will soon be using ZOO to transfer files between our MS-DOS
machines here and our UNIX machines overseas. Show me another archiver that
can do that!
From DEAN COOPER Msg #9287 *ARCHIVERS* (Rcvd)
To JOE ZITT Tue Feb 24, 1987 7:28am (0:05)
Well, we're getting 386 Zenix here at work, so I can now port DWC to
Zenix. In the process, I can make my code more portable, separate the machine
dependent parts, and get a Unix version running... Just give me a little
time........
Dean P.S. I'm almost done with my optimization so I'll be moving on to
other stuff soon.
From BRUCE GOLDMAN Msg #9503 *ARCHIVERS* (Rcvd)
To JOE ZITT Fri Feb 27, 1987 4:43am (0:06)
Joe, I in no way am putting ZOO down,
I am saying that for the needs of most BBS users, it is obvious that ARC is not
only sufficient but also just about exclusively the only one in use. If I want
to transfer files from my TRS-80 to the IBM, the only available ARC-type
program that works on both is SQUEEZE. Does that make SQUEEZE the best? No,
but for that purpose it does!
I am glad there is ZOO as it adds one more bit of flexibility.
From JOE ZITT Msg #9523 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Fri Feb 27, 1987 8:11am (0:03)
I'm not necessarily saying ZOO is the best...
just the only one that will handle our purposes.
From BRUCE GOLDMAN Msg #9663 *ARCHIVERS* (Rcvd)
To JOE ZITT Sun Mar 1, 1987 5:33am (0:03)
Ditto!
I'm not saying PKARC is the best, but it is faster, has better shrinkage, and is more
compatible for my needs.
From KILGORE TROUT Msg #11363 *ARCHIVERS* (Rcvd)
To JOE ZITT Mon Apr 6, 1987 2:44am (0:03)
ARC can do that. Just pick up a copy of UNIXARC.
From BILL DAVIDSEN Msg #13054 *ARCHIVERS* (Rcvd)
To JOE ZITT Sun May 3, 1987 7:20pm (0:05)
And I never claimed a first
I can tell you that ARC was not (remotely) the first program to combine the
archiving of files with compression. I had a program (written in Aztec C) which
did that under CP/M 2.2 (1980 perhaps?). I later did a version for DOS
1.something. I make no claim to be first, but it's NOT a new idea.
From RAHUL DHESI Msg #9191 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Feb 22, 1987 1:00pm (0:11)
Why would I want to download a VIC-20 program for my PC?
There has been a BIG revolution in programming. In the early days, hobbyists
exclusively used assembly language for two reasons: limited memory made it
impractical to use high-level languages, and good compilers weren't available
anyway. BASIC was perhaps the one exception, but the nonstandard dialects
used by all machines made it impossible to easily port programs.
Today, the situation is much different. You might not want to download a
VIC-20 program, but you MIGHT want to download a program written in C or
Pascal and compile and run it on your system. Turbo Pascal is close to ANSI
Pascal if you are a little careful. C is even more standard if you don't use
o.s. dependent facilities. The trend towards greater standardization is
pretty clear. Soon you will find people programming almost exclusively in
high-level languages.
Why not make it easy for people to release their source code? The easier it
is for others to use it, the more incentive an author will have to distribute
source.
From STEVE MANES Msg #9200 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Feb 22, 1987 3:21pm (0:14)
For >you< it might be a non-issue.
For >me<, it isn't. I've grabbed a few megabytes of C source code off non-IBM
systems and likewise send K&R-compatible source to non-IBM systems.
Currently, this must be done as a plain ASCII or Huffman SQueeze because even
plain old ARC isn't commonly found on non-MSDOS systems. Why this is so when
SEA has ARC running on other machines, I can't say. It's possible that it's
just an education problem and that users of other hardware are not AWARE of
ARC. But one thing's for certain: PKARC 2.0's Squashing destroys what
compatibility Thom Henderson sought to create between MS-DOS and the rest of
the world.
Which brings up another point: I'm going to be porting Magpie to Unix over the
next few months and this BBS will no longer be operating under MS-DOS. That
means I cannot support PKARC files created by 2.0 default. In fact, I'll have
to prohibit them since I don't want to dump PKARC files via null modem to my
MS-DOS machine just to check their contents. If I can get SEA's 5.00 source to
compile (which I've never successfully been able to do) I'll still support
standard ARC format. At this point, ZOO is the obvious archiver-of-choice
since Rahul has been so actively involved in bringing ZOO to other machines.
Likewise, he has a functioning ZOO for Unix now as well as for the IBM and
Amiga so that IBM users can continue to up/download IBM files.
From BRUCE GOLDMAN Msg #9508 *ARCHIVERS* (Rcvd)
To STEVE MANES Fri Feb 27, 1987 5:04am (0:07)
Does three make it universal?
ZOO is a good product, but simply because it supports the AMIGA and UNIX
doesn't force it to be the only choice, unless it is indeed the only choice.
If MAGPIE goes Unix, then you have no choice but straight files or ZOO
archives. I have somewhat of the latest version of ZOO and would not be averse
to using it on MAGPIE; however, when I call my local PC Board or RBBS that says
ALL FILES MUST BE ARCed, I have no choice there either.
I would prefer to see DWC become the universally accepted ARChiver. If I were
to use a personal archiver on my HD it would be DWC.
From STEVE MANES Msg #9517 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Fri Feb 27, 1987 6:46am (0:15)
I don't really care which of these authors winds up being the "universal
archiver". If Phil Katz supported other machines besides the IBM I'd as soon
go with his software. But the fact remains that only two of these authors
here HAVE their software running on machines other than the IBM: Thom
Henderson and Rahul Dhesi.
Of these two archivers, only ZOO has been >designed< for portability. SEA's
ARC is quite reliable (but quite slow) and its portability is based upon
extending the MS-DOS environment to other operating systems. I'm glad that
Dean has plans to port DWC outside of MS-DOS but that ain't happened yet so it
remains to be seen what problems will be encountered in the translation (first
thing Dean's gotta do is learn how to spell "Xenix").
As I said earlier, an extra point or two in operating speed or archive size
really isn't all that meaningful nor do I think it will help sell the product.
PKARC broke the ground here with a product several orders of MAGNITUDE faster
than SEA's ARC. That was significant. I would hate to see more important
matters, such as portability and archive integrity, overshadowed by a kinda
pointless crusade to beat PKARC by a nose in the time trials.
This being said, why do you wish DWC to be the "universal archiver"? My
opinion is that the performance differences between ZOO and DWC are only food
for purist debate, not an issue of any practical importance to anyone USING
these products.
From BRUCE GOLDMAN Msg #9659 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Mar 1, 1987 5:19am (0:11)
We all have different priorities
Your #1 priority is portability; mine is speed and size (I know that is 2
priorities). If ZOO had the best shrinkage and the best speed then no doubt
it would be the one of choice.
WHY SHRINKAGE SIZE IS IMPORTANT: When DLing from a BBS, obviously the smaller
the package to DL, the less time needed to DL it. Therefore if toll calls are
in use there is an overall cash savings. If you are limited on a BBS, as we
all are, by time, then you can DL more packages. Let's face it, ARC and
Squeeze were invented for telecommunications.
WHY SPEED IS IMPORTANT: Most of us have a limited amount of time at our
computers. I think we would much rather spend that time active than waiting
for the computer. Even seconds can seem rather long when you are waiting to
regain control of your keyboard.
WHY PORTABILITY IS IMPORTANT: If we are talking to at least one other type of
computer, then it is important.
FOR ME... Portability is not an issue. On balance for my needs at this time,
PKARC blows ZOO out of the ocean!
From STEVE MANES Msg #9671 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Mar 1, 1987 7:51am (0:08)
By your own calculations....
.... those in your YESPKARC.ARC upload.... PKARC holds a 1% file compression
margin over ZOO. With a 120k archived file, that's 1 Ymodem block, or little
more than 4 extra >seconds< to send the ZOO'd file at 2400 baud, 8 seconds at
1200 baud. The phone company doesn't bill in fractions of minutes, you know.
Indeed, ZOO is slower at ARCing and de-ARCing than PKARC, by a wider margin. On that
issue, you have a point. But I probably archive and unsqueeze an entire
library about twice a week. The difference in speed between PKARC and ZOO is,
practically, too slight to be of much more than academic interest.
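[ For anyone who wants to redo Steve's arithmetic, here is a throwaway
calculation. The 10-bits-per-byte figure assumes ordinary 8-N-1 async framing;
the 1% margin and the 120k archive are simply the numbers quoted above. ]

    /* margin.c -- back-of-the-envelope check of the 1% argument. */
    #include <stdio.h>

    int main(void)
    {
        double archive = 120.0 * 1024;     /* 120k archive                      */
        double margin  = archive * 0.01;   /* 1% size difference, ~1229 bytes   */
        double block   = 1024.0;           /* one Ymodem (1K) block             */
        int    bauds[] = { 2400, 1200 };

        printf("1%% of a 120k archive = %.0f bytes (about %.1f Ymodem blocks)\n",
               margin, margin / block);
        for (int i = 0; i < 2; i++) {
            double cps = bauds[i] / 10.0;  /* 10 bits per byte on 8-N-1 lines   */
            printf("at %4d baud (%3.0f cps), one extra block costs about %.0f seconds\n",
                   bauds[i], cps, block / cps);
        }
        return 0;
    }
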
From BRUCE GOLDMAN Msg #9714 *ARCHIVERS* (Rcvd)
To STEVE MANES Mon Mar 2, 1987 5:04am (0:08)
If ZOO was the accepted standard then I would support it.
ARC and Katz's mods seem to be much more acceptable on MS-DOS boards. Given
full access to ZOO, DWC and ARC, when transferring MS-DOS to MS-DOS, ZOO
finishes a close third in size and time.
So let's go back to the tool box: if I have a specific-size screwdriver that is
exactly right for a specific job, aren't I better off using that screwdriver
than a universal fit-all screwdriver?
Perhaps I am being too microscopic in my attitude, but for my specific present
need PK does it. I do not throw out ZOO; as Rahul said, it may come in handy
in the future.
From STEVE MANES Msg #9737 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Mar 2, 1987 9:11am (0:05)
Just so long as you qualify the above use of the term "standard".
PKARC may/may not be the current standard of the "IBM Micro Running MS-DOS"
Set. But not only isn't PKARC a standard with >most< of the computer world,
it doesn't even run on non-DOS machines.
From RAHUL DHESI Msg #9705 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Mar 2, 1987 12:55am (0:27)
Clarifying the issues (again).
Bruce, take another look at my message to Thom Henderson in which I tried to
clarify the different issues. Archive format and archiver implementation are
two separate issues.
I haven't tried optimizing speed in Zoo since way back in October or so,
because I was concentrating on other things such as portability and features
such as wildcards and pathnames. When I decide to compete on speed, Zoo will
leave PKARC in the dust for a very simple reason: It uses only one
compression method, not the three that PKARC tries to use.
What is more likely to happen is that as Zoo catches on (and it IS catching
on slowly but surely), Philip Katz will decide to join in the effort and
dedicate his assembly language talents to creating a fast, specialized MS-DOS
version.
Then you will have the best of both worlds -- a portable version from me and a
superfast MS-DOS specific one from Phil.
However, I AM competing on speed in a subtle way: If you take the average of
compression speeds under three different operating systems (e.g. MS-DOS,
AmigaDOS, and UNIX) you will find Zoo winning, because under AmigaDOS and
UNIX, PKARC can be shown to take an infinite amount of time to do anything
useful.
As for shrinkage size, realize that Zoo is optimized for large text files.
Take a bunch of text files of 100K+ each and compare. There's a good reason
for that! The type of file that is most useful across dissimilar machines is
the text file containing human-readable documents or source code.
Did you know that Phil Katz did some tests with the five MOST POPULAR
downloads from the Exec-PC BBS, and ZOO WON IN COMPRESSION? It's true -- ask
Phil, or look at the file he circulated with the results. Phil did something
sneaky though--he added five more sets of data that HE chose, and the result
was that Zoo lost on the total. But it won in compression in the five files
that Bob Mahoney, Sysop of Exec-PC, declared were the most popular downloads.
Finally, what's the ability to recover data from damaged archives worth to
you? What's it worth to you in peace of mind to know when directory entries
have been corrupted? Zoo uses some additional overhead to keep track of this
integrity information and to allow for things like pathnames and comments and
(for the future) timezone information, multiple version numbers, and other
such stuff. My judgement was that users would consider it worthwhile to
invest a few extra bytes for each file for better security of data and these
advanced features.
I agree that what PKARC does, it does well. (Except that it chokes on
pathnames containing slashes, and has a few other minor flaws. That problem
is common to all the common archivers except Zoo and the original ARC.) Phil
has done a good job of programming, and when he turns his attention to the Zoo
format the result will be one heck of an archive utility.
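[ Rahul doesn't lay out Zoo's integrity data here, so the following is only a
generic sketch of the idea: store a checksum with each directory entry when
archiving, recompute it when reading, and you can report exactly which entries
went bad instead of silently extracting garbage. The entry structure is
hypothetical; the checksum shown is the common reflected 0xA001 CRC-16. ]

    /* entcheck.c -- generic sketch: a per-entry CRC so a reader can tell
     * which directory entries have been damaged.  Layout is hypothetical. */
    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    /* CRC-16 with the widely used reflected 0xA001 polynomial. */
    static unsigned short crc16(const unsigned char *p, size_t n)
    {
        unsigned short crc = 0;
        while (n--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1) ? (unsigned short)((crc >> 1) ^ 0xA001)
                                : (unsigned short)(crc >> 1);
        }
        return crc;
    }

    struct entry {                 /* hypothetical directory entry */
        char           name[13];
        unsigned long  size;
        unsigned short stored_crc; /* written at archive time      */
    };

    static int entry_ok(const struct entry *e)
    {
        /* checksum everything up to (not including) the stored CRC field */
        size_t covered = offsetof(struct entry, stored_crc);
        return crc16((const unsigned char *)e, covered) == e->stored_crc;
    }

    int main(void)
    {
        struct entry e;
        memset(&e, 0, sizeof e);
        strcpy(e.name, "README.TXT");
        e.size = 1234;
        e.stored_crc = crc16((const unsigned char *)&e,
                             offsetof(struct entry, stored_crc));

        printf("entry intact?  %s\n", entry_ok(&e) ? "yes" : "NO -- corrupted");
        e.size = 9999;             /* simulate on-disk damage */
        printf("after damage?  %s\n", entry_ok(&e) ? "yes" : "NO -- corrupted");
        return 0;
    }
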
From BRUCE GOLDMAN Msg #9721 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Mon Mar 2, 1987 5:41am (0:04)
I hope ZOO succeeds
You have made some excellent arguments in favor of ZOO, but as of this moment,
PKARC is the way to go for me. Only 1 min left, will finish next time.
From BRUCE GOLDMAN Msg #7721 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Tue Jan 27, 1987 2:15am (0:15)
Rahul forgive me but,
To say PKARC doesn't do much is grossly unfair. As a PC end user of BBSes, it
is the finest product out there for, as you even mentioned, MS-DOS machines.
Granted, Zoo is particularly adept at portability, which not only is valuable
but also earns my respect, as I am sure it does many others'.
The problem I have with ZOO is that it doesn't offer me anything over PKARC
and indeed weakens the capability of us pure PC users who DL only PC stuff
from PC boards. If I had an Amiga, and someone had a specific archive type
called XXX, then I certainly would want to use that and wouldn't give a damn
about ARC or ZOO or DWC.
I am not trying to say a one-machine-oriented board should ignore other types
of machines in its DLs, but I certainly think that most boards that are file
oriented cater to their specific audience and only provide a gratuitous set
of files for other computers.
Going back to my car analogy: if every car could use the exact same door
handle, it would of course be great, cheapen the price and make it more
accessible. In reality this isn't the case, but if I could purchase the
specific handle for my car at a much lower price, and it is more specific to
my car, I would prefer that to a handle that fits 100 other models.
I truly don't care if it fits a Volkswagen or a Mercedes if I have a Porsche
(might as well dream, it is my analogy).
ARC offers the compatibility of the old, as it can unarc any arc; it offers
top speed; and it compacts better than almost all I tested.
Bruce
From RAHUL DHESI Msg #7728 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Tue Jan 27, 1987 5:12am (0:06)
It all depends on how big or how small your universe is.
AND, if your universe is limited to MS-DOS machines, then by all means PKARC
is a good choice. But then, when you say so, it is desirable to qualify that
statement by saying that you are only referring to the MS-DOS world.
As for compatibility, see my earlier message about how to get 100% upward
compatibility not only with ARC, but also with Zoo, SQueezed, and LBR and LQR
formats, all on the same disk.
From BRUCE GOLDMAN Msg #7813 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Jan 29, 1987 2:44am (0:11)
Sometimes you only need a small universe...
I agree with you wholeheartedly that for portability there is no question
that ZOO is the superior cruncher out there, to my knowledge. That said, I
never considered ARCHIVING on other systems, nor did I conceive of a need.
Due to your messages and some talks with Steve Manes, I can see the purpose;
I applaud it and could even find it useful. I would love to have a system in
which I could pack my files on my TRS-80 Mod I/III and move them fast and
efficiently to the IBM.
My question, and perhaps challenge to you, is why do you have to be
incompatible with ARC to make it work on other machines? If ZOO were capable
of faster speed and better condensing than ARC, then it would at least be
superior even in the MS-DOS world.
Since for purely MS-DOS purposes, PKARC beats it in all 3 of what I consider
to be the critical measurements (arcing speed, dearcing speed and size of
condensation) added to the fact that it has the compatibility of deARCing all
ARCs, I have to stick with Katz.
From RAHUL DHESI Msg #7819 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Thu Jan 29, 1987 6:18am (0:05)
Why Zoo is incompatible with ARC.
The ARC format was not created with expansion in mind. It limits filenames to
the MS-DOS syntax. It allows for no comments, no pathnames, etc.
In addition, ARC uses too many different compression techniques. This makes
it harder to implement it on other systems. I chose just one technique for
simplicity.
From BRUCE GOLDMAN Msg #7873 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Fri Jan 30, 1987 4:19am (0:05)
One nice advantage of ZOO could be...
use on a dead machine. That is, as a program that would allow transfer from
your former machine that is "obsolete" to a new machine. I would definitely
find a use for a TRS-80 Mod I/III/4 version. Any thought to working with the
somewhat dead machines?
From RAHUL DHESI Msg #7899 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Fri Jan 30, 1987 8:08pm (0:05)
I'm working on a version for machines with limited memory.
The problem with the TRS-80 and CP/M machines is the 64 K address space. Zoo
is quite big, and the compression table takes up quite a bit of space. I'm
planning to release a portable bare-bones version that would run on CP/M and
TRS-80 machines. Can't say how soon that will be, though.
From BRUCE GOLDMAN Msg #7967 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Feb 1, 1987 3:53am (0:04)
you got my attention!
A TRS-80 version, simple so I could move my stuff onto my PC, makes for a
personal very practical program.
Good luck with it!
From PATRICK BENNETT Msg #7742 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Tue Jan 27, 1987 12:21pm (0:04)
W/ your car analogy... ARC/PKARC come standard; ZOO comes w/ air
conditioning, a very nice stereo, a few other nifties, and a roll bar for those
'accidents.'
From BRUCE GOLDMAN Msg #7875 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Fri Jan 30, 1987 4:27am (0:04)
...and so it goes.
But on the other hand the PKARC car has reliability, trust, a long heritage
and more than likely the ability to weather the storm.
From STEVE MANES Msg #7882 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Fri Jan 30, 1987 8:27am (0:18)
Depends upon where you draw the line there.
(Devil's Advocate Mode).. PKARC 2.0 and its default Squashing means, for all
practical purposes, that PKARC has created yet another new compression
"standard" by omission. Of course, it will correctly unpack "standard" ARC
files as well as create new ones compatible with SEA's ARC. But the children
it brings into the world without the explicit /oc are as non-"standard" as
anyone else's wares here. It's also a newer compression option than ZOO and,
very likely, DWC. So PKARC 2.0 really has yet to prove its reliability and
its heritage appears to be measurable in weeks.
Also, as I've said before, people ARE using PKARC 2.0 to pack files and stick
'em on BBSes without noting that they were created with PKARC's Squasher. I
pulled two off Compuserve on Wednesday and one off PCSI. Folks with PKARC 2.0
either opt to create the smallest files possible for quicker transmission or
aren't paying attention to the fact that the files they are creating are
unpackable ONLY with PKARC. This is potentially more troublesome than many
more obvious compression methods, since PKARC-Squashed files >look< like ARC
files... but ain't. So there IS something about PKARC to concern IBM-only
archive users.
Me, for instance... until last month I didn't even know PKARC existed. If I'd
just spent 25 minutes downloading a large .ARC file only to have my SEA ARC
croak on it I would've probably written it off to line noise and gone back
online to grab it again. I >>really<< think Phil should create a new file
extension for Squashed PKARC files. There's no reason not to. Squashed files
are gonna be incompatible with SEA ARC anyway... and wouldn't folks be a bit
upset if Rahul's program decided to use .ARC as an extension... with the same
incompatibility?
The nicest thing about standards in the computer industry is that there are
SOOOOO many to choose from...
From BILLY ARNELL Msg #7960 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Feb 1, 1987 3:03am (0:03)
I've asked everyone on my system to use .PKA for the new compression
From BRUCE GOLDMAN Msg #7964 *ARCHIVERS* (Rcvd)
To STEVE MANES Sun Feb 1, 1987 3:43am (0:10)
But the same is true of SEA ARC
What happens if SEA comes out with version 6.0 instead of 5.2? It uses an
incredible condensing scheme; let's call it compacting. Let's further say it's
the best thing since chocolate pudding. Now again you DL a package for 25
minutes that this new procedure has compacted. You don't know of ARC
6.0; armed only with ARC 5.12, you get garbage. You would be hit with the
same problem as not knowing of PKXARC 3.4.
This hypothetical situation has happened when ARC moved from full version to
full version, so there is a historical precedent.
I DO HAVE AN ANSWER - WE NEED A NEW DRUG!
What we really need is someone to work on a universal de-arcer, one that works
on PKARC 2.0 with squashing, ZOO and DWC.
That way anyone could use their favorite system to arc, and we all would be
armed with a universal de-arcer.
From STEVE MANES Msg #7980 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Feb 1, 1987 1:04pm (0:11)
Not really a comparable situation.
For one thing, the ARC standard belongs to SEA, which developed it. For
another, SEA ARC tells you, when it encounters an ARC file beyond its own
conventions, that "You need a new version of ARC". It's not a big deal and I
don't know why Phil Katz is reluctant to do it but changing the file extension
would fix everything. I'll tell you what MIGHT happen, however: if 5.20 comes
out with tighter/faster operation (which Thom indicated it would) then we can
presuppose that at least some people who are using PKARC now will switch back
to ARC because it's also presumable that ARC 5.20 will have a trick or two in
the files it creates that PKARC won't be able to deal with either. Therefore,
ARC anarchy with neither side able to claim 100% ARC compatibility. ARCs
created under SEA 5.20 won't extract under PKARC and ARCs created under PKARC
2.0 won't be extractable under SEA 5.20.
Which is a pretty damn good reason to reconsider one of the other archivers
here which, at least, can claim compatibility with all its files now.
From BRUCE GOLDMAN Msg #8005 *ARCHIVERS* (Rcvd)
To STEVE MANES Mon Feb 2, 1987 2:42am (0:08)
But let's face reality.
With no disrespect to Thom or SEA, they have been dormant now for one full
year; 5.12 was last revised in Feb, 1986. Katz, Buerg and Chin have in the
last year come out with revision after revision. Can you blame Katz for
saying, why should I stick to a system that is a year old when I have newer
and faster methods?
ZOO doesn't even claim speed or shrinkage comparison with Katz. DWC only
shows size in his benchmark but alludes to time.
Katz's claim to glory is speed, but along with that speed his compactness just
about matches every other arcer and indeed passes just about all of them.
From STEVE MANES Msg #8013 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Feb 2, 1987 7:32am (0:14)
True, but what's going to happen when SEA releases 5.20....
.... and it adds its OWN squash? It's not unreasonable to think that this
will be the case as Thom has stated that 5.20 will create smaller archives.
It's also not unreasonable to presume that any Squash used by 5.20 will not be
compatible with PKARC either. What we'll be left with is one compatible and
TWO incompatible ARC formats posing as some kind of "standard". Also, there
will be no way to tell by looking at the file which of the three formats it
is.
What that will mean is that folks will have to have both ARC 5.20 and PKARC
2.0 on disk and extract the unknown ARC file by trial-and-error using both of
these utilities. If the above scenario does turn out to be the case, as I
suspect it will be, then ARC loses on the speed comparisons simply because you
will have only a 50% chance of selecting the correct de-arcer to extract a
file that has been squashed with one or the other programs. Whereas ZOO will
extract all .ZOO files and DWC will extract all .DWC files, ARC 5.20 won't be
able to handle PKARC 2.0 files and vice versa unless it has been explicitly
created in the "standard" ARC format.... which people aren't doing even now
with PKARC. The fumble-time finding the correct archiver to extract the ARC
should be included in the "speed comparisons" too.
From JESSE LEVINE Msg #8019 *ARCHIVERS* (Rcvd)
To STEVE MANES Mon Feb 2, 1987 9:03am (0:05)
It is Katz's responsibility to change the extension of the files...
....he creates. If he is anything less than 100% compatible with ARC
(whatever latest version) he MUST yield to SEA's prior use of the .arc
extension. I think the BBS community should unite behind this principle.
Phil?? -j
From STEVE MANES Msg #8044 *ARCHIVERS* (Rcvd)
To JESSE LEVINE Tue Feb 3, 1987 12:31am (0:04)
I agree.
The alternative is gonna be civil war with the .ARC format and both programs
(and programmers) may risk losing out to a more stable, consistent format.
From BILLY ARNELL Msg #8168 *ARCHIVERS* (Rcvd)
To JESSE LEVINE Thu Feb 5, 1987 4:25pm (0:04)
I agree. I'm asking people to use .PKA for the new PKARC extension, and
I think Katz SHOULD force it in his program to avoid confusion; especially
amongst the newcomers to the ever-changing MSDOS world.
From RAHUL DHESI Msg #8292 *ARCHIVERS* (Rcvd)
To BILLY ARNELL Sun Feb 8, 1987 1:44am (0:05)
New extensions for PKARC. I suggest `KAT'
Because (a) It can be pronounced, like ARC and ZOO but unlike PKA; and (b) It
stands for Katz's Amazing Technique (for compression). I suggested it to
Phil, but he had some good reasons for not using it.
From BRUCE GOLDMAN Msg #8057 *ARCHIVERS* (Rcvd)
To STEVE MANES Tue Feb 3, 1987 6:07am (0:12)
To follow the logic even further, let's suppose...
that Joe Shmoe says, hey, ZOO is the greatest thing ever, but I can write a
process that will unarc all ZOOs, on all machines, yet will create a smaller
file and be faster. Joe Shmoe now puts up his new JSZOO.
Rahul lost interest in ZOO back in 1988, when he got involved in writing this
detailed universal BBS. Well, Shmoe sees it is 1990, and it's been two years
since the last official ZOO 4.0! He now says, whoa, I've got a brainstorm: I
can MINUTE a file, which will cut ZOO 4.0 by 1/12th in time and 1/10th in
size. Not only that, but it is completely compatible with all known machines.
All of a sudden controversy erupts and Rahul claims he will have ZOO 4.1 out
in a few months.
I mean, we can suppose, suggest and feel, but between you, me and the wall, I
don't think Thom has the same incentive, interest or initiative to build a
new arc.
Secondly, I think if ARC comes out with a new algorithm, it would not be
called SQUASHING, and I think Katz would try to stay in line or else he would
have no choice but to go to a new extension!
From RAHUL DHESI Msg #8120 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Wed Feb 4, 1987 10:45pm (0:08)
That scenario is unlikely to happen.
Because...I chose Zoo's compression table size very carefully. Make it 12-bit
compression, and you lose some compression percentage. Make it 14 bits, and
you take up too much memory. 13 bits was the perfect compromise, and Phil
independently realize that too. Dean hasn't realized the memory constraints
many people work under, so he uses more than 13 bits (14 or 16? I don't
remember).
And since Zoo delivers much better performance than ARC (both in the
MS-DOS-specific and the portable versions), there is little incentive for
somebody else to try to improve it.
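[ A quick way to see why the choice of code width matters so much: an LZW
string table needs one slot per possible code, so the table doubles with every
extra bit. The five-byte slot below is only a plausible guess at what each
entry costs; real implementations differ, but the doubling is what squeezes a
larger table out of a small machine's memory. ]

    /* tblsize.c -- rough memory estimate for LZW string tables at
     * different code widths.  The 5-byte slot is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        const int slot = 2 + 1 + 2;  /* prefix code + suffix char + link (guess) */
        for (int bits = 12; bits <= 16; bits++) {
            long entries = 1L << bits;        /* one slot per possible code */
            printf("%2d-bit codes: %6ld entries, about %4ld K of table\n",
                   bits, entries, (entries * slot) / 1024);
        }
        return 0;
    }
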
From BRUCE GOLDMAN Msg #8140 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Feb 5, 1987 5:10am (0:14)
pardon my ignorance
I am not really sure how compression works or why. Perhaps as an end user
that gives me a slight advantage in looking at them. Sometimes you are too
close to the product to see the overview.
I can't honestly believe that you see no improvement in compression size for
ZOO. At one point I am sure Christensen thought no one would improve Xmodem,
but it took a long time; XMODEM CRC came around, and now YMODEM Batch and
Zmodem. Ward was very happy to pass the torch and was pleased the author
used the name YMODEM as a tribute to XMODEM.
If, let's say, Dean looked at ZOO and, using his expertise, came up with a
major improvement, I am sure you would accept it. To say that no one can
improve on your work may be a little short-sighted. There are people out
there working on going beyond Einstein, who went beyond Newton.
I look forward to next year, when we compare ZOO 1.4 to the version that is
current in 1988. I am sure that if you are the only one to continue to work
on ZOO, it still would make remarkable progress; if Dean helps you it may
even be better. But to throw a worm back into my scenario: what happens if
Dean comes aboard and the two of you have an argument and you both try to
advance the property your own way? Again you may have a similar
incompatibility. Although that may be far-fetched, so was sending a man to
the moon; anything can happen.
From RAHUL DHESI Msg #8287 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Feb 8, 1987 1:29am (0:08)
At some point, improvements exponentially decline.
The closer you get to achieving representation of information in the smallest
theoretically possible number of bits, the harder it is to improve still
further. Can't be sure, but I think that with LZ compression we may be
getting pretty close to that minimum.
As an example, the limit of throughput with any protocol over PC Pursuit is
the data rate in bits per second. At 1200 bps, the maximum is 120 bytes per
second. With Ymodem I see about 90 cps over PC Pursuit. With Zmodem I see
100 to 110 when things are going smoothly. You can see how hard it will be to
improve on that.
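[ Expressed as percentages of the channel (again assuming 10 bits per
character of async framing, and taking 105 cps as the middle of the 100-110
range quoted above): ]

    /* eff.c -- protocol efficiency against the 1200 bps ceiling cited above. */
    #include <stdio.h>

    int main(void)
    {
        double ceiling = 1200.0 / 10.0;        /* 120 cps at 1200 bps, 8-N-1 */
        double ymodem  = 90.0;                 /* observed cps from the msg  */
        double zmodem  = 105.0;                /* midpoint of 100-110 cps    */
        printf("Ymodem: %.0f%% of the channel\n", 100.0 * ymodem / ceiling);
        printf("Zmodem: %.0f%% of the channel\n", 100.0 * zmodem / ceiling);
        printf("headroom left for ANY protocol: at most %.0f%%\n",
               100.0 * (ceiling - zmodem) / ceiling);
        return 0;
    }
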
From BRUCE GOLDMAN Msg #8352 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Mon Feb 9, 1987 5:07am (0:09)
As soon as you learn the rules, we change the game!
Yes, you are right that there are limits to what 1200 baud and the LZ
algorithm allow. But just as the perfect maximum/minimum is reached, we dump
them and go somewhere else.
At one time 64K was the maximum on a micro, then 640K and now they are talking
megabytes if not gigabytes. Soon we will discuss 1200 baud as we do 110 baud
when we go 9600 baud and beyond. Eventually you or someone else will come up
with an algorithm that will blow the pants off of LZ.
Just as LBR and SQ laid the foundation for ARC, perhaps ZOO is the next
logical step. But to think ZOO will be the end of progress is not only
foolish but a very depressing thought. I know that you will march forward as
will Dean and Phil, and the countless others who will challenge ZOO, ARC and
DWC!
From DEAN COOPER Msg #8367 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Feb 9, 1987 7:56am (0:09)
Lempel-Ziv is a general compression algorithm...
It has no knowledge whatsoever about the data it is compressing, and
relies on one fact only: that sequences of symbols will repeat themselves with
small to large frequency. Now as far as general compression goes, LZW is one
of the best in terms of speed and amount of compression. However, it could be
beaten by one willing to take more time or to add to the algorithm a knowledge
of the data it is compressing. Like the SQZ! program that only compresses
spreadsheet files, these do better than LZW because they take advantage of
some knowledge of the data... There are others too...
But in the end, for speed and general purpose use, LZW is close to the
best possible...
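[ To make the description above concrete, here is a bare-bones LZW encoder.
It is a teaching sketch, not any particular archiver's code: it prints codes
as decimal numbers instead of packing bits, uses a slow linear table search
instead of hashing, and fixes the table at 12 bits. ]

    /* lzw_sketch.c -- minimal LZW encoder for illustration only. */
    #include <stdio.h>
    #include <string.h>

    #define MAXCODES 4096              /* 12-bit table for the sketch */

    static struct { int prefix; int ch; } table[MAXCODES];
    static int ncodes;

    /* find the code for string (prefix,ch), or -1 if it isn't in the table */
    static int lookup(int prefix, int ch)
    {
        for (int i = 256; i < ncodes; i++)
            if (table[i].prefix == prefix && table[i].ch == ch)
                return i;
        return -1;
    }

    static void compress(const unsigned char *in, size_t n)
    {
        ncodes = 256;                  /* codes 0..255 are the single bytes */
        int w = in[0];                 /* current string, held as its code  */
        for (size_t i = 1; i < n; i++) {
            int c = in[i];
            int wc = lookup(w, c);
            if (wc != -1) {
                w = wc;                /* string+c is known; keep growing it */
            } else {
                printf("%d ", w);      /* emit code for the known string     */
                if (ncodes < MAXCODES) {   /* remember string+c for later    */
                    table[ncodes].prefix = w;
                    table[ncodes].ch = c;
                    ncodes++;
                }
                w = c;                 /* start over with the single byte c  */
            }
        }
        printf("%d\n", w);
    }

    int main(void)
    {
        const char *s = "the thesis on these theses";
        compress((const unsigned char *)s, strlen(s));
        return 0;
    }
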
From BRUCE GOLDMAN Msg #8429 *ARCHIVERS* (Rcvd)
To DEAN COOPER Tue Feb 10, 1987 2:34am (0:05)
So in other words, Lempel-Ziv AIN'T the final word!
If I read you correctly, LZ, assuming nothing about the data, just compresses it
using its algorithm. Now if someone who really studies general-purpose
algorithms comes up with a better one, LZ can become ancient.
From DEAN COOPER Msg #8439 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Tue Feb 10, 1987 8:21am (0:07)
It already is ancient as far as certain types of data are concerned...
Like spreadsheet data: SQZ! most likely does better... I also know of a
compression technique for bitmapped pictures. It compresses by producing a
picture that "looks" like the original, but uses far less data. (Note, this is
not true compression, as the original cannot be recovered from the compressed
form.) But still, for general purpose use, LZW is going to be very hard to
beat with anything that is as fast as it is...
From BRUCE GOLDMAN Msg #8478 *ARCHIVERS* (Rcvd)
To DEAN COOPER Wed Feb 11, 1987 5:29am (0:12)
Since my knowledge on compression is limited
I have no suggestions on how to manufacture a better compressor than L.Z. I
still recall how some programmers tend to write very loose code while others
like to pack their code. On the TRS-80 there was a utility called PACKER
that would take a BASIC program, take out all the REMs, renumber the program
by 1 starting at 1, and try to make as many multi-line statements as possible.
I once was able to get a 22K program down to about 4K, including renaming
variables to single letters, making strings out of repetitive text, etc. after
PACKER and I did our shrink.
One possible hint is that there is a list of the top 100 most-used words
floating around on BBSes. Perhaps if a pure TEXT squasher had that info as the
basis of its work, with the most common words added to it, we might have a
special text squasher. Perhaps not for speed, but for shrinkage something that
could use a spell checker's dictionary could perhaps shrink a large text by a
great deal. Not sure how it would work, but perhaps there is an idea in here
somewhere.
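[ Bruce's suggestion is essentially static dictionary coding. A toy version,
with a made-up five-word dictionary standing in for the "top 100 words" list
he mentions: any word found in the dictionary goes out as a single code byte,
everything else is copied through. A matching expander needs the same table
built in. ]

    /* wordsqz.c -- toy static-dictionary text squasher in the spirit of
     * the suggestion above; the dictionary and code bytes are invented. */
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    static const char *dict[] = { "the", "of", "and", "to", "that" };
    enum { NDICT = 5, CODE_BASE = 0x80 };   /* bytes 0x80..0x84 = dict words */

    static void emit_word(const char *w, FILE *out)
    {
        for (int i = 0; i < NDICT; i++)
            if (strcmp(w, dict[i]) == 0) { fputc(CODE_BASE + i, out); return; }
        fputs(w, out);                      /* not in dictionary: copy as-is */
    }

    int main(void)
    {
        const char *text = "the rest of the text goes to the output and that is that";
        char word[64];
        int len = 0;

        for (const char *p = text; ; p++) {
            if (*p && !isspace((unsigned char)*p)) {
                if (len < 63) word[len++] = *p;
            } else {
                word[len] = '\0';
                if (len) emit_word(word, stdout);
                if (!*p) break;
                fputc(*p, stdout);          /* keep the separator */
                len = 0;
            }
        }
        fputc('\n', stdout);
        return 0;
    }
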
From DEAN COOPER Msg #8484 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Wed Feb 11, 1987 7:36am (0:08)
I think somebody already did that in hardware before...
Bob Mahoney told me that he had seen, a long time ago, a board that would
do compression on the fly before sending the data to the modem... It had a
number of large ROM chips that contained an English dictionary and compressed
text very well. Of course other types of data didn't do very well. The
problem with compressors that only work on certain types of data is that a
general archiver would have to end up supporting all the different algorithms
which could make the archiver VERY large. But a stand alone compressor
program could do very well, except that people would end up with many
compressor/decompressor programs....
From BRUCE GOLDMAN Msg #8529 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Feb 12, 1987 3:50am (0:04)
Well since I reinvented the wheel how about this...
Eye Drops that contained the exact prescription and would form a contact lens
on the eye. OK, it has nothing to do with computers, but it's something novel.
From RAHUL DHESI Msg #8403 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Feb 9, 1987 9:30pm (0:09)
You got it!
Improvements exponentially decline, but new and dramatically different ideas
always emerge.
However, there are some theoretical constraints that always exist. For
example, nobody has yet found a way around the second law of thermodynamics,
which makes perpetual motion machines impossible -- though countless have
tried. Similarly, there is a theoretical limit to how much you can compress
data, and the limit depends on the amount of redundancy in the data. For
example, take a Zoo archive and try to compress it further. Neither ARC nor
DWC will compress a Zoo archive more than 2 to 4%. This is because the
archive lacks much redundancy.
In the long run, cheaper mass storage and direct digital communications links
are likely to be a better solution than improvements in compression
techniques.
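[ One crude way to put a number on the redundancy Rahul is talking about is to
measure the zeroth-order entropy of a file, i.e. how many bits per byte its
byte frequencies alone justify. A freshly made archive comes out close to 8
bits per byte, which is why recompressing it gains almost nothing. This only
captures single-byte statistics, so it understates what LZ can do on text, but
it shows the principle. ]

    /* entropy.c -- zeroth-order entropy estimate of a file, in bits/byte.
     * Compile with the math library (e.g. cc entropy.c -lm). */
    #include <stdio.h>
    #include <math.h>

    int main(int argc, char **argv)
    {
        FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
        long count[256] = { 0 };
        long total = 0;
        int c;

        if (!f) { fprintf(stderr, "usage: entropy file\n"); return 1; }
        while ((c = getc(f)) != EOF) { count[c]++; total++; }
        fclose(f);

        double bits = 0.0;
        for (int i = 0; i < 256; i++) {
            if (count[i] == 0) continue;
            double p = (double)count[i] / (double)total;
            bits -= p * (log(p) / log(2.0));    /* -sum p*log2(p) */
        }
        printf("%ld bytes, about %.2f bits of information per byte\n", total, bits);
        printf("(8.00 means no single-byte redundancy left to squeeze out)\n");
        return 0;
    }
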
From DEAN COOPER Msg #8150 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Feb 5, 1987 8:28am (0:09)
Come on Rahul... A LOT of people out there also have a lot of memory,
My program simply takes advantage of the memory that's available on today's
computers. My "speed" compressor will work on most 256K PCs and my "size"
compressor will work on 512K PCs. Now of course you can play games and have
small partitions... But then you're locked out of where most modern
applications are going... which is taking advantage of what's there. I even
have an idea for Lempel-Ziv on a 386 machine that will use its LARGE model,
which is larger than 4 terabytes, as opposed to the small model. A machine
doesn't have to have the memory, since it's virtual, but my algorithm will be
even faster if I use what the machine is capable of doing... Both of my
compressors do better than ZOO on certain classes of files; just take a look
at my table.
From RAHUL DHESI Msg #8289 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sun Feb 8, 1987 1:36am (0:08)
A lot of people have a lot of memory!
However, if an archiver is to be some sort of standard, then it has to allow
for the less powerful machines. Believe it or not, a LOT of PC users have only
256 K memory, of which some is taken up with TSRs. Why? Usually because these
people refuse to buy anything other than true-blue IBM, and can't afford more
than 256 K. I'm trying to allow for them too.
Then again, when I implement Zoo for CP/M, its economy of memory will be
valuable. You've locked out the CP/M world permanently. We both made value
judgements and ended up with different conclusions.
From ROGOL DOMEDEFORS Msg #8302 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Feb 8, 1987 5:33am (0:08)
An additional consideration....
.....is that on machines that operate on a consistent multiuser basis, the
total available physical memory is divided by the number of active users.
Therefore many small, microcomputer multiuser systems may not offer to any
given user more than perhaps 300K or less. Even large (5 or more megabytes of
RAM) multiuser microcomputer systems may offer each user only that much space if
they're heavily loaded. Any program requiring lots and lots of memory to
execute necessarily excludes such systems. In a couple of years even the
small multiuser systems will have megabytes per user; call back then; but
until then, there are realities.
From DEAN COOPER Msg #8307 *ARCHIVERS* (Rcvd)
To ROGOL DOMEDEFORS Sun Feb 8, 1987 8:45am (0:03)
My program will run in 200K of free memory (182K to extract)....
From ROGOL DOMEDEFORS Msg #8317 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sun Feb 8, 1987 11:15am (0:03)
But only under MSDOS; my above msg is irrelevant to you.
From DEAN COOPER Msg #8306 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Feb 8, 1987 8:44am (0:07)
I'm perfectly willing to concede to the people who stay behind the times...
It just doesn't cost that much to upgrade, and it's more than just me that
requires about 200K of free memory (only 182K to extract)...
Anyway, most people are going to be sticking with ARC for a while longer
(a year??), so by then maybe only a few will be left with such small amounts
of memory.... Yes, I concede CP/M... I don't really care... I think I will
simply shift my focus to giving people an archiver that is not meant to be some
kind of "standard", but one that they can use privately...
From STEVE MANES Msg #8320 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Sun Feb 8, 1987 12:13pm (0:04)
Your argument is substantially correct....
.... just a small NitMsg: most PC/XT clone motherboards allow for RAM
expansion up to at least 512K now. But there are still a lot of people using
older genuine IBMs with 256K (and less) memory.
From BRUCE GOLDMAN Msg #8353 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Mon Feb 9, 1987 5:11am (0:09)
There is a time to go beyond the minimum.
I understand the reasoning, and I even appreciate those who code their
programs so that the person with the least equipment can use them. But let's
say (and I haven't taken a survey, nor am I even trying to make an educated
guess, just using a high random-type number) that 90% of all IBM users have
640K; at what point do you say the heck with the other 10%?
Should we still support the one or two Altair users out there, and what about
the 4 or 5 Commodore PET owners? I am not trying to be frivolous, but I am
wondering at what point we actually neglect the small percentages for the
good of the larger percentage. It is definitely admirable to take care of the
less equipped, but there comes a time...
From DEAN COOPER Msg #8368 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Feb 9, 1987 8:00am (0:04)
Always nice to hear someone on my side.....
Like Phil, however, I was pushed to such measures as requiring this much
memory because of the competition. Phil was pushed into squashing.
From RAHUL DHESI Msg #8405 *ARCHIVERS* (Rcvd)
To DEAN COOPER Mon Feb 9, 1987 9:39pm (0:03)
Phil said once that he introduced squashing to compete with Zoo.
...Now isn't that interesting?
From DEAN COOPER Msg #8438 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Tue Feb 10, 1987 8:16am (0:03)
That's what I was trying to say, the competition forced him into it...
From RAHUL DHESI Msg #8404 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Mon Feb 9, 1987 9:38pm (0:10)
Drawing the line between the obsolete and the underpowered
The Altair is definitely obsolete; CP/M is becoming obsolete; but it's not
clear that the 256 K IBM PC is quite obsolete. A LOT of companies are doing a
rip-roaring business selling PC clones, and you can be sure many people do
choose to buy a barebones machine with 256 K memory. Where does one draw the
line?
But you can be sure that when the time comes to break away from the past, Zoo
will do it! I just don't think we can ignore CP/M users right now -- there
are too many of them.
I think the solution will be to find a compression technique that will not
need the large amount of memory that LZW does for extraction. It isn't the
compression that is memory-critical (since you can always use less memory with
some sacrifice in compression efficiency), it's the decompression.
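[Editor's note: a rough, back-of-the-envelope illustration of Rahul's point
about extraction memory. The C sketch below assumes a conventional textbook
LZW decode-table layout (a 16-bit prefix code and an 8-bit suffix character
per slot, plus a worst-case decode stack); it is not the internal layout of
Zoo, ARC, or PKARC.]

/* Back-of-the-envelope estimate of LZW decode-table memory.  The layout
 * assumed here is a common textbook arrangement, not any particular
 * archiver's internal structure. */
#include <stdio.h>

static unsigned long lzw_decode_table_bytes(unsigned code_bits)
{
    unsigned long entries  = 1UL << code_bits;  /* one slot per possible code */
    unsigned long per_slot = 2 + 1;             /* prefix code + suffix char  */
    unsigned long stack    = entries;           /* worst-case output string   */
    return entries * per_slot + stack;
}

int main(void)
{
    unsigned bits;
    for (bits = 12; bits <= 16; bits++)
        printf("%2u-bit codes: roughly %lu bytes of table memory\n",
               bits, lzw_decode_table_bytes(bits));
    return 0;
}

[End of editor's note. Under these assumptions, 12-bit codes need roughly
16K of table space and 13-bit codes roughly 32K, which is why the choice of
code width made by the compressor fixes the extractor's memory bill.]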
From RAHUL DHESI Msg #8119 *ARCHIVERS* (Rcvd)
To STEVE MANES Wed Feb 4, 1987 10:37pm (0:07)
ARC.EXE will not add a new compression method.
Thom said version 5.20 will be backwards compatible to version 5.00, which
means all that SEA is doing to improve compression is what Katz and Buerg did
some months ago: improving the table-resetting algorithm.
I think the right way to go, if Thom wants to add a new compression method, is
to make it compatible with PKARC's squashing. That is the only way to avoid
giving Zoo control of the situation.
Or Thom may keep ARC the way it is, and let it stagnate.
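[Editor's note: the "table-resetting" improvement mentioned above is commonly
done by watching the recent compression ratio and emitting a clear code when
the ratio degrades. The C sketch below shows only that monitoring logic; the
window size, the threshold test, and the clear-code value are illustrative
assumptions, not the actual parameters used by ARC, PKARC, or Zoo.]

#include <stddef.h>

#define CLEAR_CODE   256      /* conventional LZW clear code (assumption) */
#define WINDOW_BYTES 8192     /* re-evaluate after every 8K of input      */

struct ratio_monitor {        /* initialize to all zeros before first use */
    unsigned long in_bytes, out_bytes;   /* counts since the last check      */
    double last_ratio;                   /* output/input ratio at last check */
};

/* Call after each chunk is coded.  Returns nonzero when the compressor
 * should emit CLEAR_CODE and rebuild its string table, i.e. when the
 * ratio over the current window is worse than over the previous one. */
int should_reset(struct ratio_monitor *m, size_t in_now, size_t out_now)
{
    double ratio;
    int reset;

    m->in_bytes  += in_now;
    m->out_bytes += out_now;
    if (m->in_bytes < WINDOW_BYTES)
        return 0;

    ratio = (double)m->out_bytes / (double)m->in_bytes;
    reset = (m->last_ratio > 0.0 && ratio > m->last_ratio);

    m->last_ratio = ratio;
    m->in_bytes = m->out_bytes = 0;
    return reset;
}

[End of editor's note. Because the reset is signalled in the output stream
itself, a decoder that already understands the clear code stays compatible,
which is why this kind of change can be made without breaking old extractors.]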
From RAHUL DHESI Msg #8118 *ARCHIVERS* (Rcvd)
To STEVE MANES Wed Feb 4, 1987 10:34pm (0:04)
When I saw the controversy about Phil's Squashing, I chuckled!
Because...Zoo users know which program will extract their archives!
From STEVE MANES Msg #7639 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Jan 25, 1987 12:44am (0:15)
Thanks, Bruce.
You obviously did a lot of work on this and the results more or less speak for
themselves.
I would, however, hasten to qualify for others that these are JUST speed and
archive filesize comparisons for IBM machines. Depending on one's needs, they
may or may not be definitive of the "best archiver". For instance, if one
needs an archiver to send compressed text files to other types of machines,
such as an Amiga or ST, PKARC with Squashing is inappropriate insofar as there
is no PKARC for anything but IBM computers. Only Thom's ARC and Rahul's ZOO
support other hardware. Granted, PKARC /oc will create ARC-compatible files
but since PKARC is unavailable for, say, UNIX then users will need ARC to
uncompress those files. Therefore, one would have to judge ARC against ZOO in
that environment.
Also not benchmarked are the various frills supported by each archiver, like
ARC's "run" command and ZOO's long filenames. And there are subtler
benchmarks to be considered, such as each compressor's error detection and
recovery, or the fact that a PKARC Squashed file will LOOK like a standard ARC
file but will not uncompress under SEA's ARC. That can create problems, and
already has (I still think Squashed PKARC files should have their own file
extension for this reason).
However, for file archiving of files that will ONLY be used on an IBM which
ONLY uses PKARC 2.0, then the results of your tests are fairly decisive.
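[Editor's note: Steve's point about Squashed entries hiding inside
ordinary-looking .ARC files can be checked mechanically. The C sketch below
walks an archive's local headers and flags any entry whose compression-method
byte is outside the range a vanilla SEA ARC understands. The header layout and
the method-number cutoff follow commonly circulated descriptions of the ARC
format and should be treated as assumptions; very old "method 1" entries,
which use a shorter header, are not handled.]

#include <stdio.h>

#define ARC_MARK   0x1A
#define MAX_METHOD 8   /* methods above this are assumed unknown to ARC 5.x */

static unsigned long read_le32(FILE *fp)
{
    unsigned long v = 0;
    int i;
    for (i = 0; i < 4; i++)
        v |= (unsigned long)(fgetc(fp) & 0xFF) << (8 * i);
    return v;
}

int main(int argc, char **argv)
{
    FILE *fp;
    int mark, method;
    char name[14];
    unsigned long csize;

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: arccheck file.arc\n");
        return 1;
    }
    while ((mark = fgetc(fp)) == ARC_MARK) {
        method = fgetc(fp);
        if (method <= 0)                     /* end-of-archive marker (or EOF) */
            break;
        fread(name, 1, 13, fp);
        name[13] = '\0';
        csize = read_le32(fp);
        fseek(fp, 2 + 2 + 2 + 4, SEEK_CUR);  /* skip date, time, CRC, orig size */
        if (method > MAX_METHOD)
            printf("%-13s uses compression method %d; plain ARC cannot extract it\n",
                   name, method);
        fseek(fp, (long)csize, SEEK_CUR);    /* skip the compressed data */
    }
    fclose(fp);
    return 0;
}

[End of editor's note.]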
From STEVE MANES Msg #7677 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Sun Jan 25, 1987 7:10pm (0:06)
For example (addendum to reply Left of this)....
Rahul uploaded his new ZOO140 file archiver yesterday in ARC format. Somehow,
there was a cabbaged header in the ARC. When attempting to List the contents
of the file with PKXARC /v, it went into space. Only Thom's ARC.EXE correctly
reported the header error and exited normally. So there are other attributes
to consider than speed and archive size.
From BRUCE GOLDMAN Msg #7724 *ARCHIVERS* (Rcvd)
To STEVE MANES Tue Jan 27, 1987 2:38am (0:11)
The sysop of (I believe) Datacom used ARC, and he claims it destroyed some of
his other files. I tend to find this unbelievable, but he swears it to be
true. He did say he was using the COM version rather than the actual EXE
release, so it is possible.
I have found PK to be flawless in error reporting. I never had a lockup
using PK, but if I did, throwing the switch isn't such a major crime.
I am sure that if ZOO had as many users as Katz does, with their scrutiny,
unforeseeable bugs just might pop up in it too.
One of my favorite messages I received on a BBS (PCSI), after reporting a few
bugs in a beta version of QMODEM, was that when QMODEM 2.3 was released it
would be 100% bug free. What a dreamer!
I don't say PKARC is a godsend, merely that right now it is the best thing out
there for my purpose. If ZOO comes in with some incredible stat that I find
useful, I would definitely reconsider; right now all the stats I care about
are held by PKARC. Plus it has complete compatibility when de-arcing.
From RAHUL DHESI Msg #7730 *ARCHIVERS* (Rcvd)
To STEVE MANES Tue Jan 27, 1987 5:23am (0:05)
Why couldn't you recover the data alone and ignore the header?
Because the ARC format is not designed to permit this. Zoo format is so
designed. And right now, anybody who wanted to could create a utility that
would let you recover data from within an archive with a corrupted header.
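[Editor's note: a minimal sketch in C of the kind of salvage utility Rahul
describes: ignore the archive header entirely and scan the file for the magic
tag that, in a format designed for this, precedes each stored entry. The tag
value and the idea of a fixed 4-byte tag are illustrative assumptions, not
Zoo's actual on-disk layout.]

#include <stdio.h>
#include <string.h>

/* Illustrative 4-byte tag assumed to precede each stored entry. */
static const unsigned char ENTRY_MAGIC[4] = { 0xDC, 0xA7, 0xC4, 0xFD };

int main(int argc, char **argv)
{
    FILE *fp;
    unsigned char window[4] = { 0, 0, 0, 0 };
    long offset = 0;
    int c;

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: salvage archive\n");
        return 1;
    }
    while ((c = fgetc(fp)) != EOF) {
        /* Slide the last four bytes through a small window. */
        memmove(window, window + 1, 3);
        window[3] = (unsigned char)c;
        offset++;
        if (offset >= 4 && memcmp(window, ENTRY_MAGIC, 4) == 0)
            printf("possible entry header at offset %ld\n", offset - 4);
    }
    fclose(fp);
    return 0;
}

[End of editor's note. Once the candidate entries are located, each one's own
header supplies whatever is needed to decompress it, so a trashed archive
header costs you nothing but the scan.]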
From DEAN COOPER Msg #7739 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Tue Jan 27, 1987 8:19am (0:05)
Bruce, let's not forget to compare features...
Bruce, if you will notice... DWC has far more features than anyone else out
there and of course, I will be adding more... Now it may be a hard task...
but maybe you should add a feature list and then check off which archivers
support each feature... This would help users make up their own minds...
Dean
From DEAN COOPER Msg #7771 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Wed Jan 28, 1987 7:34am (0:17)
Some more thoughts....
For your information... the reason DWC does better on COM files is that I
detect in the middle of compression that the file is growing instead of
shrinking... In this case, my program backtracks, and outputs a block of the
file without compression... The next block starts the compression over
again... It so happens that COM files usually have parts in them that can't
be compressed... So my little trick makes for better compression....
My "z" algorithm uses an even more complicated test for deciding when it
should reset the Lempel-Ziv table... But it not only resets; it also
backtracks as many as two blocks...
Another thing... I saw you mentioned that mine would be better if I
speeded it up like I said I would... well, first you should know that I
released the Prototype mainly so that Bob Mahoney on Exec-PC could do a BIG
test of compressions... He was planning on waiting till later to test for
speed... But, since I've released it, I've gotten a little lazy in getting
back to work on it...
So now that I see a little challenge, I just can't pass it up... So last
night I started rewriting my compression algorithm in assembler (taking the
compiler's assembler output for starters)... There were several very obvious
places where the compiler was pretty dumb, and I fixed those up... In a little
test case where my program had taken 140 seconds and PKARC 88, now with
just a few obvious fix-ups my program takes 114 seconds. So, I've already
narrowed the gap by half...
This also brings up the point of why my decompressor is slower... Well,
right now it's even slower... It's simply that I've done a lot more work on
the compressor... But I'll get around to the other soon enough...
Dean W. Cooper
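[Editor's note: the back-off-and-store trick Dean describes above can be
sketched as follows: compress each block into a scratch buffer and, if the
result is no smaller than the input, write the block verbatim and restart
compression on the next block. Everything here is illustrative, not DWC's
code: the block size and flag bytes are invented, and a trivial run-length
coder stands in for DWC's real Lempel-Ziv compressor so the example is
self-contained. A real format would also record each block's compressed
length so the extractor knows where the next block starts.]

#include <stdio.h>

#define BLOCK_SIZE     8192
#define BLK_STORED     0          /* block written verbatim   */
#define BLK_COMPRESSED 1          /* block written compressed */

/* Stand-in block compressor (naive run-length coding), used only so the
 * sketch is complete.  Returns output length, or 0 if it ran out of room. */
static size_t compress_block(const unsigned char *in, size_t n,
                             unsigned char *out, size_t outmax)
{
    size_t i = 0, o = 0;
    while (i < n) {
        unsigned char b = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == b && run < 255)
            run++;
        if (o + 2 > outmax)
            return 0;                   /* output would outgrow the buffer */
        out[o++] = (unsigned char)run;  /* (count, byte) pairs */
        out[o++] = b;
        i += run;
    }
    return o;
}

/* Compress block by block, falling back to a stored block whenever
 * compression would make the block grow instead of shrink. */
static void pack_file(FILE *in, FILE *out)
{
    static unsigned char raw[BLOCK_SIZE], packed[BLOCK_SIZE];
    size_t n, m;

    while ((n = fread(raw, 1, BLOCK_SIZE, in)) > 0) {
        m = compress_block(raw, n, packed, sizeof packed);
        if (m == 0 || m >= n) {         /* block grew: back off, store it raw */
            fputc(BLK_STORED, out);
            fwrite(raw, 1, n, out);
        } else {
            fputc(BLK_COMPRESSED, out);
            fwrite(packed, 1, m, out);
        }
    }
}

int main(int argc, char **argv)
{
    FILE *in, *out;

    if (argc != 3 || (in = fopen(argv[1], "rb")) == NULL ||
        (out = fopen(argv[2], "wb")) == NULL) {
        fprintf(stderr, "usage: packdemo infile outfile\n");
        return 1;
    }
    pack_file(in, out);
    fclose(in);
    fclose(out);
    return 0;
}

[End of editor's note.]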
From DEAN COOPER Msg #7772 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Wed Jan 28, 1987 7:41am (0:22)
Bruce, here's that table again... fixed up...
Bruce, sorry for getting that table all messed up... It's just that I
didn't know the editor would reformat everything... So, here it is again....
------------------------------------------------------------------------
Informal test of archivers done 1/10/87. These are the test cases I used
to test the DWC archiver when developing it. They are not intended to be
exhaustive or complete.
Versions used:  ARC    - ARC 5.12
                OLD PK - PKARC 1.2
                PK-1   - PKARC 2.0 with /oc switch
                PK-2   - PKARC 2.0 without /oc switch
                ZOO    - ZOO 1.31
                DWC-1  - DWC Prototype A2 without "z" option
                DWC-2  - DWC Prototype A2 with "z" option
Set 1: 12 Large text files (Documentation and C source code)
    2:  8 Large .LIB files (C libraries)
    3:  7 Large .EXE files (C compiler and CodeView)
    4:  1 PROCOMM.EXE
    5: 21 Assorted games
    6: 29 Fonts and drivers from Microsoft Windows
    7: 26 PCPaint package and a couple of picture files
     #  Original       ARC    OLD PK      PK-1      PK-2       ZOO     DWC-1     DWC-2
  =====================================================================================
     1   689,067   274,641   276,677   274,275   258,399   262,492   253,284   243,647
     2   356,864   263,154   243,473   246,442   246,442   238,501   232,427   226,689
     3   624,827   539,664   470,329   468,457   468,457   466,991   455,938   451,667
     4   165,456   115,979   103,122   103,492   103,492   103,792   103,348   103,072
     5   400,870   312,487   275,635   277,391   277,475   280,677   275,459   272,908
     6   334,403   222,355   208,044   208,338   208,338   210,569   209,942   209,353
     7   241,172   156,784   144,627   144,461   144,461   144,788   144,635   143,770
  =====================================================================================
Totals 2,812,659 1,885,064 1,721,907 1,722,856 1,707,064 1,707,810 1,675,033 1,651,106
Shrunk         -    32.98%    38.78%    38.75%    39.31%    39.28%    40.45%    41.30%
Set 8: 130 Assorted EXE, COM, and system files in my DOS directory
     #  Original       ARC    OLD PK      PK-1      PK-2       ZOO     DWC-1     DWC-2
  =====================================================================================
     8 2,184,025 1,737,196 1,587,990 1,587,227 1,587,282 1,594,085 1,576,540 1,560,228
Shrunk         -    20.46%    27.29%    27.33%    27.32%    27.01%    27.81%    28.56%
From RAHUL DHESI Msg #7821 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Jan 29, 1987 6:27am (0:05)
Table formatting and memory needs
Dean, your table got reformatted by Magpie so try again.
Also, you have omitted an important factor: how much memory the archiver
needs. Zoo and PKARC can both run in a small memory partition in a
multitasking environment.
From DEAN COOPER Msg #7825 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Jan 29, 1987 7:28am (0:06)
True... Very true...
Yes, I just "happened" to omit that point... Anyway, I should stress that my
archiver is really only intended for the MS-DOS world... Of course it could be
ported... but that just isn't a concern for me... I gladly leave you the
corner on that market... That is, unless you start making gobs of money, at
which point I'll have to change my stance....
From DEAN COOPER Msg #9038 *ARCHIVERS* (Rcvd)
To BRUCE GOLDMAN Fri Feb 20, 1987 7:42am (0:08)
Bruce, I think I found out why my one case was slower than Phil's....
Well, I haven't gotten as much work done as I had hoped to, but even so,
my decompressor is almost as fast as Phil's, except in certain cases. The
most profound case is the same one I did badly on when compressing. It seems that
"lots of small files" was the killer. The reason is that Phil evidently loads
his large buffers with more than one file at a time if possible. Even though
I use large buffers, I work with one file at a time. So, in the case of all
these small files, my performance suffers.... But now that I know the
problem, I'll be fixing it up...
Just wanted to keep you informed... Dean
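[Editor's note: the buffering difference Dean describes can be sketched as
follows: rather than issuing one read per small file, fill a single large
buffer from as many files as will fit, packed back to back. The names, the
64K buffer size, and the surrounding program are illustrative assumptions; a
real archiver would also record where each file begins and ends inside the
buffer.]

#include <stdio.h>
#include <stddef.h>

#define BIG_BUF (64u * 1024u)

struct batch_reader {
    char **names;   /* list of input file names       */
    int    count;   /* how many names                 */
    int    next;    /* index of the next name to open */
    FILE  *open;    /* file currently being drained   */
};

/* Fill buf with data from as many files as will fit, packed back to back.
 * Returns the number of bytes placed in buf; 0 once every file is consumed. */
static size_t batch_fill(struct batch_reader *r, unsigned char *buf, size_t bufsize)
{
    size_t used = 0;

    while (used < bufsize) {
        if (r->open == NULL) {
            if (r->next >= r->count)
                break;                    /* no files left */
            r->open = fopen(r->names[r->next++], "rb");
            if (r->open == NULL)
                continue;                 /* skip unreadable files */
        }
        used += fread(buf + used, 1, bufsize - used, r->open);
        if (feof(r->open) || ferror(r->open)) {
            fclose(r->open);              /* this file is done; the next  */
            r->open = NULL;               /* one gets packed behind it    */
        }
        /* otherwise the buffer is full and this file resumes on the next call */
    }
    return used;
}

int main(int argc, char **argv)
{
    static unsigned char buf[BIG_BUF];
    struct batch_reader r;
    size_t n;
    int fills = 0;

    r.names = argv + 1;
    r.count = argc - 1;
    r.next  = 0;
    r.open  = NULL;

    while ((n = batch_fill(&r, buf, sizeof buf)) > 0)
        printf("buffer fill %d: %lu bytes\n", ++fills, (unsigned long)n);
    return 0;
}

[End of editor's note. With many tiny files, the per-file read overhead
dominates, so amortizing it across one big buffer is where the speed
difference Dean noticed comes from.]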
From PHIL KATZ Msg #10860 *ARCHIVERS*
To ALL Tue Mar 24, 1987 8:36pm (0:08)
Vicious aren't they?
Steve,
Geez, I've been reading thru some of these threads (to the best that my
confused understanding of the message system will allow) and am aghast at the
fierce and spiteful arguing going on. I mean, ARC, PKARC/PKXARC, DWC, and ZOO
are undeniably ALL very good programs. I don't think that all this bickering
like children reflects well on any of us. There is certainly enough room in
the software industry for all these products to exist, and this "fight to the
death" attitude will only harm everyone in the end.
Imagine what kind of product Thom and Dean and Rahul and I could come up with
if we were all working together, instead of against each other!
>Phil>
From STEVE MANES Msg #10868 *ARCHIVERS*
To PHIL KATZ Tue Mar 24, 1987 11:31pm (0:05)
I'll buy that.
Well, ARCHIVERS will continue even after I upload this to CIS so I'd be real
happy to see discussion amongst all of you about developing this
super-archiver. I'll do whatever I can to help out. If you like, I'll set
you guys (the authors only) up with a private *Group* conference to discuss a
collaborative effort.
From DEAN COOPER Msg #10874 *ARCHIVERS*
To PHIL KATZ Wed Mar 25, 1987 7:59am (0:07)
Hear, hear! I keep reading the same thing all over: "Why don't you guys get
together and come up with a compatible standard?" I agree, and am willing to
work on such.... I think that we should come up with one good file format
that all of us can be happy with. And I would like as few different
compression algorithms to worry about as possible. That is, I would
like to get away from needing to support all prior versions. Let's start from
scratch and build something we can all be happy with.... What do you say??
Dean
From RAHUL DHESI Msg #10909 *ARCHIVERS* (Rcvd)
To DEAN COOPER Thu Mar 26, 1987 12:20am (0:04)
We can't work as a team because the driving force would be lost.
I call it "friendly rivalry".
Why don't you all join in with the Zoo effort? (grin)
From DEAN COOPER Msg #10914 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Thu Mar 26, 1987 7:44am (0:05)
We don't have to work as a team, just agree on a file format that would be
acceptable to all of us.... We can still create our own programs to keep up
the rivalry. Say, what has become of your interest in incorporating DWC code
into your program??
From RAHUL DHESI Msg #10933 *ARCHIVERS* (Rcvd)
To DEAN COOPER Fri Mar 27, 1987 12:00am (0:04)
I haven't yet downloaded DWC's latest version.
But I intend to do so soon and will then try to incorporate its code into Zoo.
I'll keep you posted.
From DEAN COOPER Msg #10942 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Fri Mar 27, 1987 9:24am (0:06)
Rahul, things have been really hectic lately; I'm coming out with releases
right and left as people find bugs and want this or that feature... Since
they're just minor improvements, I haven't spread them around, but tell me when
you are really going to take the code and I'll upload my very latest stuff.
I'm up to Prototype A4.4 right now, which followed A4, A4.2, and A4.3....
I'll have to make my self-extractor program smaller someday, just to keep
you on your toes.... Dean
From RAHUL DHESI Msg #10993 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sat Mar 28, 1987 2:29pm (0:03)
Great! Upload the latest now!
And I will download it. Please let me know when you do.
From DEAN COOPER Msg #11055 *ARCHIVERS* (Rcvd)
To RAHUL DHESI Mon Mar 30, 1987 8:28am (0:03)
OK... Release A4.4 is coming your way... I'll upload it and the source
code right now... Dean
From JOE ZITT Msg #10991 *ARCHIVERS*
To PHIL KATZ Sat Mar 28, 1987 1:42pm (0:03)
So go for it!
Get together and design the ultimate archiver... whadday'all gotta lose?
From PATRICK BENNETT Msg #7945 *ARCHIVERS* (Rcvd)
To DEAN COOPER Sat Jan 31, 1987 4:11pm (0:06)
Dean, I was just looking through the DWC source in the archive window, and
noticed the structure you have defined for what goes at the end of every
archive... I noticed that the name of the header file is stored there... Why
exactly did you do that? What if the archive were to contain lots of files
(say, several hundred or more)? It would have to search to the end, get the
name, then search again to get the header file...
From DEAN COOPER Msg #8016 *ARCHIVERS* (Rcvd)
To PATRICK BENNETT Mon Feb 2, 1987 8:05am (0:08)
Here's what I do...
On almost every command, I go to the end of the archive, read in the archive's
header structure and all of the directory entries... This is the first thing I
do... then depending on the command, I go to the place in the archive where
the compressed data is...
Please note that in my scheme, I can read all of the directory entries
very fast, and then I know exactly where to go in the archive for the
compressed data... I never have to go searching through the file... Since
files are on a random access medium, seeks do not have much of a time
penalty...
Dean P.S., thanks for the interest in my archiver!!
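[Editor's note: a sketch of the end-of-archive directory scheme Dean
describes: one seek to the tail of the file reads a small trailer, the
trailer says where the directory entries start, and each entry carries the
offset of its own compressed data, so no scanning is ever needed. Every
structure, field name, and size below is a hypothetical illustration, not
DWC's actual on-disk format; a real implementation would also read the
fields one at a time to avoid struct-padding and byte-order surprises.]

#include <stdio.h>

struct trailer {                 /* fixed-size record at the very end */
    long dir_offset;             /* where the directory entries start */
    long dir_count;              /* how many entries follow           */
};

struct dir_entry {               /* one per archived file             */
    char name[14];
    long orig_size, comp_size;
    long data_offset;            /* where this file's compressed data lives */
};

int main(int argc, char **argv)
{
    FILE *fp;
    struct trailer t;
    struct dir_entry e;
    long i;

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: toylist archive\n");
        return 1;
    }

    /* One seek reads the trailer, a second reads the whole directory. */
    fseek(fp, -(long)sizeof t, SEEK_END);
    fread(&t, sizeof t, 1, fp);
    fseek(fp, t.dir_offset, SEEK_SET);

    for (i = 0; i < t.dir_count; i++) {
        fread(&e, sizeof e, 1, fp);
        printf("%-13.13s %8ld -> %8ld bytes, data at offset %ld\n",
               e.name, e.orig_size, e.comp_size, e.data_offset);
        /* To extract: fseek(fp, e.data_offset, SEEK_SET) and decompress
         * comp_size bytes; no search through the archive is required. */
    }
    fclose(fp);
    return 0;
}

[End of editor's note.]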
From PATRICK BENNETT Msg #8023 *ARCHIVERS* (Rcvd)
To DEAN COOPER Mon Feb 2, 1987 11:41am (0:05)
Ok, I'll have to check out the structure more carefully this time... My
message before was left after I noticed the position of some of the fields...
Didn't continue from there.... I'll d/l (unless I already did, hmmmmmmmm) and
get deeper into it...
[end of the ARCHIVERS thread]