Compression

Would you use compression....

NOTE: You cannot compress a FAT32 volume. Why? Well, why try to compress a 4K cluster in the first place? The clusters are already small, so there's little wasted space to win back.


First, a word about Zip files:

The theoretical maximum allowed by the zip file specification is 4 gigabytes.  Zip files processed by WinZip, however, are currently limited in size to 2 gigabytes.  (Technical note:  the reason for the different limitations is that the Zip file format specifies an unsigned 32-bit integer for the size of a Zip file; the Microsoft C library run-time I/O functions that WinZip employs, however, use a signed 32-bit integer for the file size.  This aspect of the Microsoft I/O functions limits the size of a Zip file that WinZip can process to 2 gigabytes.)
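
If those two numbers seem abstract, here is a minimal Python sketch of the arithmetic behind them (illustrative only; this is not WinZip's actual code):

UNSIGNED_32_MAX = 2**32 - 1   # 4,294,967,295 bytes, about 4 GB (the Zip spec limit)
SIGNED_32_MAX   = 2**31 - 1   # 2,147,483,647 bytes, about 2 GB (signed C I/O limit)

def fits_signed_32(size_in_bytes):
    # True if a file of this size is safe for signed 32-bit I/O functions.
    return size_in_bytes <= SIGNED_32_MAX

print("Zip spec maximum:  ", UNSIGNED_32_MAX, "bytes")
print("Signed I/O maximum:", SIGNED_32_MAX, "bytes")
print(fits_signed_32(3 * 2**30))   # False: a 3 GB archive overflows a signed 32-bit size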

So You Want To Compress:
What kind of compression system do you have: SpaceSaver, DoubleSpace, DriveSpace, DriveSpace 3, etc.? The format makes a difference, and so does the program you happen to have. I can't even dream of anyone using anything other than Windows 95 Plus!, which gives you DriveSpace 3, so we're going to talk about DriveSpace 3 only. If the subject confuses you, mail me. Remember also, I recommended that a Pentium be used.

What Is Compression Anyway:
DriveSpace 3 achieves a higher ratio by compressing data into compressed volume files (CVFs) using 32K clusters, instead of the 8K clusters that DoubleSpace and DriveSpace used. DriveSpace 3 can also store more data via improved fragmentation handling. As with earlier CVF versions, a DriveSpace 3 CVF has a name in the form DRVSPACE.nnn, where nnn is the CVF sequence number in the range 000 through 254.
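
To see why cluster size matters at all (and why the FAT32 note above holds), here is a back-of-the-envelope Python sketch of cluster slack. The 10,000-byte file is just an example, and the real CVF layout is more involved than this:

def clusters_needed(file_size, cluster_size):
    # Whole clusters a file occupies (ceiling division).
    return -(-file_size // cluster_size)

def slack(file_size, cluster_size):
    # Bytes allocated but left unused in the file's last cluster.
    return clusters_needed(file_size, cluster_size) * cluster_size - file_size

for cs in (4 * 1024, 8 * 1024, 32 * 1024):
    print(cs // 1024, "K clusters: a 10,000-byte file wastes", slack(10_000, cs), "bytes")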

Using A Floppy Disk:
Remember, if you send your floppy to a friend, s/he must have DriveSpace too. So go ahead and compress your disk; now you have more space on it. You can use it just as you did before compressing. In short, DriveSpace mounts the floppy as if it were uncompressed.

Why The Clean-Up:
Don't compress all your disk errors, duplicate files, redundant files, too many fonts, screen savers, wallpaper, and so on. I'm going to assume that you know how to use the FIND command to hunt down all those miscellaneous files that you need to delete. Remember, when you're getting full, you have to start thinking about all those fat files and kicking heinie for survival. Again, if you don't know how, mail me.

Preparing The H/D For Compression:
You need to run ScanDisk (thorough) and then a standard ScanDisk. You might have to go back and forth a couple of times to get the thorough pass finished, but that's OK.

Making Decisions About The Menu:
Go to Programs/Accessories/System Tools/DriveSpace 3 and view the menu. Don't worry about the Advanced tab or about adjusting free space, because the drive isn't compressed yet. Highlight and click on Compress and you'll see what the compression tool can do: it will create another drive and show you the expected MB savings. Accept its choice for the time being, because you can adjust free space later. If you're running a 150MHz Pentium or higher, then you can think about checking the button for the UltraPack and HiPack ratios; otherwise select normal compression, the reason being that everything is going to be a bit slower. This will take a while, so go ahead if you want compression.

The MS Plus! System Agent:
The System Agent runs in the background all the time, compressing files whenever you save them. You need to do a little scheduling for the utilities in the Agent to work; that part is totally up to you. I run ScanDisk every day after closing all apps, and Defrag every week or so, and you should throw in compression at least once a month, remembering that the System Agent is helping out in the background all the time. Try running your programs for a while to see if you're happy with the speed, and remember it's going to be slower unless you're running a high-end processor and/or a dual or quad CPU motherboard.

Adjusting Free Space and Ratios:
The only thing to do here is give you an example: a friend's 486/100MHz PC with a 1 gig drive. He's done a lot of changing back and forth, and he was happy with the host drive set at 290-300 MB. You can experiment for your system, or e-mail me.


History of Compression, if you're interested....

Huffman coding and arithmetic coding are two very common lossless compression techniques.
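
To give a taste of the first of these, here is a minimal Huffman coding sketch in Python (illustrative, not production code): the more frequent a symbol, the shorter the bit string it receives.

import heapq
from collections import Counter

def huffman_codes(text):
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing a 0 or 1 onto their codes.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)                                     # frequent 'a' gets the shortest code
bits = "".join(codes[ch] for ch in "abracadabra")
print(len(bits), "bits versus", 8 * len("abracadabra"), "bits uncompressed")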

The research of Ziv and Lempel led to a family of compression methods based on the idea of replacing a group of characters by a reference to an earlier occurrence of that same group.
The authors explore one particular form of this compression method which restricts the references so that they can only point into a finite sliding window of text. Their aim is clearly to develop a practical compression algorithm. In particular, the authors attempt to achieve good compression performance, but at the same time they want fast compression and decompression algorithms with reasonable storage requirements. As far as compression performance is concerned, the algorithms are definitely successful; the figures reported in the paper indicate that they achieve compression similar to or better than competing methods. The compression performance of these algorithms is also good on relatively small files, whereas many other methods achieve their best performance only on very large files.
Unfortunately, the authors do not provide all the data needed to compare execution time requirements. I was particularly disappointed to find that they make no speed comparison with the LZW (Lempel-Ziv-Welch) method, which is provided on most UNIX systems as the compress command and is in widespread use.
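
For the curious, here is a bare-bones Python sketch of the sliding-window idea described above (a toy LZ77-style coder, not the authors' actual algorithm): repeated text is replaced by a (distance, length, next-character) reference back into the window.

WINDOW = 4096   # how far back a reference may point

def lz77_compress(data):
    tokens, i = [], 0
    while i < len(data):
        best_len, best_dist = 0, 0
        # Naive longest-match search over the window (quadratic, but clear).
        for j in range(max(0, i - WINDOW), i):
            length = 0
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        tokens.append((best_dist, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decompress(tokens):
    out = []
    for dist, length, ch in tokens:
        for _ in range(length):
            out.append(out[-dist])   # copy from the sliding window
        out.append(ch)
    return "".join(out)

tokens = lz77_compress("abcabcabcx")
print(tokens)                        # the repeats collapse into one back-reference
assert lz77_decompress(tokens) == "abcabcabcx"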

Maybe I should clarify my question...

There are two types of compression, lossy and lossless.

Lossy compression implies that not all of the data is restored upon decompression, but the signal is "good enough" for human beings not to perceive the difference. A transform or filter bank and a psychoacoustic model are usually involved. In the case of Twin VQ, from NTT, vector quantization is also used. (A toy sketch of the basic quantize-and-lose idea appears after the list below.)

Examples of lossy audio compression are:
AC-3
MPEGx of all flavors
PAC
ATRAC
Twin VQ
ADPCM
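
As promised above, here is a toy Python illustration of the lossy idea (not any real codec): coarse quantization throws away low-order detail that decompression cannot bring back.

import math

# Eight samples of a 440 Hz sine at an 8 kHz sample rate.
samples = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8)]

STEP = 0.25                                     # coarser step = more loss
quantized = [round(s / STEP) for s in samples]  # the "compressed" small integers
restored  = [q * STEP for q in quantized]       # the decompressed approximation

for s, r in zip(samples, restored):
    print("original %+.4f -> restored %+.4f (error %.4f)" % (s, r, abs(s - r)))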

Then there is lossless compression. This is where the entropy of the data, from Shannon's theorem, is exploited; Huffman coding and arithmetic coding are common techniques used in lossless compression. Lossless compression geared toward audio also exploits the periodicity of the signal. So no "bit" of audio is lost; only how one "reorders" the information, per bit, is manipulated. (A sketch of this predictive trick appears after the list of examples below.)

Examples are:
Shorten
HEAR
OggSquish
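
And here is the predictive sketch promised above, in Python. Real coders such as Shorten use fixed polynomial predictors plus Rice coding; this first-order difference is a simplified version of the same idea, and it is exactly reversible:

def encode_residuals(samples):
    # First-order prediction: store sample[n] - sample[n-1] instead of sample[n].
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def decode_residuals(residuals):
    # Exact inverse: a running sum restores the original samples.
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

pcm = [100, 104, 109, 113, 116, 118, 119, 119]   # a smooth, correlated signal
res = encode_residuals(pcm)
print(res)                            # [100, 4, 5, 4, 3, 2, 1, 0] -- small numbers
assert decode_residuals(res) == pcm   # lossless round trip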

My question really is asking: does anyone know of any lossless audio compression algorithms beyond the three listed above, and the companies that have them?

