CMIX

cmix is a lossless data compression program aimed at optimizing compression ratio at the cost of high CPU/memory usage. cmix is free software distributed under the GNU General Public License.

cmix is currently ranked first place on the Large Text Compression Benchmark and the Silesia Open Source Compression Benchmark. It also has state-of-the-art results on the Calgary Corpus and Canterbury Corpus. cmix has surpassed the winning entry of the Hutter Prize (but exceeds the memory limit of the contest).

cmix runs on Linux, Windows, and Mac OS X. At least 32 GB of RAM is recommended to run cmix. Feel free to contact me at byron@byronknoll.com if you have any questions.

GitHub repository: https://github.com/byronknoll/cmix

Downloads

Source Code     Release Date        Windows Executable
cmix-v13.zip    April 24, 2017      cmix-v13-windows.zip
cmix-v12.zip    November 7, 2016    cmix-v12-windows.zip
cmix-v11.zip    July 3, 2016        cmix-v11-windows.zip
cmix-v10.zip    May 30, 2016        cmix-v10-windows.zip
cmix-v9.zip     April 8, 2016       cmix-v9-windows.zip
cmix-v8.zip     November 10, 2015
cmix-v7.zip     February 4, 2015
cmix-v6.zip     September 2, 2014
cmix-v5.zip     August 13, 2014
cmix-v4.zip     July 23, 2014
cmix-v3.zip     June 27, 2014
cmix-v2.zip     May 29, 2014
cmix-v1.zip     April 13, 2014

Benchmarks

Corpus        Original size   Compressed size   Compression time   Memory usage
              (bytes)         (bytes)           (seconds)          (KiB)
calgary.tar         3152896            549277            2628.21       20133524
silesia           211938580          29896623
enwik6              1000000            180444            1041.53       19712644
enwik8            100000000          15323969           59085.97       24648612
enwik9           1000000000         120480684          617346.61      27803516

Compression and decompression times are approximately equal. The compressed size can vary slightly depending on the options used to compile the executable. The results here used "-Ofast -march=native", which produces different results than the Windows executable. See the README file in the source code for more information.

Calgary Corpus

File      Original size   Compressed size   Cross entropy
          (bytes)         (bytes)           (bits per byte)
BIB              111261             17679           1.271
BOOK1            768771            177489           1.847
BOOK2            610856            108870           1.426
GEO              102400             43390           3.390
NEWS             377109             79136           1.679
OBJ1              21504              7173           2.669
OBJ2             246814             40965           1.328
PAPER1            53161             11061           1.665
PAPER2            82199             17499           1.703
PIC              513216             22392           0.349
PROGC             39611              8544           1.726
PROGL             71646              9245           1.032
PROGP             49379              6458           1.046
TRANS             93695             10272           0.877

Canterbury Corpus

File           Original size   Compressed size   Cross entropy
               (bytes)         (bytes)           (bits per byte)
alice29.txt           152089             31707           1.668
asyoulik.txt          125179             29922           1.912
cp.html                24603              4886           1.589
fields.c               11150              2027           1.454
grammar.lsp             3721               812           1.746
kennedy.xls          1029744              8422           0.065
lcet10.txt            426754             75124           1.408
plrabn12.txt          481861            113941           1.892
ptt5                  513216             22392           0.349
sum                    38240              7124           1.490
xargs.1                 4227              1167           2.209

Description

I started working on cmix in December 2013. Most of the ideas I implemented came from the book Data Compression Explained by Matt Mahoney.

cmix uses three main components:

  1. Preprocessing
  2. Model prediction
  3. Context mixing

The preprocessing stage transforms the input data into a more easily compressible form. This data is then compressed in a single pass, one bit at a time: cmix generates a probabilistic prediction for each bit, and the bit is encoded using arithmetic coding.
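To make the coding step concrete, here is a minimal sketch of a bitwise arithmetic encoder in the carryless style used by the PAQ family. This is illustrative only, not cmix's actual coder: the 16-bit probability scaling, byte-wise renormalization, and lack of underflow handling are simplifications.

    #include <cstdint>
    #include <vector>

    // Sketch of a bitwise arithmetic encoder. Given P(bit = 1) for each
    // bit, it narrows a 32-bit interval [low, high] and emits leading
    // bytes once low and high agree on them. Simplified: no handling of
    // the rare case where the interval straddles a byte boundary.
    class Encoder {
     public:
      void EncodeBit(int bit, uint32_t p) {  // p = P(bit = 1), scaled to 16 bits.
        // Split the interval in proportion to the predicted probability.
        uint32_t mid = low_ + (uint32_t)(((uint64_t)(high_ - low_) * p) >> 16);
        if (bit) high_ = mid; else low_ = mid + 1;
        // Emit any leading bytes that low_ and high_ now agree on.
        while ((low_ ^ high_) < (1u << 24)) {
          output_.push_back(low_ >> 24);
          low_ <<= 8;
          high_ = (high_ << 8) | 0xFF;
        }
      }
      void Flush() {  // Write enough bytes to pin down the final interval.
        for (int i = 0; i < 4; ++i) { output_.push_back(low_ >> 24); low_ <<= 8; }
      }
     private:
      uint32_t low_ = 0, high_ = 0xFFFFFFFF;
      std::vector<uint8_t> output_;
    };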

cmix uses an ensemble of independent models to predict the probability of each bit in the input stream. The model predictions are combined into a single probability using a context mixing algorithm.
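As a sketch of the combination step (simplified: the real mixer weights are adaptive and context-selected, as described under "Context mixing" below), each model's probability is mapped to the logit domain, combined as a weighted sum, and squashed back into a probability:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // PAQ-style logistic mixing: stretch each prediction, take a
    // weighted sum, and squash the result. The weights here are given
    // as fixed inputs for illustration.
    double Stretch(double p) { return std::log(p / (1.0 - p)); }
    double Squash(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    double Mix(const std::vector<double>& predictions,
               const std::vector<double>& weights) {
      double x = 0;
      for (std::size_t i = 0; i < predictions.size(); ++i)
        x += weights[i] * Stretch(predictions[i]);
      return Squash(x);  // Final P(next bit = 1).
    }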

Architecture

[Figure: cmix architecture diagram]

The byte-level mixer uses a long short-term memory (LSTM) network trained with backpropagation through time. I created another project called lstm-compress which compresses data using only the LSTM. The output of the bit-level context mixer is refined using an algorithm called secondary symbol estimation (SSE).
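Here is a simplified sketch of SSE (also called an adaptive probability map): the input probability is quantized into a bucket, combined with a small context, and looked up in an adaptive table whose entries are nudged toward each observed bit. The bucket count, context size, and learning rate below are placeholders, and real implementations also interpolate between adjacent buckets.

    #include <array>

    // Sketch of secondary symbol estimation: an adaptive table maps
    // (context, quantized probability) to a refined probability.
    class SSE {
     public:
      SSE() {
        // Start with the identity mapping so refinement is initially a no-op.
        for (int c = 0; c < kContexts; ++c)
          for (int b = 0; b < kBuckets; ++b)
            table_[c * kBuckets + b] = (double)b / (kBuckets - 1);
      }
      double Refine(double p, int context) {
        last_index_ = context * kBuckets + (int)(p * (kBuckets - 1));
        return table_[last_index_];
      }
      void Update(int bit, double rate = 0.02) {
        // Move the table entry toward the bit that was actually observed.
        table_[last_index_] += rate * (bit - table_[last_index_]);
      }
     private:
      static const int kBuckets = 33, kContexts = 256;
      std::array<double, kBuckets * kContexts> table_;
      int last_index_ = 0;
    };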

Preprocessing

cmix uses a transformation on three types of data:

  1. Binary executables
  2. Natural language text
  3. Images

The preprocessor uses separate components for detecting the type of data and for performing the transformation.

For images and binary executables, I used code for detection and transformation from the open source paq8pxd program.

I wrote my own code for detecting natural language text. For transforming the text, I used code from the open source paq8hp12any program. This uses an English dictionary and a word-replacing transform. The dictionary is 465,211 bytes.
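As a loose illustration of the idea (a hypothetical scheme, much simpler than paq8hp12any's transform, which also handles capitalization and escape codes): words found in the dictionary are replaced by short byte codes outside the ASCII range, which downstream models predict more easily than the raw spellings.

    #include <string>
    #include <unordered_map>

    // Hypothetical word-replacing transform: dictionary words become
    // one- or two-byte codes above 0x7F. Lead bytes 0x80-0xBF mark a
    // one-byte code; lead bytes 0xC0-0xFF start a two-byte code, so
    // decoding stays unambiguous for indices up to 16383.
    std::string TransformWord(const std::string& word,
                              const std::unordered_map<std::string, int>& dict) {
      auto it = dict.find(word);
      if (it == dict.end()) return word;  // Unknown word: pass through.
      int index = it->second;
      std::string code;
      if (index < 64) {
        code += (char)(0x80 + index);
      } else {
        code += (char)(0xC0 + (index >> 8));
        code += (char)(index & 0xFF);
      }
      return code;
    }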

As seen on the Silesia benchmark, additional preprocessing using the precomp program can improve cmix compression on some files.

Model prediction

cmix v13 uses a total of 1,745 independent models. There are a variety of model types, some specialized for certain kinds of data such as text, executables, or images. For each bit of input data, each model outputs a single floating-point number representing the probability that the next bit will be a 1. The majority of the models come from other open source compression programs: paq8l, paq8pxd, and paq8hp12any.
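For illustration, here is one very simple member of such an ensemble: an order-2 model that hashes the two previous bytes, together with the bits of the current byte seen so far, into a table of adaptive bit probabilities. The table size, hash, and learning rate are placeholders; cmix's actual models use richer contexts and more sophisticated counter states.

    #include <cstdint>
    #include <vector>

    // Sketch of a single order-2 context model: one adaptive probability
    // per hashed context.
    class Order2Model {
     public:
      Order2Model() : table_(1 << 22, 0.5) {}  // All contexts start at P = 0.5.
      double Predict(uint8_t byte1, uint8_t byte2, uint32_t partial_bits) {
        uint32_t h = (byte1 * 31u + byte2) * 31u + partial_bits;
        index_ = h % table_.size();
        return table_[index_];  // P(next bit = 1) under this context.
      }
      void Update(int bit, double rate = 0.05) {
        table_[index_] += rate * (bit - table_[index_]);
      }
     private:
      std::vector<double> table_;
      uint32_t index_ = 0;
    };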

Context mixing

[Figure: mixer diagram]

cmix uses a neural network architecture similar to that of paq8l. cmix v13 uses three layers of connections, with 415,135 neurons and 717,960,394 weights.

There are some differences compared to standard neural network implementations:

  1. Every neuron in the network directly tries to minimize cross entropy, so there is no backpropagation of gradients between layers.
  2. Instead of using a global learning rate, different modules of the network have different learning rate parameters.
  3. Only a small subset of neurons is activated for each prediction. The activations are based on a set of contexts (i.e. functions of the recent input history). The context-dependent activations improve prediction and reduce computational complexity; see the sketch after this list.
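A minimal sketch of one such context-gated mixing neuron follows (the real mixer differs in data representation and precision). The context selects which slice of the weight vector is active, so only a fraction of the weights is read and updated per bit, and the update is a local gradient step on cross entropy, with no gradients passed between layers.

    #include <cmath>
    #include <vector>

    // One context-gated mixing neuron: inputs are stretched
    // probabilities, log(p / (1 - p)), from the previous stage.
    class MixerNeuron {
     public:
      MixerNeuron(int num_inputs, int num_contexts, double learning_rate)
          : weights_(num_inputs * num_contexts, 0.0),
            num_inputs_(num_inputs), rate_(learning_rate) {}

      double Predict(const std::vector<double>& inputs, int context) {
        offset_ = context * num_inputs_;  // Select this context's weight slice.
        inputs_ = inputs;
        double x = 0;
        for (int i = 0; i < num_inputs_; ++i)
          x += weights_[offset_ + i] * inputs[i];
        p_ = 1.0 / (1.0 + std::exp(-x));
        return p_;
      }

      void Update(int bit) {
        // The gradient of cross entropy w.r.t. each active weight is
        // (p - bit) * input, so step in the opposite direction.
        double err = bit - p_;
        for (int i = 0; i < num_inputs_; ++i)
          weights_[offset_ + i] += rate_ * err * inputs_[i];
      }

     private:
      std::vector<double> weights_, inputs_;
      int num_inputs_, offset_ = 0;
      double rate_, p_ = 0.5;
    };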