Computerized Medical Imaging and Graphics 22 (1998) 323–339

A comparison of lossless compression methods for medical images

Juha Kivijärvi a, Tiina Ojala a, Timo Kaukoranta a, Attila Kuba b, László Nyúl b, Olli Nevalainen a,*

a Turku Centre for Computer Science (TUCS), Department of Computer Science, University of Turku, Lemminkäisenkatu 14 A, FIN-20520 Turku, Finland
b Department of Applied Informatics, József Attila University, Szeged, PO Box 652, H-6701 Szeged, Hungary

* Corresponding author. E-mail: olli.nevalainen@utu.fi

Abstract

In this work, lossless grayscale image compression methods are compared on a medical image database. The database contains 10 different types of images with bit rates varying from 8 to 16 bits per pixel. The total number of test images was about 3000, originating from 125 different patient studies. Methods used for compressing the images include seven methods designed for grayscale images and 18 ordinary general-purpose compression programs. Furthermore, four compressed image file formats were used. The results show that the compression ratios strongly depend on the type of the image. The best methods turned out to be TMW, CALIC and JPEG-LS. The analysis step in TMW is very time-consuming. CALIC gives high compression ratios in a reasonable time, whereas JPEG-LS is nearly as effective and very fast. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Lossless image compression; Medical images; Image databases; PACS

1. Introduction

Digital storing of medical image data is problematic, especially because of the requirement to preserve the best possible image quality. In several countries this has led to the interpretation that medical images should be totally reproducible. The reason for this is the uncertainty in the lossy image compression methods' ability to preserve the relevant information. During the image generation phase it is still not known which details will later turn out to be important for the safety of the patients. The lossy image compression techniques, which reproduce completely acceptable decoded images of scenes such as houses or landscapes, do not fulfil the strictest quality requirements needed in medical applications. There is always the possibility that a vague detail might give a reason to suspect some critical changes in a patient's condition. For this reason, the lossy techniques, which tend to give high compression ratios, such as 1:10–1:30, are not acceptable in medical image compression. On the other hand, there are several well-known reasons why savings in the size of the image archives are necessary:

• Digital medical image databases are very large; usually many images are taken of each patient.
• Even if the number of patients were not radically increasing, the patient databases tend to grow because of the demand for very long storage periods of patient data.
• Individual patient records often consist of large images because of the high resolution and the many bits per pixel used.
• The transfer time of digital images depends on the size of the compressed images. The practical usefulness of a picture archiving and communication system (PACS) presupposes that transfer operations fulfil reasonable time requirements.

In many cases the only possibility is to trust in the lossless image compression methods. Unfortunately, their compression ratios are far from those of lossy methods, and thus the development of the algorithms lags behind the hopes set on them.

This research is part of a larger research project [1,2], where the aim is to create a functional hospital information record and search system. Here we try to evaluate the existing lossless compression methods when applied to a sample of the image archive. We measure the compression result by the compression ratio

R = original image size / compressed image size        (1)

Thus, larger values of R indicate a better compression method. It should be noted that the original image size used in the above formula contains no overhead originating from the header information or the byte border alignments.

Problems in the byte border alignment occur if the number of bits for a pixel is not a multiple of the byte length. (For example, if the pixels are expressed by 12 bits, image formats which do not include any compression generally do not utilize the last four bits of the second byte.) We thus calculate the image size simply by the formula

original image size = X dimension × Y dimension × bits per pixel

On the other hand, the compressed image size includes all the header information needed in the reconstruction of the original image data. However, this does not have a great effect on the compression ratios, except for some very small images (for which we also apply an improved format of storage). Note that the images are originally given in the DICOM format [3], which includes patient data headers. We have discarded these headers because they are irrelevant from the compression point of view.
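
For concreteness, the following small Python sketch (our illustration, not part of the original study; the 512 × 512, 12 bpp example values are hypothetical) computes the raw image size defined above and the resulting compression ratio of Eq. (1):

```python
# Sketch of the size and ratio computations of Section 1 (Eq. (1)).

def original_size_bits(x_dim: int, y_dim: int, bits_per_pixel: int) -> int:
    """Raw pixel-data size in bits, with no header or byte-alignment overhead."""
    return x_dim * y_dim * bits_per_pixel

def compression_ratio(x_dim: int, y_dim: int, bits_per_pixel: int,
                      compressed_size_bytes: int) -> float:
    """R = original image size / compressed image size, both measured in bits."""
    return original_size_bits(x_dim, y_dim, bits_per_pixel) / (compressed_size_bytes * 8)

# Example: a hypothetical 512 x 512 slice at 12 bpp compressed to 131072 bytes
print(compression_ratio(512, 512, 12, 131072))  # -> 3.0
```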
Fast performance is a desirable property for an algorithm, and in some cases a long running time may even rule out the application of a space-efficient compression technique. This is why we measured the running times needed for compressing and decompressing the test images.

This paper is organized as follows. In Section 2 we describe the image material we have used and give some characterizations of the image types. In Section 3 we briefly recall the compression techniques used. These consist of seven specific grayscale image compression techniques, four compressed image formats and 18 general-purpose compression techniques. Because the latter group contains numerous freely available programs, we have included only methods that are either known to compress some data very well or are widely used. Section 4 contains the summaries of the experiments and conclusions are drawn in Section 5.

2. Image types

The medical database of the Albert Szent-Györgyi Medical University contains images of five different modalities: computed radiography (CR), computed tomography (CT), magnetic resonance (MR), nuclear medicine (NM) and ultrasound (US). The images originate from 10 types of patient studies. There are altogether 125 patient studies, which contain in total 3147 monochrome images. Table 1 shows a summary of the image types and their properties. We observe that the pixel depths are 8, 12, 13 or 16 bpp and that a particular image type may appear with many different resolutions. The heterogeneity of the image material is increased by the fact that some of the image types are of constant size whereas others may appear in several different sizes. Appendix A contains an example of each image type. Three of the image types are small (64 × 64 or 128 × 128) and their compression requires special care for certain methods. It turns out that a simple compression technique (such as FELICS) works well for these small images, whereas certain advanced techniques (e.g. SPIHT) do not operate on them. One solution is to form a larger image composed of several small images and handle it as a whole; see the merged image types in Table 1. A merged image is composed of the images of one study. Thus, the images to be merged are of the same size and type.

Table 1
Image types a

Mod   | Type          | bpp    | Resolution | Study count | Image count | Filesize/b | Real size/b
------|---------------|--------|------------|-------------|-------------|------------|------------
CR    | PED CHEST     | 12     | several    | 4           | 4           | 35958856   | 26969088
CT    | ABDOMEN       | 12, 13 | several    | 17          | 237         | 126134740  | 100365346
CT    | HEAD          | 12, 13 | several    | 14          | 1138        | 548073169  | 411762116
MR    | ABDOMEN       | 12     | 256 × 256  | 18          | 24          | 3146112    | 2359296
MR    | SPINE         | 12     | several    | 14          | 152         | 44304888   | 43155456
NM    | BRAIN         | 8      | 128 × 128  | 4           | 480         | 7871520    | 7864320
NM    | BRAIN merged  | 8      | other      | 4           | 4           | 7864388    | 7864320
NM    | HB            | 16     | 64 × 64    | 15          | 480         | 3939360    | 3932160
NM    | HB merged     | 16     | other      | 15          | 16          | 3932432    | 3932160
NM    | HEART         | 16     | 64 × 64    | 17          | 510         | 4185570    | 4177920
NM    | HEART merged  | 16     | other      | 17          | 17          | 4178209    | 4177920
NM    | WB            | 8      | other      | 14          | 84          | 8258810    | 8257536
US    | (missing)     | 8      | several    | 8           | 38          | 11405914   | 11405344
Total |               |        |            | 125         | 3147        | 793278939  | 620248582
Total | merged        |        |            | 125         | 1714        | 793257518  | 620248582

a Image count gives the total number of images in all the studies. Filesize includes the space for the headers and the extra bits due to the byte border alignment; real size consists of the pixel data only. Types occurring in more than one size are marked 'several'. PED CHEST = pediatric chest, HB = heart and body, WB = whole body.

Table 2
Lossless image compression methods a

From external sources:

Method  | Program | Ver. | Author              | Year | Status | Restrictions | Notes
--------|---------|------|---------------------|------|--------|--------------|------
BTPC    | BTPC    | 4    | John Robinson       | 1997 | free   | max 8 bpp    |
CALIC   | CALIC   |      | Xiaolin Wu          | 1995 | eval   |              |
GIF     | PBMPLUS |      | Jef Poskanzer       | 1991 | free   | max 8 bpp    |
JPEG-LS | LOCO    | 0.85 | Hewlett-Packard Lab | 1997 | eval   |              |
PNG     | PNGLIB  | 0.96 | Andreas Dilger      | 1997 | free   |              | main program for the library was written
SPIHT   | SPIHT   | 8.01 | Amir Said           | 1995 | eval   | w, h ≥ 128   |
TMW     | TMW     | 0.51 | Bernd Meyer         | 1997 | eval   |              | decoding of 16 bit images not successful

Own implementations (not optimized):

Method | Options       | Restrictions
-------|---------------|-------------
CLIC   | (default)     |
CPM    |               | w = h = 2^k
FELICS | (default)     |
LJPEG  | 5 (predictor) |

a Ver. = version number, free = freeware, eval = for test or evaluation purposes, w = image width, h = image height. Default options for lossless compression were used, except for PNG, for which the PNG_FILTER_PAETH option was used.

3. Compression methods

3.1. Grayscale image compression methods

Table 2 contains a summary of the grayscale image compression methods used. Some of these methods were tested with freely available programs; others we have implemented according to the algorithm descriptions in the literature. Our own implementations do not include any special code optimizations for running time; they are written according to good programming style.
CALIC, a Context Based, Adaptive, Lossless Image Codec [4], is a compression technique based on the context of the present pixel to be coded (i.e. the setting of the pixels of some predetermined pattern of neighbouring pixels). The method is capable of learning from the errors made in previous predictions, and in this way it can improve its prediction adaptively as the compression proceeds. Pixel values are predicted by a nonlinear predictor, which selects the prediction pixels and the particular prediction function among several possible prediction functions on the basis of the local context. The context is built up of the local gradient magnitudes in the horizontal and vertical directions. The accuracy of the prediction is improved by adding an estimate of the error to the predicted value. This estimate is the average error of the previous prediction values in the present context. The context for error estimation is selected in such a way that it models the magnitudes of the local gradient and the two previous error values, both in relation to the local texture of the image and the prediction value, in the most effective way.

Four coefficients are used to weight the horizontal and vertical gradient magnitudes and the previous prediction errors when calculating the context for error estimation. The coefficients should be selected on the basis of a training set drawn from the type of images to be compressed. The final set of prediction errors is coded by arithmetic or Huffman coding.

The coefficients used in this study have been optimized for ISO images, and it is therefore possible that a case-specific optimization could still improve our results. The final coding was performed by the arithmetic code.
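
As an illustration of gradient-based prediction of this kind, the following Python sketch implements a gradient-adjusted predictor in the spirit of CALIC; the thresholds (80, 32, 8) follow the heuristic published for CALIC, but this is our illustration, not the coder used in the tests:

```python
# Sketch of a gradient-adjusted predictor in the spirit of CALIC.
# W, N, NW, NE, WW, NN, NNE are causal neighbours of the current pixel.

def gap_predict(W, N, NW, NE, WW, NN, NNE):
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient magnitude
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient magnitude
    if dv - dh > 80:             # sharp horizontal edge: trust the left pixel
        return W
    if dh - dv > 80:             # sharp vertical edge: trust the pixel above
        return N
    pred = (W + N) / 2 + (NE - NW) / 4   # smooth area: blend the neighbours
    if dv - dh > 32:
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred
```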
The Central Prediction Method (CPM) [5] is an iterative compression technique that applies hierarchical four-bit coding to the prediction errors of the image. The pixel values are predicted in two phases. In the first phase the value of a pixel is predicted as the average of its four vertical and horizontal neighbours. In the second phase half of the remaining pixels are predicted from the pixel values in the two diagonal directions. A quarter of the pixel values remains unpredicted and is moved to a separate block, which is then compressed recursively by the same prediction method. Fig. 1 illustrates this.

Fig. 1. Prediction scheme of CPM.

The recursion terminates when the size of the block of original pixel values is 2 × 2. The bit levels of the resulting image are coded by the hierarchical four-bit coding scheme. The coding scheme divides the bitplane of size N × N into four subplanes of size N/2 × N/2, each of which is coded by a bit in a four-bit codeword. If all the pixel values in a subplane are zero, the subplane is coded by a zero. Otherwise it is coded by a one. The procedure is recursively applied to each of the nonzero subplanes. The recursion stops when the subplane is coded by a zero or when the size of the subplane is 1 × 1. The four-bit method presupposes a square-like form with 2^k pixels on each side.
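
The following Python sketch illustrates the four-bit coding idea on a single bitplane; it is our illustration of the scheme described above, not the implementation used in the tests:

```python
# Sketch of hierarchical four-bit coding of a 2^k x 2^k bitplane: each level
# emits a four-bit codeword (one bit per quadrant) and recurses only into
# the nonzero quadrants.

def code_bitplane(plane) -> str:
    """plane: square list-of-lists of 0/1 bits with side 2^k. Returns a bit string."""
    n = len(plane)
    if n == 1:
        return ""  # a 1x1 subplane: its value was already given by the parent's bit
    h = n // 2
    quads = [[row[:h] for row in plane[:h]], [row[h:] for row in plane[:h]],
             [row[:h] for row in plane[h:]], [row[h:] for row in plane[h:]]]
    bits = ["0" if all(b == 0 for r in q for b in r) else "1" for q in quads]
    out = "".join(bits)                 # the four-bit codeword for this level
    for q, b in zip(quads, bits):
        if b == "1":                    # all-zero quadrants stop the recursion
            out += code_bitplane(q)
    return out

print(code_bitplane([[0, 1], [0, 0]]))  # -> "0100"
```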
The Binary Tree Predictive Coding (BTPC) [6] is designed for lossy and lossless image compression and is suitable for both photographs and graphics. The method forms a binary pyramid of the image by recursively dividing the image into two subimages whose sizes are half of the size of the previous image. The first subimage (S) is formed from the original image by leaving out alternate pixels; from each odd-numbered row the even-numbered pixels are discarded, and from each even-numbered row the odd-numbered pixels are discarded, see Fig. 2. The second subimage (E) stores the pixels missing from the first subimage, but in a transformed form; a context model (shape-based prediction, which uses different predictors for edges, flat areas, etc.) uses the neighbouring pixels in the subimage S to predict a pixel value. The actual stored value is the prediction error, i.e. the difference between the predicted and the real pixel value. The prediction error transform aims at a skewed distribution of values, which can be compressed effectively. The next layer of the image pyramid is constructed from the previous S subimage by the same process.

Fig. 2. Principle of the pyramid coding in BTPC.

Each error value of a particular level of the pyramid has two children at a lower level of the pyramid. Thus, the error values form a binary tree. The actual coding of the image pyramid is done by adaptive Huffman coding. If all the descendants of a zero error value are also zeros, the subtree can be coded by a single codeword. The implementation used in this study works for 8-bit images only.
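
As a sketch of the pyramid split described above, the following function separates an image into the S and E subimages along the checkerboard lattice; the simple mean predictor used here is only a stand-in for BTPC's shape-based prediction:

```python
# Sketch of BTPC's binary-pyramid split: the image is separated into two
# half-size subimages along a checkerboard (quincunx) lattice. S keeps
# alternate pixels; E holds prediction errors for the discarded ones.

def btpc_split(img):
    """img: 2-D list of pixel values. Returns (S, E) as (position, value) lists."""
    S, E = [], []
    for y, row in enumerate(img):
        for x, p in enumerate(row):
            if (x + y) % 2 == 0:
                S.append(((y, x), p))               # kept pixel
            else:
                neigh = [row[x + dx] for dx in (-1, 1) if 0 <= x + dx < len(row)]
                pred = sum(neigh) // len(neigh)     # crude stand-in predictor
                E.append(((y, x), p - pred))        # stored as a prediction error
    return S, E
```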
Portable Network Graphics (PNG) [7,8] is essentially a file format for lossless compression and storage of images. It was developed to replace GIF. In addition to the images, one can also store other data in PNG files. The system accepts pixel depths 1, 2, 4, 8 and 16 and consumes no extra bits between two pixels in the storage. The compression method can also handle other pixel depths given to it. It is possible to filter the data before compression to get a better compression ratio.

The actual compression is performed by the so-called deflate/inflate compression. Deflate is an LZ77-based method which is used in several general compression programs (e.g. zip, gzip). Huffman coding is included in the method to encode the data. We used the so-called Paeth filtering [9] for the image preprocessing. The compression level option was set to default compression (6), which turned out to be two to three times faster than the best compression with only a 1% decrease in compression ratio. A simple main program was made to call the png library, which is freely available on the Internet.
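
The Paeth filter predicts each pixel from its left, above and upper-left neighbours; the predictor defined in the PNG specification looks as follows (shown here as a Python sketch):

```python
def paeth(a: int, b: int, c: int) -> int:
    """PNG Paeth predictor: a = left, b = above, c = upper-left neighbour."""
    p = a + b - c                                   # initial estimate
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:                       # return the neighbour closest
        return a                                    # to the estimate, breaking
    if pb <= pc:                                    # ties in the order a, b, c
        return b
    return c
```

The filtered value stored in the file is the difference between the actual pixel and this prediction, which deflate then compresses more effectively.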
The SPIHT compression technique (Set Partitioning in Hierarchical Trees) [10] transforms the image to a multiresolution representation by the wavelet transform or, in the case of lossless compression, by the S transform. This transformation is similar to the subband decomposition, but uses only integer operations. The S transform scans the columns of the image and calculates for each successive pair of pixels the average and the difference. The averages are stored at the upper part of the image (l), and the differences are stored at the lower part (h), see Fig. 3. The same is repeated for the rows of the l and h parts, giving ll, lh, hl and hh images. At the next level the ll block is processed in the same way. This gives a multiresolution pyramid of the image with reduced variance.
`
Fig. 3. Principle of the SPIHT compression technique.

After each one-dimensional transform, the pixel differences are replaced by the difference of the transformed pixel value and a prediction value. This is known as the P transform. The prediction values are calculated as a function of pixels of the lower levels of the pyramid and the known pixels of the current level. The prediction function can be selected on the basis of the properties of the image, but the coefficients of the function are not very critical. Arithmetic coding concludes the encoding process.
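
A minimal sketch of the S transform on one column illustrates the integer averaging and differencing; the inverse shows that no information is lost:

```python
# Sketch of the S transform: each successive pixel pair is replaced by a
# rounded average and a difference, using integer arithmetic only, so the
# transform is exactly invertible.

def s_transform(col):
    """col: even-length list of pixel values -> (averages l, differences h)."""
    l = [(a + b) // 2 for a, b in zip(col[::2], col[1::2])]
    h = [a - b for a, b in zip(col[::2], col[1::2])]
    return l, h

def inverse_s_transform(l, h):
    col = []
    for avg, d in zip(l, h):
        a = avg + (d + 1) // 2   # recover the first pixel of the pair
        col += [a, a - d]
    return col

assert inverse_s_transform(*s_transform([5, 2, 7, 7])) == [5, 2, 7, 7]
```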
TMW [11] is based on the use of linear predictors and implicit segmentation. The compression process is split into an analysis step and a coding step. In the first step, linear predictors and other parameters suitable for the image are calculated. These are included in the compressed file and subsequently used for the coding step.

In TMW, three different kinds of predictors are used. Pixel-predictors predict a pixel value. Sigma-predictors predict the magnitude of a pixel-predictor's prediction error. Blending-predictors predict how well suited a particular pixel-predictor is to predict a pixel value. All of these are based on the corresponding values of the causal neighbours. In the analysis step, the parameters of these predictors are iteratively optimized. This is very time-consuming. However, if we have several images that resemble one another, it is sufficient to build a model for only one of them. This model can then be used for compressing all the other similar images. The coding step is much faster and basically consists of arithmetic coding of the prediction error according to the distribution.
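
As a purely illustrative sketch (not Meyer's TMW code; the predictor coefficients and weights below are hypothetical), the blending step can be pictured as a weighted combination of several linear predictions:

```python
# Sketch of blending several linear pixel-predictors: each predictor is a
# linear function of the causal neighbours, and a weight (in TMW supplied
# by a blending-predictor) controls its share of the final prediction.

def blend_prediction(neighbours, predictors, weights):
    """neighbours: causal pixel values; predictors: lists of linear coefficients."""
    preds = [sum(c * n for c, n in zip(coeffs, neighbours)) for coeffs in predictors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / total

# e.g. two hypothetical predictors over the (W, N, NW) neighbours
print(blend_prediction([100, 104, 98], [[1, 0, 0], [0.5, 0.5, 0]], [0.3, 0.7]))  # 101.4
```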
The implementation used by us failed to reconstruct the 16-bit images. However, it seems that the problem lies in the decompressor; thus, the results are presented nevertheless. We used the parameter file created for the test image ref12b-0 in all our tests, because the model generation software was not available. Better results would probably have been achieved if we had been able to build a model for each image type.
The Low Complexity Lossless Compression for Images (LOCO-I) [12] is based on context modelling and predictive coding combined with Huffman coding. The pixel values are predicted adaptively according to the output of a simple edge detector. The prediction value is a simple function of the values of the predictor pixels, which are chosen among three neighbouring pixels.

The prediction residuals are coded by the Golomb–Rice code [13,14] in a context that is built out of the differences of four local pixel values. The differences are quantized to reduce the number of contexts. Golomb–Rice codes are characterized by a single parameter that determines the probability distribution for which the code is optimal. For the present context, the parameter value, which is intended to yield as short a code length as possible, is calculated from the estimate of the expected prediction error magnitude in the context. For flat regions, which are detected by checking the context, LOCO-I uses run-length coding. Here, a run of a symbol is coded by coding the length of the run.

LOCO-I has been chosen as the algorithm of JPEG-LS, the emerging lossless/near-lossless compression standard for continuous-tone images. A freely available implementation of the method was used for testing. Default values were chosen for the context quantization thresholds.
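
As an illustration of Golomb–Rice coding of residuals, the following sketch folds a signed residual to a non-negative integer and emits a unary-coded quotient followed by k binary remainder bits; the residual mapping and parameter handling in the actual JPEG-LS coder are more refined:

```python
# Sketch of Golomb-Rice coding with parameter k.

def rice_encode(value: int, k: int) -> str:
    """Unary quotient, terminating '0', then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def map_residual(e: int) -> int:
    """Fold signed residuals to 0, 1, 2, ...: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * e if e >= 0 else -2 * e - 1

print(rice_encode(map_residual(-3), k=2))  # -3 -> 5 -> "1001" (quotient 1, remainder 01)
```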
The JPEG (Joint Photographic Experts Group) standard for lossless compression, LJPEG [15], is based on a prediction method combined with entropy coding. The prediction method transforms the original image to an error image with reduced variance using a predictor, which is initially chosen among eight possible predictors. The prediction is a function of some of the three previous neighbouring pixels (the pixels to the left, above and upper-left of the current pixel). The error image is finally compressed using Huffman or arithmetic coding.

In this study the Huffman code was applied for the compression of the error images. The Huffman codes were derived from static codes for 8-bit images by adding the missing codewords. The Huffman code chooses a category for the error value on the basis of the most significant bit. The codeword for the error value is the codeword for the category combined with the rest of the bits in the error value (the sign and the least significant bits). In the tests we used the predictor W + ⌊(N − NW)/2⌋ (W corresponds to the pixel west of the current pixel, and so on).
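
The chosen predictor is a one-liner; the following sketch makes the floor operation explicit:

```python
def ljpeg_predictor(W: int, N: int, NW: int) -> int:
    """The predictor used in our LJPEG tests: W + floor((N - NW) / 2)."""
    return W + (N - NW) // 2     # Python's // is floor division, also for negatives

print(ljpeg_predictor(100, 104, 98))  # 100 + floor(6/2) = 103
```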
FELICS (Fast, Efficient, Lossless Image Compression System) [16] is based on context modelling and Rice coding. The context for a pixel consists of the two previous pixel values L and H, which make up a value range from the smaller (L) of the values up to the larger (H). Coding of the present pixel value P starts by checking whether the pixel value belongs to the value range or not. The result of this check is indicated by the first bit in the codeword. If the value belongs to the range, the difference P − L is coded using a binary code. If the pixel value does not belong to the range, the second bit in the codeword indicates whether the value is larger than H or smaller than L. In the first case the difference P − H − 1 is coded using the exponential Rice prefix code. Otherwise, the difference L − P − 1 is coded using the same code. The Rice code has one parameter; the parameter value giving the shortest code length is estimated for each context during the coding process.
Two implementations of FELICS were used in this study. One program was implemented for images with pixel depth 8, while the other was implemented for images with pixel depths from 9 to 16.
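
The following sketch (reusing rice_encode from the Golomb–Rice example above) illustrates the per-pixel decision described earlier; real FELICS uses adjusted binary codes for the in-range case, which this sketch simplifies to a fixed-width code:

```python
# Sketch of the FELICS per-pixel coding decision with context pixels L <= H.

def felics_code(P: int, L: int, H: int, k: int) -> str:
    L, H = min(L, H), max(L, H)
    nbits = max(1, (H - L).bit_length())           # enough bits to cover the range
    if L <= P <= H:
        return "0" + format(P - L, f"0{nbits}b")   # in range: flag + binary code
    if P > H:
        return "10" + rice_encode(P - H - 1, k)    # above the range
    return "11" + rice_encode(L - P - 1, k)        # below the range
```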
The Context-based Lossless Image Compression (CLIC) [17] uses several contexts to estimate probability density functions of error values, which are finally coded by arithmetic coding. The error values are calculated as the difference between a pixel value and a prediction. The prediction function can be chosen among several possible functions such as (N + W)/2, where N corresponds to the pixel value north of (above) the current pixel, etc. This is the prediction function we used. The probability density function of the error values is estimated for each context by calculating frequencies of error values. In order to save memory, this can be done bitwise by calculating the frequency of each bit (having value one) in each context. In this case each bit is coded separately. The context was built out of the error values of the pixels above and to the left of the current pixel.

We used a backup model when a context of the main model had been updated fewer than 64 times. The prediction function of the backup model was the same as in the main model. The context of the backup model was the error value of the pixel to the left of the current pixel.
The Graphics Interchange Format (GIF) is a widely used image format by CompuServe Incorporated. It applies an LZW-based compression method. The format is restricted to 8-bit grayscale and palette colour images. It is best suited to storing drawn images with few colours.

Table 3
General-purpose compression programs

Name    | Version | Author             | Year | Status    | Options used                                            | Notes
--------|---------|--------------------|------|-----------|---------------------------------------------------------|------
ACB     | 2.00c   | George Buyanovsky  | 1997 | Shareware | u (max)                                                 | 16 MB RAM required
ARJ     | 2.60    | Robert Jung        | 1995 | Shareware | -jm (max)                                               | many options, popular
BZIP    | 0.21    | Julian Seward      | 1996 | Freeware  | -9 (max block size)                                     | single file compressor
BZIP2   | 0.1pl2  | Julian Seward      | 1997 | Freeware  | -9 (max block size)                                     | no arithmetic compression
ESP     | 1.92    | GyikSoft           | 1997 | Freeware  | /mm (8-bit multimedia)                                  |
GZIP    | 1.2.4   | Jean-loup Gailly   | 1993 | Freeware  | -9 (max)                                                | single file compressor, popular
HA      | 0.999b  | Harri Hirvola      | 1995 | Freeware  | 21 (choose best)                                        |
JAR     | 1.02    | Robert Jung        | 1997 | Shareware | -m4 (max)                                               |
PKZIP   | 2.04g   | PKWARE Inc.        | 1993 | Shareware | -ex (max)                                               | popular
Quantum | 0.97    | Cinematronics      | 1995 | Freeware  | -c5 (max) -t21 (max table size)                         |
RAR     | 2.02    | Eugene Roshal      | 1997 | Shareware | -mm (multimedia) -m5 (max) -s (solid)                   | popular
RKIVE   | 1.90b1  | Malcolm Taylor     | 1997 | Shareware | -mtx (best mode) -mm2 (multimedia forced)               | -mtx requires 32 MB RAM
SZIP    | 1.04    | Michael Schindler  | 1997 | Freeware  |                                                         | single file compressor
SZIP    | 1.05Xd  | Michael Schindler  | 1997 | Freeware  | -b41 (max blocksize) -o255 (max order) -r1 (recordsize) | new, experimental version
UC2     | 3.0 Pro | AIPNL              | 1995 | Shareware | -tst (tight multimedia)                                 |
UFA     | 0.034b4 | Igor Pavlov        | 1997 | Shareware | -m5 (text compression) -mx (max) -s (solid)             |
X1      | 0.95a   | Stig Valentini     | 1997 | Freeware  | u (solid) m2 (method 2)                                 |
YAC     | 1.02    | Aleksandras Surna  | 1995 | Freeware  |                                                         |

3.2. General-purpose lossless compression methods

Table 3 contains a summary of the general-purpose lossless compression methods applied in this work. Most of these programs were selected according to our previous knowledge of their ability to compress general data. GZIP, PKZIP and ARJ have been included because they are widely used.

It should be noted that the BZIPs, GZIP and the SZIPs are single file compressors. When several images were to be compressed into the same file, the TAR program was used. Thus, the compression was solid, i.e. all the files to be compressed are handled as a single file. It is well known that if the files are of the same kind, this usually gives better results than compressing the files separately. We have tested solid compression whenever the program supports it. Note that TAR inserts headers between the files, but this was found to have no practical effect on the compression ratio.

There are two versions of BZIP and SZIP. BZIP2 0.1pl2 is a newer version of BZIP 0.21, but its compression results are worse. This is because Huffman coding is used in BZIP2 instead of the arithmetic coding used in BZIP. (On the other hand, BZIP2 is patent-free, unlike BZIP.) SZIP 1.05Xd is an experimental, non-supported version of SZIP. It has options by which it can be made to compress better at the expense of increased running time.
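
As an illustration of solid compression (our sketch, with hypothetical file names), the images of one study can be concatenated with tar and the archive compressed as a single stream:

```python
# Sketch of solid compression: concatenate the files of a study into one
# tar archive (no compression), then compress the whole archive at once,
# so statistics learned on one image help compress the next.

import gzip
import tarfile

with tarfile.open("study.tar", "w") as tar:      # plain tar: concatenation only
    for name in ["slice001.raw", "slice002.raw", "slice003.raw"]:
        tar.add(name)

with open("study.tar", "rb") as f, \
     gzip.open("study.tar.gz", "wb", compresslevel=9) as g:
    g.write(f.read())                            # one compression stream for all files
```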
4. Empirical tests

Table 4 shows a summary of the test results for the 11 image compression techniques. The numbers shown in the table are the average compression ratios calculated over the different pixel depths and sizes. Here we express the results as averages over all of these. For the small images we show (where possible) the results for both single and merged images.

The results of Table 4 indicate that the four most space-efficient compression techniques for this set of data are CALIC, TMW, SPIHT and JPEG-LS. The table also shows the average compression ratio of each compression method. Here the average is calculated as the total size of the uncompressed images divided by the size of the compressed database. Therefore, the averages are close to the results for the CT/HEAD type of images, as two-thirds of the database is of this type. (Note that the weighted average is not the only way to compare the different compression techniques; one could instead use the unweighted average of the compression ratios of all images or studies.) The average compression ratio of CALIC is 2.98 over all image types. For SPIHT and JPEG-LS the results are essentially of the same quality. The average compression ratio of TMW was as much as 3.17. This is because TMW compressed CT/HEAD images over 10% better than CALIC; CALIC compressed most of the other images better. However, if a unique model were generated for each image type, the compression results of TMW would probably be better still.

The variation of the compression ratios of the techniques is large among different image types. For CALIC the compression ratio ranges from 2.06 to 5.23. For single images the range is even greater, 1.52–24.16 (there are three images with compression ratios > 10). In addition, there are some cases where the relative order of the three most effective techniques is not clear; for MR images SPIHT gives the highest compression ratios and for NM/BRAIN images JPEG-LS has the highest ratio. It is interesting to note that FELICS is essentially as efficient as CALIC for the very small (64 × 64) images. Also LJPEG works relatively well for the very small images. JPEG-LS is especially space-efficient for the 8 bpp images.

In the next category of techniques we have FELICS and CLIC. FELICS succeeds well on the very small images; it gives the best results for the NM/HB images compressed separately. This is probably because FELICS is a rather simple method and the very small (64 × 64) images are not suitable for more sophisticated methods. For example, the results of TMW are very poor on the small images because TMW generates a 1596 byte parameter file for each image, which we have included in the compressed filesize.

Table 4
The average compression ratios for the image compression methods by image type (JP-LS = JPEG-LS, FEL = FELICS)

Mod   | Type         | TMW  | CALIC | SPIHT | JP-LS | CLIC | FEL  | LJPEG | PNG  | CPM  | BTPC | GIF
------|--------------|------|-------|-------|-------|------|------|-------|------|------|------|-----
CR    | PED CHEST    | 2.71 | 2.86  | 2.81  | 2.74  | 2.68 | 2.60 | 2.24  | 1.97 |      |      |
CT    | ABDOMEN      | 3.77 | 3.72  | 3.44  | 3.59  | 3.40 | 2.90 | 2.68  | 2.44 |      |      |
CT    | HEAD         | 3.16 | 2.83  | 2.82  | 2.65  | 2.50 | 2.18 | 2.05  | 1.74 |      |      |
MR    | ABDOMEN      | 1.98 | 2.06  | 2.03  | 2.02  | 1.86 | 1.89 | 1.71  | 1.45 | 1.57 |      |
MR    | SPINE        | 2.81 | 2.90  | 2.94  | 2.81  | 2.69 | 2.67 | 2.42  | 2.05 |      |      |
NM    | BRAIN        | 3.35 | 4.87  | 4.58  | 4.96  | 4.52 | 3.67 | 2.55  | 3.79 | 2.19 | 4.30 | 4.04
NM    | BRAIN merged | 5.04 | 5.23  | 4.78  | 5.11  | 4.81 | 3.77 | 2.55  | 3.99 | 3.28 | 4.83 | 4.40
NM    | HB           | 2.08 | 3.28  |       | 2.71  | 2.44 | 3.28 | 2.90  | 2.31 | 2.58 |      |
NM    | HB merged    | 3.45 | 3.44  | 3.39  | 3.39  | 3.26 | 3.36 | 2.91  | 2.46 |      |      |
NM    | HEART        | 2.11 | 3.36  |       | 2.77  | 2.51 | 3.36 | 2.96  | 2.36 | 2.64 |      |
NM    | HEART merged | 3.54 | 3.54  | 3.48  | 3.47  | 3.35 | 3.45 | 2.97  | 2.51 |      |      |
NM    | WB           | 3.81 | 4.09  | 3.84  | 4.05  | 3.82 | 3.28 | 2.33  | 3.24 |      | 3.77 | 3.19
US    | (missing)    | 2.80 | 3.02  | 2.54  | 2.90  | 2.73 | 2.40 | 2.02  | 2.50 |      | 2.40 | 1.94
Total |              | 3.17 | 2.98  |       | 2.81  | 2.67 | 2.36 | 2.18  | 1.90 |      |      |
Total | merged       | 3.21 | 2.99  | 2.94  | 2.82  | 2.68 | 2.36 | 2.18  | 1.90 |      |      |

Fig. 4. Average compression times (s) for the image compression methods with respect to filesize (bytes) on the Axil320. The compression times of TMW, BTPC, GIF, FELICS and LJPEG are omitted: the TMW results were run on different hardware, BTPC and GIF were not able to compress 16 bit images, the FELICS compression times are essentially the same as for SPIHT, and the LJPEG compression times are between those of JPEG-LS and CALIC.

The CPM algorithm presupposes a square-like image and we therefore restricted our tests with it to this kind of image only. Similarly, our GIF results are restricted to 8 bpp images. For both of these techniques the results do not encourage modifications allowing other image types. The implementation of BTPC was also restricted to 8 bpp images. However, moderately good results were obtained by BTPC.

One aspect is the merging of smaller images to form a larger one. A general observation is that the mutual preference of the different methods is preserved by the merging operation. The compression ratio, however, increases radically (for certain images even by 30%). This is because the information obtained from one image can be utilized when compressing the other images. The improvement is greatest for JPEG-LS and CLIC. See Appendix B for more details on the test results.

Tables 6–8 give a summary of the compression and decompression times.
