Merge pull request #229 from dsnet/master

Fixed minor white-space formatting and ordering of elements
This commit is contained in:
szabadka 2015-10-20 12:16:32 +02:00
commit b7a613fd51
1 changed file with 106 additions and 106 deletions

@@ -1066,10 +1066,10 @@ p1 and p2 are initialized to zero.
 There are four methods, called context modes, to compute the
 Context ID:
 .nf
-  * MSB6, where the Context ID is the value of six most
-    significant bits of p1,
   * LSB6, where the Context ID is the value of six least
     significant bits of p1,
+  * MSB6, where the Context ID is the value of six most
+    significant bits of p1,
   * UTF8, where the Context ID is a complex function of p1, p2,
     optimized for text compression, and
   * Signed, where Context ID is a complex function of p1, p2,
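The two reordered modes above are simple bit extractions from the previous byte p1. A minimal Python sketch of just those two (function names are mine, not from the spec; the UTF8 and Signed modes are omitted here because the spec defines them through lookup tables):

```python
def context_id_lsb6(p1: int) -> int:
    """LSB6: the six least significant bits of the previous byte p1."""
    return p1 & 0x3F

def context_id_msb6(p1: int) -> int:
    """MSB6: the six most significant bits of the previous byte p1."""
    return (p1 >> 2) & 0x3F
```

For example, with p1 = 0xB5 (binary 10110101), LSB6 yields 0x35 and MSB6 yields 0x2D; both modes always produce a Context ID in 0..63.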
@@ -1326,8 +1326,8 @@ where the _i subscript denotes the transform_id above. Each T_i
 is one of the following 21 elementary transforms:
 .nf
-  Identity, OmitLast1, ..., OmitLast9, UppercaseFirst, UppercaseAll,
-  OmitFirst1, ..., OmitFirst9
+  Identity, UppercaseFirst, UppercaseAll,
+  OmitFirst1, ..., OmitFirst9, OmitLast1, ..., OmitLast9
 .fi
 The form of these elementary transforms are as follows:
@ -1335,15 +1335,15 @@ The form of these elementary transforms are as follows:
.nf .nf
Identity(word) = word Identity(word) = word
OmitLastk(word) = the first (length(word) - k) bytes of word, or
empty string if length(word) < k
UppercaseFirst(word) = first UTF-8 character of word upper-cased UppercaseFirst(word) = first UTF-8 character of word upper-cased
UppercaseAll(word) = all UTF-8 characters of word upper-cased UppercaseAll(word) = all UTF-8 characters of word upper-cased
OmitFirstk(word) = the last (length(word) - k) bytes of word, or OmitFirstk(word) = the last (length(word) - k) bytes of word, or
empty string if length(word) < k empty string if length(word) < k
OmitLastk(word) = the first (length(word) - k) bytes of word, or
empty string if length(word) < k
.fi .fi
For the purposes of UppercaseAll, word is parsed into UTF-8 For the purposes of UppercaseAll, word is parsed into UTF-8
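The transform definitions in this hunk translate almost directly into code. A hedged Python sketch (operating on str for readability; note the spec actually defines UppercaseFirst/UppercaseAll as a byte-level operation on UTF-8 data, for which Python's built-in upper-casing is only a stand-in):

```python
def omit_last(k: int, word: str) -> str:
    """OmitLastk: the first (length(word) - k) bytes of word,
    or the empty string if length(word) < k."""
    return word[:max(len(word) - k, 0)]

def omit_first(k: int, word: str) -> str:
    """OmitFirstk: the last (length(word) - k) bytes of word,
    or the empty string if length(word) < k."""
    return word[k:]

def uppercase_first(word: str) -> str:
    """UppercaseFirst: first character upper-cased (stand-in for
    the spec's byte-level UTF-8 rule)."""
    return word[:1].upper() + word[1:]

def uppercase_all(word: str) -> str:
    """UppercaseAll: every character upper-cased (same caveat)."""
    return word.upper()
```

With these, omit_last(2, "hello") gives "hel", omit_first(2, "hello") gives "llo", and both degrade to the empty string when k reaches the word length, matching the spec's boundary condition.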
@@ -1686,7 +1686,7 @@ The decoding algorithm that produces the uncompressed data is as follows:
 save previous block type
 read block count using HTREE_BLEN_I and set BLEN_I
 decrement BLEN_I
-read insert and copy length, ILEN, CLEN with HTREEI[BTYPE_I]
+read insert and copy length, ILEN, CLEN using HTREEI[BTYPE_I]
 loop for ILEN
 if BLEN_L is zero
 read block type using HTREE_BTYPE_L and set BTYPE_L
@@ -1709,7 +1709,7 @@ The decoding algorithm that produces the uncompressed data is as follows:
 read block count using HTREE_BLEN_D and set BLEN_D
 decrement BLEN_D
 compute context ID, CIDD from CLEN
-read distance code with HTREED[CMAPD[4 * BTYPE_D + CIDD]]
+read distance code using HTREED[CMAPD[4 * BTYPE_D + CIDD]]
 compute distance by distance short code substitution
 move backwards distance bytes in the uncompressed data and
 copy CLEN bytes from this position to the uncompressed
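The reworded line above selects a distance Huffman tree by indexing the distance context map with 4 * BTYPE_D + CIDD, i.e. four distance contexts per block type. A minimal sketch of just that lookup (CMAPD and HTREED follow the pseudocode's names; the map and tree contents below are invented purely for illustration):

```python
def select_distance_tree(cmapd, htreed, btype_d: int, cidd: int):
    """Pick the Huffman tree for the next distance code: CMAPD holds
    4 entries (distance contexts 0..3) per block type, each naming
    a tree index into HTREED."""
    return htreed[cmapd[4 * btype_d + cidd]]

# Illustrative data only: 2 block types x 4 contexts over 3 trees.
cmapd = [0, 0, 1, 1, 2, 2, 2, 2]
htreed = ["tree-A", "tree-B", "tree-C"]
```

Here select_distance_tree(cmapd, htreed, 1, 2) reads entry 6 of the map and returns "tree-C", showing how block type and context ID jointly choose the tree.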
@@ -1795,7 +1795,7 @@ available in the brotli open-source project:
 https://github.com/google/brotli
 .ti 0
-15. Acknowledgements
+15. Acknowledgments
 The authors would like to thank Mark Adler for providing helpful review
 comments, validating the specification by writing an independent decompressor