1. 14 Feb, 2018 4 commits
  2. 12 Feb, 2018 1 commit
    • Fix for bug 2772 · 473851d2
      It is possible to craft a TIFF document where the IFD list is circular,
      leading to an infinite loop while traversing the chain. The libtiff
      directory reader has a failsafe that will break out of this loop after
      reading 65535 directory entries, but it will continue processing,
      consuming time and resources on what is essentially a bogus TIFF
      document.
      
      This change fixes the above behavior by breaking out of processing when
      a TIFF document has >= 65535 directories and terminating with an error
      (the failsafe is sketched after this entry).
      Nathan Baker authored
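      A minimal sketch of that failsafe using libtiff's public API; the
      helper name is hypothetical and this is not the actual patch:
      
        /* Walk the IFD chain with a hard cap so a circular next-IFD
         * pointer turns into an error instead of an endless loop. */
        #include <tiffio.h>
        
        #define MAX_DIR_COUNT 65535   /* failsafe limit named above */
        
        int count_ifds_safely(TIFF *tif)
        {
            int n = 0;
            do {
                if (++n > MAX_DIR_COUNT) {
                    TIFFError("count_ifds_safely",
                              "More than %d directories; IFD chain is likely circular",
                              MAX_DIR_COUNT);
                    return -1;   /* terminate with an error, as the fix does */
                }
            } while (TIFFReadDirectory(tif));   /* 1 while another IFD follows */
            return n;
        }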
  3. 09 Feb, 2018 1 commit
  4. 06 Feb, 2018 4 commits
  5. 04 Feb, 2018 2 commits
  6. 30 Jan, 2018 3 commits
  7. 29 Jan, 2018 1 commit
  8. 27 Jan, 2018 2 commits
  9. 25 Jan, 2018 2 commits
  10. 24 Jan, 2018 2 commits
  11. 15 Jan, 2018 7 commits
  12. 12 Jan, 2018 1 commit
    • Prefer target_include_directories · 0b05f432
      When libtiff is included in a super project via a simple
      `add_subdirectory(libtiff)`, the `tiff` library target then carries all
      the information needed to build against it (a sketch follows this
      entry).
      
      Note: if I'm not mistaken, the BUILD_INTERFACE generator expression
      requires at least CMake v2.8.11.
      Kevin Funk authored
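      A minimal sketch of the approach in CMake; the directory names are
      illustrative, not the exact patch:
      
        # Declare the public include directories on the `tiff` target itself,
        # so a superproject that does add_subdirectory(libtiff) inherits them.
        target_include_directories(tiff
            PUBLIC
                $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>   # headers in the source tree
                $<BUILD_INTERFACE:${CMAKE_CURRENT_BINARY_DIR}>   # generated config headers
        )
      
      A consumer then only needs `target_link_libraries(app PRIVATE tiff)`;
      the include paths travel with the target as usage requirements.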
  13. 09 Jan, 2018 1 commit
  14. 31 Dec, 2017 3 commits
  15. 21 Dec, 2017 2 commits
    • Add libzstd to gitlab-ci · 25c14f84
      Even Rouault authored
    • Add ZSTD compression codec · 62b9df5d
      From https://github.com/facebook/zstd
      "Zstandard, or zstd as short version, is a fast lossless compression
      algorithm, targeting real-time compression scenarios at zlib-level
      and better compression ratios. It's backed by a very fast entropy stage,
      provided by Huff0 and FSE library."
      
      We require libzstd >= 1.0.0 so as to be able to use streaming compression
      and decompression methods.
      
      The default compression level we have selected is 9 (the range goes
      from 1 to 22), which experimentally offers a compression ratio
      equivalent to or better than the default deflate/ZIP level of 6, with
      much faster compression (a usage sketch follows this entry).
      
      For example on a 6600x4400 16bit image, tiffcp -c zip runs in 10.7 seconds,
      while tiffcp -c zstd runs in 5.3 seconds. Decompression time for zip is
      840 ms, and for zstd 650 ms. File size is 42735936 for zip, and
      42586822 for zstd. Similar findings on other images.
      
      On a 25894x16701 16bit image,
      
                      Compression time     Decompression time     File size
      
      ZSTD                 35 s                   3.2 s          399 700 498
      ZIP/Deflate       1m 20 s                   4.9 s          419 622 336
      Even Rouault authored
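      A usage sketch, assuming the COMPRESSION_ZSTD codec identifier and the
      TIFFTAG_ZSTD_LEVEL pseudo-tag this commit adds; error handling is
      minimal:
      
        #include <tiffio.h>
        
        int use_zstd(TIFF *out)
        {
            /* Select the zstd codec; fails if libtiff was built without it. */
            if (!TIFFSetField(out, TIFFTAG_COMPRESSION, COMPRESSION_ZSTD))
                return 0;
            /* 9 is the default level chosen above; valid levels are 1..22. */
            TIFFSetField(out, TIFFTAG_ZSTD_LEVEL, 9);
            return 1;
        }
      
      From the command line, this is the codec that `tiffcp -c zstd` selects
      in the timings above.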
  16. 10 Dec, 2017 4 commits
    • Merge branch 'fix_cve-2017-9935' into 'master' · 5848777b
      Fix CVE-2017-9935
      
      See merge request !7
      Even Rouault authored
    • tiff2pdf: Fix apparent incorrect type for transfer table · d4f21363
      The standard says the transfer table contains unsigned 16-bit values;
      I have no idea why we refer to them as floats.
      Brian May authored
    • tiff2pdf: Fix CVE-2017-9935 · 3dd8f6a3
      Fix for http://bugzilla.maptools.org/show_bug.cgi?id=2704
      
      This vulnerability - at least for the supplied test case - arises
      because we assume that a TIFF will only have one transfer function,
      the same for all pages. This is not required by the TIFF standard.
      
      We then read the transfer function for every page.  Depending on the
      transfer function, we allocate either 2 or 4 bytes to the XREF buffer.
      We allocate this memory after we read in the transfer function for the
      page.
      
      For the first exploit (POC1), the file has 3 pages. For the first page
      we allocate 2 extra XREF entries. Then for the next page, 2 more
      entries. Then for the last page the transfer function changes and we
      allocate 4 more entries.
      
      When we read the file into memory, we assume we have 4 bytes extra for
      each and every page (as per the last transfer function we read), which
      is not correct: we only have 2 bytes extra for the first 2 pages. As a
      result, we end up writing past the end of the buffer.
      
      This patch also fixes some related issues. For example, TIFFGetField
      can return uninitialized pointer values, and the logic to detect an
      N=3 vs an N=1 transfer function seemed rather strange.
      
      It is also strange that we declare the transfer functions to be of type
      float when the standard says they are unsigned 16-bit values. This is
      fixed in another patch.
      
      This patch will check to ensure that the N value for the transfer
      function is the same for every page, and will abort with an error if it
      changes (a sketch of this check follows this entry). In theory, we
      should perhaps also check that the transfer function itself is
      identical for every page; however, we don't do that, due to the
      confusion over the type of the data in the transfer function.
      Brian May authored
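      A sketch of that consistency check; the helper names are hypothetical
      and this is not the tiff2pdf patch itself. The NULL initialisation also
      illustrates the uninitialized-pointer issue mentioned above, since for
      1-sample images TIFFGetField fills only the first table pointer:
      
        #include <stdint.h>   /* older libtiff spells uint16_t as uint16 */
        #include <tiffio.h>
        
        static int get_tf_count(TIFF *in)
        {
            uint16_t *tf[3] = { NULL, NULL, NULL };
            if (!TIFFGetField(in, TIFFTAG_TRANSFERFUNCTION,
                              &tf[0], &tf[1], &tf[2]))
                return 0;      /* no transfer function on this page */
            /* Three distinct tables => N=3; one shared table => N=1. */
            return (tf[1] != NULL && tf[2] != NULL && tf[1] != tf[0]) ? 3 : 1;
        }
        
        static int check_tf_consistency(TIFF *in)
        {
            int first = -1;
            do {
                int n = get_tf_count(in);
                if (first < 0)
                    first = n;
                else if (n != first) {
                    TIFFError("tiff2pdf",
                              "Transfer function changes between pages; aborting");
                    return 0;
                }
            } while (TIFFReadDirectory(in));
            return 1;
        }
      
      In real use the caller would rewind with TIFFSetDirectory(in, 0) after
      the check, before processing the pages.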
    • Merge branch 'undef-warn-fixes' into 'master' · 254262f3
      Fix a couple of harmless but annoying -Wundef warnings
      
      See merge request !8
      Even Rouault authored
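      For context, this is the general shape of a -Wundef warning and its
      fix; a generic illustration, not the specific libtiff sites:
      
        /* -Wundef fires when an undefined macro is evaluated in #if,
         * where the preprocessor silently treats it as 0. */
        #if HAVE_FEATURE                      /* warns if never defined */
        static const int before = 1;
        #endif
        
        /* Quiet, equivalent form: test for definition explicitly. */
        #if defined(HAVE_FEATURE) && HAVE_FEATURE
        static const int after = 1;
        #endif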