This project is mirrored from https://github.com/git/git.
  1. 28 Aug, 2014 1 commit
    • convert: stream from fd to required clean filter to reduce used address space · 9035d75a
      sprohaska authored
      The data is streamed to the filter process anyway.  Better avoid mapping
      the file if possible.  This is especially useful if a clean filter
      reduces the size, for example if it computes a sha1 for binary data,
      like git media.  The file size that the previous implementation could
      handle was limited by the available address space; large files for
      example could not be handled with (32-bit) msysgit.  The new
      implementation can filter files of any size as long as the filter output
      is small enough.
      
      The new code path is only taken if the filter is required.  The filter
      consumes data directly from the fd.  If it fails, the original data is
      not immediately available.  The condition can easily be handled as
      a fatal error, which is expected for a required filter anyway.
      
      If the filter was not required, the condition would need to be handled
      in a different way, like seeking to 0 and reading the data.  But this
      would require more restructuring of the code and is probably not worth
      it.  The obvious approach of falling back to reading all data would not
      help achieve the main purpose of this patch, which is to handle large
      files with limited address space.  If reading all data is an option, we
      can simply take the old code path right away and mmap the entire file.
      
      The environment variable GIT_MMAP_LIMIT, which was introduced in
      a previous commit, is used to test that the expected code path is
      taken.  A related test that exercises required filters is modified to
      verify that the data has actually been modified on its way from the
      file system to the object store.
      Signed-off-by: Steffen Prohaska <[email protected]>
      Signed-off-by: Junio C Hamano <[email protected]>
      9035d75a
  2. 09 Jun, 2014 1 commit
  3. 20 Aug, 2013 1 commit
    • xread, xwrite: limit size of IO to 8MB · 0b6806b9
      sprohaska authored
      Checking out 2GB or more through an external filter (see test) fails
      on Mac OS X 10.8.4 (12E55) for a 64-bit executable with:
      
          error: read from external filter cat failed
          error: cannot feed the input to external filter cat
          error: cat died of signal 13
          error: external filter cat failed 141
          error: external filter cat failed
      
      The reason is that read() immediately returns with EINVAL when asked
      to read more than 2GB.  According to POSIX [1], if the value of
      nbyte passed to read() is greater than SSIZE_MAX, the result is
      implementation-defined.  The write function has the same restriction
      [2].  Since OS X still supports running 32-bit executables, the
      32-bit limit (SSIZE_MAX = INT_MAX = 2GB - 1) seems to also be
      imposed on 64-bit executables under certain conditions.  For write,
      the problem has been addressed earlier [6c642a].
      
      Address the problem for read() and write() differently, by
      unconditionally limiting the size of IO chunks on all platforms in
      xread() and xwrite().  Large chunks only cause problems, such as
      added latency when killing the process, even on platforms without
      the OS X bug.  Doing IO in reasonably sized smaller chunks should
      have no negative impact on performance.
      
      The compat wrapper clipped_write() introduced earlier [6c642a] is
      not needed anymore.  It will be reverted in a separate commit.  The
      new test catches read and write problems.
      
      Note that 'git add' exits with 0 even if it prints filtering errors
      to stderr.  The test, therefore, checks stderr.  'git add' should
      probably be changed (sometime in another commit) to exit with
      nonzero if filtering fails.  The test could then be changed to use
      test_must_fail.
      
      Thanks to the following people for suggestions and testing:
      
          Johannes Sixt <[email protected]>
          John Keeping <[email protected]>
          Jonathan Nieder <[email protected]>
          Kyle J. McKay <[email protected]>
          Linus Torvalds <[email protected]>
          Torsten Bögershausen <[email protected]>
      
      [1] http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
      [2] http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html
      
      [6c642a] commit 6c642a87
          compat/clipped-write.c: large write(2) fails on Mac OS X/XNU
      Signed-off-by: Steffen Prohaska <[email protected]>
      Signed-off-by: Junio C Hamano <[email protected]>
      0b6806b9
  4. 17 Feb, 2012 1 commit
    • Add a setting to require a filter to be successful · 36daaaca
      Jehan Bing authored
      By default, a missing filter driver or a failure from the filter driver is
      not an error, but merely makes the filter operation a no-op pass through.
      This is useful to massage the content into a shape that is more convenient
      for the platform, filesystem, and the user to use, and the content filter
      mechanism is not used to turn something unusable into usable.
      
      However, we could also use the content filtering mechanism to store
      content that cannot be directly used in the repository (e.g. a UUID
      that refers to the true content stored outside git, or encrypted
      content) and turn it into a usable form upon checkout (e.g. download
      the external content, or decrypt the encrypted content).  For such a
      use case, the content cannot be used when the filter driver fails, and
      we need a way to tell Git to abort the whole operation for such a
      failing or missing filter driver.
      
      Add a new "filter.<driver>.required" configuration variable to mark the
      second use case.  When it is set, git will abort the operation when the
      filter driver does not exist or exits with a non-zero status code.
      Signed-off-by: Jehan Bing <[email protected]>
      Signed-off-by: Junio C Hamano <[email protected]>
      36daaaca
  5. 26 May, 2011 2 commits
  6. 22 Dec, 2010 2 commits
  7. 11 Apr, 2010 2 commits
  8. 06 Jan, 2010 1 commit
  9. 12 Mar, 2008 1 commit
  10. 21 Oct, 2007 1 commit
  11. 29 May, 2007 1 commit
  12. 15 May, 2007 1 commit
  13. 25 Apr, 2007 2 commits
    • Add 'filter' attribute and external filter driver definition. · aa4ed402
      Junio C Hamano authored
      The interface is similar to the custom low-level merge drivers.
      
      First you configure your filter driver by defining 'filter.<name>.*'
      variables in the configuration.
      
      	filter.<name>.clean	filter command to run upon checkin
      	filter.<name>.smudge	filter command to run upon checkout
      
      Then you assign the filter attribute to each path, giving it the
      custom filter driver's name as its value.
      
      Example:
      
      	(in .gitattributes)
      	*.c	filter=indent
      
      	(in config)
      	[filter "indent"]
      		clean = indent
      		smudge = cat
      Signed-off-by: Junio C Hamano <[email protected]>
      aa4ed402
    • Add 'ident' conversion. · 3fed15f5
      Junio C Hamano authored
      The 'ident' attribute set on a path squashes "$ident:<any bytes
      except dollar sign>$" to "$ident$" upon checkin, and expands it
      to "$ident: <blob SHA-1> $" upon checkout.
      
      As we have two conversions that affect checkin/checkout paths,
      clarify how they interact with each other.
      Signed-off-by: Junio C Hamano <[email protected]>
      3fed15f5