1. 27 Dec, 2016 1 commit
  2. 07 Dec, 2016 1 commit
    • lockfile: LOCK_REPORT_ON_ERROR · 3f061bf5
      Junio C Hamano authored
      The "libify sequencer" topic stopped passing the die_on_error option
      to hold_locked_index(), and this lost an error message from "git
      merge --ff-only $commit" when there are competing updates in progress.
      The command still exits with a non-zero status, but that is not of
      much help for an interactive user.  The last thing the command says
      is "Updating $from..$to".  We used to follow it with a big error
      message that makes it clear that "merge --ff-only" did not succeed.
      What is sad is that we should have noticed this regression while
      reviewing the change.  It was clear that the update to the
      checkout_fast_forward() function made a failing hold_locked_index()
      silent, but the only caller of the checkout_fast_forward() function
      had this comment:
      	    if (checkout_fast_forward(from, to, 1))
          -               exit(128); /* the callee should have complained already */
          +               return -1; /* the callee should have complained already */
      which clearly contradicted the assumption X-<.
      Add a new option LOCK_REPORT_ON_ERROR that can be passed instead of
      LOCK_DIE_ON_ERROR to the hold_lock*() family of functions and teach
      checkout_fast_forward() to use it to fix this regression.
      After going through all calls to the hold_lock*() family of functions
      that used to pass LOCK_DIE_ON_ERROR but were modified to pass 0 in
      the "libify sequencer" topic "git show --first-parent 2a4062a4",
      it appears that this is the only one that has become silent.  Many
      others used to give a detailed report that talked about "there may be
      competing Git process running" but with the series merged they now
      only give a one-liner "Unable to lock ...", some of which may
      have to be tweaked further, but at least they say something, unlike
      the one this patch fixes.
      Reported-by: Robbie Iannucci <[email protected]>
      Signed-off-by: Junio C Hamano <[email protected]>
  3. 23 Aug, 2016 1 commit
    • mingw: ensure temporary file handles are not inherited by child processes · 05d1ed61
      Ben Wijen authored
      When the index is locked and child processes inherit the handle to
      said lock and the parent process wants to remove the lock before the
      child process exits, on Windows there is a problem: it won't work
      because files cannot be deleted if a process holds a handle on them.
      The symptom:
          Rename from 'xxx/.git/index.lock' to 'xxx/.git/index' failed.
          Should I try again? (y/n)
      Spawning child processes with bInheritHandles==FALSE would not work
      because no file handles would be inherited, not even the hStdXxx
      handles in STARTUPINFO (stdin/stdout/stderr).
      Opening every file with O_NOINHERIT does not work, either, as e.g.
      git-upload-pack expects inherited file handles.
      This leaves us with the only way out: creating temp files with the
      O_NOINHERIT flag. This flag is Windows-specific, however. For our
      purposes, it is equivalent to O_CLOEXEC (which does not exist on
      Windows), so let's just open temporary files with the O_CLOEXEC flag and
      map that flag to O_NOINHERIT on Windows.
      As Eric Wong pointed out, we need to be careful to handle the case where
      the Linux headers used to compile Git support O_CLOEXEC but the Linux
      kernel used to run Git does not: it returns an EINVAL.
      This fixes the test that we just introduced to demonstrate the problem.
      Signed-off-by: Ben Wijen <[email protected]>
      Signed-off-by: Johannes Schindelin <[email protected]>
      Signed-off-by: Junio C Hamano <[email protected]>
  4. 28 Aug, 2015 1 commit
  5. 10 Aug, 2015 4 commits
  6. 14 May, 2015 1 commit
    • lockfile: allow file locking to be retried with a timeout · 044b6a9e
      Michael Haggerty authored
      Currently, there is only one attempt to lock a file. If it fails, the
      whole operation fails.
      But it might sometimes be advantageous to try acquiring a file lock a
      few times before giving up. So add a new function,
      hold_lock_file_for_update_timeout(), that allows a timeout to be
      specified. Make hold_lock_file_for_update() a thin wrapper around the
      new function.
      If timeout_ms is positive, then retry for at least that many
      milliseconds to acquire the lock. On each failed attempt, use select()
      to wait for a backoff time that increases quadratically (capped at 1
      second) and has a random component to prevent two processes from
      getting synchronized. If timeout_ms is negative, retry indefinitely.
      In a moment we will switch to using the new function when locking
      packed-refs.
      Signed-off-by: Michael Haggerty <[email protected]>
      Signed-off-by: Junio C Hamano <[email protected]>
  7. 15 Oct, 2014 1 commit
  8. 01 Oct, 2014 2 commits