1. 02 Nov, 2018 1 commit
    • revision.c: generation-based topo-order algorithm · b4542418
      Derrick Stolee authored
      The current --topo-order algorithm requires walking all
      reachable commits up front, topo-sorting them, all before
      outputting the first value. This patch introduces a new
      algorithm which uses stored generation numbers to
      incrementally walk in topo-order, outputting commits as
      we go. This can dramatically reduce the computation time
      to write a fixed number of commits, such as when limiting
      with "-n <N>" or filling the first page of a pager.
      When running a command like 'git rev-list --topo-order HEAD',
      Git performed the following steps:
      1. Run limit_list(), which parses all reachable commits,
         adds them to a linked list, and distributes UNINTERESTING
         flags. If all unprocessed commits are UNINTERESTING, then
         it may terminate without walking all reachable commits.
         This does not occur if we do not specify UNINTERESTING commits.
      2. Run sort_in_topological_order(), which is an implementation
         of Kahn's algorithm. It first iterates through the entire
         set of important commits and computes the in-degree of each
         (plus one, as we use 'zero' as a special value here). Then,
         we walk the commits in priority order, adding them to the
         priority queue if and only if their in-degree is one. As
         we remove commits from this priority queue, we decrement the
         in-degree of their parents.
      3. While we are peeling commits for output, get_revision_1()
         uses pop_commit on the full list of commits computed by
         sort_in_topological_order().
      In the new algorithm, these three steps correspond to three
      different commit walks. We run these walks simultaneously,
      and advance each only as far as necessary to satisfy the
      requirements of the 'higher order' walk. We know when we can
      pause each walk by using generation numbers from the commit-
      graph feature.
      Recall that the generation number of a commit satisfies:
      * If the commit has at least one parent, then the generation
        number is one more than the maximum generation number among
        its parents.
      * If the commit has no parent, then the generation number is one.
      There are two special generation numbers:
      * GENERATION_NUMBER_INFINITY: this value is 0xffffffff and
        indicates that the commit is not stored in the commit-graph and
        the generation number was not previously calculated.
      * GENERATION_NUMBER_ZERO: this value (0) is a special indicator
        to say that the commit-graph was generated by a version of Git
        that does not compute generation numbers (such as v2.18.0).
      Since we use generation_numbers_enabled() before using the new
      algorithm, we do not need to worry about GENERATION_NUMBER_ZERO.
      However, the existence of GENERATION_NUMBER_INFINITY implies only
      the following statement, which is weaker than the one we usually
      expect from generation numbers:
          If A and B are commits with generation numbers gen(A) and
          gen(B) and gen(A) < gen(B), then A cannot reach B.
      Thus, we will walk in each of our stages until the "maximum
      unexpanded generation number" is strictly lower than the
      generation number of a commit we are about to use.
      The walks are as follows:
      1. EXPLORE: using the explore_queue priority queue (ordered by
         maximizing the generation number), parse each reachable
         commit until all commits in the queue have generation
         number strictly lower than needed. During this walk, update
         the UNINTERESTING flags as necessary.
      2. INDEGREE: using the indegree_queue priority queue (ordered
         by maximizing the generation number), add one to the in-
         degree of each parent for each commit that is walked. Since
         we walk in order of decreasing generation number, we know
         that discovering an in-degree value of 0 means the value for
         that commit was not initialized, so should be initialized to
         two. (Recall that in-degree value "1" is what we use to say a
         commit is ready for output.) As we iterate the parents of a
         commit during this walk, ensure the EXPLORE walk has walked
         beyond their generation numbers.
      3. TOPO: using the topo_queue priority queue (ordered based on
         the sort_order given, which could be commit-date, author-
         date, or typical topo-order which treats the queue as a LIFO
         stack), remove a commit from the queue and decrement the
         in-degree of each parent. If a parent has an in-degree of
         one, then we add it to the topo_queue. Before we decrement
         the in-degree, however, ensure the INDEGREE walk has walked
         beyond that generation number.
      The implementations of these walks are in the following methods:
      * explore_walk_step and explore_to_depth
      * indegree_walk_step and compute_indegrees_to_depth
      * next_topo_commit and expand_topo_walk
      These methods have some patterns that may seem strange at first,
      but they are probably carry-overs from their equivalents in
      limit_list and sort_in_topological_order.
      One thing that is missing from this implementation is a proper
      way to stop walking when the entire queue is UNINTERESTING, so
      this implementation is not enabled by comparisons, such as in
      'git rev-list --topo-order A..B'. This can be updated in the
      future.
      In my local testing, I used the following Git commands on the
      Linux repository in three modes: HEAD~1 with no commit-graph,
      HEAD~1 with a commit-graph, and HEAD with a commit-graph. This
      allows comparing the benefits we get from parsing commits from
      the commit-graph and then again the benefits we get by
      restricting the set of commits we walk.
      Test: git rev-list --topo-order -100 HEAD
      HEAD~1, no commit-graph: 6.80 s
      HEAD~1, w/ commit-graph: 0.77 s
        HEAD, w/ commit-graph: 0.02 s
      Test: git rev-list --topo-order -100 HEAD -- tools
      HEAD~1, no commit-graph: 9.63 s
      HEAD~1, w/ commit-graph: 6.06 s
        HEAD, w/ commit-graph: 0.06 s
      This speedup is due to a few things. First, the new generation-
      number-enabled algorithm walks a number of commits on the order
      of the number of results output (subject to some branching-
      structure expectations).
      Since we limit to 100 results, we are running a query similar to
      filling a single page of results. Second, when specifying a path,
      we must parse the root tree object for each commit we walk. The
      previous benefits from the commit-graph are entirely from reading
      the commit-graph instead of parsing commits. Since we need to
      parse trees for the same number of commits as before, we slow
      down significantly from the non-path-based query.
      For the test above, I specifically selected a path that is changed
      frequently, including by merge commits. A less-frequently-changed
      path (such as 'README') has similar end-to-end time since we need
      to walk the same number of commits (before determining we do not
      have 100 hits). However, we get the benefit that the output is
      presented to the user as it is discovered, much the same as a
      normal 'git log' command (no '--topo-order'). This is an improved
      user experience, even if the command has the same runtime.
      Helped-by: Jeff King <peff@peff.net>
      Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 15 Aug, 2018 1 commit
  3. 20 Jul, 2018 2 commits
  4. 09 Jul, 2018 1 commit
    • upload-pack: send refs' objects despite "filter" · a0c9016a
      Jonathan Tan authored
      A filter line in a request to upload-pack filters out objects regardless
      of whether they are directly referenced by a "want" line or not. This
      means that cloning with "--filter=blob:none" (or another filter that
      excludes blobs) from a repository with at least one ref pointing to a
      blob (for example, the Git repository itself) results in output
      like the following:
          error: missing object referenced by 'refs/tags/junio-gpg-pub'
      and if that particular blob is not referenced by a fetched tree, the
      resulting clone fails fsck because there is no object from the remote to
      vouch that the missing object is a promisor object.
      Update both the protocol and the upload-pack implementation to include
      all explicitly specified "want" objects in the packfile regardless of
      the filter specification.
      Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  5. 29 Jun, 2018 9 commits
  6. 15 Jun, 2018 1 commit
    • fetch-pack: introduce negotiator API · ec062838
      Jonathan Tan authored
      Introduce the new files fetch-negotiator.{h,c}, which contains an API
      behind which the details of negotiation are abstracted. Currently, only
      one algorithm is available: the existing one.
      This patch is written to be easily reviewed: static functions are
      moved verbatim from fetch-pack.c to negotiator/default.c, and it can be
      seen that the lines replaced by negotiator->X() calls are present in the
      X() functions respectively.
      Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  7. 21 May, 2018 1 commit
  8. 17 May, 2018 2 commits
  9. 16 May, 2018 2 commits
  10. 09 May, 2018 3 commits
  11. 16 Apr, 2018 1 commit
    • pack-objects: turn type and in_pack_type to bitfields · fd9b1bae
      Duy Nguyen authored
      An extra field type_valid is added to carry the equivalent of OBJ_BAD
      in the original "type" field. in_pack_type always contains a valid
      type so we only need 3 bits for it.
      A note about accepting OBJ_NONE as "valid" type. The function
      read_object_list_from_stdin() can pass this value [1] and it
      eventually calls create_object_entry(), where the current code
      skips setting the "type" field if the incoming type is zero. This
      has no bad side effects because the "type" field should be
      memset()'d anyway.
      But since we also need to set type_valid now, skipping oe_set_type()
      leaves type_valid zero/false, which will make oe_type() return
      OBJ_BAD, not OBJ_NONE anymore. Apparently we do care about OBJ_NONE in
      prepare_pack(). This switch from OBJ_NONE to OBJ_BAD may trigger
          fatal: unable to get type of object ...
      Accepting OBJ_NONE [2] does sound wrong, but this is how it has
      been for a very long time and I haven't had time to dig in further.
      [1] See 5c49c116 (pack-objects: better check_object() performances -
      [2] 21666f1a (convert object type handling from a string to a number
          - 2007-02-26)
      Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  12. 11 Apr, 2018 1 commit
  13. 06 Mar, 2018 2 commits
  14. 14 Feb, 2018 1 commit
  15. 28 Dec, 2017 1 commit
  16. 22 Nov, 2017 1 commit
    • list-objects: filter objects in traverse_commit_list · 25ec7bca
      Jeff Hostetler authored
      Create traverse_commit_list_filtered() and add filtering
      interface to allow certain objects to be omitted from the result.
      Update traverse_commit_list() to be a wrapper for the above
      with a null filter to minimize the number of callers that
      needed to be changed.
      Object filtering will be used in a future commit by rev-list
      and pack-objects for partial clone and fetch to omit unwanted
      objects from the result.
      traverse_bitmap_commit_list() does not work with filtering.
      If a packfile bitmap is present, it will not be used.  It
      should be possible to extend such support in the future (at
      least to simple filters that do not require object pathnames),
      but that is beyond the scope of this patch series.
      Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  17. 24 Sep, 2017 1 commit
    • object_array: add and use `object_array_pop()` · 71992039
      Martin Ågren authored
      In a couple of places, we pop objects off an object array `foo` by
      decreasing `foo.nr`. We access `foo.nr` in many places, but most if not
      all other times we do so read-only, e.g., as we iterate over the array.
      But when we change `foo.nr` behind the array's back, it feels a bit
      nasty and looks like it might leak memory.
      Leaks happen if the popped element has an allocated `name` or `path`.
      At the moment, that is not the case. Still, 1) the object array might
      gain more fields that want to be freed, 2) a code path where we pop
      might start using names or paths, 3) one of these code paths might be
      copied to somewhere where we do, and 4) using a dedicated function for
      popping is conceptually cleaner.
      Introduce and use `object_array_pop()` instead. Release memory in the
      new function. Document that popping an object leaves the associated
      elements in limbo.
      The converted places were identified by grepping for "\.nr\>" and
      looking for "--".
      Make the new function return NULL on an empty array. This is consistent
      with `pop_commit()` and allows the following:
      	while ((o = object_array_pop(&foo)) != NULL) {
      		// do something
      	}
      But as noted above, we don't need to go out of our way to avoid reading
      `foo.nr`. This is probably more readable:
      	while (foo.nr) {
      		... o = object_array_pop(&foo);
      		// do something
      	}
      The name of `object_array_pop()` does not quite align with
      `add_object_array()`. That is unfortunate. On the other hand, it matches
      `object_array_clear()`. Arguably it's `add_...` that is the odd one out,
      since it reads like it's used to "add" an "object array". For that
      reason, side with `object_array_clear()`.
      Signed-off-by: Martin Ågren <martin.agren@gmail.com>
      Reviewed-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  18. 20 Jul, 2017 1 commit
  19. 08 May, 2017 1 commit
    • object: convert parse_object* to take struct object_id · c251c83d
      brian m. carlson authored
      Make parse_object, parse_object_or_die, and parse_object_buffer take a
      pointer to struct object_id.  Remove the temporary variables inserted
      earlier, since they are no longer necessary.  Transform all of the
      callers using the following semantic patch:
      expression E1;
      - parse_object(E1.hash)
      + parse_object(&E1)
      expression E1;
      - parse_object(E1->hash)
      + parse_object(E1)
      expression E1, E2;
      - parse_object_or_die(E1.hash, E2)
      + parse_object_or_die(&E1, E2)
      expression E1, E2;
      - parse_object_or_die(E1->hash, E2)
      + parse_object_or_die(E1, E2)
      expression E1, E2, E3, E4, E5;
      - parse_object_buffer(E1.hash, E2, E3, E4, E5)
      + parse_object_buffer(&E1, E2, E3, E4, E5)
      expression E1, E2, E3, E4, E5;
      - parse_object_buffer(E1->hash, E2, E3, E4, E5)
      + parse_object_buffer(E1, E2, E3, E4, E5)
      Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  20. 08 Feb, 2017 1 commit
    • fetch-pack: cache results of for_each_alternate_ref · 41a078c6
      Jeff King authored
      We may run for_each_alternate_ref() twice, once in
      find_common() and once in everything_local(). This operation
      can be expensive, because it involves running a sub-process
      which must freshly load all of the alternate's refs from disk.
      Let's cache and reuse the results between the two calls. We
      can make some optimizations based on the particular use
      pattern in fetch-pack to keep our memory usage down.
      The first is that we only care about the sha1s, not the refs
      themselves. So it's OK to store only the sha1s, and to
      suppress duplicates. The natural fit would therefore be a
      sha1_array.
      However, sha1_array's de-duplication happens only after it
      has read and sorted all entries. It still stores each
      duplicate. For an alternate with a large number of refs
      pointing to the same commits, this is a needless expense.
      Instead, we'd prefer to eliminate duplicates before putting
      them in the cache, which implies using a hash. We can
      further note that fetch-pack will call parse_object() on
      each alternate sha1. We can therefore keep our cache as a
      set of pointers to "struct object". That gives us a place to
      put our "already seen" bit with an optimized hash lookup.
      And as a bonus, the object stores the sha1 for us, so
      pointer-to-object is all we need.
      There are two extra optimizations I didn't do here:
        - we actually store an array of pointer-to-object.
          Technically we could just walk the obj_hash table
          looking for entries with the ALTERNATE flag set (because
          our use case doesn't care about the order here).
          But that hash table may be mostly composed of
          non-ALTERNATE entries, so we'd waste time walking over
          them. So it would be a slight win in memory use, but a
          loss in CPU.
        - the items we pull out of the cache are actual "struct
          object"s, but then we feed "obj->sha1" to our
          sub-functions, which promptly call parse_object().
          This second parse is cheap, because it starts with
          lookup_object() and will bail immediately when it sees
          we've already parsed the object. We could save the extra
          hash lookup, but it would involve refactoring the
          functions we call. It may or may not be worth the trouble.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  21. 13 Jun, 2016 1 commit
  22. 20 Nov, 2015 3 commits
  23. 19 Oct, 2014 1 commit
  24. 16 Oct, 2014 1 commit
    • make add_object_array_with_context interface more sane · 9e0c3c4f
      Jeff King authored
      When you resolve a sha1, you can optionally keep any context
      found during the resolution, including the path and mode of
      a tree entry (e.g., when looking up "HEAD:subdir/file.c").
      The add_object_array_with_context function lets you then
      attach that context to an entry in a list. Unfortunately,
      the interface for doing so is horrible. The object_context
      structure is large and most object_array users do not use
      it. Therefore we keep a pointer to the structure to avoid
      burdening other users too much. But that means when we do
      use it that we must allocate the struct ourselves. And the
      struct contains a fixed PATH_MAX-sized buffer, which makes
      this wholly unsuitable for any large arrays.
      We can observe that there is only a single user of the
      "with_context" variant: builtin/grep.c. And in that use
      case, the only element we care about is the path. We can
      therefore store only the path as a pointer (the context's
      mode field was redundant with the object_array_entry itself,
      and nobody actually cared about the surrounding tree). This
      still requires a strdup of the pathname, but at least we are
      only consuming the minimum amount of memory for each string.
      We can also handle the copying ourselves in
      add_object_array_*, and free it as appropriate in
      object_array_release_entry.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>