    sha1-lookup: more memory efficient search in sorted list of SHA-1 · 628522ec
    Junio C Hamano authored
    
    
    Currently, when looking for a packed object from the pack idx, a
    simple binary search is used.
    
    A conventional binary search loop looks like this:
    
            unsigned lo, hi;
            do {
                    unsigned mi = (lo + hi) / 2;
                    int cmp = "entry pointed at by mi" minus "target";
                    if (!cmp)
                            return mi; "mi is the wanted one"
                    if (cmp > 0)
                            hi = mi; "mi is larger than target"
                    else
                            lo = mi+1; "mi is smaller than target"
            } while (lo < hi);
    	"did not find what we wanted"
    
    The invariants are:
    
      - When entering the loop, 'lo' points at a slot that is never
        above the target (it could be at the target), 'hi' points at
        a slot that is guaranteed to be above the target (it can
        never be at the target).
    
      - We find a point 'mi' between 'lo' and 'hi' ('mi' could be
        the same as 'lo', but can never be as high as 'hi'), and
        check if 'mi' hits the target.  There are three cases:
    
         - if it is a hit, we have found what we are looking for;
    
         - if it is strictly higher than the target, we make it the
           new 'hi', and repeat the search;
    
         - if it is strictly lower than the target, we update 'lo'
           to one slot after it, because we allow 'lo' to be at the
           target and 'mi' is known to be below the target.
    
        If the loop exits, there is no matching entry.
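
    For concreteness, the abstract loop above might be spelled out
    over a sorted, flat table of 20-byte binary SHA-1s roughly as
    follows (a sketch only; the table layout, the 'nr' entry count
    and the function name are made up for illustration and are not
    the actual pack idx code):

            #include <string.h>             /* memcmp */

            /* Return the slot of 'target' in 'table' (nr sorted
             * 20-byte entries), or -1 if it is not there.
             */
            static int find_sha1(const unsigned char *table, unsigned nr,
                                 const unsigned char *target)
            {
                    unsigned lo = 0, hi = nr;

                    while (lo < hi) {
                            unsigned mi = lo + (hi - lo) / 2;
                            int cmp = memcmp(table + mi * 20, target, 20);

                            if (!cmp)
                                    return mi;      /* mi is the wanted one */
                            if (cmp > 0)
                                    hi = mi;        /* mi is larger than target */
                            else
                                    lo = mi + 1;    /* mi is smaller than target */
                    }
                    return -1;      /* did not find what we wanted */
            }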
    
    When choosing 'mi', we do not have to take the exact "middle";
    any point between 'lo' and 'hi' will do, as long as
    lo <= mi < hi is satisfied.  When we somehow know that the
    distance between the target and 'lo' is much shorter than the
    distance between the target and 'hi', we could pick an 'mi'
    that is much closer to 'lo' than the (lo + hi) / 2 a
    conventional binary search would pick.
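
    As an illustration of that freedom, the midpoint computation in
    the loops above could be replaced with a probe interpolated
    from the first byte of the entries at the two boundaries (again
    only a simplification; the actual patch looks at more bytes of
    the hash and is more careful):

            /* Hypothetical replacement for "mi = lo + (hi - lo) / 2":
             * guess where a uniformly distributed key should sit
             * between the entries at slots 'lo' and 'hi - 1',
             * judging by their first bytes only.
             */
            unsigned lov = table[lo * 20];             /* first byte at slot lo     */
            unsigned hiv = table[(hi - 1) * 20] + 1u;  /* one past the byte at hi-1 */
            unsigned kyv = target[0];
            unsigned mi = lo;                  /* fallback keeps lo <= mi < hi */

            if (kyv >= lov) {
                    unsigned long long ofs = (unsigned long long)(hi - 1 - lo)
                                             * (kyv - lov) / (hiv - lov);
                    mi = (ofs < hi - 1 - lo) ? lo + (unsigned)ofs : hi - 1;
            }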
    
    This patch takes advantage of the fact that SHA-1 is a good
    hash function: as long as there are enough entries in the
    table, we can expect them to be uniformly distributed.  An
    entry that begins with, for example, "deadbeef..." is likely to
    appear much later than midway through a reasonably populated
    table.  In fact, since 0xde is 222, it can be expected to sit
    about 87% (222/256) of the way down from the top of the table.
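
    As a back-of-the-envelope check (my arithmetic, not part of the
    patch itself), the very first probe into an nr-entry table
    could be estimated from that leading byte alone:

            /* 0xde is 222, so a key beginning with "de..." is
             * expected around 222/256 ~= 87% of the way into the
             * table; 'nr' is the hypothetical entry count from the
             * sketch above.
             */
            unsigned char first = 0xde;
            unsigned first_probe = (unsigned)((unsigned long long)nr * first / 256);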
    
    This is a work in progress and has switches to allow easier
    experimentation and debugging.  Exporting the GIT_USE_LOOKUP
    environment variable enables this code.
    
    On my admittedly memory-starved machine, with a partial KDE
    repository (3.0G pack with 95M idx):
    
        $ GIT_USE_LOOKUP=t git log -800 --stat HEAD >/dev/null
        3.93user 0.16system 0:04.09elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (0major+55588minor)pagefaults 0swaps
    
    Without the patch, the numbers are:
    
        $ git log -800 --stat HEAD >/dev/null
        4.00user 0.15system 0:04.17elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (0major+60258minor)pagefaults 0swaps
    
    In the same repository:
    
        $ GIT_USE_LOOKUP=t git log -2000 HEAD >/dev/null
        0.12user 0.00system 0:00.12elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (0major+4241minor)pagefaults 0swaps
    
    Without the patch, the numbers are:
    
        $ git log -2000 HEAD >/dev/null
        0.05user 0.01system 0:00.07elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
        0inputs+0outputs (0major+8506minor)pagefaults 0swaps
    
    There isn't much time difference, but the number of minor faults
    seems to show that we are touching a much smaller number of
    pages, which is expected.
    
    Signed-off-by: Junio C Hamano <gitster@pobox.com>