Commit Graph

956 Commits

Author SHA1 Message Date
Matthias Sohn 857d151198 JGit 0.11.1
Change-Id: I9ac2fdfb4326536502964ba614d37d0bd103f524
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-02-11 23:25:34 +01:00
Chris Aniszczyk b46e06bc74 Merge "Fix NPE on reading global config on MAC" into stable-0.11 2011-02-09 12:27:36 -05:00
Jens Baumgart b82e4bf771 Fix NPE on reading global config on MAC
Bug: 336610

Change-Id: Iefcb85e791723801faa315b3ee45fb19e3ca52fb
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
2011-02-09 15:12:31 +01:00
Jens Baumgart c9e4a78555 Add isOutdated method to DirCache
isOutdated returns true iff the memory state differs from the index
file.

Change-Id: If35db06743f5f588ab19d360fd2a18a07c918edb
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
2011-02-09 15:02:22 +01:00
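A minimal usage sketch of the isOutdated() check added above; the surrounding repository handle and re-read policy are illustrative assumptions, not part of the commit:

  import org.eclipse.jgit.dircache.DirCache;
  import org.eclipse.jgit.lib.Repository;

  static DirCache freshDirCache(Repository repository, DirCache cache)
      throws java.io.IOException {
    // isOutdated() is true iff the memory state differs from the index file
    if (cache == null || cache.isOutdated())
      return repository.readDirCache(); // re-read .git/index from disk
    return cache; // in-memory copy is still current
  }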
Mathias Kinzler 724af77c65 PullCommand: use default remote instead of throwing Exception
When pulling into a local branch that has no upstream configuration,
pull should try to used the default remote ("origin") instead of
throwing an Exception.

Bug: 336504
Change-Id: Ife75858e89ea79c0d6d88ba73877fe8400448e34
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-02-08 08:56:19 +01:00
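A rough sketch of the fallback described above, assuming the branch's upstream is read from the repository configuration; the method name and literals are illustrative:

  import org.eclipse.jgit.lib.Config;
  import org.eclipse.jgit.lib.Repository;

  static String remoteForPull(Repository repo) throws java.io.IOException {
    String branch = repo.getBranch();
    Config cfg = repo.getConfig();
    String remote = cfg.getString("branch", branch, "remote");
    // No branch.<name>.remote configured: fall back to the default remote.
    return remote != null ? remote : "origin";
  }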
Shawn O. Pearce 0180946bc8 Remove quoting of command over SSH
If the command contains spaces, it needs to be evaluated by the remote
shell.  Quoting the command breaks this, making it impossible to run a
remote command that needs additional options.

Bug: 336301
Change-Id: Ib5d88f0b2151df2d1d2b4e08d51ee979f6da67b5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-06 14:04:39 -08:00
Shawn O. Pearce 0fe7eeba04 UploadPack: Tag non-commits SATISFIED earlier
This gets non-commits out of the wantSatisfied() main loop by making
use of the cached SATISFIED flag and its existing bypass.  Anything
that isn't a commit cannot be discovered by the have negotiation, so
it is always assumed to be SATISFIED by the server.

Bug: 301639
Change-Id: I1ef354fbf2e2ed44c9020a4069d7179f2159f19f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-06 01:08:41 -08:00
Shawn O. Pearce b5da75bb87 UploadPack: Don't discard COMMON, SATISFIED flags
When the walker resets, it is going to scrub the COMMON and SATISFIED
flags off a commit if the commit is contained within another commit
the client wants.  This is common if the client asks for both a
'maint' and 'master' branch, and 'maint' is also fully merged into
'master'.

COMMON shouldn't be scrubbed during reset because it is used to control
membership of the commonBase collection, which is a List.  commonBase
should technically be a set, but membership is cheaper with a RevFlag.
COMMON appears on a commit reachable from a WANT when there is also a
PEER_HAS flag present, as this is a merge base.  Scrubbing this off
when another branch is tested isn't useful.

SATISFIED is a cache to tell us whether wantSatisfied() has already
completed for this particular WANT.  If it has, there is no need to
recompute on that branch.  Scrubbing it off 'maint' when we test
'master' just means we would later need to re-test 'maint', wasting
CPU time on the server.

Bug: 301639
Change-Id: I3bb67d68212e4f579e8c5dfb138f007b406d775f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-06 01:08:41 -08:00
Shawn O. Pearce a35c793b2d UploadPack: Fix want-is-satisfied test
okToGiveUpImp() has been missing a ! for a long time.  This loop over
wantAll() is looking for an object where wantSatisfied() returns
false, because there is no common merge base present.  Unfortunately
it was missing a !, causing the loop to break and return false after
at least one want was satisfied.

Bug: 301639
Change-Id: Ifdbe0b22c9cd0a9181546d090b4990d792d70c82
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-06 01:08:41 -08:00
Shawn O. Pearce c6423932bf Fix JGit --upload-pack, --receive-pack options
JGit did not use sh -c to run the receive-pack or upload-pack programs
locally, which caused errors if these strings contained spaces and
needed the local shell to evaluate them.

Win32 support using cmd.exe /c is completely untested, but seems like
it should work based on the limited information I could get through
Google search results.

Bug: 336301
Change-Id: I22e5e3492fdebbae092d1ce6b47ad411e57cc1ba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-05 17:40:54 -08:00
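A simplified sketch of running the configured pack program through the local shell, as the commit above describes; the command assembly and Windows detection are illustrative only:

  static Process startPackProgram(String packCommand, String repoPath)
      throws java.io.IOException {
    // Let the local shell evaluate the command so embedded options survive;
    // on Windows, cmd.exe /c plays the same role (untested, per the commit).
    String shellCommand = packCommand + " '" + repoPath + "'";
    ProcessBuilder pb = System.getProperty("os.name").startsWith("Windows")
        ? new ProcessBuilder("cmd.exe", "/c", shellCommand)
        : new ProcessBuilder("sh", "-c", shellCommand);
    return pb.start();
  }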
Shawn O. Pearce 2096c749c3 UploadPack: Avoid parsing want list on clone
If a client wants to perform a clone of the repository, it sends
wants, but no haves.  There is no point in parsing the want list
within UploadPack, as there won't be a common merge base search.
Instead just defer the parsing to PackWriter, which will do its
own parsing and object enumeration.

If the client does have a "have" set, defer parsing of the want list
until the have list is also parsed, and parse them together in a
single batch queue.  This lets the underlying storage system use a
larger lookup batch if there is significant latency involved when
resolving an ObjectId to a RevObject.

Change-Id: I9c30d34f8e344da05c8a2c041a6dc181d8e8bc19
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-04 09:10:23 -08:00
Shawn O. Pearce a3620cbbe1 Reuse cached SHA-1 when computing from WorkingTreeIterator
Change-Id: I2b2170c29017993d8cb7a1d3c8cd94fb16c7dd02
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
2011-02-03 17:06:23 -06:00
Shawn O. Pearce 461b012e95 PackWriter: Support reuse of entire packs
The most expensive part of packing a repository for transport to
another system is enumerating all of the objects in the repository.
Once this gets to the size of the linux-2.6 repository (1.8 million
objects), enumeration can take several CPU minutes and costs a lot
of temporary working set memory.

Teach PackWriter to efficiently reuse an existing "cached pack"
by answering a clone request with a thin pack followed by a larger
cached pack appended to the end.  This requires the repository
owner to first construct the cached pack by hand, and record the
tip commits inside of $GIT_DIR/objects/info/cached-packs:

  cd $GIT_DIR
  root=$(git rev-parse master)
  tmp=objects/.tmp-$$
  names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
  for n in $names; do
    chmod a-w $tmp-$n.pack $tmp-$n.idx
    touch objects/pack/pack-$n.keep
    mv $tmp-$n.pack objects/pack/pack-$n.pack
    mv $tmp-$n.idx objects/pack/pack-$n.idx
  done

  (echo "+ $root";
   for n in $names; do echo "P $n"; done;
   echo) >>objects/info/cached-packs

  git repack -a -d

When a clone request needs to include $root, the corresponding
cached pack will be copied as-is, rather than enumerating all of
the objects that are reachable from $root.

For a linux-2.6 kernel repository that should be about 376 MiB,
the above process creates two packs of 368 MiB and 38 MiB[1].
This is a local disk usage increase of ~26 MiB, due to reduced
delta compression between the large cached pack and the smaller
recent activity pack.  The overhead is similar to 1 full copy of
the compressed project sources.

With this cached pack in hand, JGit daemon completes a clone request
in 1m17s less time, but a slightly larger data transfer (+2.39 MiB):

  Before:
    remote: Counting objects: 1861830, done
    remote: Finding sources: 100% (1861830/1861830)
    remote: Getting sizes: 100% (88243/88243)
    remote: Compressing objects: 100% (88184/88184)
    Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
    remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
    Resolving deltas: 100% (1564621/1564621), done.

    real  3m19.005s

  After:
    remote: Counting objects: 1601, done
    remote: Counting objects: 1828460, done
    remote: Finding sources: 100% (50475/50475)
    remote: Getting sizes: 100% (18843/18843)
    remote: Compressing objects: 100% (7585/7585)
    remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
    Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
    Resolving deltas: 100% (1559477/1559477), done.

    real 2m2.938s

Repository owners can periodically refresh their cached packs by
repacking their repository, folding all newer objects into a larger
cached pack.  Since repacking is already considered to be a normal
Git maintenance activity, this isn't a very big burden.

[1] In this test $root was set back about two weeks.

Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-03 13:20:22 -08:00
Shawn O. Pearce 71f168fcd7 PackWriter: Display totals after sending objects
CGit pack-objects displays a totals line after the pack data
was fully written.  This can be useful to understand some of
the decisions made by the packer, and has been a great tool
for helping to debug some of that code.

Track some of the basic values, and send it to the client when
packing is done:

  remote: Counting objects: 1826776, done
  remote: Finding sources: 100% (55121/55121)
  remote: Getting sizes: 100% (25654/25654)
  remote: Compressing objects: 100% (11434/11434)
  remote: Total 1861830 (delta 3926), reused 1854705 (delta 38306)
  Receiving objects: 100% (1861830/1861830), 386.03 MiB | 30.32 MiB/s, done.

Change-Id: If3b039017a984ed5d5ae80940ce32bda93652df5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-02 17:17:57 -08:00
Shawn O. Pearce 04759f3274 RefAdvertiser: Avoid object parsing
It isn't strictly necessary to validate every reference's target
object is reachable in the repository before advertising it to a
client. This is an expensive operation when there are thousands of
references, and it's very unlikely that a reference uses a missing
object, because garbage collection proceeds from the references and
walks down through the graph. So trying to hide a dangling reference
from clients is relatively pointless.

Even if we are trying to avoid giving a client a corrupt repository,
this simple check isn't sufficient.  It is possible for a reference to
point to a valid commit, but that commit to have a missing blob in its
root tree.  This can be caused by staging a file into the index,
waiting several weeks, then committing that file while also racing
against a prune.  The prune may delete the blob, since its
modification time is more than 2 weeks ago, but retain the commit,
since its modification time is right now.

Such graph corruption is already caught during PackWriter as it
enumerates the graph from the client's want list and digs back
to the roots or common base.  Leave the reference validation also
for that same phase, where we know we have to parse the object to
support the enumeration.

Change-Id: Iee70ead0d3ed2d2fcc980417d09d7a69b05f5c2f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-02-02 17:16:32 -08:00
Chris Aniszczyk f265a80d2e Merge "Expose some constants needed for reading the Pull configuration" 2011-02-02 10:22:23 -05:00
Mathias Kinzler 13a406287e Expose some constants needed for reading the Pull configuration
Change-Id: I72cb1cc718800c09366306ab2eebd43cd82023ff
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-02-02 14:45:37 +01:00
Jens Baumgart 29ed09a44f PushCommand: do not set a null credentials provider
PushCommand now does not set a null credentials provider on
Transport because in this case the default provider is replaced with
null and the default mechanism for providing credentials is not
working.

Bug: 336023
Change-Id: I7a7a9221afcfebe2e1595a5e59641e6c1ae4a207
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
2011-02-02 13:13:28 +01:00
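The guard described above amounts to only overriding the provider when one was actually supplied; a sketch (the field and Transport variable are illustrative):

  // Leave Transport's default CredentialsProvider in place when none was set.
  if (credentialsProvider != null)
      transport.setCredentialsProvider(credentialsProvider);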
Robin Stocker b0245b548b Don't print "into HEAD" when merging refs/heads/master
When MergeMessageFormatter was given a symbolic ref HEAD which points to
refs/heads/master (which is the case when merging a branch in EGit), it
would result in a merge message like the following:

  Merge branch 'a' into HEAD

But it should print the following (as C Git does):

  Merge branch 'a'

The solution is to use the leaf ref when checking for refs/heads/master.

Change-Id: I28ae5713b7e8123a0176fc6d7356e469900e7e97
2011-02-01 22:27:33 +01:00
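A sketch of the leaf-ref check described above; variable names and message building are illustrative, not the actual MergeMessageFormatter code:

  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.Ref;
  import org.eclipse.jgit.lib.Repository;

  static String mergeMessage(Ref target, String sourceBranch) {
    // Resolve symbolic refs such as HEAD to the branch they point at.
    Ref leaf = target.isSymbolic() ? target.getLeaf() : target;
    if ((Constants.R_HEADS + Constants.MASTER).equals(leaf.getName()))
      return "Merge branch '" + sourceBranch + "'";
    return "Merge branch '" + sourceBranch + "' into "
        + Repository.shortenRefName(target.getName());
  }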
Shawn O. Pearce 13bcf05a9e PackWriter: Make thin packs more efficient
There is no point in pushing all of the files within the edge
commits into the delta search when making a thin pack.  This floods
the delta search window with objects that are unlikely to be useful
bases for the objects that will be written out, resulting in lower
data compression and higher transfer sizes.

Instead observe the path of a tree or blob that is being pushed
into the outgoing set, and use that path to locate up to WINDOW
ancestor versions from the edge commits.  Push only those objects
into the edgeObjects set, reducing the number of objects seen by the
search window.  This allows PackWriter to only look at ancestors
for the modified files, rather than all files in the project.
Limiting the search to WINDOW size makes sense, because more than
WINDOW edge objects will just skip through the window search as
none of them need to be delta compressed.

To further improve compression, sort edge objects into the front
of the window list, rather than randomly throughout.  This puts
non-edges later in the window and gives them a better chance at
finding their base, since they search backwards through the window.

These changes make a significant difference in the thin-pack:

  Before:
    remote: Counting objects: 144190, done
    remote: Finding sources: 100% (50275/50275)
    remote: Getting sizes: 100% (101405/101405)
    remote: Compressing objects: 100% (7587/7587)
    Receiving objects: 100% (50275/50275), 24.67 MiB | 9.90 MiB/s, done.
    Resolving deltas: 100% (40339/40339), completed with 2218 local objects.

    real    0m30.267s

  After:
    remote: Counting objects: 61549, done
    remote: Finding sources: 100% (50275/50275)
    remote: Getting sizes: 100% (18862/18862)
    remote: Compressing objects: 100% (7588/7588)
    Receiving objects: 100% (50275/50275), 11.04 MiB | 3.51 MiB/s, done.
    Resolving deltas: 100% (43160/43160), completed with 5014 local objects.

    real    0m22.170s

The resulting pack is 13.63 MiB smaller, even though it contains the
same exact objects.  82,543 fewer objects had to have their sizes
looked up, which saved about 8s of server CPU time.  2,796 more
objects from the client were used as part of the base object set,
which contributed to the smaller transfer size.

Change-Id: Id01271950432c6960897495b09deab70e33993a9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-02-01 09:12:06 -06:00
Shawn O. Pearce 2fbcba41e3 PackWriter: Cleanup findObjectToPack method
Some of this code predates making ObjectId.equals() final
and fixing RevObject.equals() to match ObjectId.equals().
It was therefore more complex than it needs to be, because
it tried to work around RevObject's broken equals() rules
by converting to ObjectId in a different collection.

Also combine the setUpWalker() and findObjectsToPack() methods;
these can be one method and the code is actually cleaner.

Change-Id: I0f4cf9997cd66d8b6e7f80873979ef1439e507fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-02-01 09:03:24 -06:00
Shawn O. Pearce 8f63dface2 PackWriter: Correct 'Compressing objects' progress message
The first 'Compressing objects' progress message is wrong; it is
actually PackWriter looking up the sizes of each object in the
ObjectDatabase, so objects can be sorted correctly in the later
type-size sort that tries to take advantage of "Linus' Law" to
improve delta compression.

Rename the progress to say 'Getting sizes', which is an accurate
description of what it is doing.

Change-Id: Ida0a052ad2f6e994996189ca12959caab9e556a3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-02-01 09:01:58 -06:00
Chris Aniszczyk eb5658e629 Merge "Add git-clone to the Git API" 2011-02-01 09:56:46 -05:00
Shawn O. Pearce 37a10e3006 PackWriter: Don't include edges in progress meter
When compressing objects, don't include the edges in the progress
meter.  These cost almost no CPU time as they are simply pushed into
and popped out of the delta search window.

Change-Id: I7ea19f0263e463c65da34a7e92718c6db1d4a131
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-02-01 08:55:43 -06:00
Chris Aniszczyk cc5295c4b4 Merge "Show resolving deltas progress to push clients" 2011-02-01 09:40:57 -05:00
Chris Aniszczyk c1de63262e Merge "ObjectWalk: Fix reset for non-commit objects" 2011-02-01 09:38:30 -05:00
Chris Aniszczyk 4112884ede Add git-clone to the Git API
Enhance the Git API to support cloning repositories.

Bug: 334763
Change-Id: Ibe1191498dceb9cbd1325aed85b4c403db19f41e
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-31 16:56:56 -06:00
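A usage sketch of the new clone API; the URI and target directory are placeholders:

  import java.io.File;
  import org.eclipse.jgit.api.Git;

  Git git = Git.cloneRepository()
      .setURI("https://example.com/project.git")
      .setDirectory(new File("/tmp/project"))
      .call();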
Shawn O. Pearce 168114fd39 Show resolving deltas progress to push clients
CGit push clients 1.6.6 and later support progress messages on the
side-band-64k channel during push, as this was introduced to handle
server side hook errors reported over smart HTTP.

Since JGit's delta resolution isn't always as fast as CGit's is,
a user may think the server has crashed and failed to report
status if the user pushed a lot of content and sees no feedback.
Exposing the progress monitor during the resolving deltas phase
will let the user know the server is still making forward progress.

This also helps BasePackPushConnection, which has a bounded timeout
on how long it will wait before assuming the remote server is dead.
Progress messages pushed down the side-band channel will reset the
read timer, helping the connection to stay alive and avoid timing
out before the remote side's work is complete.

Change-Id: I429c825e5a724d2f21c66f95526d9c49edcc6ca9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-31 12:31:52 -08:00
Shawn O. Pearce c2ab3421a2 ObjectWalk: Fix reset for non-commit objects
Non-commits are added to a pending queue, but duplicates are
removed by checking a flag.  During a reset that flag must be
stripped off the old roots, otherwise the caller cannot reuse
the old roots after the reset.

RevWalk already does this correctly for commits, but ObjectWalk
failed to handle the non-commit case itself.

Change-Id: I99e1832bf204eac5a424fdb04f327792e8cded4a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-31 12:31:52 -08:00
Mathias Kinzler b15b9d5df2 Proper handling of rebase during pull
After consulting with Christian Halstrick, it turned out that the
handling of rebase during pull was implemented incorrectly.

Change-Id: I40f03409e080cdfeceb21460150f5e02a016e7f4
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-31 12:12:48 +01:00
Robin Rosenberg 9ffcf2a8b3 Merge changes I3a74cc84,I219f864f
* changes:
  [findbugs] Do not ignore exceptional return value of createNewFile()
  Do not create files to be updated before checkout of DirCache entry
2011-01-29 17:52:12 -05:00
Tomasz Zarna 9fbda22392 Add setCredentialsProvider to PullCommand
Bug: 335703
Change-Id: Id9713a4849c772e030fca23dd64b993264f28366
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-28 14:06:47 -06:00
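A usage sketch of the new setter; the credentials are placeholders:

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.api.PullResult;
  import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

  PullResult result = new Git(repository).pull()
      .setCredentialsProvider(
          new UsernamePasswordCredentialsProvider("user", "secret"))
      .call();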
Chris Aniszczyk a880233d7f Merge "ObjectIdSubclassMap: Support duplicate additions" 2011-01-28 12:45:39 -05:00
Shawn O. Pearce 17dc6bdafd ObjectIdSubclassMap: Support duplicate additions
The new addIfAbsent() method combines get() with add(), but does
it in a single step so that the common case of get() returning null
for a new object can immediately insert the object into the map.

Change-Id: Ib599ab4de13ad67665ccfccf3ece52ba3222bcba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-28 08:17:20 -08:00
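A sketch of the combined lookup-and-insert described above; the element type and variable names are illustrative:

  // addIfAbsent() returns the existing instance if an equal ObjectId is
  // already in the map, otherwise it inserts and returns the candidate.
  RevObject prior = objects.addIfAbsent(candidate);
  if (prior != candidate) {
      // Duplicate addition: reuse the instance that was already present.
  }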
Chris Aniszczyk bf69401fee Merge "Make PullCommand work with Rebase" 2011-01-28 10:52:38 -05:00
Chris Aniszczyk 0b2ac1e929 Merge "RebaseCommand: detect and handle fast-forward properly" 2011-01-28 10:38:27 -05:00
Shawn O. Pearce 065a0a8122 Revert "Teach PackWriter how to reuse an existing object list"
This reverts commit f5fe2dca3c.

I regret adding this feature to the public API.  Caches aren't always
the best idea, as they require work to maintain.  Here the cache is
redundant information that must be computed, and when it grows stale
must be removed.  The redundant information takes up more disk space,
about the same size as the pack-*.idx files are.  For the linux-2.6
repository, that's more than 40 MB for a 400 MB repository.  So the
cache is a 10% increase in disk usage.

The entire point of this cache is to improve PackWriter performance,
and only PackWriter performance, and only when sending an initial
clone to a new client.  There may be better ways to optimize this, and
until we have a solid solution, we shouldn't be using a separate cache
in JGit.
2011-01-28 07:20:26 -08:00
Mathias Kinzler 14ca80bc90 Make PullCommand work with Rebase
Rebase must honor the upstream configuration

branch.<branchname>.rebase

Change-Id: Ic94f263d3f47b630ad75bd5412cb4741bb1109ca
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-28 15:04:52 +01:00
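A sketch of reading the configuration key named above; the surrounding variables are illustrative:

  // branch.<branchname>.rebase selects rebase instead of merge during pull.
  boolean rebase = repo.getConfig().getBoolean(
      "branch", branchName, "rebase", false);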
Mathias Kinzler e8a1328d05 RebaseCommand: detect and handle fast-forward properly
This bug was hidden by an incomplete test: the current Rebase
implementation using the "git rebase -i" pattern does not work
correctly if fast-forwarding is involved. The reason for this is that
the log command does not return any commits in this case.
In addition, a check for already merged commits was introduced to
avoid spurious conflicts.

Change-Id: Ib9898fe0f982fa08e41f1dca9452c43de715fdb6
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-28 15:03:02 +01:00
Mathias Kinzler c544e96a4c TransportHttp wrongly uses JDK 6 constructor of IOException
The IOException constructor taking an Exception as parameter is
new in JDK 6.

Change-Id: Iec349fc7be9e9fbaeb53841894883c47a98a7b29
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-28 09:24:20 +01:00
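The JDK 5-compatible pattern is to attach the cause separately; a sketch with a placeholder message:

  // Replacement for the JDK 6-only IOException(String, Throwable) constructor.
  IOException err = new IOException("HTTP request failed");
  err.initCause(cause);
  throw err;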
Matthias Sohn 38eec8f4a2 [findbugs] Do not ignore exceptional return value of mkdir
java.io.File.mkdir() and mkdirs() report failure through the
exceptional return value false. Fix the code which silently ignored
this exceptional return value.

Change-Id: I41244f4b9d66176e68e2c07e2329cf08492f8619
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-28 01:11:12 +01:00
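A sketch of the kind of check the fix introduces; whether an already-existing directory is tolerated depends on the call site:

  File dir = new File(gitDir, "objects");
  if (!dir.mkdirs() && !dir.isDirectory())
      throw new IOException("Creating directory " + dir + " failed");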
Matthias Sohn 9ec97688b9 Do not create files to be updated before checkout of DirCache entry
DirCacheCheckout.checkoutEntry() prepares the new file content using a
temporary file and then renames it to the file to be written during
checkout. For files to be updated, checkout() created each file before
calling checkoutEntry(). Hence renaming the temporary file always
failed, which was then worked around in exception handling by deleting
the just-created file and retrying the rename.

Change-Id: I219f864f2ed8d68051d7b5955d0659964fa27274
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-28 01:11:12 +01:00
Shawn O. Pearce f5fe2dca3c Teach PackWriter how to reuse an existing object list
Counting the objects needed for packing is the most expensive part of
an UploadPack request that has no uninteresting objects (otherwise
known as an initial clone).  During this phase the PackWriter is
enumerating the entire set of objects in this repository, so they can
be sent to the client for their new clone.

Allow the ObjectReader (and therefore the underlying storage system)
to keep a cached list of all reachable objects from a small number of
points in the project's history.  If one of those points is reached
during enumeration of the commit graph, most objects are obtained from
the cached list instead of direct traversal.

PackWriter uses the list by discarding the current object lists and
restarting a traversal from all refs but marking the object list name
as uninteresting.  This allows PackWriter to enumerate all objects
that are more recent than the list creation, or that were on side
branches that the list does not include.

However, ObjectWalk tags all of the trees and commits within the list
commit as UNINTERESTING, which would normally cause PackWriter to
construct a thin pack that excludes these objects.  To avoid that,
addObject() was refactored to allow this list-based enumeration to
always include an object, even if it has been tagged UNINTERESTING by
the ObjectWalk.  This implies the list-based enumeration may only be
used for initial clones, where all objects are being sent.

The UNINTERESTING labeling occurs because StartGenerator always
enables the BoundaryGenerator if the walker is an ObjectWalk and a
commit was marked UNINTERESTING, even if RevSort.BOUNDARY was not
enabled.  This is the default reasonable behavior for an ObjectWalk,
but isn't desired here in PackWriter with the list-based enumeration.
Rather than trying to change all of this behavior, PackWriter works
around it.

Because the list name commit's immediate files and trees were all
enumerated before the list enumeration itself starts (and are also
within the list itself) PackWriter runs the risk of adding the same
objects to its ObjectIdSubclassMap twice.  Since this breaks the
internal map data structure (and also may cause the object to transmit
twice), PackWriter needs to use a new "added" RevFlag to track whether
or not an object has been put into the outgoing list yet.

Change-Id: Ie99ed4d969a6bb20cc2528ac6b8fb91043cee071
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-27 09:38:19 -08:00
Shawn O. Pearce a017fdf112 Allow ObjectReuseAsIs to resort objects during writing
It can be very handy for the implementation to resort the
object list based on data locality, improving prefetch in
the operating system's buffer cache.

Export the list to the implementation as a proper List,
and document that it is mutable and OK to be modified.  The
only caller in PackWriter is already OK with these rules.

Change-Id: I3f51cf4388898917b2be36670587a5aee902ff10
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-27 08:58:55 -08:00
Shawn O. Pearce c218a0760d PackWriter: Use TOPO order only for incremental packs
When performing an initial clone of a repository there are no
uninteresting commits, and the resulting pack will be completely
self-contained.  Therefore PackWriter does not need to honor C
Git standard TOPO ordering as described in JGit commit ba984ba2e0
("Fix checkReferencedIsReachable to use correct base list").

Switching to COMMIT_TIME_DESC when there are no uninteresting commits
allows the "Counting objects" phase to emit progress earlier, as the
RevWalk will not buffer the commit list.  When TOPO is set the RevWalk
enumerates all commits first, before outputing any for PackWriter to
mark progress updates from.

Change-Id: If2b6a9903b536c7fb3c45f85d0a67ff6c6e66f22
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-27 08:58:44 -08:00
Shawn O. Pearce 559c4661c3 Remove getObjectsDirectory, openPack from base API
These two methods are specific to the FileRepository implementation
and should not be exposed as part of the base Repository API.  Now
that PackParser is generic and does not require these two methods
to import a pack stream into a repository, it is safe to remove
these and get them out of the public view.

Change-Id: I8990004d08074657f467849dabfdaa7e6674e69a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-27 08:56:35 -08:00
Shawn Pearce cc983454c0 Merge "Support for self signed certificate (HTTPS)" 2011-01-27 11:46:49 -05:00
Matthias Sohn 91af19de56 Hard reset should not report conflict on untracked file
This problem surfaced since EGit Core ResetOperationTest is failing
since change I26806d21. JGit detected checkout conflict for untracked
files which never were tracked by the repository. 

"git reset --hard" in c git also doesn't remove such untracked files.

Change-Id: Icc8e1c548ecf6ed48bd2979c81eeb6f578d347bd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-27 17:20:04 +01:00
Roberto Tyley afa7c7ab07 Rename PlotWalk.getTags() to getRefs()
Change-Id: I170685e70d9ac09a010df69d26ec1c38bde60174
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-26 22:35:41 -06:00
Roberto Tyley 6ac8279ae7 Provide access to the Refs of a PlotCommit
This information is generally useful; the accessor pattern
of 'children' and 'parents' has been followed.

Change-Id: I79b3ddd6f390152aa49e6b7a4c72a4aca0d6bc72
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-26 22:03:41 -06:00
Robin Rosenberg 24e7f0f6fa Fix tests broken by fix for adding files in a network share
The change Ie0350e032a97e0d09626d6143c5c692873a5f6a2 was not
done properly. The renamed file was not write protected, and
this broke a test.

Bug: 335388
Change-Id: I41b2235b7677bc5fddc70dda2a56cdd2cb53ce5d
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2011-01-26 13:52:58 -06:00
Mathias Kinzler a5b36ae1ea FetchCommand: allow to set "TagOpt"
This is needed for implementing Fetch in EGit using the API.

Change-Id: Ibdcc95906ef0f93e3798ae20d4de353fb394f2e2
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-26 20:03:02 +01:00
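A usage sketch of the new setter; the remote name and TagOpt value are chosen for illustration:

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.transport.FetchResult;
  import org.eclipse.jgit.transport.TagOpt;

  FetchResult result = new Git(repository).fetch()
      .setRemote("origin")
      .setTagOpt(TagOpt.FETCH_TAGS)
      .call();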
Christian Halstrick 0d7dd6625a Make sure not to overwrite untracked not-ignored files
When DirCacheCheckout was checking out it silently
overwrote untracked files. This is only OK if the
files are also ignored. Untracked and not-ignored files
should not be overwritten. This fix adds checks for
this situation.
Because this change in behaviour also broke tests
which expected that a checkout would overwrite untracked
files (PullCommandTest), these tests had to be modified
as well.

Bug: 333093
Change-Id: I26806d2108ceb64c51abaa877e11b584bf527fc9
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-26 11:41:44 -06:00
Robin Rosenberg c4c8d80fd3 Fix adding files in a network share
We cannot always rename read-only files on network shares,
so rename the temp file for a new loose object first, and
then set it as read-only.

Bug: 335388
Change-Id: Ie0350e032a97e0d09626d6143c5c692873a5f6a2
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-26 11:26:55 -06:00
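The ordering described above, sketched with placeholder file variables:

  // Move the temporary loose object into place first, then protect it;
  // renaming an already read-only file can fail on network shares.
  if (!tmp.renameTo(finalFile))
      throw new IOException("Cannot move " + tmp + " to " + finalFile);
  finalFile.setReadOnly();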
Chris Aniszczyk 509662653b Merge "Refactor and comment complicated if statements" 2011-01-26 12:23:26 -05:00
Chris Aniszczyk 9b8ac0151e Merge "MergeCommand should create missing branches" 2011-01-26 12:17:47 -05:00
Mathias Kinzler 414e0cd329 Make setCredentialsProvider more convenient to use
Change-Id: I984836ea7d6a67fd2d1d05f270afa7c29f30971c
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-26 18:03:22 +01:00
Christian Halstrick 4cba86bfea Refactor and comment complicated if statements
When debugging and enhancing DirCacheCheckout.processEntry() I found
that some of the if-statements were hard to read/understand. This
change just splits some long if statements and adds more comments
explaining in which state we are. This change is only a preparation
for followup commits which introduce checks for untracked+ignored
files.

Change-Id: I670ff08310b72c858709b9e395f0aebb4b290a56
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
2011-01-26 17:17:45 +01:00
Christian Halstrick 85f69c286b MergeCommand should create missing branches
If HEAD exists but points to a non-existing branch, the merge
command should silently create the missing branch and check
it out. This happens if you pull into a freshly initialized repo:
HEAD points to refs/heads/master but refs/heads/master doesn't
exist. If you now merge a commit X into HEAD, then the branch
master should be created (pointing to X) and the working tree
should be updated to reflect X. That is achieved by a checkout
with one tree only (HEAD is missing).

A test for this functionality will come with the next proposal
in PullCommandTest.

Change-Id: Id4a0d56d944e0acebd4b3157428bb50bd3fdd872
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
2011-01-26 17:17:44 +01:00
Per Salomonsson d49530ad86 Support for self signed certificate (HTTPS)
Add the possibility to disable SSL verification, just as one can do
with git using: git config --global http.sslVerify false

To enable the feature, configure
Window->Preferences->Team->Git->Configuration
and add a new key/value: http.sslVerify=false

When handling repos over HTTPS, JGit will then check that flag to see
whether SSL verification should be skipped.

Having it implemented as a key/value makes it not too obvious in the
GUI, so the user must know what he/she is doing when adding it and be
aware of the risks.

Bug: 332487
Change-Id: I2a1b8098b5890bf512b8dbe07da41036c0fc9b72
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-26 01:17:01 +01:00
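A sketch of consulting the flag before configuring an HTTPS connection; the helper that relaxes verification is hypothetical and not part of the change as shown here:

  // http.sslVerify defaults to true; only relax checks when explicitly disabled.
  boolean sslVerify = config.getBoolean("http", "sslVerify", true);
  if (!sslVerify)
      disableSslVerification(connection); // hypothetical helper installing a trust-all TrustManager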
Shawn O. Pearce 36e396f8b9 Permit disabling birthday attack checks in PackParser
Reading a repository for millions of missing objects might be very
expensive to perform, especially if the repository is on a network
filesystem or some other costly RPC backend.  A repository owner
might choose to accept some risk in return for better performance,
so allow disabling collision checking when receiving a pack.

Currently there is no way for an end-user to disable this feature.
This is intentional, because it is generally *NOT* a good idea to
skip this check.  Instead this feature is supplied for storage
implementations to bypass the default checking logic, should they
have their own custom routines that is just as effective but can
be handled more efficiently.

Change-Id: I90c801bb40e86412209de0c43e294a28f6a767a5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-25 17:14:23 -06:00
Shawn O. Pearce d62350d907 Ensure all deltas were resolved in a pack
If a pack uses OFS_DELTA only (e.g. it's an initial push to a
repository) and PackParser's implementation is broken such that the
delta chain that hangs below a particular object offset is empty, the
entryCount won't match the expected objectCount. Fail fast rather
than claiming the stream was parsed correctly.

The current implementation is not broken as described above.  I broke
the code when I implemented my own new subclass of PackParser (which
incorrectly mucked with the object offset information), leading me to
discover this consistency check was missing.

Change-Id: I07540f0ae1144ef6f3bda48774dbdefb8876e1d3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-25 16:52:22 -06:00
Shawn O. Pearce 1bf0c3cdb1 Refactor IndexPack to not require local filesystem
By moving the logic that parses a pack stream from the network (or
a bundle) into a type that can be constructed by an ObjectInserter,
repository implementations have a chance to inject their own logic
for storing object data received into the destination repository.

The API isn't completely generic yet; there are still quite a few
assumptions that the PackParser subclass is storing the data onto
the local filesystem as a single file.  But it's about the simplest
split of IndexPack I can come up with without completely ripping
the code apart.

Change-Id: I5b167c9cc6d7a7c56d0197c62c0fd0036a83ec6c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-25 16:43:06 -06:00
Jesse Greenwald 51dedfdc31 Parse RevCommit bodies before calling RevFilter.include()
RevFilter.include()'s documentation promises the RevCommit's
body is parsed before include is invoked.  This wasn't always
true if the commit was parsed once, had its body discarded,
the RevWalk was reset() and started a new traversal.

Change-Id: Ie5cafde09ae870712b165d8a97a2c9daf90b1dbd
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-25 16:39:00 -06:00
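A filter relying on the documented guarantee; a sketch, with the message pattern chosen arbitrarily:

  RevFilter changeIdFilter = new RevFilter() {
    @Override
    public boolean include(RevWalk walker, RevCommit commit) {
      // Safe: the body is guaranteed to be parsed before include() runs.
      return commit.getFullMessage().contains("Change-Id:");
    }

    @Override
    public RevFilter clone() {
      return this; // stateless, so sharing the instance is fine
    }
  };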
Mathias Kinzler 920ac08777 Allow to set a CredentialsProvider on relevant API commands
This is needed for commands that use Transport internally.

Change-Id: I9417c85255b160723968c647063b9c7e05995ea4
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-25 16:36:10 -06:00
Sasa Zivkov 832d3b8384 Exposed the constructor of Note class
Additionally, the NoteMap.getNote method was defined, which returns
a Note instance.  These changes were necessary to enable implementing
the NoteMerger interface (the merge method needs to instantiate a
Note) and to enable direct use of NoteMerger, which expects instances
of the Note class as its parameters.  Implementing creation of code review
summary notes in Gerrit [1] will make use of both of these features.

[1] https://review.source.android.com/#change,20045

Change-Id: I627aefcedcd3434deecd63fa1d3e90e303b385ac
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-25 16:33:29 -06:00
Christian Halstrick c62882191f Introduce metaData compare between working tree and index entries
Instead of offering only a high-level isModified() method a new
method compareMetadata() is introduced which compares a working tree
entry and an index entry by looking at metadata only. Some use-cases
(e.g. computing the content-id in idBuffer()) may use this new method
instead of isModified().

Change-Id: I4de7501d159889fbac5ae6951f4fef8340461b47
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
2011-01-21 09:23:06 -06:00
Robin Rosenberg 5e2e3819a6 Add progress reporting to IndexDiff
Change-Id: I4f05bdb0c58b039bd379341a6093f06a2cdfec6e
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-21 01:28:54 +01:00
Robin Rosenberg e43887b69e Fix misc spelling errors in comments and method names
Change-Id: I24552443710075856540696717ac4068dfe6a7f2
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2011-01-17 22:04:14 +01:00
Matthias Sohn de1d057d72 Merge "File utility for creating a new empty file" 2011-01-16 12:22:46 -05:00
Matthias Sohn c45f2aec56 File utility for creating a new empty file
The java.io.File.createNewFile() method for creating new empty files
reports failure by returning false. To ease proper checking of return
values, provide a utility method wrapping createNewFile() which throws
IOException on failure.

Change-Id: I42a3dc9d8ff70af62e84de396e6a740050afa896
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-14 17:28:14 +01:00
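A sketch of such a wrapper; the class placement and message are illustrative:

  public static void createNewFile(File f) throws IOException {
    // java.io.File.createNewFile() signals failure only via its return value.
    if (!f.createNewFile())
      throw new IOException("Could not create file " + f);
  }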
Shawn Pearce 67e176e529 Merge "ConfigConstants: expose some constants for user name and email." 2011-01-12 10:47:55 -05:00
Shawn Pearce 600f624a35 Merge "CheckoutCommand: fix reflog message" 2011-01-12 10:46:23 -05:00
Shawn Pearce 11f2b849a3 Merge "Locate $HOME like C Git does on Windows" 2011-01-12 10:44:57 -05:00
Roberto Tyley 944fcdae66 Fix API ListBranchCommand for listmode 'all'
If remote branches are present they cannot be added
to the RefMap from the local branches - the two RefMaps
have a different value of 'prefix' and consequently an
IllegalArgumentException is thrown.
2011-01-12 14:34:10 +00:00
Robin Rosenberg 0fd9676771 Locate $HOME like C Git does on Windows
Java's user.home is not the same as $HOME, so EGit did not see the
same global configuration as C Git does.

Bug: 333269
Change-Id: Id54fc5292bf8c5a67177f9097ee692717a7df336
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2011-01-12 14:58:55 +01:00
Mathias Kinzler 7047d2fa8d CheckoutCommand: fix reflog message
There is a space missing between <from> and "to" in the reflog
message produced by the CheckoutCommand, which is of the form

moving from <from> to <to>

Change-Id: I3dc57ab0a6589292db77a17d9029ee9499dfc725
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-12 13:19:32 +01:00
Mathias Kinzler 5ebfdc8091 ConfigConstants: expose some constants for user name and email.
This is needed by a EGit change

http://egit.eclipse.org/r/#change,2232

Change-Id: I3d62f904b769fc2f1b7b8f0f24f7dd757fc9c379
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
2011-01-12 08:50:12 +01:00
Matthias Sohn 838fdb342b Merge "Do not cherry-pick or revert commit more than once" 2011-01-10 09:00:30 -05:00
Robin Rosenberg 2058f9272b Do not cherry-pick or revert commit more than once
Instead just return success. In the case that no commit has been
cherry-picked or reverted, just return the old HEAD.
    
Bug: 333814
Change-Id: I67db2b77b52c43932436d22a8daa5a6556423484
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2011-01-10 08:47:14 +01:00
Shawn O. Pearce 05ca0c49f9 Merge "Use heap based stack for PackFile deltas" 2011-01-09 19:20:13 -05:00
Shawn O. Pearce 680869d779 Merge "Config: Preserve existing case of names in sections" 2011-01-09 19:19:08 -05:00
Sasa Zivkov 1993cf8a27 Merging Git notes
Merging Git notes branches has several differences from merging "normal"
branches. Although Git notes are initially stored as one flat tree the
tree may fanout when the number of notes becomes too large for efficient
access. In this case the first two hex digits of the note name will be
used as a subdirectory name and the rest 38 hex digits as the file name
under that directory. Similarly, when number of notes decreases a fanout
tree may collapse back into a flat tree. The Git notes merge algorithm
must take into account possibly different tree structures in different
note branches and must properly match them against each other.

Any conflict on a Git note is, by default, resolved by concatenating
the two conflicting versions of the note. A delete-edit conflict is, by
default, resolved by keeping the edit version.

The note merge logic is pluggable and the caller may provide a custom
note merger that will perform a different merging strategy.

Additionally, it is possible to have non-note entries inside a notes
tree. The merge algorithm must also take this fact into account and
will try to merge such non-note entries. However, in case of any merge
conflicts the merge operation will fail. The Git notes merge algorithm
currently does not try to do a content merge of non-note entries.

Thanks to Shawn Pearce for patiently answering my questions related to
this topic, giving hints and providing code snippets.

Change-Id: I3b2335c76c766fd7ea25752e54087f9b19d69c88
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2011-01-09 00:27:56 +01:00
Marc Strapetz c87ae94c70 Fix IgnoreRule for directory-only patterns
Patterns containing only a trailing slash have to be treated
as "global" patterns. For example: "classes/" matches "classes"
as well as "dir/classes" directory.
2011-01-07 12:53:14 +01:00
Shawn O. Pearce b2d528887c Config: Preserve existing case of names in sections
When an application asks for the names in a section, it may want to
see the existing case that was stored by the user.  For example,
Gerrit Code Review wants to store a configuration block like:

  [access "refs/heads/master"]
    label-Code-Review = group Developers

and although the name label-Code-Review is case-insensitive, it wants
to display the case as it appeared in the configuration file.

When enumerating section names or variable names (both of which are
case-insensitive), Config now keeps track of the string that first
appeared, and presents them in file order, permitting applications to
use this information.  To maintain case-insensitive behavior, the
contains() method of the returned Set<String> still performs a
case-insensitive compare.

This is a behavior change if the caller enumerates the returned
Set<String> and copies it to his own Set<String>, and then performs
contains() tests against that, as the strings are now the original
case from the configuration block.  But I don't think anyone actually
does this, as the returned sets are immutable and are cached.

Change-Id: Ie4e060ef7772958b2062679e462c34c506371740
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-06 11:13:45 -08:00
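A usage sketch of the behaviour described above, reusing the example block from the message (fromText() throws ConfigInvalidException):

  Config cfg = new Config();
  cfg.fromText("[access \"refs/heads/master\"]\n"
      + "\tlabel-Code-Review = group Developers\n");
  Set<String> names = cfg.getNames("access", "refs/heads/master");
  // Iteration yields the original case, in file order: "label-Code-Review",
  // while membership tests remain case-insensitive:
  boolean present = names.contains("label-code-review"); // true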
Shawn O. Pearce 165358bc99 Use heap based stack for PackFile deltas
Instead of using the current thread's stack to recurse through the
delta chain, use a linked list that is stored in the heap.  This
permits any thread to load a deep delta chain without running out
of thread stack space.

Despite needing to allocate a stack entry object for each delta
visited along the chain being loaded, the object allocation count is
kept the same as in the prior version by removing the transient
ObjectLoaders from the intermediate objects accessed in the chain.
Instead the byte[] for the raw data is passed, and null is used as a
magic value to signal isLarge() and enter the large object code path.

Like the old version, this implementation minimizes the amount of
memory that must be live at once.  The current delta instruction
sequence, the base it applies onto, and the result are the only live
data arrays.  As each level is processed, the prior base is discarded
and replaced with the new result.

Each Delta frame on the stack is slightly larger than the standard
ObjectLoader.SmallObject type that was used before, however the Delta
instances should be smaller than the old method stack frames, so total
memory usage should actually be lower with this new implementation.

Change-Id: I6faca2a440020309658ca23fbec4c95aa637051c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2011-01-06 09:48:43 -08:00
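The general shape of the change is replacing call-stack recursion with an explicit stack on the heap; a generic sketch with invented types and methods, not JGit's actual Delta frames:

  // DeltaFrame, base(), rawBase(), instructions() and applyDelta() are
  // hypothetical names used only to illustrate the technique.
  Deque<DeltaFrame> stack = new ArrayDeque<DeltaFrame>();
  for (DeltaFrame f = topOfChain; f != null; f = f.base())
      stack.push(f);                   // walk the chain without new stack frames
  byte[] data = stack.pop().rawBase(); // innermost non-delta object
  while (!stack.isEmpty())
      data = applyDelta(stack.pop().instructions(), data); // expand outward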
Sasa Zivkov 7cd812940d NoteMap implements Iterable<Note>
We will need to iterate over all notes of a NoteMap, at least this will be
needed for testing purposes. This change also implied making the Note class
public.

Change-Id: I9b0639f9843f457ee9de43504b2499a673cd0e77
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
2011-01-05 08:24:13 +01:00
Robin Rosenberg b3e59bd9d6 Implement a revert command
This is almost reverted cherry-pick, and the implementation is
almost identical. It orders the input to merge differently to get
the effect and produces a different commit message with the
default author, rather than the original author.

Change-Id: I39970091d9f7406ae7168b8efaab23a5e2c16bad
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2011-01-02 22:15:07 +01:00
Shawn Pearce 9a3ce780fc Merge "[findbugs] Make CheckoutResult constants final" 2010-12-31 17:08:23 -05:00
Robin Rosenberg d9e07a574a Convert all JGit unit tests to JUnit 4
Eclipse has some problems re-running single JUnit tests if
the tests are in JUnit 3 format but the JUnit 4 launcher
is used. This was quite unnecessary and the earlier move was
never completed; we still had no JUnit 4 tests.

This completes the extermination of JUnit 3. Most of the
work was a global search/replace using regular expressions,
followed by numerous invocations of quick-fix and organize
imports, and verification that we had the same number of
tests before and after.

- Annotations were introduced.
- All references to JUnit 3 classes removed.
- Half-good replacement for getting the test name. This was
  needed to make the TestRngs work. The initialization of
  TestRngs was also made lazy since we can no longer find
  out the test name at runtime in the @Before methods.
- Renamed test classes to end with Test, with the exception
  of TestTranslateBundle, which fails from Maven
- Moved JGitTestUtil to the junit support bundle

Change-Id: Iddcd3da6ca927a7be773a9c63ebf8bb2147e2d13
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2010-12-31 14:00:05 -08:00
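An illustration of the mechanical conversion described above, on an invented test class rather than an actual JGit test:

  import static org.junit.Assert.assertEquals;

  import org.junit.Before;
  import org.junit.Test;

  public class SampleTest {          // was: extends junit.framework.TestCase
    private StringBuilder fixture;

    @Before
    public void setUp() {            // was: protected void setUp() override
      fixture = new StringBuilder("abc");
    }

    @Test
    public void testLength() {       // was: discovered by the test* naming rule
      assertEquals(3, fixture.length());
    }
  }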
Shawn Pearce 7cf8b8812f Merge "Add support for getting the system wide configuration" 2010-12-31 16:13:33 -05:00
Robin Rosenberg 797ebba307 Add support for getting the system wide configuration
These settings are stored in <prefix>/etc/gitconfig. The C Git
binary is installed in <prefix>/bin, so we look for the C Git
executable to find this location, first by looking at the PATH
environment variable and then by attempting to launch bash as
a login shell to find out.

Bug: 333216
Change-Id: I1bbee9fb123a81714a34a9cc242b92beacfbb4a8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2010-12-31 11:48:34 +01:00
Shawn Pearce 4da775eaff Merge "IndexPack: Use stack-based recursion for delta resolution" 2010-12-30 19:52:47 -05:00
roberto b5f0a7d7ff IndexPack: Use stack-based recursion for delta resolution
Replace 'method'-based with 'heap'-based recursion for resolving deltas.

Git packfile delta-chain depth can exceed 50 levels in certain files
(the packfile of the JGit project itself has >800 objects with
chain-length >50). Using method-based recursion on such packfiles will
quickly throw a StackOverflowError on VMs with a constrained stack.

Benefits:

* packfile delta-resolution no longer limited by the maximum number
  of stack frames permitted on the current thread.

* slight performance improvement
  (3% speed increase on the packfile of the JGit project)

Change-Id: I1d9b3a8ba3c6d874d83cb93ebf171c6ab193e6cc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2010-12-30 16:49:24 -08:00
Matthias Sohn 65ccadeced [findbugs] Make CheckoutResult constants final
Change-Id: I9117f212e2ad7051fdc6e7417ebc7c2d15b357a8
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2010-12-30 23:04:43 +01:00
Robin Rosenberg 240769e023 Refactor exec of a command and reading one line into utility
Change-Id: Ia9e5afe7f29c3e5e74b8d226441ed429fb229c82
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2010-12-30 12:41:22 -08:00
Robin Rosenberg 14b358a6fb Refactor search for a file within a PATH
Change-Id: I785ab6bf1823d174394b1d2b25c5bb202535e943
2010-12-30 12:38:18 -08:00
Shawn Pearce 7a1bd7adb1 Merge "Fix FileSnapShot" 2010-12-30 15:31:06 -05:00
Robin Rosenberg c3f52c62a8 Fix FileSnapShot
We cannot use SystemReader to get the time unless we do so consistently,
which is harder to do while being sure we are really testing what we want.

Then we need to update our lastRead variable whenever we conclude that
our file is not racily clean according to lastRead. It may well be clean,
but we do not know that until we check the system clock again.

Finally add a test for this class.

Change-Id: I1894b032b9bd359d1b5325e5472d48e372599e4c
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
2010-12-30 01:15:59 +01:00
Shawn Pearce 4170913b1b Merge "CheckoutResult: return paths instead of Files" 2010-12-29 14:29:49 -05:00