Commit Graph

4010 Commits

Author SHA1 Message Date
Terry Parker cb24de07d0 Merge "Add BlobObjectChecker" 2017-08-28 12:00:53 -04:00
Matthias Sohn c0ad77d84c Enhance Eclipse save actions
Add the following Eclipse save actions executed when saving modified
lines. This should help to reduce manual work needed to maintain a clean
and consistent code style:
- organize imports
- always use braces around blocks
- add missing annotations
  - @Override including implementation of interface methods
  - @Deprecated
- remove
  - unused imports
  - unnecessary $NON-NLS$ tags
  - redundant type arguments

Also add default values for new settings that were introduced in recent
Eclipse versions up to Neon, since we last updated the save rules.

Change-Id: Idc90b249df044d0552f04edf01a5f607c4846f50
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-28 11:52:45 -04:00
Masaya Suzuki fd74cf2f78 Add BlobObjectChecker
Some repositories can have a policy that does not accept certain blobs. To
check if the incoming pack file contains such blobs, ObjectChecker can
be used. However, this ObjectChecker is not called by PackParser if the
blob is stored as a whole. This is because the object can be so large
that it doesn't fit in memory.

This change introduces BlobObjectChecker. This interface takes chunks of
a blob instead of the entire object. ObjectChecker can optionally return
a BlobObjectChecker. This won't change existing ObjectChecker
implementations; existing implementations continue to receive deltified
blob objects only.
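
A minimal sketch of what such a checker could look like; the interface
shape shown here is an assumption for illustration, not necessarily the
exact JGit API:

  // Hypothetical checker that rejects blobs containing NUL bytes.
  class NoNulBlobChecker implements BlobObjectChecker {
    private boolean sawNul;

    @Override
    public void update(byte[] buf, int off, int len) {
      for (int i = off; i < off + len; i++) {
        if (buf[i] == 0) {
          sawNul = true;
        }
      }
    }

    @Override
    public void endBlob(AnyObjectId id) throws CorruptObjectException {
      if (sawNul) {
        throw new CorruptObjectException("blob " + id.name() + " contains NUL");
      }
    }
  }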

Change-Id: Ic33a92c2de42bd7a89786a4da26b7a648b25218d
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
2017-08-28 08:42:27 -07:00
Thomas Wolf 1637c44048 FetchCommand: pass on CredentialsProvider to submodule fetches
When a JGit API command is implemented in terms of other API
commands, the child command must "inherit" all relevant settings.
Calling configure() ensures that the CredentialsProvider and the
connection timeout are propagated correctly.

Bug: 515325
Change-Id: I948e306693a9edb7b199a735877413b6eddcfba4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-27 16:37:43 +02:00
Thomas Wolf d031b64667 Exclude file matching: fix backtracking on match failures after **
** matching always tries the empty match first. If a mismatch occurs
later, the ** must be extended by exactly one segment and matching must
resume with the matcher following the ** matcher.
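
A small illustration of the intended behavior, using FastIgnoreRule as an
example entry point (pattern and paths are made up):

  FastIgnoreRule rule = new FastIgnoreRule("a/**/z");
  rule.isMatch("a/z", false);      // true: ** first tries the empty match
  rule.isMatch("a/b/z", false);    // true: ** extended by one segment
  rule.isMatch("a/b/c/z", false);  // true: ** extended segment by segment
  rule.isMatch("a/b/c", false);    // false: nothing left to match "z"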

Bug: 520920
Change-Id: Id019ad1c773bd645ae92e398021952f8e961f45c
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-27 16:02:41 +02:00
Thomas Wolf d80b999c76 Fix path pattern matching to work also for gitattributes
Path pattern matching for attribute rules is different from matching
for excluded files.

The first difference concerns patterns without slashes. For
gitattributes those must match on the last component only, not on
any earlier segment. This is true also for directory-only patterns.

The second difference concerns directory-only patterns. Those also
must not match on a prefix or segment except the last one. They do
not apply recursively to all files beneath.

And third, a match covering only a prefix of the path counts for
gitattributes only if the last matcher was "/**".

Add a new parameter for such path matching to IMatcher.matches() and
pass it through as appropriate (false for gitignore, true for
gitattributes). As far as gitignore is concerned, there is no change.

New tests have been added, and some existing attribute matching tests
have been fixed since they operated on wrong assumptions.

Bug: 508568
Change-Id: Ie825dc2cac8a85a72a7eeb0abb888f3193d21dd2
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-27 16:02:40 +02:00
Thomas Wolf 426caf99ee Ignore invalid TagOpt values
C git silently ignores invalid tagopt values, so make JGit behave the
same way.

Bug: 429625
Change-Id: I99587cc46c7e0c19348bcc63f602038fa9a7f378
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-26 09:11:03 +02:00
Thomas Wolf 8cbdf523cd Add a getter for a list of RefSpecs to Config
Reading RefSpecs from a Config can be seen as another typed value
conversion, so add a getter to Config and to TypedConfigGetter. Use
it in RemoteConfig.

Doing this allows clients of the JGit library to customize the
handling of invalid RefSpecs in git config files by installing a
custom TypedConfigGetter.
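
A short usage sketch; the exact signature is an assumption for
illustration:

  Config cfg = repository.getConfig();
  // assumed shape: getRefSpecs(section, subsection, name)
  List<RefSpec> fetchSpecs = cfg.getRefSpecs("remote", "origin", "fetch");
  for (RefSpec spec : fetchSpecs) {
    System.out.println(spec.getSource() + " -> " + spec.getDestination());
  }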

Bug: 517314
Change-Id: I0ebc0f073fabc85c2a693b43f5ba5962d8a795ff
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-26 09:11:02 +02:00
Thomas Wolf d32ad1cadd Improve getting typed values from a Config
Make the handling of typed values somewhat configurable by using
a separate converter. The default converter is the same as before;
just the implementations of the getters were moved. They also still
raise IllegalArgumentException on invalid values as before.

The converter can be set globally via Config.setTypedConfigGetter(),
which EGit can use in its core Activator to plug in a variant that
catches the IllegalArgumentException, logs the problem, and then
returns the default value.

In this way the behavior for other users of the JGit library is
unchanged, while EGit can deal gracefully with invalid git configs.
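
A rough sketch of the kind of lenient getter a client could install; the
class and method shapes are assumptions, not the exact interface:

  // Hypothetical lenient getter: fall back to the default on invalid values.
  class LenientConfigGetter extends DefaultTypedConfigGetter {
    @Override
    public boolean getBoolean(Config config, String section, String subsection,
        String name, boolean defaultValue) {
      try {
        return super.getBoolean(config, section, subsection, name, defaultValue);
      } catch (IllegalArgumentException e) {
        return defaultValue; // log and fall back instead of failing
      }
    }
  }

  // installed once, e.g. in an application's startup code
  Config.setTypedConfigGetter(new LenientConfigGetter());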

Bug: 520978
Change-Id: Ie8f81d206e358b6cc57aa29b9d7ad2a5d34b86a1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-26 09:11:02 +02:00
Matthias Sohn 960d7ff3e5 Prepare 4.5.4-SNAPSHOT builds
Change-Id: Id8b902bf2bf590b41f2e246c5ecf1592e1c411f2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-26 08:08:46 +02:00
David Pursehouse e237c28936 Merge "Fix JGit set core.fileMode to false by default instead of true for non Windows OS." 2017-08-25 20:58:07 -04:00
David Pursehouse 40f40e496a Merge "Fix default directory set when setDirectory wasn't called." 2017-08-25 20:57:52 -04:00
David Pursehouse 0e12692d8c FileMode: Remove unnecessary @SuppressWarnings("synthetic-access")
In Eclipse Oxygen, the following warning is emitted:

  At least one of the problems in category 'synthetic-access' is not
  analysed due to a compiler option being ignored

Removing the suppression gets rid of the warning.

Change-Id: Ibfe5cc1e347150b699f54e2f204ab5ee770da202
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-08-25 20:57:11 -04:00
Matthias Sohn d979dfd00c Add toString() methods to OpenSshConfig to help debugging
Change-Id: I81b60a13a97e78d5ccd593ba8e4aa614df19f925
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-26 01:44:37 +02:00
Thomas Wolf c758a8cd37 Do most %-token substitutions in OpenSshConfig
Except for %p and %r and partially %C, we can do token substitutions
as defined by OpenSSH inside the config file parser. %p and %r can
be replaced only if specified in the config; if not, it would be the
caller's responsibility to replace them with values obtained from the
URI to connect to.

Jsch doesn't know about token substitutions at all. By doing the
replacements as well as we can in the config file parser, we can
make Jsch support most of these tokens.
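
For illustration, tokens in an entry like the following (hypothetical
paths) can now be expanded by the parser:

  Host example.org
    IdentityFile %d/.ssh/keys/id_%h
    # %d = local user's home directory, %h = remote host name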

%i is not handled at all as Java has no concept of a "user ID".

Includes unit tests.

Bug: 496170
Change-Id: If9d324090707de5d50c740b0d4455aefa8db46ee
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-26 01:44:36 +02:00
Thomas Wolf 9d2447063d Let Jsch know about ~/.ssh/config
Ensure the Jsch instance used knows about ~/.ssh/config. This
enables Jsch to honor more user configurations (see
com.jcraft.jsch.Session.applyConfig()), in particular also the
UserKnownHostsFile configuration, or additional identities given
via multiple IdentityFile entries.

Turn JGit's OpenSshConfig into a full parser that can be a
Jsch-compliant ConfigRepository. This avoids a few bugs
in Jsch's OpenSSHConfig and keeps the JGit-facing interface
unchanged. At the same time we can supply a JGit OpenSshConfig
instance as a ConfigRepository to Jsch. And since they'll both
work from the same object, we can also be sure that the parsing
behavior is identical.
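
A minimal sketch of the wiring this enables, assuming OpenSshConfig can be
handed to Jsch as a ConfigRepository as described above (the exact setup
may differ in the real code):

  JSch jsch = new JSch();
  ConfigRepository sshConfig = OpenSshConfig.get(FS.DETECTED);
  jsch.setConfigRepository(sshConfig);
  // Jsch now honors UserKnownHostsFile, IdentityFile, etc. from ~/.ssh/config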

The parser does not handle the "Match" and "Include" keys, and it
doesn't do %-token substitutions (yet).

Note that Jsch doesn't handle multi-valued UserKnownHostsFile
entries as known by modern OpenSSH.[1]

[1] http://man.openbsd.org/OpenBSD-current/man5/ssh_config.5

Additional tests for new features are provided in OpenSshConfigTest.

Bug: 490939
Change-Id: Ic683bd412fa8c5632142aebba4a07fad4c64c637
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-26 01:41:50 +02:00
Masaya Suzuki 9fb6561e7a Consume request body before flushing the buffer
This is a continuation of https://git.eclipse.org/r/#/c/94249/. When an
error happens, we might not read the entire stream. Consume the request
body before we flush the buffer.
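
In essence (a generic sketch, not the exact servlet code touched here):

  static void drainRequestBody(InputStream in) throws IOException {
    // Discard whatever remains of the request body so the connection
    // stays usable when the buffered error response is flushed afterwards.
    byte[] buf = new byte[4096];
    while (in.read(buf) != -1) {
      // discard
    }
  }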

Change-Id: Ia473a04ace600653b2d1f2822e3023570d992410
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
2017-08-25 15:23:20 -07:00
Joan Goyeau 88e453995d Fix default directory set when setDirectory wasn't called.
Bug: 519883
Change-Id: I46716e9626b4c4adc0806a7c8df6914309040b94
Signed-off-by: Joan Goyeau <joan@goyeau.com>
2017-08-25 11:41:40 +01:00
David Pursehouse 65b2d0b2d9 ObjectToPack: Add missing @Override annotation
Change-Id: I65ed7b89312d58ea816b46d27707ff907df1c78b
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-08-24 16:20:11 +09:00
Thomas Wolf 1b4daa2994 Cleanup: message reporting for HTTP redirect handling
The addition of "tooManyRedirects" in commit 7ac1bfc ("Do
authentication re-tries on HTTP POST") was an error I didn't
catch after rebasing that change. That message had been renamed
in the earlier commit e17bfc9 ("Add support to follow HTTP
redirects") to "redirectLimitExceeded".

Also make sure we always use the TransportException(URIish, ...)
constructor; it'll prefix the message given with the sanitized URI.
Change messages to remove the explicit mention of that URI inside the
message. Adapt tests that check the expected exception message text.

For the info logging of redirects, remove a potentially present
password component in the URI to avoid leaking it into the log.

Change-Id: I517112404757a9a947e92aaace743c6541dce6aa
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-23 12:20:55 +02:00
Thomas Wolf 7ac1bfc834 Do authentication re-tries on HTTP POST
There is at least one git server out there (GOGS) that does
not require authentication on the initial GET for
info/refs?service=git-receive-pack but that _does_ require
authentication for the subsequent POST to actually do the push.

This occurs on GOGS with public repositories; for private
repositories it wants authentication up front.

Handle this behavior by adding 401 handling to our POST request.
Note that this is suboptimal; we'll re-send the push data at
least twice if an authentication failure on POST occurs. It
would be much better if the server required authentication
up-front in the GET request.

Added authentication unit tests (using BASIC auth) to the
SmartClientSmartServerTest:

- clone with authentication
- clone with authentication but lacking CredentialsProvider
- clone with authentication and wrong password
- clone with authentication after redirect
- clone with authentication only on POST, but not on GET

Also tested manually in the wild using repositories at try.gogs.io.
That server offers only BASIC auth, so the other paths
(DIGEST, NEGOTIATE, fall back from DIGEST to BASIC) are untested
and I have no way to test them.

* public repository: GET unauthenticated, POST authenticated
  Also tested after clearing the credentials and then entering a
  wrong password: correctly asks three times during the HTTP
  POST for user name and password, then gives up.
* private repository: authentication already on GET; then gets
  applied correctly initially to the POST request, which succeeds.

Also fix the authentication to use the credentials for the redirected
URI if redirects had occurred. We must not present the credentials
for the original URI in that case. Consider a malicious redirect A->B:
this would allow server B to harvest the user credentials for server
A. The unit test for authentication after a redirect also tests for
this.

Bug: 513043
Change-Id: I97ee5058569efa1545a6c6f6edfd2b357c40592a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-22 23:57:09 +02:00
Shawn Pearce 44a75d9ea8 reftable: explicitly store update_index per ref
Add an update_index to every reference in a reftable, storing the
exact transaction that last modified the reference.  This is necessary
to fix some merge race conditions.

Consider updates at T1, T3 are present in two reftables.  Compacting
these will create a table with range [T1,T3].  If T2 arrives during
or after the compaction it's impossible for readers to know how to
merge the [T1,T3] table with the T2 table.

With an explicit update_index per reference, MergedReftable is able to
individually sort each reference, merging individual entries at T3
from [T1,T3] ahead of identically named entries appearing in T2.

Change-Id: Ie4065d4176a5a0207dcab9696ae05d086e042140
2017-08-21 15:39:08 -07:00
Shawn Pearce 2d76df2442 reftable: reserve standard PackExt
Reserve "ref" extension for reftable files.  This allows them to be
used in a DFS repository as a stream in a DfsPackDescription.

Change-Id: Ife781bb64d0bb063333183ad2be70a41a2482513
2017-08-17 15:06:51 -07:00
Shawn Pearce 0aae64ce74 reftable: resolve symbolic references
resolve(Ref) helps callers recursively chase symbolic references and
is a useful function when wrapping a Reftable inside a RefDatabase, as
RefCursor does not resolve symbolic references during iteration.

Change-Id: I1ba143f403773497972e225dc92c35ecb989e154
2017-08-17 15:06:51 -07:00
Shawn Pearce 195541dd30 reftable: support threshold based compaction
Transactions may wish to merge several tables together as part of an
operation.  Setting a byte limit allows the transaction to consider
only some recent tables, bounding the cost of the compaction.

Change-Id: If037f2cbdc174ff1a215d5917178b33cde4ddaba
2017-08-17 15:06:51 -07:00
Shawn Pearce d48ac5bf01 reftable: compact merged tables
A compaction of reftables is just copying the results of a
MergedReftable into a ReftableWriter.  Wrap this up into a utility.

Change-Id: I6f5677d923e9628993a2d8b4b007a9b8662c9045
2017-08-17 15:06:51 -07:00
Shawn Pearce 77d8eead6d reftable: merge-join reftables
MergedReftable combines multiple reference tables together in a stack,
allowing higher/later tables to shadow earlier/lower tables.  This
forms the basis of a transaction system, where each transaction writes
a new reftable containing only the modified references, and readers
perform a merge on the fly to get the latest value.

Change-Id: Ic2cb750141e8c61a8b2726b2eb95195acb6ddc83
2017-08-17 15:06:51 -07:00
Shawn Pearce 0398f3dd6e reftable: debug tools
Simple debug programs to experiment with the reftable file format:

  debug-read-reftable
  debug-write-reftable
  debug-verify-reftable
  debug-benchmark-reftable

Change-Id: I79db351d86900f1e58b17e922e195dff06ee71f1
2017-08-17 15:06:51 -07:00
Shawn Pearce 0a26dcf4a3 reftable: scan and lookup reftable files
ReftableReader provides sequential scanning support over all
references, a range of references within a subtree (such as
"refs/heads/"), and lookup of a single reference.  Reads can be
accelerated by an index block, if it was created by the writer.

The BlockSource interface provides an abstraction to read from the
reftable's backing storage, supporting a future commit to connect
to JGit DFS and the DfsBlockCache.

Change-Id: Ib0dc5fa937d0c735f2a9ff4439d55c457fea7aa8
2017-08-17 15:06:51 -07:00
Shawn Pearce 0ecc8367e6 reftable: create and write reftable files
This is a simple writer to create reftable formatted files.  Follow-up
commits will add support for reading from reftable, debugging
utilities, and tests.

Change-Id: I3d520c3515c580144490b0b45433ea175a3e6e11
2017-08-17 15:06:50 -07:00
Thomas Wolf e17bfc96f2 Add support to follow HTTP redirects
git-core follows HTTP redirects so JGit should also provide this.

Implement config setting http.followRedirects with possible values
"false" (= never), "true" (= always), and "initial" (only on GET, but
not on POST).[1]
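
For example, in a git config file (the limit shown is illustrative):

  [http]
    followRedirects = initial
    maxRedirects = 5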

We must do our own redirect handling and cannot rely on the support
that the underlying real connection may offer. At least the JDK's
HttpURLConnection has two features that get in the way:

* it does not allow cross-protocol redirects and thus fails on
  http->https redirects (for instance, on Github).
* it translates a redirect after a POST to a GET unless the system
  property "http.strictPostRedirect" is set to true. We don't want
  to manipulate that system setting nor require it.

Additionally, git has its own rules about what redirects it accepts;[2]
for instance, it does not allow a redirect that adds query arguments.

We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3]
On POST we do not handle 303, and we follow redirects only if
http.followRedirects == true.

Redirects are followed only a certain number of times. There are two
ways to control that limit:

* by default, the limit is given by the http.maxRedirects system
  property that is also used by the JDK. If the system property is
  not set, the default is 5. (This is much lower than the JDK default
  of 20, but I don't see the value of following so many redirects.)
* this can be overwritten by a http.maxRedirects git config setting.

The JGit http.* git config settings are currently all global; JGit has
no support yet for URI-specific settings "http.<pattern>.name". Adding
support for that is well beyond the scope of this change.

Like git-core, we log every redirect attempt (LOG.info) so that users
may know about the redirection having occurred.

Extends the test framework to configure an AppServer with HTTPS support
so that we can test cloning via HTTPS and redirections involving HTTPS.

[1] https://git-scm.com/docs/git-config
[2] 6628eb41db
[3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

CQ: 13987
Bug: 465167
Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-17 22:16:44 +02:00
Christian Halstrick be767fd7d9 Merge "Fix off-by-one error in Strings.count()" 2017-08-16 06:24:43 -04:00
Christian Halstrick c71af0c73a Merge "Use relative paths for attribute rule matching" 2017-08-16 06:24:33 -04:00
Matthias Sohn e21e2436d3 JGit v4.5.3.201708160445-r
Change-Id: I2d57144976e3683e180d3a42edc6c3bf2905e87c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-16 10:42:27 +02:00
Thomas Wolf b13a285098 Send a detailed event on working tree modifications
Currently there is no way to determine the precise changes done
to the working tree by a JGit command. Only the CheckoutCommand
actually provides access to the lists of modified, deleted, and
to-be-deleted files, but those lists may be inaccurate (since they
are determined up-front before the working tree is modified) if
the actual checkout then fails halfway through. Moreover, other
JGit commands that modify the working tree do not offer any way to
figure out which files were changed.

This poses problems for EGit, which may need to refresh parts of the
Eclipse workspace when JGit has done java.io file operations.

Provide the foundations for better file change tracking: the working
tree is modified exclusively in DirCacheCheckout. Make it emit a new
type of RepositoryEvent that lists all files that were modified or
deleted, even if the checkout failed halfway through. We update the
'updated' and 'removed' lists determined up-front in case of file
system problems to reflect the actual state of changes made.

EGit thus can register a listener for these events and then knows
exactly which parts of the Eclipse workspace may need to be refreshed.
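
A rough registration sketch; the listener interface and event accessors
shown are assumptions for illustration:

  Repository repo = git.getRepository();
  repo.getListenerList().addListener(WorkingTreeModifiedListener.class,
      event -> {
        // refresh only what actually changed on disk (hypothetical helper)
        refreshWorkspacePaths(event.getModified());
        refreshWorkspacePaths(event.getDeleted());
      });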

Two commands manage checking out individual DirCacheEntries themselves:
checkout specific paths, and applying a stash with untracked files.
Make those two also emit such a new WorkingTreeModifiedEvent.

Furthermore, merges may modify files, and clean, rm, and stash create
may delete files.

CQ: 13969
Bug: 500106
Change-Id: I7a100aee315791fa1201f43bbad61fbae60b35cb
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-15 16:52:00 -04:00
Matthias Sohn 81d020aba9 Merge branch 'stable-4.8'
* stable-4.8:
  Update Oxygen Orbit p2 repository to R20170516192513
  Fix exception handling for opening bitmap index files

Change-Id: Ica20f5aa0d8a365fe3317765b93520b3abd5d342
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-15 00:48:44 +02:00
Matthias Sohn 758a181b82 Merge branch 'stable-4.7' into stable-4.8
* stable-4.7:
  Update Oxygen Orbit p2 repository to R20170516192513
  Fix exception handling for opening bitmap index files

Change-Id: I1e4fcf84506ff4316567bbb1713e84d8d196c2a1
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-15 00:24:49 +02:00
Matthias Sohn 53becf1f59 Merge branch 'stable-4.6' into stable-4.7
* stable-4.6:
  Update Oxygen Orbit p2 repository to R20170516192513
  Fix exception handling for opening bitmap index files

Change-Id: I669fe48ce0034f9ea1977d38ee39099497422c1c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-14 23:50:52 +02:00
Matthias Sohn 985e3c6414 Merge branch 'stable-4.5' into stable-4.6
* stable-4.5:
  Fix exception handling for opening bitmap index files

Change-Id: Ifb511238e3e98b1bc9f79a990807b940a17ebaa6
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-14 23:43:05 +02:00
Christian Halstrick 1ed1e40387 Fix exception handling for opening bitmap index files
When creating a new PackFile instance it is specified whether this pack
has an associated bitmap index file or not. This information is cached,
and the public method getBitmapIndex() will always assume a bitmap index
file must exist if the cached data says so. But it may happen that the
packfiles are repacked during a gc in a different process, causing the
packfile, bitmap index, and index files to be deleted. Since JGit still
has an open file handle on the packfile, that file is not really deleted
and can still be accessed, but the index and bitmap index files are gone.
Fix getBitmapIndex() to invalidate the cached packfile instance if such
a situation occurs.

This problem showed up when a Gerrit server was serving repositories
which were garbage collected with native git regularly. Fetch and
clone commands for certain repositories failed permanently after a
native git gc had deleted old bitmap index files.

Change-Id: I8e620bec74dd3f310ba42024f9a657062f868f0e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-14 21:09:48 +02:00
Thomas Wolf 37908321c0 Do not apply pushInsteadOf to existing pushUris
Per the git config documentation[1], pushInsteadOf is ignored when
a remote has explicit pushUris.
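
For reference, a typical pushInsteadOf rewrite in a git config looks like
this (example URLs):

  [url "ssh://git@example.org/"]
    pushInsteadOf = https://example.org/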

Implement this, and adapt tests.

Up to now JGit mistakenly applied pushInsteadOf also to existing
pushUris. If some repositories had relied on this mis-feature,
pushes may newly suddenly fail (the uncritical case; the config
just needs to be fixed) or even still succeed but push to unexpected
places, namely to the non-rewritten pushUrls (the critical case).

The release notes should point out this change.

[1] https://git-scm.com/docs/git-config

Bug: 393170
Change-Id: I38c83204d2ac74f88f3d22d0550bf5ff7ee86daf
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-14 17:27:05 +02:00
Thomas Wolf df3469f6ad Record submodule paths with untracked changes as FileMode.GITLINK
Bug: 520702
Change-Id: I9bb48af9e8f1f2ce7968a82297c7c16f1237f987
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-14 14:03:51 +02:00
Thomas Wolf f5a2c77dc4 Fix handling of pushInsteadOf
According to [1], pushInsteadOf is

1. applied to the uris, not to the pushUris
2. ignored if a remote has an explicit pushUri

JGit applied it only to the pushUris. As a result, pushInsteadOf was
ignored for remotes having only a uri, but no pushUri.

This commit implements (1) if there are no pushUris. I did not dare
implement (2) because:

* there are explicit tests for it that expect that pushInsteadOf gets
  applied to existing pushUrls, and
* people may actually use and rely on this JGit behavior.

[1] https://git-scm.com/docs/git-config

Bug: 393170
Change-Id: I6dacbf1768a105190c2a8c5272e7880c1c9c943a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-14 05:40:47 -04:00
Christian Halstrick 196915dde5 Merge "Ensure EOL stream type is DIRECT when -text attribute is present" 2017-08-14 03:34:57 -04:00
Thomas Wolf b07db60908 Fix off-by-one error in Strings.count()
Change-Id: I0667b1624827d1cf0cc1b81f86c7bb44eafd68a7
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-14 08:04:56 +02:00
Shawn Pearce 53dd9a9e4b Rename extensions.refsStorage to refStorage
This matches the proposal that has been discussed at length on the
git-core mailing list and seems to be the accepted convention.

Change-Id: I9f6ab15144826893d1e2a4b48a2d657d6dd445ec
2017-08-11 18:20:50 -07:00
Thomas Wolf a489a8ae9a Ensure EOL stream type is DIRECT when -text attribute is present
Otherwise fancy combinations of attributes (binary or -text in
combination with crlf or eol) may result in the corruption of binary
data.

Bug: 520910
Change-Id: I3ffc666c13d1b9d2ed987b69a67bfc7f42ccdbfc
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-11 22:56:50 +02:00
Thomas Wolf 4bc539a814 Use relative paths for attribute rule matching
Attribute rules must match against the entry path relative to the
attribute node containing the rule. The global entry path is to be
used only for the init and the global node (and of course the root
node).

Bug: 520677
Change-Id: I80389a2dc272a72312729ccd5358d7c75e1ea20a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-08-11 21:59:49 +02:00
Shawn Pearce ed29dec1ea Expose LongMap in util package
This is a useful primitive collection type like IntList.

Change-Id: I04b9b2ba25247df056eb3a1725602f1be6d3b440
2017-08-09 10:42:09 -07:00
Shawn Pearce 40c9c59e07 NB: encode and decode 24-bit ints
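A generic sketch of big-endian 24-bit encoding and decoding (method names
are illustrative, not necessarily the actual NB signatures):

  static void encodeInt24(byte[] dst, int off, int v) {
    dst[off] = (byte) (v >>> 16);
    dst[off + 1] = (byte) (v >>> 8);
    dst[off + 2] = (byte) v;
  }

  static int decodeInt24(byte[] src, int off) {
    return (src[off] & 0xff) << 16
        | (src[off + 1] & 0xff) << 8
        | (src[off + 2] & 0xff);
  }
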
Change-Id: Ie036dc46e5a88a4e87dc52e880505bbe34601ca7
2017-08-09 10:42:09 -07:00
Shawn Pearce 22201e8cca Update thread-safety warning about Repository
Change-Id: I1026a77cc688467d5a89a41121146f1bd3d56fa5
2017-08-08 06:44:35 -07:00
Dave Borowitz 8bbe34f27c ReflogWriter: Minor cleanup
Remove unnecessary finals, use consistent punctuation in Javadoc, reflow
some lines, etc.

Change-Id: Ic64db41c86917725ac649022290621406156bcc4
2017-08-02 16:52:34 -04:00
Dave Borowitz cf9662cdfe Eliminate SectionParser construction boilerplate
Happily, most anonymous SectionParser implementations can be replaced
with FooConfig::new, as long as the constructor takes a single Config
arg. Many of these, the non-public ones, can in turn be inlined. A few
remaining SectionParsers can be lambdas.
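
A sketch of the pattern; FooConfig stands in for any config section class
with a FooConfig(Config) constructor:

  // Before: anonymous SectionParser boilerplate.
  static final SectionParser<FooConfig> OLD_PARSER =
      new SectionParser<FooConfig>() {
        @Override
        public FooConfig parse(Config cfg) {
          return new FooConfig(cfg);
        }
      };

  // After: a constructor reference does the same job.
  static final SectionParser<FooConfig> PARSER = FooConfig::new;

  // Usage is unchanged:
  FooConfig foo = config.get(PARSER);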

Change-Id: I3f563e752dfd2007dd3a48d6d313d20e2685943a
2017-08-02 16:50:57 -04:00
Matthias Sohn 3eaa8d8e2a Silence API errors caused by adding enum constants in dbb137e
Change-Id: I46a29eae7b617f3f43f270c40072a1c103ef77f2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-08-01 23:26:42 +02:00
David Pursehouse 4085646f6d Merge changes I424295df,Ib003f7c8
* changes:
  Treat RawText of binary data as file with one single line.
  Trim boilerplate in RawParseUtils_LineMapTest.
2017-08-01 10:18:48 -04:00
Han-Wen Nienhuys a551b64694 Treat RawText of binary data as file with one single line.
This avoids executing mergeAlgorithm.merge on binary data, which is
unlikely to be useful.

Arguably, binary data should not make it to
ResolveMerger#contentMerge, but this approach has the following
advantages:

* binary detection is exact, since it doesn't only look at the start
  of the blob.

* it is cheap, as we have to iterate over the bytes anyway to find
  '\n'.

Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Change-Id: I424295df1dc60a719859d9d7c599067891b15792
2017-08-01 16:00:46 +02:00
Terry Parker 8c6a9a286e Merge "Use w1 for hashCode of AbbreviatedObjectId" 2017-07-28 19:24:11 -04:00
David Pursehouse 8391cc233b Merge "IntList: support contains(int)" 2017-07-28 14:18:21 -04:00
David Pursehouse 9f462a9914 Merge "Replace findbugs by spotbugs" 2017-07-28 13:47:21 -04:00
Shawn Pearce 4a00f18e8e Use w1 for hashCode of AbbreviatedObjectId
Very short abbreviations that are under 8 hex digits do not
have values in w2. Use w1 as the Java hashCode() instead, so
that the prefix of the abbreviation is always included in the
hashing function used by any java.util.Collection type.

Change-Id: Idaf69f86b62630ba4a022d31b4c293c6d138f557
2017-07-28 10:20:45 -07:00
Shawn Pearce 652a6b0334 IntList: support contains(int)
LongList supports contains(long).
IntList should also support contains(int).

Change-Id: Ic7a81c3c25b0f10d92087b56e9f200b676060f63
2017-07-28 10:18:27 -07:00
Matthias Sohn de7698476b Replace findbugs by spotbugs
SpotBugs [1] is the spiritual successor of FindBugs, carrying on from
the point where it left off with the support of its community.

[1] http://spotbugs.readthedocs.io/

Change-Id: I127f2c54b04265b6565e780116617ffa8a4d7eaf
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-07-28 16:15:54 +01:00
Dave Borowitz 45da0fc6f7 RefDirectory: Add in-process fair lock for atomic updates
In a server scenario such as Gerrit Code Review, there may be many
atomic BatchRefUpdates contending for locks on both the packed-refs file
and some subset of loose refs. We already retry lock acquisition to
improve this situation slightly, but we can do better by using an
in-process lock. This way, instead of retrying and potentially exceeding
their timeout, different threads sharing the same Repository instance
can wait on a fair lock without having to touch the disk lock. Since a
server is probably already using RepositoryCache anyway, there is a high
likelihood of reusing the Repository instance.

Change-Id: If5dd1dc58f0ce62f26131fd5965a0e21a80e8bd3
2017-07-28 11:03:32 -04:00
Dave Borowitz 6f23210781 RefDirectory: Retry acquiring ref locks with backoff
If a repo frequently uses PackedBatchRefUpdates, there is likely to be
contention on the packed-refs file, so it's not appropriate to fail
immediately the first time we fail to acquire a lock. Add some logic to
RefDirectory to support general retrying of lock acquisition.

Currently, there is a hard-coded wait starting at 100ms and backing off
exponentially to 1600ms, for about 3s of total wait. This is no worse
than the hard-coded backoff that JGit does elsewhere, e.g. in
FileUtils#delete. One can imagine a scheme that uses per-repository
configuration of backoff, and the current interface would support this
without changing any callers.
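
The retry shape is roughly as follows (a simplified sketch with the
hard-coded bounds mentioned above, not the actual RefDirectory code):

  static void lockWithBackoff() throws IOException, InterruptedException {
    long backoffMs = 100;
    while (!tryLock()) {           // tryLock() is a hypothetical helper
      if (backoffMs > 1600) {
        throw new IOException("cannot lock packed-refs");
      }
      Thread.sleep(backoffMs);     // 100, 200, 400, 800, 1600 ms: ~3s total
      backoffMs *= 2;
    }
  }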

Change-Id: I4764e11270d9336882483eb698f67a78a401c251
2017-07-28 07:53:25 -04:00
David Pursehouse 5188c23104 Merge "Fix committing empty commits" 2017-07-28 06:08:33 -04:00
David Pursehouse 94aebcb949 Merge "Support overriding a batch's reflog on a per-ReceiveCommand basis" 2017-07-28 06:07:08 -04:00
Christian Halstrick da0770fdec Fix committing empty commits
Allow explicitly creating an empty commit even if committing only
certain files.

Bug: 510685 
Change-Id: If9bf664d7cd824f8e5bd6765fa6cc739af3d7721
2017-07-28 10:46:42 +01:00
David Pursehouse 7e4946626e Merge changes from topic 'batch-ref-update-reflog'
* changes:
  BatchRefUpdate: Expand javadocs and add @Nullable
  PackedBatchRefUpdate: Write reflogs
  Extract constants for reflog entry message prefixes
2017-07-28 05:40:45 -04:00
Zhen Chen b0695e5b7b Add commit check for head references
Make sure all refs/heads/* point to a commit object.

Change-Id: I9c7cf347aaf63d5ef604d520c2383c6cf3043890
Signed-off-by: Zhen Chen <czhen@google.com>
2017-07-26 10:12:37 -07:00
Zhen Chen 673acfc6bd Add connectivity check from references
Make sure all objects referenced by references are reachable. Stop at
the first missing object.

Change-Id: Ifcd7392c4321b17d9290bd87f038bc62bc10dabb
Signed-off-by: Zhen Chen <czhen@google.com>
2017-07-26 10:12:37 -07:00
Zhen Chen 2c2999643f Add dfs fsck implementation
JGit already had some fsck-like classes, such as ObjectChecker, which can
check an individual object.

Add a read-only FsckPackParser which will parse all objects within a pack
file and check them with ObjectChecker. It will also check the pack index
file against the object information from the pack parser.

Change-Id: Ifd8e0d28eb68ff0b8edd2b51b2fa3a50a544c855
Signed-off-by: Zhen Chen <czhen@google.com>
2017-07-26 10:12:29 -07:00
Dave Borowitz 104107bf43 Support overriding a batch's reflog on a per-ReceiveCommand basis
Change-Id: I86a4b8f6b4f85b2bae64c1b121e4ee527d46de83
2017-07-26 11:40:15 -04:00
Dave Borowitz a1e11461cc BatchRefUpdate: Expand javadocs and add @Nullable
Change-Id: I22d739a9677e24f36323dceadf7d375ac2f446e8
2017-07-26 11:39:39 -04:00
Dave Borowitz 22e9106224 PackedBatchRefUpdate: Write reflogs
On-disk reflogs are not stored in the packed-refs file, so we cannot
ensure atomic updates. We choose the lesser evil of dropping failed
reflog updates on the floor, rather than throwing an exception even
though the underlying ref updates succeeded.

Add tests for reflogs to BatchRefUpdateTest.

Change-Id: Ia456ba9e36af8e01fde81b19af46a72378e614cd
2017-07-26 11:39:33 -04:00
Dave Borowitz dbb137e0f3 Extract constants for reflog entry message prefixes
Document explicitly that these are untranslated to (mostly) match C git.

Change-Id: I3abcffb4fd611d053bf4373e5d6a14a66f7b9b6b
2017-07-25 13:14:50 -04:00
Dave Borowitz 26962861d4 Implement atomic BatchRefUpdates for RefDirectory
The existing packed-refs file provides a mechanism for implementing
atomic multi-ref updates without any changes to the on-disk format or
lockfile protocol. We just need to make sure that there are no loose
refs involved in the transaction, which we can achieve by packing the
refs while holding locks on all loose refs. Full details of the
algorithm are in the PackedBatchRefUpdate javadoc.

This change does not implement reflog support, which will come in a
later change.

Change-Id: I09829544a0d4e8dbb141d28c748c3b96ef66fee1
2017-07-25 13:14:50 -04:00
Dave Borowitz cf9e3fad52 Separate RefUpdate.Result.REJECTED_{MISSING_OBJECT,OTHER_REASON}
ReceiveCommand.Result has a slightly richer set of possibilities, so it
makes sense for RefUpdate.Result to have more values in order to match.
In particular, this allows us to return REJECTED_MISSING_OBJECT from
RefUpdate when an object is missing.

The comment in RefUpdate#safeParse about expecting some old objects to be
missing is only applicable to the old ID, not the new ID. A missing new
ID is a bug or programmer error, and we should not update a ref to point
to one.

Fix various tests that started failing because they depended for no good
reason on setting refs to point to nonexistent objects; it's always easy
to create a real object when necessary.

It is possible that some downstream users of RefUpdate.Result might
choose to handle one of the new statuses differently, for example by
providing a more user-readable error message; that is not done in this
change.

Change-Id: I734b1c32d5404752447d9e20329471436ffe05fc
2017-07-25 13:12:34 -04:00
David Pursehouse 4940ea14b7 Add missing newlines at ends of Java files
Change-Id: Iead36f53d57ead0eb3edd3f9efb63b6630c9c20c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-07-25 10:37:21 +01:00
Joan Goyeau 826e22e7cc Fix JGit set core.fileMode to false by default instead of true for non Windows OS.
Bug: 519887
Change-Id: I4ae0d6783a9dc62f78ead54ddd1ab2b5b66a811c
Signed-off-by: Joan Goyeau <joan@goyeau.com>
2017-07-24 13:57:21 +01:00
Dmitry Pavlenko 843e444561 Fix matching ignores and attributes pattern of form a/b/**.
Fix path matching for patterns of the form a/b/**: this should not match
paths like a/b but still match a/b/ and a/b/c.

Change-Id: Iacbf496a43f01312e7d9052f29c3f9c33807c85d
Signed-off-by: Dmitry Pavlenko <pavlenko@tmatesoft.com>
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-07-24 09:16:33 +01:00
David Pursehouse ba91e8a086 Merge changes from topic 'packed-batch-ref-update'
* changes:
  Add tests for updating single refs to missing objects
  Fix deleting symrefs
  RefDirectory: Throw exception if CAS of packed ref list fails
  ReceiveCommand: Explicitly check constructor preconditions
  BatchRefUpdate: Document when getPushOptions is null
2017-07-24 03:38:42 -04:00
Shawn Pearce ad269ae426 Merge "Make 'inCoreLimit' of LocalFile used in ResolveMerger configurable" 2017-07-22 18:45:56 -04:00
Changcheng Xiao 1baf86d4d2 Make 'inCoreLimit' of LocalFile used in ResolveMerger configurable
This change makes it possible to configure the 'inCoreLimit' of the
LocalFile used in ResolveMerger#insertMergeResult. Since a LocalFile has
some risks, e.g. it may be left behind as garbage in case of failure, it
is good to be able to control the size limit above which it is used.

Change-Id: I3dc545ade370b2bbdb7c610ed45d5dd4d39b9e8e
Signed-off-by: Changcheng Xiao <xchangcheng@google.com>
2017-07-22 21:51:12 +02:00
Shawn Pearce d4cfa95ba3 dfs: optionally store blockSize in DfsPackDescription
Allow a DFS implementation to report blockSize to DfsPackFile,
bypassing alignment errors and corrections in the DfsBlockCache when
the blockSize of a specific file differs from the cache's configured
blockSize.

Change-Id: Ic376314d4a86a0bd528c033e169d93eef035b233
2017-07-21 08:33:17 -07:00
Shawn Pearce f414f7de1f dfs: Fix DataFormatException: 0 bytes to inflate
When a file uses a different block size (e.g.  500) than the cache
(e.g.  512), and the DfsPackFile's blockSize field has not been
initialized, the cache misaligns block loads.  The cache uses its
default of 512 to compute the block alignment instead of the file's
500.

This causes DfsReader to try to set an empty range into an Inflater,
resulting in an object that cannot be loaded.

Change-Id: I7d6352708225f62ef2f216d1ddcbaa64be113df6
2017-07-19 14:28:59 -07:00
Shawn Pearce da0a7c1f3c dfs: actually allow current DfsBlock to GC
Holding the current DfsBlock in a local variable 'b' may prevent the
Java GC from reclaiming it while loading the next block.  Remove the
local variable and rely only on the field.

Change-Id: Ibfc8394cac717b485fdc94d5c8479c3f8ca78ee4
2017-07-19 13:56:06 -07:00
Shawn Pearce 0d4832e15b Merge "dfs: only create DfsPackFile if description has PACK" 2017-07-19 14:49:37 -04:00
Shawn Pearce a6afed9bb8 dfs: Fix incorrect use of reference == for DfsStreamKey
Must use .equals() now with DfsStreamKey.

Change-Id: I35fecbe3895c2078d69213e9c708a9b0613a1c7c
2017-07-19 10:04:09 -07:00
Shawn Pearce 8d27c480df dfs: Fix build break caused by DfsStreamKey.of signature change
Change-Id: I6c49cf42a04dd0d96cfe0751f500a51f56f0bdb8
2017-07-19 09:32:00 -07:00
Shawn Pearce e6d9ae058b dfs: only create DfsPackFile if description has PACK
In the future with reftable a DFS implementation may choose to create
a PackDescription that contains only a REFTABLE extension.  Filter
these out by only creating a DfsPackFile if the PackDescription has the
expected PackExt.PACK.

Change-Id: I4c831622378156ae6b68f82c1ee1db5e150893be
2017-07-19 09:01:43 -04:00
Shawn Pearce 4321ccd468 dfs: Fix default DfsStreamKey to include DfsRepositoryDescription
Not all DFS implementations use globally unique pack names in the
DfsPackDescription.  Most require the DfsRepositoryDescription to
qualify the pack.  Include DfsRepositoryDescription in the default
DfsStreamKey implementation, to prevent cache collisions.

Change-Id: I9ebf0c76bf2b414a702ae050b32e42588067bc44
2017-07-19 05:53:30 -07:00
Shawn Pearce 90a957c947 dfs: Shrink DfsPackDescription.sizeMap storage
Using a HashMap is overkill for this storage.  PackExt is a
constrained type that permits no more than 32 unique values in the JVM.
Each is assigned a unique index (getPosition), which can be used as
indexes in a simple long[].

Change-Id: Ib8e3b2db15d3fde28989b6f4b9897f8a7bb36f3b
2017-07-19 05:45:15 -07:00
Shawn Pearce da7671fcd5 dfs: Fix caching of index, bitmap index, reverse index
When 07f98a8b71 ("Derive DfsStreamKey from DfsPackDescription")
stopped caching DfsPackFile in the DfsBlockCache, the DfsPackFile began
to always load the idx, bitmap, or compute reverse index, as the cache
handles were no longer populated by prior requests.

Rework caching to lookup the objects from the DfsBlockCache if the
local DfsPackFile handle is invalid.  This allows the DfsPackFile to
be more of a flyweight instance across requests.

Change-Id: Ic7b42ce2d90692cccea36deb30c2c76ccc81638b
2017-07-18 21:58:30 -07:00
Shawn Pearce b1bdeeb0ee dfs: Use special ForReverseIndex DfsStreamKey wrapper instead of derive
While implementing a custom subclass of DfsStreamKey it became obvious
the required derive(String) was making it impossible to construct an
efficient key in all cases.

Instead, use a special wrapper type ForReverseIndex around the INDEX's
own DfsStreamKey to denote the reverse index stream in the
DfsBlockCache.  This adds a smaller layer of boxing, but eliminates
weird issues for DFS implementors using specialized DfsStreamKey
implementations for space efficiency reasons.

Now that DfsStreamKey is reasonably light-weight, avoid allocating the
index and reverse index keys until necessary.  DfsPackFile mostly
holds the DfsBlockCache.Ref handle to the object, and only needs the
DfsStreamKey when it's looking up the handle.

Change-Id: Icea78e8f7f1514087b94ef5f525d9573ea2913f2
2017-07-18 21:37:51 -07:00
Shawn Pearce 07f98a8b71 Derive DfsStreamKey from DfsPackDescription
By making this a deterministic function, DfsBlockCache can stop
retaining a map of every DfsPackDescription it has ever seen.  This
fixes a long standing memory leak in DfsBlockCache.

This refactoring also simplifies the idea of setting up more
lightweight objects around streams.

Change-Id: I051e7b96f5454c6b0a0e652d8f4a69c0bed7f6f4
2017-07-17 13:20:34 -07:00
Dave Borowitz f529fa6729 Fix deleting symrefs
The RefDirectory implementation of doDelete never considered whether to
delete a symref or its leaf, because the detachingSymbolicRef bit was
never exposed from RefUpdate. The behavior was thus incorrectly to
always delete the symref, never the leaf.

There was no test for this behavior. The only thing that attempted to be
a test was testDeleteHeadInBareRepo, but this test was broken for
reasons unrelated to this bug. Specifically, it set the leaf to point to
a completely nonexistent object, and then asserted that deleting HEAD
resulted in NO_CHANGE. The only reason this test ever passed is because
of a quirk of updateImpl, which treats a missing object as the same as
null. This quirk aside, the test wasn't really testing the right thing.
Turn this into a real test by writing out a real object and pointing the
leaf at that.

Also, add a test for the detachingSymbolicRef case, i.e. deleting the
symref and leaving the leaf alone.

Change-Id: Ib96d2a35b4f99eba0734725486085fc6f9d78aa5
2017-07-17 11:56:35 -04:00
Dave Borowitz 9c33f7364d RefDirectory: Throw exception if CAS of packed ref list fails
The contents of the packedRefList AtomicReference should never differ
from what we expect prior to writing, because this segment of the code
is protected by the packed-refs lock file on disk. If it does happen,
whether due to programmer error or a rogue process not respecting the
locking protocol, it's better to let the caller know than to silently
drop the whole commit operation on the floor.

The existing concurrentOnlyOneWritesPackedRefs test is inherently
nondeterministic as written, and was already about 6% flaky as measured
by bazel:

  $ bazel test --runs_per_test=200 //org.eclipse.jgit.test:org_eclipse_jgit_internal_storage_file_GcPackRefsTest
  ...
  INFO: Elapsed time: 42.608s, Critical Path: 10.35s
  //org.eclipse.jgit.test:org_eclipse_jgit_internal_storage_file_GcPackRefsTest FAILED in 12 out of 200 in 1.6s
    Stats over 200 runs: max = 1.6s, min = 1.1s, avg = 1.3s, dev = 0.1s

This flakiness was caused by the assumption that exactly one of the 2
threads would fail, when both might actually succeed in practice due to
racing on the compare-and-swap.

For whatever reason, this change affected the interleaving behavior in
such a way that the flakiness jumped to around 50%. Making the
interleaving of the test fully deterministic is beyond the scope of this
change, but a simple tweak to the assertion is enough to make it pass
consistently 200+ times both before and after this change.

Change-Id: I5ff4dc39ee05bda88d47909acb70118f3d0c8f74
2017-07-17 11:56:35 -04:00
Dave Borowitz 21ec281f3e ReceiveCommand: Explicitly check constructor preconditions
Some downstream code checks whether a ReceiveCommand is a create or a
delete based on the type field. Other downstream code (in particular a
good chunk of Gerrit code I wrote) checks the same thing by comparing
oldId/newId to zeroId. Unfortunately, there were no strict checks in the
constructor ensuring that zeroId is only set for oldId/newId if the
type argument corresponds, so a caller that passed mismatched IDs and
types would observe completely undefined behavior as a result. This is
and always has been a misuse of the API; throw IllegalArgumentException
so the caller knows that it is a misuse.

Similarly, throw from the constructor if oldId/newId are null. The
non-nullness requirement was already documented. Fix RefDirectoryTest to
not do the wrong thing.

Change-Id: Ie2d0bfed8a2d89e807a41925d548f0f0ce243ecf
2017-07-17 11:56:35 -04:00
Dave Borowitz 00a72e22e6 BatchRefUpdate: Document when getPushOptions is null
Change-Id: I4cccda0ec3a8598edb723dc49101a16d603d1e82
2017-07-17 11:56:35 -04:00
Shawn Pearce 84c71ac933 Extract BlockBasedFile base class for DfsPackFile
This new base class has the minimum set of properties and methods
necessary for DfsBlockCache to manage blocks of a file in the cache.
Subclasses can use DfsBlockCache for any content.

This refactoring opens the door for additional PackExt types other
than PACK to be stored on a block-by-block basis by the DfsBlockCache.

Change-Id: I307228fc805c3ff0c596783beb24fd52bec35ba8
2017-07-17 08:15:37 -07:00
Shawn Pearce 8c566be72f Use separate DfsStreamKey for PackIndex
Instead of overloading the pack's DfsStreamKey with negative positions
for the idx, reverse idx and bitmap, assign a unique DfsStreamKey for
each of these related streams.

Change-Id: Ie048036c74a1d1bbf5ea7e888452dc0c1adf992f
2017-07-17 08:15:37 -07:00
Shawn Pearce e924de5295 Rename DfsPackKey to DfsStreamKey
This renaming supports reusing DfsStreamKey in a future commit
to index other PackExt type streams inside of the DfsBlockCache.

Change-Id: Ib52d374e47724ccb837f4fbab1fc85c486c5b408
2017-07-17 08:15:37 -07:00
Matthias Sohn dfb9884dbc Add missing @since 4.9 for new API PackParser.setExpectedObjectCount()
Change-Id: I58fa956aea37c696dbc35ecd229d8971d532923f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-07-08 00:06:31 +02:00
Dave Borowitz 106ed5fea0 Merge changes from topic 'packed-batch-ref-update'
* changes:
  RefList: Support capacity <= 0 on new builders
  Short-circuit writing packed-refs if no refs were packed
  BatchRefUpdate: Clarify some ref prefix calls
2017-07-07 13:51:25 -04:00
Zhen Chen abe2a87cb3 Make it possible to overwrite the object count
Right now, PackParser relies on the object count from the pack header.
However, when creating Dfs INSERT packs, the object count is not known
at the beginning of the operation. And when we append the base to a
RECEIVE pack, we can't modify the pack header for object count in most
Dfs implementations.

Make it possible to tell PackParser the expected object count by adding
a setter for expectedObjectCount; implementations can overwrite the
object count in the onPackHeader function.
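
A rough sketch of how an implementation might use this (the subclass and
the appended-base bookkeeping are illustrative, and other required
PackParser methods are omitted):

  class AppendingPackParser extends PackParser {
    private long appendedBaseCount; // hypothetical bookkeeping field

    @Override
    protected void onPackHeader(long objCnt) throws IOException {
      // The header count does not include the base objects appended later,
      // so overwrite the expected total here.
      setExpectedObjectCount(objCnt + appendedBaseCount);
    }
    // ... constructor and remaining abstract methods omitted in this sketch
  }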

Change-Id: I646ca33ab2b843de84edc287abfb65803a56a927
Signed-off-by: Zhen Chen <czhen@google.com>
2017-07-05 14:12:42 -07:00
Dave Borowitz 40748e8303 RefList: Support capacity <= 0 on new builders
Callers may estimate the size, and their estimate may be zero. Silently
allow this, rather than throwing IndexOutOfBoundsException later during
add.

Change-Id: Ife236f9f4ce469c57b18e76cf4fad6feb52cb2b0
2017-07-05 15:51:26 -04:00
Dave Borowitz e08fa5afcd Short-circuit writing packed-refs if no refs were packed
Change-Id: Id691905599b242e48f590138a96e0c86132308fd
2017-07-05 15:51:26 -04:00
Dave Borowitz 28adcce862 BatchRefUpdate: Clarify some ref prefix calls
Inline the old addRefToPrefixes, since it was just a glorified addAll.
Split getPrefixes into a variant, addPrefixesTo, that doesn't allocate a
small Collection on every invocation. Use this in the tight loop of
getTakenPrefixes.

Change-Id: I25cc7feef0c8e312820d85b7ed48559da49b83d2
2017-07-05 15:51:26 -04:00
Christian Halstrick 1968b20066 Merge "Support -merge attribute in binary macro" 2017-07-03 07:48:19 -04:00
Shawn Pearce 5fdbcc1081 Use read ahead during copyPackThroughCache
If a block is missing from the block cache, open the pack stream,
retain the ReadableChannel, and turn on read-ahead.  This should help
to load a medium sized pack into a cold cache more quickly from a
slower IO stream, as the pack is scanned sequentially and missing
blocks are more likely to be available through the read-ahead.

Change-Id: I3300d936b9299be6d9eb642992df7c04bb439cde
2017-06-27 09:52:41 -07:00
Mathieu Cartaud f7e233e450 Support -merge attribute in binary macro
The merger is now able to react to the use of the merge attribute.
Both the unset attribute and the custom value 'binary' are handled
(-merge and merge=binary).

Since the specification of the merge attribute states that when the
attribute is unset, the 'ours' version must be kept in case of a conflict, we
don't overwrite the file but keep the local version.

Bug: 517128
Change-Id: Ib5fbf17bdaf727bc5d0e106ce88f2620d9f87a6f
Signed-off-by: Mathieu Cartaud <mathieu.cartaud@obeo.fr>
2017-06-27 10:33:50 +02:00
David Turner 695e38a83b Add a test for parsing fsck config options and expose FsckMode enum
These config options allow overriding the message type (error, warn or
ignore) of a specific message ID such as missingEmail.
The supported fsck message IDs are defined in ObjectChecker.ErrorType.

Since TransferConfig.FsckMode wasn't public, parsing fsck configuration
options such as fsck.missingEmail=ignore failed with an
IllegalAccessException. Fix this by declaring this enum public.

Change-Id: I3f41ff7a76a846250a63ce92a9fd111eb347269f
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-23 00:10:20 +02:00
Oliver Lockwood 060f3699d4 Fix bug in multiple tag handling on DescribeCommand
In the case of multiple tags on the same commit, jgit previously
only ever looked at the last of those tags; git behaviour is to
return the first tag (or first matching one if --match is
specified).

Bug: 518377
Change-Id: I3b6b58ad9f8aa3879ae35b84542b7bddc74a27d6
Signed-off-by: Oliver Lockwood <oliver.lockwood@cantab.net>
2017-06-21 17:25:19 +01:00
Oliver Lockwood af0867cb86 Support --match functionality in DescribeCommand
A `match()` method has been added to the DescribeCommand, allowing
users to specify one or more `glob(7)` matchers as per Git convention.
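
A brief usage sketch (the pattern is illustrative and the exact method
name/signature may differ):

  String description = git.describe()
      .setTarget("HEAD")
      .setMatch("v1.*")   // only consider tags matching this glob
      .call();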

Bug: 518377
Change-Id: Ib4cf34ce58128eed0334adf6c4a052dbea62c601
Signed-off-by: Oliver Lockwood <oliver.lockwood@cantab.net>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-20 00:23:26 +02:00
Matthias Sohn df638e0cfc Allow to programmatically set FastForwardMode for PullCommand
Bug: 517847
Change-Id: I70d12dbe347a3d7a3528687ee04e52a2052bfb93
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-16 23:20:20 +02:00
Mattias Neuling 0d447b1660 Add support for config "pull.ff"
When the configuration entry 'pull.ff' exists, the merge step of the pull
will use its value as the fast-forward option.
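
For example (standard git configuration values):

  [pull]
    ff = only   # or true / false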

Bug: 474174
Change-Id: Ic8db2f00095ed81528667b064ff523911e6c122e
Signed-off-by: Mattias Neuling <neuling@dakosy.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-16 23:20:20 +02:00
David Pursehouse b4a46b5ed0 Fetch/PullCommand: Improve Javadoc of setRecurseSubmodules
Annotate the `recurse` parameter as @Nullable and expand the
Javadoc to clarify the precedence of options.

Change-Id: I7aee800cdbf8243133a0d353ef79b97b67ce011e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-06-16 12:52:31 +09:00
Matthias Sohn a45b045c73 Improve javadoc for MergeCommand.setFastForward()
- mark parameter as nullable
- explain that we fall back to the value of merge.ff if set to null, and
to --ff if not configured there either

Change-Id: Id077763b95195d21543ac637f9939a6d4179e982
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-15 23:11:46 +02:00
Terry Parker 8dd53135cb Add a new singlePack option to PackConfig
If set, "singlePack" will create a single GC pack file for all
objects reachable from refs/*. If not set, the GC pack will contain
objects reachable from refs/heads/* and refs/tags/*, and the GC_REST
pack will contain all other reachable objects.

Change-Id: I56bcb6a9da2c10a0909c2f940c025db6f3acebcb
Signed-off-by: Terry Parker <tparker@google.com>
2017-06-14 15:38:11 -07:00
Matthias Sohn 7922f31fa3 Prepare 4.8.1-SNAPSHOT builds
Change-Id: I7ca4186bbfe5ccc3fed4509a1fe4fc47bb2e8c50
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-12 22:19:30 -04:00
Matthias Sohn 03b8d1a202 JGit v4.8.0.201706111038-r
Change-Id: Ie33623a2191ffffc2ca5756fd078a7003c0c660f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-11 16:39:41 +02:00
David Pursehouse 2dc66e93ca Merge branch 'stable-4.8'
* stable-4.8:
  Use a dedicated executor to run auto-gc in command line interface
  Allow to use an external ExecutorService for background auto-gc
  Fetch: Add --recurse-submodules and --no-recurse-submodules options
  Fix capitalization of command help summaries

Change-Id: I7c85f11daa34c11c7f6389de885a2183a686197e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-06-11 20:24:12 +09:00
Matthias Sohn 18ae9bb57d Allow to use an external ExecutorService for background auto-gc
If set, use the external executor; otherwise use JGit's own simple
WorkQueue. Move WorkQueue to an internal package so we can reuse it
without exposing it in the public API.

Change-Id: I060d62ffd6692362a88b4bf13ee07b0dc857abe9
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-11 12:24:12 +02:00
David Pursehouse b6f954ad42 Fetch: Add --recurse-submodules and --no-recurse-submodules options
Add options to control recursion into submodules on fetch.

Add a callback interface on FetchCommand, to allow Fetch to display
an update "Fetching submodule XYZ" for each submodule.

Change-Id: Id805044b57289ee0f384b434aba1dbd2fd317e5b
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-11 12:24:12 +02:00
David Pursehouse a7949c1e35 Merge branch 'stable-4.8'
* stable-4.8:
  SubmoduleUpdateCommand#setCallback should return 'this'
  CloneCommand#setCallback should return 'this'
  Prepare 4.7.2-SNAPSHOT builds
  JGit v4.7.1.201706071930-r
  ArchiveCommand: Create prefix entry with commit time
  Run auto GC in the background
  Update Orbit to the Oxygen version R20170516192513

Change-Id: Ibf90b4899d097474e7836e6baab8829e66fca524
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-06-10 14:14:18 +09:00
Matthias Sohn 4acad15086 SubmoduleUpdateCommand#setCallback should return 'this'
The other methods in this class follow the builder pattern, and
return 'this', allowing multiple method calls to be chained in a
single statement.

Update the setCallback method to do the same.
Change-Id: I4ddaacd6d50601f47f61eb6be8b62c8d59cce062
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-10 00:58:23 +02:00
Zhen Chen 9a3e037726 Defer object collision check until pack stream is done
The object collision check requires reads from local storage, which may
be slow. We already delay this check for blobs; this change also delays
it for other objects until the pack stream is closed. In this way, there
is no readCurs call until the pack stream is closed.

Change-Id: I3c8c4720dd19a5f64f8c7ddf07d815ed6877b6aa
Signed-off-by: Zhen Chen <czhen@google.com>
2017-06-08 21:57:03 -07:00
David Pursehouse 9c7b95684c CloneCommand#setCallback should return 'this'
The other methods in this class follow the builder pattern, and
return 'this', allowing multiple method calls to be chained in a
single statement.

Update the setCallback method to do the same.

Change-Id: I0366d28bf66ba47f08ee7eee636d613c9fe079f5
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-06-08 22:45:33 +02:00
Matthias Sohn 8afd9b1648 Prepare 4.7.2-SNAPSHOT builds
Change-Id: I7c127bd402cd84c68d8f33a32c6aad093a2264c8
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-08 13:33:44 +02:00
David Pursehouse 39ea39e817 Merge branch 'stable-4.7' into stable-4.8
* stable-4.7:
  JGit v4.7.1.201706071930-r
  ArchiveCommand: Create prefix entry with commit time

Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Change-Id: Id4df76da84fde253ce04484f3437816dc145b4f2
2017-06-08 09:03:25 +09:00
Matthias Sohn 1d14296975 JGit v4.7.1.201706071930-r
Change-Id: I28cd8fbe995d76c8a00e7db6ddf826e983d89043
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-08 01:19:38 +02:00
Matthias Sohn 94c06009aa Merge branch 'stable-4.7' into stable-4.8
* stable-4.7:
  Run auto GC in the background

Change-Id: I5e25765f65d833f13cbe99696ef33055d7f5c4cf
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-07 16:58:18 +02:00
Yasuhiro Takagi a66e60a986 ArchiveCommand: Create prefix entry with commit time
The cgit archive command creates a prefix (root) directory entry
in the archive file. That entry's time is set to the commit time.

This patch makes jgit's behavior consistent with cgit:

prefix: hoge/     -> creates prefix directory "hoge/" entry.
prefix: hoge////  -> creates prefix directory "hoge/" entry.
prefix: hoge/foo  -> does not create prefix directory entry, but for
                     each file/directory entry, prefix is added.

Change-Id: I2610e40ce37972c5f7456fdca6337e7fb07176e5
Signed-off-by: Yasuhiro Takagi <ytakagi@bea.hi-ho.ne.jp>
2017-06-05 19:35:46 -04:00
David Turner 6b1e3c58b1 Run auto GC in the background
When running an automatic GC on a FileRepository, if the caller
passes a NullProgressMonitor, run the GC in a background thread. Use a
thread pool of size 1 to limit the number of background threads spawned
for background gc in the same application. In the next minor release we
can make the thread pool configurable.

In some cases, the auto GC limit is lower than the true number of
unreachable loose objects, so auto GC will run after every (e.g.) fetch
operation.  This leads to the appearance of poor fetch performance.
Since these GCs will never make progress (until either the objects
become referenced, or the two week timeout expires), blocking on them
simply reduces throughput.

In the event that an auto GC would make progress, it's still OK if it
runs in the background. The progress will still happen.

This matches the behavior of regular git.

Git (and now jgit) uses the lock file for gc.log to prevent simultaneous
runs of background gc. Further, it writes errors to gc.log, and won't
run background gc if that file is present and recent. If gc.log is too
old (according to the config gc.logexpiry), it will be ignored.

Change-Id: I3870cadb4a0a6763feff252e6eaef99f4aa8d0df
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-06-06 01:18:29 +02:00
Shawn Pearce 0d20573d9c fetch: Accept any SHA-1 on lhs of refspec
Allow fetch to accept a SHA-1 on the left hand side of a RefSpec,
enabling callers to pass a specific SHA-1 they want that may not have
been advertised by the remote repository. This can be passed along to
the network protocol to be sent in a "want" line.

The rest of the plumbing only cares about the ObjectId of the Ref in
the askFor map, so make up a fake name using ObjectId.name() to
pass the desired ObjectId into the network code.
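
A hedged sketch of how a caller might use this from the porcelain API;
the object id and working tree path are placeholders:

  import java.io.File;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.transport.RefSpec;

  class FetchBySha1 {
    // Fetch one specific, possibly unadvertised commit into a tracking ref.
    static void fetchExactCommit(File workTree, String sha1) throws Exception {
      try (Git git = Git.open(workTree)) {
        git.fetch()
            .setRemote("origin")
            .setRefSpecs(new RefSpec(sha1 + ":refs/remotes/origin/wanted"))
            .call();
      }
    }
  }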

Change-Id: I620a189f3de005c403aa68b7d0442d6aa94e6056
2017-06-04 13:58:16 -07:00
Matthias Sohn df9ce4b981 Prepare 4.9.0-SNAPSHOT builds
Change-Id: I52a4153d573799e861ab104939f51fac1aceb9ee
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-30 13:42:07 +02:00
Han-Wen Nienhuys 832808bd50 Fix out-of-bounds exception in RepoCommand#relative
Change-Id: I9c91aa2ff037bff27a8131fba54be22f5f27d80d
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-24 23:43:59 +02:00
Bryan Donlan 2204cc9866 Fix null return from FS.readPipe when command fails to launch
When a command invoked from readPipe fails to launch (i.e. the exec call
fails due to a missing command executable), Process.start() throws,
which gets caught by the generic IOException handler, resulting in a
null return. This change detects this case and rethrows a
CommandFailedException instead.

Additionally, this change uses /bin/sh instead of bash for its posix
command failure test, to accommodate building in environments where bash
is unavailable.

Change-Id: Ifae51e457e5718be610c0a0914b18fe35ea7b008
Signed-off-by: Bryan Donlan <bdonlan@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-24 23:35:39 +02:00
Dave Borowitz a46b28808b RenameDetector: Clarify rename limits <= 0
Change-Id: I8da386e02272316b8e5e5c2f31ce10ad98bcdb28
2017-05-24 09:26:40 -04:00
Zhen Chen 099dbe6ef5 Remove unnecessary cast for DfsReader
Change-Id: I22aaccfc9d589750f9d1d711b655dd0fd543fa57
Signed-off-by: Zhen Chen <czhen@google.com>
2017-05-22 10:27:20 -07:00
David Pursehouse 9a4486003f Merge "Fix javadoc of TooLargeObjectInPackException" 2017-05-22 01:12:10 -04:00
Shawn Pearce 1513a5632d Allow DfsReader to be subclassed
Necessary if a DFS implementation wants to override close()
to record DfsReaderIoStats.

Change-Id: I144575f9bf1abf2c1fd72030550c4f0795fcf44d
2017-05-19 13:50:36 -07:00
Shawn Pearce 562de51239 Track read IO for DfsReader
Compute how much disk IO a DfsReader is performing, and how long the
sum of those operations took on this reader instance. Implementations
of DFS and interested applications can get the stats by calling the
new DfsReader.getIoStats() method at or after close().
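
A hedged sketch of collecting the stats; the cast from ObjectReader to
DfsReader and the timing of the call are assumptions based on the
description above:

  import org.eclipse.jgit.internal.storage.dfs.DfsReader;
  import org.eclipse.jgit.internal.storage.dfs.DfsReaderIoStats;
  import org.eclipse.jgit.internal.storage.dfs.DfsRepository;

  class ReaderStatsExample {
    static DfsReaderIoStats statsAfterReads(DfsRepository repo) {
      DfsReader rd = (DfsReader) repo.newObjectReader();
      try {
        // ... perform object reads through rd ...
      } finally {
        rd.close();
      }
      return rd.getIoStats(); // per the description, valid at or after close()
    }
  }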

Change-Id: If585741301f29182617933d6406d4a70497f2ca7
2017-05-19 12:23:02 -07:00
Matthias Sohn ef0237564e Fix javadoc of TooLargeObjectInPackException
The API exception should have the same javadoc like the internal
exception org.eclipse.jgit.errors.TooLargeObjectInPackException

Change-Id: Ia7508c77609e53c8e808412ac523a93194648e49
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-19 11:22:05 +02:00
Terry Parker c46c720e99 Exclude refs/tags from bitmap commit selection
Commit db77610 ensured that all refs/tags commits are added to the
primary GC pack. It did that by adding all of the refs/tags commits
to the primary GC pack PackWriter's "interesting" object set.

Unfortunately, all commit objects in the "interesting" set are
selected as commits for which bitmap indices will be built. In a
repository like chromium with lots of tags, this changed the number of
bitmaps created from <700 to >10000. That puts huge memory pressure on
the GC task.

This change restores the original behavior of ignoring tags when
selecting commits for bitmaps.

In the "uninteresting" set, commits for refs/heads and refs/tags for
unannotated tags can not be differentiated. We instead identify
refs/tags commits by passing their ObjectIds as a new "noBitmaps"
parameter to the PackWriter.preparePack() methods.
PackWriterBitmapPreparer.setupTipCommitBitmaps() can then use that
"noBitmaps" parameter to exclude those commits.

Change-Id: Icd287c6b04fc1e48de773033fe432a9b0e904ac5
Signed-off-by: Terry Parker <tparker@google.com>
2017-05-18 15:25:21 -07:00
Matthias Sohn 69d5e89e99 [findBugs] Use UTF-8 to write to the error stream in TextProgressMonitor
Change-Id: Ic85db2043d6f673f268bf781917daad45d28f8cd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-15 10:30:24 +02:00
Matthias Sohn f1dd61f646 [findBugs] Use UTF-8 to read git-rebase-todo file
Change-Id: I7c6f71e13ef106678157eae1aa3f9d39712e577b
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-15 10:29:47 +02:00
Matthias Sohn 0aa1a19cab [findBugs] Use UTF-8 when writing to the error stream in GitHook
Change-Id: Ica8a40b909ed45cf8e538714e4f26b64ff9a3d21
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-15 10:28:53 +02:00
Matthias Sohn 9f98d3e2e4 Add shutdown hook to cleanup unfinished clone when JVM is killed
Bug: 516303
Change-Id: I5181b0e8096af3537296848ac7dd74dff0b6d279
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-13 17:23:34 +02:00
Thomas Wolf 09d96f8d46 Clean up the disk when cloning fails
CloneCommand.call() has three stages: preparation, then the actual
clone (init/fetch), and finally maybe checking out the working
directory.

Restructure such that if we fail or are cancelled during the actual
clone (middle phase), we do clean up the disk again. This prevents
leaving behind a partial clone in an inconsistent state: either we
have a fully successfully built clone, or nothing at all.

Bug: 516303
Change-Id: I9b18c60f8f99816d42a3deb7d4a33a9f22eeb709
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-05-12 05:11:13 -04:00
Christian Halstrick 501af12c19 Checkout should not use too long filenames
DirCacheCheckout generates names for temporary files. It was not checking
the length of these filenames. It may happen that a generated filename is
longer than 255 chars, which causes problems on certain platforms. Make sure
that filenames for temporary files do not exceed 255 chars.

Bug: 508823
Change-Id: I9475c04351ce3faebdc6ad40ea4faa3c326815f4
2017-05-10 00:33:44 +02:00
Mickael Istria 5b84e25fa3 Support pull on detached HEAD
Bug: 485396
Change-Id: I82be09385c9b0bcc0054fea5a9cb9d178a41e278
Signed-off-by: Mickael Istria <mistria@redhat.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-05-08 00:38:25 +02:00
Zhen Chen 8f7d0a4fbe Reset ObjectWalker when it starts a new walk
The ObjectWalker in PackWriterBitmapWalker needs to be reset whenever it
starts a new walk. Move this responsibility from the caller to the
method when the new walk starts.

Change-Id: Ib66003be1b5bdc80f46b9bbbb17d45e616714912
Signed-off-by: Zhen Chen <czhen@google.com>
2017-05-03 15:02:33 -07:00
Shawn Pearce d377a885a9 Fix stack overflow in MergeBaseGenerator
Some repository topologies can cause carryOntoHistory to overflow the
thread stack, due to its strategy of recursing into the 2nd+ parents
of a merge commit.  This can easily happen if a project maintains a
local fork, and frequently pulls from the upstream repository, which
itself may have a branchy history.

Rewrite the carryOntoHistory algorithm to use a fixed amount of thread
stack, pushing the save points onto the heap.  By using heap space the
thread stack depth is no longer a concern.  Repositories are instead
limited by available memory.

The algorithm is now structured as two loops:

  carryOntoHistory: This outer loop pops saved commits off the top of
  the stack, allowing the inner loop algorithm to dive down that path
  and carry bits onto commits along that part of the graph.  The loop
  ends when there are no more stack elements.

  carryOntoHistoryInner: The inner loop walks along a single path of
  the graph. For a string of pearls (commits with one parent each)

    r <- s <- t <- u

  the algorithm walks backwards from u to r by iteratively updating
  its local variable 'c'.  This avoids heap allocation along a simple
  path that does not require remembering state.

  The inner loop breaks in the HAVE_ALL case, when all bits have been
  found to be previously set on the commit.  This occurs when a prior
  iteration of the outer loop (carryOntoHistory) explored a different
  path to this same commit, and copied the bits onto it.

  When the inner loop encounters a merge commit, it pushes all parents
  onto the heap based stack by allocating individual CarryStack
  elements for each parent.  Parents are pushed in order, allowing
  side branches to be explored first.

  A small optimization is taken for the last parent, avoiding pushing
  it and instead updating 'c', allowing the side branch to be entered
  without allocating a CarryStack.

Change-Id: Ib7b67d90f141c497fbdc61a31b0caa832e4b3c04
2017-05-02 11:38:59 -07:00
David Pursehouse 005e5feb4e Clone: add --recurse-submodules option
Add the --recurse-submodules option on the command, which causes
submodules to also be initialized and updated.

Add a callback interface on CloneCommand and SubmoduleUpdateCommand to
allow them to provide progress feedback for clone operations.
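
A hedged sketch of the equivalent porcelain call, assuming the
command-line option corresponds to CloneCommand#setCloneSubmodules;
URI and directory are placeholders:

  import java.io.File;
  import org.eclipse.jgit.api.Git;

  class CloneWithSubmodules {
    static Git cloneAll(String uri, File dir) throws Exception {
      return Git.cloneRepository()
          .setURI(uri)
          .setDirectory(dir)
          .setCloneSubmodules(true) // also init and update submodules
          .call();
    }
  }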

Change-Id: I41b1668bc0d0bdfa46a9a89882c9657ea3063fc1
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-27 09:19:08 +02:00
Thirumala Reddy Mutchukota 5e250e45be Delete expired garbage even when there is no GC pack present.
Delete the condition to check whether the garbage pack creation time
is older than the last GC operation, because it's not possible to
find the last GC operation time when there is no GC pack.

Add additional tests to make sure the contents of the expired garbage
packs are considered during the GC operation and any actively
referenced objects from the garbage packs are copied successfully
into the GC pack before deleting the garbage pack.

Change-Id: I09e8b2656de8ba7f9b996724ad1961d908e937b6
Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>
2017-04-21 14:06:58 -07:00
Martin Fick f9b69677f6 Add parseCommit(AnyObjectId) method to Repository.
It is quite common to want to parse a commit without already having a
RevWalk.  Provide a shortcut to do so to make it more convenient, and to
ensure that the RevWalk is released afterwards.
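
A minimal sketch of the shortcut; resolving HEAD is just an example, and
parseCommit accepts any AnyObjectId:

  import java.io.IOException;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevCommit;

  class ParseCommitExample {
    static RevCommit headCommit(Repository repo) throws IOException {
      ObjectId head = repo.resolve("HEAD");
      if (head == null) {
        throw new IOException("repository has no HEAD commit");
      }
      return repo.parseCommit(head);
    }
  }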

Signed-off-by: Martin Fick <mfick@codeaurora.org>
Change-Id: I9528e80063122ac318f115900422a24ae49a920e
2017-04-19 09:42:47 +02:00
Dan Willemsen b6fc8e2f3c RepoCommand: Add linkfile support.
Android wants them to work, and we're only interested in them for bare
repos, so add them just for that.

Make sure to use symlinks instead of just using the copyfile
implementation. Some scripts look up where they're actually located in
order to find related files, so they need the link back to their
project.

Change-Id: I929b69b2505f03036f69e25a55daf93842871f30
Signed-off-by: Dan Willemsen <dwillemsen@google.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Jeff Gaston <jeffrygaston@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-04-18 10:33:37 +02:00
Jonathan Nieder f9e13efe47 Merge "Process all "have"s even when MissingObjectException is encountered" 2017-04-17 14:53:27 -04:00
Jonathan Nieder c2e6e7abc9 Process all "have"s even when MissingObjectException is encountered
Because objects described by the client using "have" lines do not need
to be reachable by any ref on the server, it is possible for them to
point to missing objects in the reachability graph.  When such an
object is encountered, I1097a2defa4a9dcf502ca8baca5d32880378818f (Only
throw MissingObjectException when necessary, 2017-03-29) aborts the
"have" walk early to salvage the fetch.  The downside of that change
is that remaining "have"s are ignored unless they pointed directly to
an object with a bitmap.  In the worst case this can increase the
bandwidth cost of a fetch to the cost of a clone because most "have"s
are ignored.

Avoid this cost by bypassing the failed "have" completely and moving
on to the remaining "have"s.

Change-Id: Iac236b6d05f735078c9935abfa6e58d1eb47f388
2017-04-17 11:50:28 -07:00
David Pursehouse a6df70569a Merge "Prevent alternates loop" 2017-04-17 12:01:55 -04:00
Martin Fick e4714a2a5f Prevent alternates loop
When looping through alternates, prevent visiting the same object
directory twice. This could happen when the objects/info/alternates file
includes itself directly or indirectly via a another repo and its
alternates file.

Change-Id: I79bb3da099ebc3c262d2e6c61ed4578eb1aa3474
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
Signed-off-by: Martin Fick <mfick@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-14 23:35:17 +02:00
Matthias Sohn 3af4afdfbf Add missing @since tag for new API RepoCommand.setTargetURI()
Change-Id: I4531b94e3a04606a69eeb3c3d154510b87507012
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-14 19:49:19 +02:00
David Pursehouse c80d8c5901 Bazel: Restrict src globs to Java source files
Generating the src list with an unrestricted wildcard causes all
files in the source tree to be included. This results in junk files
such as .orig (generated during merge conflict resolution) to be
included, which causes in a build error:

  in srcs attribute of java_library rule //org.eclipse.jgit:jgit:
  file '//org.eclipse.jgit:src/org/eclipse/jgit/gitrepo/RepoCommand.java.orig'
  is misplaced here (expected .java, .srcjar or .properties).

Modify the globs to only include Java source files.

Change-Id: Iaef3db33ac71d71047cd28acb0378e15cb09ece9
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-04-13 14:14:55 +09:00
Han-Wen Nienhuys fe5437e96b Fix RepoCommand to allow for relative URLs
This is necessary for deploying submodules on android.googlesource.com.

* Allow an empty base URL. This is useful if the 'fetch' field is "."
  and all names are relative to some host root.

* The URLs in the resulting superproject are relative to the
  superproject's URL. Add RepoCommand#setDestinationURI to
  set this. If unset, the existing behavior is maintained.

* Add two tests for the Android and Gerrit case, checking the URL
  format in .gitmodules; the tests use a custom RemoteReader which is
  representative of the use of this class in Gerrit's Supermanifest
  plugin.

Change-Id: Ia75530226120d75aa0017c5410fd65d0563e91b
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-04-13 10:53:58 +09:00
Jonathan Nieder e730fcce77 Merge "BundleWriter: Allow constructing from only an ObjectReader" 2017-04-12 21:12:15 -04:00
Terry Parker 56a1cced74 Merge "Only throw MissingObjectException when necessary" 2017-04-12 10:25:11 -04:00
Dave Borowitz c9c9e672e5 BundleWriter: Allow constructing from only an ObjectReader
Change-Id: I01821d6a9fbed7a5fe4619884e42937fbd6909ce
2017-04-12 08:27:57 -04:00
Matthias Sohn cc0dbbae43 Merge branch 'stable-4.7'
* stable-4.7:
  Cleanup and test trailing slash handling in ManifestParser
  ManifestParser: Throw exception if remote does not have fetch attribute

Change-Id: Ia9dc3110bcbdae05175851ce647ffd11c542f4c0
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-11 00:54:16 +02:00
Han-Wen Nienhuys f17ec3928c Cleanup and test trailing slash handling in ManifestParser
This is a workaround for
https://bugs.openjdk.java.net/browse/JDK-4666701.

Change-Id: Idd04657e8d95a841d72230f8881b6b899daadbc2
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-11 00:37:38 +02:00
Han-Wen Nienhuys 84d855cda7 ManifestParser: Throw exception if remote does not have fetch attribute
In the repo manifest documentation [1] the fetch attribute is marked
as "#REQUIRED".

If the fetch attribute is not specified, this would previously result in
a NullPointerException. Throw a SAXException instead.

[1] https://gerrit.googlesource.com/git-repo/+/master/docs/manifest-format.txt

Change-Id: Ib8ed8cee6074fe6bf8f9ac6fc7a1664a547d2d49
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-04-10 15:08:32 +02:00
Matthias Sohn b3cc05d886 Remove unused API filters
Change-Id: I1e00d71395228265aad4071b023024ee1bf855d5
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-09 23:43:43 +02:00
Matthias Sohn 3db0f507ee Prepare 4.5.3-SNAPSHOT builds
Change-Id: I69681b7a5687ca76bd0dd5d3e7ce2cff841d0e32
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-08 00:31:09 +02:00
Matthias Sohn c1d3ecbeab JGit v4.5.2.201704071617-r
Change-Id: I66402643d7c84c90bf5cefed4d2ec3aa68c94cfb
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-07 22:17:27 +02:00
Matthias Sohn 7adacbd19a Silence API error for new method added to abstract MergeStrategy
OSGi semantic versioning rules allow breaking implementors of an API in
a minor version.

Change-Id: I4ada3e6455e8e8e1bb8fb71affa0a1b36bd46fc4
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-06 18:17:22 +02:00
Matthias Sohn 4e8655c74d Fix @since tags of new API added after 4.7.0
Change-Id: I356f71cdef8e23a9b06cf0a4079060a116b9ed27
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-06 18:16:04 +02:00
Zhen Chen f5368dc97f Only throw MissingObjectException when necessary
When preparing the bitmap, the flag ignoreMissingStart only applied to
the start object. However, sometimes the start object is present but some
related objects are missing during the walk; we should only throw the
MissingObjectException when ignoreMissingStart is set to false.

Change-Id: I1097a2defa4a9dcf502ca8baca5d32880378818f
Signed-off-by: Zhen Chen <czhen@google.com>
2017-04-05 19:09:16 -04:00
Matthias Sohn 6a311a071f Prepare 4.7.1-SNAPSHOT
Change-Id: I16a45035258276217446bccc0ad1b0991383aa0c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-06 00:16:53 +02:00
Dave Borowitz 4c3e274588 Support creating Mergers without a Repository
All that's really required to run a merge operation is a single
ObjectInserter, from which we can construct a RevWalk, plus a Config
that declares a diff algorithm. Provide some factory methods that don't
take Repository.
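
A hedged sketch, assuming the new factory has the shape
newMerger(ObjectInserter, Config); the default Config is used here only
to supply a diff algorithm:

  import org.eclipse.jgit.lib.Config;
  import org.eclipse.jgit.lib.ObjectInserter;
  import org.eclipse.jgit.merge.MergeStrategy;
  import org.eclipse.jgit.merge.Merger;

  class RepositoryFreeMerge {
    static Merger newRecursiveMerger(ObjectInserter inserter) {
      Config config = new Config(); // defaults declare the diff algorithm
      return MergeStrategy.RECURSIVE.newMerger(inserter, config);
    }
  }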

Change-Id: Ib884dce2528424b5bcbbbbfc043baec1886b9bbd
2017-04-05 17:50:54 -04:00
Matthias Sohn 9f4c10784b JGit v4.7.0.201704051617-r
Change-Id: Ic2bd6aca0b7a7e0597ffc1f7cf647b49878f9950
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-05 22:17:44 +02:00
Matthias Sohn aec22e74cf Prepare 4.8.0-SNAPSHOT builds
Change-Id: Ifea6750e79d417a8a2a891b3b5f96d68c7200011
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-04-05 14:49:49 +02:00
Andrey Loskutov 7476baebfc Fixed NP dereference error reported by ecj in UploadPack.stopBuffering()
Introduced via commit 3b2508b514.

Change-Id: I2b6175c095aea2868a8c302103095accde5170e3
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
2017-04-05 09:51:12 +02:00
Shawn Pearce db2493e7d8 Merge "Make diff locations more consistent" 2017-04-04 22:26:38 -04:00
Dave Borowitz e4672d1c16 NameConflictTreeWalk: Mark repo param @Nullable
This is passed directly to the super constructor, where it is also
@Nullable. Marking it here saves the reader a jump.

Change-Id: Icc8db2f2dc6aae6e591aa4f09a3c283336a5424c
2017-04-04 14:53:17 -04:00
Jonathan Nieder db58abbbe8 Merge "Buffer the response until request parsing has done" 2017-04-04 14:25:41 -04:00
Masaya Suzuki 3b2508b514 Buffer the response until request parsing has done
This is a continuation from https://git.eclipse.org/r/#/c/4716/. For a
non-bidirectional request, we need to consume the request before writing
any response. In UploadPack, we write "shallow"/"unshallow" responses
before parsing "have" lines. This has happened not to be a problem most
of the time in the smart HTTP protocol because the underlying
InputStream has a 32 KiB buffer in SmartOutputStream.

Change-Id: I7c61659e7c4e8bd49a8b17e2fe9be67bb32933d3
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
2017-04-04 10:52:49 -07:00
KB Sriram 4a985f5aa8 Make diff locations more consistent
DiffAlgorithms can return different edit locations for inserts or
deletes, if they can be "shifted" up or down repeating blocks of
lines. This causes the 3-way merge to apply both edits, resulting in
incorrectly removing or duplicating lines.

Augment an existing "tidy-up" stage in DiffAlgorithm to move all
shiftable edits (not just the last INSERT edit) to a consistent
location, and add test cases for previously incorrect merges.

Bug: 514095
Change-Id: I5fe150a2fc04e1cdb012d22609d86df16dfb0b7e
Signed-off-by: KB Sriram <kbsriram@google.com>
2017-04-03 16:45:13 -07:00
Matthias Sohn b65a764b6b Remove unused import from ManifestParser
Change-Id: Ie60ef9c7bc6ce0fdf017949ebfb9a21753e70506
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-31 00:38:36 +02:00
Han-Wen Nienhuys f32d65759c Document the intended use of RepoCommand#setURI()
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Change-Id: I4a59dd8278b7b0026094692127b7f55e89c10bae
2017-03-29 16:54:29 +02:00
Han-Wen Nienhuys 6e652846f6 Noop changes to ManifestParser
* Parse the base URL in ManifestParser construction.  This will signal
  errors earlier.

* Simplify stripping of trailing slashes.

Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Change-Id: I4a86f68c9d7737f71cf20352cfe26288fbd2b463
2017-03-29 13:51:37 +02:00
Han-Wen Nienhuys 27b05c7d71 Consistently use 'path' for the path to a subrepo in RepoCommand
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Change-Id: I79ea7eb7b4d319e0100e3121aca5ef82eb8ad92a
2017-03-27 17:36:56 -04:00
Matthias Sohn 251abbfcd1 Merge branch 'stable-4.6'
* stable-4.6:
  Only mark packfile invalid if exception signals permanent problem
  Don't flag a packfile invalid if opening existing file failed
  Prepare 4.5.2-SNAPSHOT builds

Change-Id: Ife4efad1135d3870a5a0fb71e60b9524fb8777ab
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-27 22:45:59 +02:00
David Pursehouse 7f013924a8 Merge branch 'stable-4.5' into stable-4.6
* stable-4.5:
  Only mark packfile invalid if exception signals permanent problem
  Don't flag a packfile invalid if opening existing file failed
  Prepare 4.5.2-SNAPSHOT builds

Change-Id: I20b50981adc54c426666015ff04fe3bb1db9abd9
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-03-27 10:14:50 +09:00
Matthias Sohn aaf3c5154e Only mark packfile invalid if exception signals permanent problem
Add NoPackSignatureException and UnsupportedPackVersionException to
explicitly mark permanent unrecoverable problems with a pack.

Assume problem with a pack is permanent only if we are sure the
exception signals a non-transient problem we can't recover from:
- AccessDeniedException: we lack permissions
- CorruptObjectException: we detected corruption
- EOFException: file ended unexpectedly
- NoPackSignatureException: pack has no pack signature
- NoSuchFileException: file has gone missing
- PackMismatchException: pack no longer matches its index
- UnpackException: unpacking failed
- UnsupportedPackIndexVersionException: unsupported pack index version
- UnsupportedPackVersionException: unsupported pack version

Do not attempt to handle Errors since they are thrown for serious
problems applications should not try to recover from.

Change-Id: I2c416ce2b0e23255c4fb03a3f9a0ee237f7a484a
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-26 11:40:47 +02:00
Luca Milanesio 363a3657b1 Don't flag a packfile invalid if opening existing file failed
A packfile random file open operation may fail with a
FileNotFoundException even if the file exists, possibly
due to a temporary lack of resources.

Instead of handling the FileNotFoundException like any generic
IOException, it is best to rethrow the exception but prevent
the packfile from being flagged as invalid until it is actually
opened and read, successfully or unsuccessfully.

Bug: 514170
Change-Id: Ie37edba2df77052bceafc0b314fd1d487544bf35
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-25 01:33:18 +01:00
Matthias Sohn 11a12ceb0b Prepare 4.5.2-SNAPSHOT builds
Change-Id: I8485de1f3f63dc9ec445b8fb08093ca144aedc59
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-25 01:21:58 +01:00
David Pursehouse 5f902f07cc PullCommand: Add support for recursing into submodules
Add a new API method to set the recurse mode, and pass the mode into
the fetch command.

Extend the existing FetchCommandRecurseSubmodulesTest to also perform
the same tests for pull. Rename the test class accordingly.

Change-Id: I12553af47774b4778f7011e1018bd575a7909bd0
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-03-24 00:02:45 +01:00
Matthias Sohn 61f830d3a2 Explain in error message how to recover from lock failure
Bug: 483897
Change-Id: I70f8d9c82c1efe2928f072a2fb69461160f7c5f7
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-22 18:17:01 -04:00
David Pursehouse 2d0ce094b4 Remove Buck build
Buck will be replaced with Bazel

Change-Id: I3cf07d7aaaa2a58bac34e16c50af5416693254ac
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-22 01:41:21 +01:00
Matthias Sohn a9a84b7235 JGit v4.5.1.201703201650-r
Change-Id: I88de7c9f52abbc4921a82208ed74d22aa19fb3cd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-20 21:44:47 +01:00
Jonathan Nieder bc5014faec bazel: Add explicit targets for library dependencies
This provides a place to declare visibility restrictions and
transitive dependencies for each library.

Other targets should only declare dependencies on what they directly
use, making dependencies easier to maintain.

Trim the dependencies of org.eclipse.jgit:jgit to follow that rule.
It declares dependencies on Apache httpcomponents and the servlet
API but doesn't use them.

Tested:
* 'bazel build //...' succeeds
* applying the change https://gerrit-review.googlesource.com/90843
  to a copy of Gerrit, following the instructions there, and running
  'bazel test //...' in that copy of Gerrit still succeeds

Change-Id: I3ab958ce8b3227019cdbe4cc81e0f042e1541034
2017-03-19 18:51:03 -07:00
David Ostrovsky 7e4258113c Move SHA1 compress/recompress files to resource folder
This fixes Bazel build:

in srcs attribute of java_library rule //org.eclipse.jgit:jgit:
file '//org.eclipse.jgit:src/org/eclipse/jgit/util/sha1/SHA1.recompress'
is misplaced here (expected .java, .srcjar or .properties).

Another option that was considered is to exclude the non source files.

Change-Id: I7083f27a4a49bf6681c85c7cf7b08a83c9a70c77
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
2017-03-18 16:46:58 +01:00
Matthias Sohn 50ac852551 Merge "Merge branch 'stable-4.6'" 2017-03-15 19:50:04 -04:00
Matthias Sohn dab8e0e7cb Merge branch 'stable-4.6'
* stable-4.6:
  Don't remove pack when FileNotFoundException is transient

Change-Id: I82941a98385cda27c89e1e6750b7b6db4e39f414
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-16 00:29:43 +01:00
Matthias Sohn 405fdf76d5 Merge branch 'stable-4.5' into stable-4.6
* stable-4.5:
  Don't remove pack when FileNotFoundException is transient

Change-Id: Ic17c542d78a4cad48ff1ed77dcdc853a4ef2dc06
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-16 00:26:37 +01:00
Luca Milanesio 4c558225dc Don't remove pack when FileNotFoundException is transient
The FileNotFoundException is typically raised in four conditions:
1. file doesn't exist
2. incompatible read vs. read/write open modes
3. filesystem locking
4. temporary lack of resources (e.g. too many open files)

Case 1 is already managed and case 2 would never happen as packs are not
overwritten, while for cases 3 and 4 it is worth logging the exception and
retrying to read the pack again.

Log transient errors using an exponential backoff strategy to avoid
flooding the logs with the same error if consecutive retries to access
the pack fail repeatedly.

Bug: 513435
Change-Id: I03c6f6891de3c343d3d517092eaa75dba282c0cd
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-15 23:43:39 +01:00
Andrey Loskutov a4b9c73391 Don't try to strip new line if the message buffer is empty
Bug: 513726
Change-Id: I0e7c19f8883b93bad1b9de166f671d28f3e9c240
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
2017-03-15 20:29:21 +01:00
David Pursehouse 2fe1a3abbe FetchCommand: Fix detection of submodule recursion mode
The submodule.name.fetchRecurseSubmodules value was being read from the
configuration of the submodule, but it should be read from the config
of the parent repository.

Also, the fetch.recurseSubmodules value from the parent repository's
configuration was not being considered at all.

Fix both of these and add tests. Now the precedence of the recurse mode
is determined as follows:

 1. Value passed to the API
 2. Value configured in submodule.name.fetchRecurseSubmodules
 3. Value configured in fetch.recurseSubmodules
 4. Default to "on demand"

Change-Id: Ic23b7c40b5f39135fb3fd754c597dd4bcc94240c
2017-03-10 13:17:39 +09:00
Matthias Sohn 79f85d1cf2 Prepare 4.6.2-SNAPSHOT builds
Change-Id: I8835f79145e6a989787d47322c3d8cb9baf0624a
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-07 20:49:24 +01:00
Matthias Sohn 258dc5a715 JGit v4.6.1.201703071140-r
Change-Id: I842dc95313e5b47b0b7ec983c4a0a91915ed4183
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-07 17:40:32 +01:00
David Pursehouse 503d59044f FetchCommand: Add basic support for recursing into submodules
Extend FetchCommand to expose a new method, setRecurseSubmodules(mode),
which allows setting the mode to YES, NO or ON_DEMAND.

After fetching a repository, its submodules are recursively fetched:

- When the mode is YES, submodules are always fetched.

- When the mode is NO, submodules are not fetched.

- When the mode is ON_DEMAND, submodules are only fetched when the
  parent repository receives an update of the submodule and the new
  revision is not already in the submodule.

The mode is determined in the following order of precedence:

- Value specified in the API call using setRecurseSubmodules.

- Value specified in the repository's config under the key
  submodule.name.fetchRecurseSubmodules

- Defaults to ON_DEMAND if neither of the previous is set.

Extend FetchResult to recursively include results for submodules, as
a map of the submodule path to an instance of FetchResult.

Test setup is based on testCloneRepositoryWithNestedSubmodules.
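
A hedged sketch of the new setter; the enum location
(SubmoduleConfig.FetchRecurseSubmodulesMode) is an assumption, and the
working tree path is a placeholder:

  import java.io.File;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.lib.SubmoduleConfig.FetchRecurseSubmodulesMode;

  class FetchWithSubmodules {
    static void fetchRecursively(File workTree) throws Exception {
      try (Git git = Git.open(workTree)) {
        git.fetch()
            .setRecurseSubmodules(FetchRecurseSubmodulesMode.YES)
            .call();
      }
    }
  }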

Change-Id: Ibc841683763307cb76e78e142e0da5b11b1add2a
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-03-04 09:31:16 +09:00
David Pursehouse d4895c7160 Remove unnecessary @SuppressWarnings("nls")
Change-Id: Idc5f82af17ecc944b5657b02823412ea46b38413
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-03-04 09:27:14 +09:00
Thomas Wolf 1f3e74ed9f Make Repository.normalizeBranchName less strict
This operation was added recently with the goal to provide some
way to auto-correct invalid user input, or to provide a correction
suggestion to the user -- EGit uses it now that way. But the initial
implementation was very restrictive; it removed all non-ASCII
characters and even slashes.

Understandably end users were not happy with that. Git has no such
restriction to ASCII-only; nor does JGit. Branch names should be
meaningful to the end user, and if a user-supplied branch name is
invalid for technical reasons, a "normalized" name should still
be meaningful to the user.

Rewrite to attempt a minimal fix such that the result will pass
isValidRefName.

* Replace all Unicode whitespace by underscore.
* Replace troublesome special characters by dash.
* Collapse sequences of underscores, dots, and dashes.
* Remove underscores, dots, and dashes following slashes, and
  collapse sequences of slashes.
* Strip leading and trailing sequences of slashes, dots, dashes,
  and underscores.
* Avoid the ".lock" extension.
* Avoid the Windows reserved device names.
* If the input name is null, return an empty String so callers don't
  need to check for null.

This still allows branch names with single slashes as separators
between components, avoids some pitfalls that isValidRefName() tests
for, and leaves other characters untouched and thus allows non-ASCII
branch names.

Also move the function from the bottom of the file up to where
isValidRefName is implemented.
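
An illustrative sketch of the intended use; the fallback value is
hypothetical and not part of the API:

  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.Repository;

  class BranchNameSuggestion {
    static String suggest(String userInput) {
      String normalized = Repository.normalizeBranchName(userInput);
      if (!normalized.isEmpty()
          && Repository.isValidRefName(Constants.R_HEADS + normalized)) {
        return normalized;
      }
      return "new-branch"; // hypothetical fallback for unusable input
    }
  }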

Bug: 512508
Change-Id: Ia0576d9b2489162208c05e51c6d54e9f0c88c3a7
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-04 00:23:42 +01:00
Jonathan Nieder 45f62576de Merge "SHA-1: collision detection support" 2017-03-02 13:26:45 -05:00
Shawn Pearce 83ad74b6b9 SHA-1: collision detection support
Update SHA1 class to include a Java port of sha1dc[1]'s ubc_check,
which can detect the attack pattern used by the SHAttered[2] authors.

Given the shattered example files that have the same SHA-1, this
modified implementation can identify there is risk of collision given
only one file in the pair:

  $ jgit ...
  [main] WARN org.eclipse.jgit.util.sha1.SHA1 - SHA-1 collision 38762cf7f55934b34d179ae6a4c80cadccbb7f0a

When JGit detects probability of a collision the SHA1 class now warns
on the logger, reporting the object's SHA-1 hash, and then throws a
Sha1CollisionException to the caller.
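
A minimal sketch of hashing raw bytes with this class; the detection
behaviour described above surfaces when the digest is computed:

  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.util.sha1.SHA1;

  class CollisionDetectingHash {
    static ObjectId hash(byte[] data) throws Exception {
      SHA1 md = SHA1.newInstance();
      md.update(data);
      // A detected collision pattern is reported as a Sha1CollisionException.
      return md.toObjectId();
    }
  }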

From the paper[3] by Marc Stevens, the probability of a false positive
identification of a collision is about 14 * 2^(-160), sufficiently low
for any detected collision to likely be a real collision.

git-core[4] may adopt sha1dc before the system migrates to an entirely
new hash function.  This commit enables JGit to remain compatible with
that move to sha1dc, and help protect users by warning if similar
attacks as SHAttered are identified.

Performance declined about 8% (detection off), now:

  MessageDigest        238.41 MiB/s
  MessageDigest        244.52 MiB/s
  MessageDigest        244.06 MiB/s
  MessageDigest        242.58 MiB/s

  SHA1                 216.77 MiB/s (was ~240.83 MiB/s)
  SHA1                 220.98 MiB/s
  SHA1                 221.76 MiB/s
  SHA1                 221.34 MiB/s

This decline in throughput is attributed to the step loop unrolling in
compress(), which was necessary to easily fit the UbcCheck logic into
the hash function.  Using helper functions s1-s4 reduces the code
explosion, providing acceptable throughput.

With detection enabled (default):

  SHA1 detectCollision 180.12 MiB/s
  SHA1 detectCollision 181.59 MiB/s
  SHA1 detectCollision 181.64 MiB/s
  SHA1 detectCollision 182.24 MiB/s

  sha1dc (native C)   ~206.28 MiB/s
  sha1dc (native C)   ~204.47 MiB/s
  sha1dc (native C)   ~203.74 MiB/s

Average time across 100,000 calls to hash 4100 bytes (such as a commit
or tree) for the various algorithms available to JGit also shows SHA1
is slower than MessageDigest, but by an acceptable margin:

  MessageDigest        17 usec
  SHA1                 18 usec
  SHA1 detectCollision 22 usec

Time to index-pack for git.git (217982 objects, 69 MiB) has increased:

  MessageDigest   SHA1 w/ detectCollision
  -------------   -----------------------
         20.12s   25.25s
         19.87s   25.48s
         20.04s   25.26s

    avg  20.01s   25.33s    +26%

Being implemented in Java with these additional safety checks is
clearly a penalty, but throughput is still acceptable given the
increased security against object name collisions.

[1] https://github.com/cr-marcstevens/sha1collisiondetection
[2] https://shattered.it/
[3] https://marc-stevens.nl/research/papers/C13-S.pdf
[4] https://public-inbox.org/git/20170223230621.43anex65ndoqbgnf@sigill.intra.peff.net/

Change-Id: I9fe4c6d8fc5e5a661af72cd3246c9e67b1b9fee6
2017-02-28 16:38:43 -08:00
Matthias Sohn 9d2a7de65e Silence API error caused by changed return type of digest()
Change-Id: Ic0810ed7fea837c45cbc9a4649ca51d140bad6e6
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-03-01 00:34:59 +01:00
Magnus Vigerlöf 2a5d20c138 Correct the boolean logic for filtering paths
The TreeWalk filtering classes need to support the three different
meanings of the return value the path comparison generates.
A new path comparison method (isPathMatch) is created with
three distinct return values (isPathPrefix uses the value '0' to
encode two of these), which makes it possible for the logical
operators (especially NOT) to aggregate a correct verdict.

A filter like: AND(Path("path"), NOT(Path("path/to/other")))
Should filter out 'path/to/other/file', but not 'path/to/my/file'.

The path-limiting feature, when testing path/to/my/file, would
result in running the test for the following paths:

    path
    path/to
    path/to/my
    path/to/my/file

isPathPrefix('path/to/other') will return '0' for the first two
and since there is no way for NOT to distinguish between an exact
match and a match indicating that the tested path is a 'parent',
it will incorrectly return false and thus remove everything below
'path' immediately.
isPathMatch has a distinguished value for 'parent' matches that
will be preserved through the logic operators and should not
cause an over-eager removal of paths.

The functionality of isPathPrefix is required by other parts
and is untouched.

Unit tests are included to ensure that the logical functionality
is correct and can be preserved.

Change-Id: Ice2ca9406f09f1b179569e99b86a0e5d77baa20d
Signed-off-by: Magnus Vigerlöf <magnus.vigerlof@gmail.com>
2017-02-28 23:56:33 +01:00
Shawn Pearce 1bf7d3f290 SHA1: support reset() and reuse instances
Allow SHA1 instances to be reused to compute another hash value, and
resume caching them in ObjectInserter and PackParser.  This shaves a
small amount of running time off parsing git.git's pack file:

  before   after
  ------   ------
  25.25s   25.55s
  25.48s   25.06s
  25.26s   24.94s

Almost noise (small difference), but recycling the instances reduces
some stress on the memory allocator finding two 80 word message block
arrays needed for hashing and collision detection.

Change-Id: I4af88a720e81460293bc5c5d1d3db1a831e7e228
2017-02-26 15:26:53 -08:00
Shawn Pearce 0f25f64d48 Switch to pure Java SHA1 for ObjectId
Generate names for objects using only the pure Java SHA1
implementation, but continue using MessageDigest in tests.
This opens the possibility of changing the hashing function
to incorporate additional safety measures, such as those
used in sha1dc[1].

Since MessageDigest has higher throughput, continue using
MessageDigest for computing pack, idx and DirCache trailers.
These are less likely to be sensitive to SHAttered[2] types
of attacks, as Git uses them to detect random bit flips
during transfer, and not for content identity.

[1] https://github.com/cr-marcstevens/sha1collisiondetection
[2] https://shattered.it/

Change-Id: If6da98334201f7f20cb916e46f782c45f373784e
2017-02-26 11:16:19 -08:00
Shawn Pearce 982f5d1bf1 Pure Java SHA-1
This implementation is derived straight from the description written
in RFC 3174.  On Mac OS X with Java 1.8.0_91 it offers similar
throughput as MessageDigest SHA-1:

  system   239.75 MiB/s
  system   244.71 MiB/s
  system   245.00 MiB/s
  system   244.92 MiB/s

  sha1     234.08 MiB/s
  sha1     244.50 MiB/s
  sha1     242.99 MiB/s
  sha1     241.73 MiB/s

This is the fastest implementation I could come up with.  Common SHA-1
implementation tricks such as unrolling loops creates a method too
large for the JIT to effectively optimize, resulting in lower overall
hashing throughput. Using a preprocessor to perform the register
renaming of A-E also didn't help, as again the method was too large
for the JIT to effectively optimize.

Fortunately the fastest version is a naive, straight-forward
implementation very close to the description in RFC 3174.

Change-Id: I228b05c4a294ca2ad51386cf0e47978c68e1aa42
2017-02-26 11:16:19 -08:00
David Pursehouse 3b4448637f Enable and fix warnings about redundant specification of type arguments
Since the introduction of generic type parameter inference in Java 7,
it's not necessary to explicitly specify the type of generic parameters.

Enable the warning in Eclipse, and fix all occurrences.
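
For illustration, the kind of change this warning drives (hypothetical
code, not taken from the JGit tree):

  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  class DiamondExample {
    List<String> before = new ArrayList<String>();      // flagged: redundant type arguments
    Map<String, List<String>> after = new HashMap<>();  // inferred since Java 7
  }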

Change-Id: I9158caf1beca5e4980b6240ac401f3868520aad0
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-20 22:47:23 +01:00
Shawn Pearce 07fdc50c07 Fix bad test fix from 0bff481 "Limit receive commands"
In 0bff481d45, to accurately use the two
limits it was necessary to move the LimitedInputStream out of the
PacketLineIn and further down to the PackParser. Unfortunately this
didn't survive review, as a buggy test failed and the "fix" was to
drop this part of the code.

The maxPackSizeLimit should apply to the pack stream, not the pkt-line
framing used to send commands to control the ReceivePack instance. The
commands are controlled using a different limit. The failing test allowed
too many bytes in the pack and was only failing because it was including
the command framing. The correct fix for the test was simply to drop the
limit lower, to more closely match the actual pack size.

Change-Id: I47d3885b9d7d527e153df7ac9c62fc2865ceecf4
2017-02-20 10:51:27 -08:00
David Pursehouse fceac7e44d Add some more missing @Override annotations
Change-Id: Ic13160920b986edde87c928c473240cc9c034f50
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-20 11:32:22 +09:00
David Pursehouse 7ac182f4e4 Enable and fix 'Should be tagged with @Override' warning
Set missingOverrideAnnotation=warning in Eclipse compiler preferences
which enables the warning:

  The method <method> of type <type> should be tagged with @Override
  since it actually overrides a superclass method

Justification for this warning is described in:

  http://stackoverflow.com/a/94411/381622

Enabling this causes in excess of 1000 warnings across the entire
code-base. They are very easy to fix automatically with Eclipse's
"Quick Fix" tool.

Fix all of them except 2 which cause compilation failure when the
project is built with mvn; add TODO comments on those for further
investigation.

Change-Id: I5772061041fd361fe93137fd8b0ad356e748a29c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-19 20:05:08 -04:00
Thomas Wolf 0a4cf573d3 Fix typo in @since
Change-Id: I266b0c72d2827bcf2b86ddc6c1892d1a46c548eb
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
2017-02-19 16:46:44 +01:00
David Pursehouse 1cda4faed4 PullCommand: Allow to set tag behavior
Add a new method setTagOpt which sets the annotated tag behavior during
fetch. Pass the option to the fetch command.

No explicit tests are added; the fetch with tags functionality is already
covered by the tests of the fetch command.
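
A small sketch of the new option; the working tree path is a placeholder:

  import java.io.File;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.transport.TagOpt;

  class PullWithTags {
    static void pullFetchingAllTags(File workTree) throws Exception {
      try (Git git = Git.open(workTree)) {
        git.pull()
            .setTagOpt(TagOpt.FETCH_TAGS) // always fetch annotated tags
            .call();
      }
    }
  }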

Change-Id: I131e1f68d8fcced178d8fa48abf7ffab17f8e173
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-18 15:21:26 +01:00
Naoki Takezoe 1448ec37f9 Set commit time to ZipArchiveEntry
Archived zip files for the same commit have different MD5 hashes because
the mtime and mdate in the header of zip entries are not specified. In
this case, Commons Compress sets an archiving time.

In the original git implementation, the commit time is set:
e2b2d6a172/archive.c (L378)

With this fix, the archive command sets the commit time on the
ZipArchiveEntry when a RevCommit is given as the archiving target.

Change-Id: I30dd8710e910cdf42d57742f8709e9803930a123
Signed-off-by: Naoki Takezoe <takezoe@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-18 10:47:27 +01:00
David Turner d3962fef6b GC: don't loosen doomed objects
If the pruneexpire config is set to "now", then any unreferenced loose
objects are immediately eligible for gc.  So there is no need to
actually write the loose objects.

Users who run hosting services which sometimes accept large, entirely
garbage packs might set the following configurations:

gc.pruneExpire = now
gc.prunePackExpire = 2.weeks

Then garbage objects will be kept around in packs, but after two weeks
the packs themselves will get deleted.

For client-side users of jgit, the default settings will loosen
garbage objects, and, after an hour, delete the old packs in which
they resided.
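
A hedged sketch of persisting the settings above through JGit's config
API; the method name and repository variable are illustrative:

  import java.io.IOException;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.lib.StoredConfig;

  class HostingGcConfig {
    static void keepGarbageInPacks(Repository repo) throws IOException {
      StoredConfig cfg = repo.getConfig();
      cfg.setString("gc", null, "pruneExpire", "now");
      cfg.setString("gc", null, "prunePackExpire", "2.weeks");
      cfg.save();
    }
  }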

Change-Id: I8f686ac60b40181b1ee92ac6c313c3f33b55c44c
Signed-off-by: David Turner <dturner@twosigma.com>
2017-02-17 11:26:09 -05:00
Jonathan Nieder b537e372c9 Update name of InsecureCipherMode error-prone pattern
Without this, using bazel 0.4.4 to build fails:

 ERROR: jgit/org.eclipse.jgit/BUILD:29:1: Java compilation in rule '//org.eclipse.jgit:insecure_cipher_factory' failed: Worker process sent response with exit code: 1.
 jgit/src/org/eclipse/jgit/transport/InsecureCipherFactory.java:63: error: [InsecureCryptoUsage] Insecure usage of a crypto API: the transformation is not a compile-time constant expression.
                return Cipher.getInstance(algo);
                                         ^
    (see http://errorprone.info/bugpattern/InsecureCryptoUsage)

Change-Id: I7f9a3a5117e42cb68544674f5312df0368aa3674
2017-02-15 16:01:42 -08:00
Zhen Chen 87d81a7301 Add missing skip garbage pack logic in DfsReader
* Missing garbage pack check in getObjectSize(AnyObjectId, int)
* Missing `last` pack check in has(AnyObjectId) and open(AnyObjectId,
int)

Change-Id: Idd1b9dd8db34c92d7da546fef1936ec9b2728718
Signed-off-by: Zhen Chen <czhen@google.com>
2017-02-15 15:40:04 -08:00
Zhen Chen ff852dad51 Skip first pack if avoid garbage is set and it is a garbage pack
At the beginning of the OBJECT_SCAN loop, it first checks if the object
exists in the last pack; however, it forgot to avoid garbage packs on
the first iteration.

Change-Id: I8a99c0f439218d19c49cd4dae891b8cc4a57099d
Signed-off-by: Zhen Chen <czhen@google.com>
2017-02-13 20:54:35 -04:00
Zhen Chen 8dd5b644dc Refactor skip garbage pack logic into a method
There are multiple places in DfsReader that skip a garbage pack if both
of the following conditions are satisfied:

* The AvoidUnreachable flag is set
* The pack is a garbage pack

Refactor them into a shared private method.

Change-Id: I67d6bb601db55f904437c807c6a3c36f0a723265
Signed-off-by: Zhen Chen <czhen@google.com>
2017-02-13 15:33:23 -08:00
Shawn Pearce 0bff481d45 Limit receive commands
Place a configurable upper bound on the amount of command data
received from clients during `git push`.  The limit is applied to the
encoded wire protocol format, not the JGit in-memory representation.
This allows clients to flexibly use the limit; shorter reference names
allow for more commands, longer reference names permit fewer commands
per batch.

Based on data gathered from many repositories at $DAY_JOB, the average
reference name is well under 200 bytes when encoded in UTF-8 (the wire
encoding).  The new 3 MiB default receive.maxCommandBytes allows about
11,155 references in a single `git push` invocation.  A Gerrit Code
Review system with six-digit change numbers could still encode 29,399
references in the 3 MiB maxCommandBytes limit.
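
A hedged sketch of adjusting the limit for a server-side repository via
the config API; the 10 MiB value is only an example:

  import java.io.IOException;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.lib.StoredConfig;

  class ReceiveLimits {
    static void allowLargerPushes(Repository repo) throws IOException {
      StoredConfig cfg = repo.getConfig();
      cfg.setLong("receive", null, "maxCommandBytes", 10L * 1024 * 1024);
      cfg.save();
    }
  }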

Change-Id: I84317d396d25ab1b46820e43ae2b73943646032c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-11 00:20:36 +01:00
David Pursehouse 1834421a7f BlameGenerator: Annotate #getRenameDetector as Nullable
The renameDetector member returned by this method will be null when
following file renames has been disabled by previously calling:

  setFollowFileRenames(false).

Annotate it as @Nullable and update the Javadoc to explicitly
document the null return.

Change-Id: I9bdf443a64cf3c45352d3ab023051a2e11f7426d
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-09 22:40:56 +01:00
David Pursehouse d9d8c507a4 RefLeaseSpec: Fix Eclipse errors
- Remove unused import

- Remove unused private constructor

- Add Javadoc for public constructor

Change-Id: I1253e9fe863ca0f63182461ee87357fbf726ea2e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-09 15:10:15 +09:00
Shawn Pearce 8fce17a995 Merge "push: support per-ref force-with-lease" 2017-02-08 22:27:06 -05:00
David Turner 46d35a8502 push: support per-ref force-with-lease
When rebasing, force-pushing has a race condition: someone else might
have pushed a commit since the one you just rewrote. The force-with-lease
option prevents this by ensuring that the ref's old value is the one
that you expected.
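
A hedged sketch of the per-ref lease API; the branch name and expected
old value are placeholders, and the exact setter shape is assumed from
this description:

  import java.io.File;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.transport.RefLeaseSpec;
  import org.eclipse.jgit.transport.RefSpec;

  class ForcePushWithLease {
    static void push(File workTree, String expectedOldId) throws Exception {
      try (Git git = Git.open(workTree)) {
        git.push()
            .setRefSpecs(new RefSpec("+refs/heads/topic:refs/heads/topic"))
            .setRefLeaseSpecs(new RefLeaseSpec("refs/heads/topic", expectedOldId))
            .call();
      }
    }
  }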

Change-Id: I97ca9f8395396c76332bdd07c486e60549ca4401
Signed-off-by: David Turner <dturner@twosigma.com>
2017-02-08 19:42:33 -05:00
Shawn Pearce 6450d956bc Assume GC_REST and GC_TXN also attempted deltas during packing
In a DFS repository the DfsGarbageCollector will typically attempt
delta compression while creating the three main pack files: GC,
GC_REST and GC_TXN. Include all of these in the wasDeltaAttempted()
decision so that future packers can bypass delta compression of
non-delta objects.

Change-Id: Ic2330c69fab0c494b920b4df0a290f3c2e1a03d7
2017-02-08 15:34:00 -08:00
Shawn Pearce d67b183537 Prefer smaller GC files during DFS garbage collection
In 8ac65d33ed PackWriter changed its
behavior to always prefer the last object representation presented
to it by the ObjectReuseAsIs implementation. This was a fix to avoid
delta chain cycles.

Unfortunately it can lead to suboptimal compression when concurrent
GCs are run on the same repository. One case is automatic GC running
(with default settings) in parallel to a manual GC that has disabled
delta reuse in order to generate new smaller deltas for the entire
history of the repository.

Running GC with no-reuse generally requires more CPU time, which
also translates to a longer running time.  This can lead to a race
where the automatic GC completes before the no-reuse GC, leaving
the repository in a state such as:

  no-reuse GC:   size 1 GiB, mtime = 18:45
  auto GC:       size 8 GiB, mtime = 17:30

With the default sort ordering, the smaller no-reuse GC pack is
sorted earlier in the pack list, due to its more recent mtime.

During object reuse in a future GC, these smaller representations
are considered first by PackWriter, but are all discarded when the
auto GC file from 17:30 is examined second (due to its older mtime).

Work around this in two ways.

Well formed DFS repositories should have at most 1 GC pack. If
2 or more GC packs exist, break the sorting tie by selecting the
smaller file earlier in the pack list. This allows all normal read
code paths to favor the smaller file, which places less pressure
on the DfsBlockCache. If any GC race happens, readers serving clone
requests will prefer the file that is smaller.

During object reuse, flip this ordering so that the smaller file is
last. This allows PackWriter to see smaller deltas last, replacing
larger representations that were previously considered from other
pack files.

Change-Id: I0b7dc8bb9711c82abd6bd16643f518cfccc6d31a
2017-02-08 14:37:12 -08:00
Shawn Pearce 61d4922928 Fix missing deltas near type boundaries
Delta search was discarding discovered deltas if an object appeared
near a type boundary in the delta search window. This has caused JGit
to produce larger pack files than other implementations of the packing
algorithm.

Delta search works by pushing prior objects into a search window, an
ordered list of objects to attempt to delta compress the next object
against. (The window size is bounded, avoiding O(N^2) behavior.)

For implementation reasons multiple object types can appear in the
input list, and the window. PackWriter commonly passes both trees and
blobs in the input list handed to the DeltaWindow algorithm. The pack
file format requires an object to only delta compress against the same
type, so the DeltaWindow algorithm must stop doing comparisons if a
blob would be compared to a tree.

Because the input list is sorted by object type and the window is
recently considered prior objects, once a wrong type is discovered in
the window the search algorithm stops and uses the current result.

Unfortunately the termination condition was discarding any found
delta by setting deltaBase and deltaBuf to null when it was trying
to break the window search.

When this bug occurs, the state of the DeltaWindow looks like this:

                                 current
                                  |
                                 \ /
  input list:  tree0 tree1 blob1 blob2

  window:      blob1 tree1 tree0
                / \
                 |
              res.prev

As the loop iterates to the right across the window, it first finds
that blob1 is a suitable delta base for blob2, and temporarily holds
this in the bestDelta/deltaBuf fields. It then considers tree1, but
tree1 has the wrong type (blob != tree), so the window loop must give
up and fall through the remaining code.

Moving the condition up and discarding the window contents allows
the bestDelta/deltaBuf to be kept, letting the final file delta
compress blob1 against blob0.

The impact of this bug (and its fix) on real world repositories is
likely minimal. The boundary from blob to tree happens approximately
once in the search, as the input list is sorted by type. Only the
first window size worth of blobs (e.g. 10 or 250) were failing to
produce a delta in the final file.

This bug fix does produce significantly different results for small
test repositories created in the unit test suite, such as when a pack
may contain 6 objects (2 commits, 2 trees, 2 blobs).  Packing test
cases can now better sample different output pack file sizes depending
on delta compression and object reuse flags in PackConfig.

Change-Id: Ibec09398d0305d4dbc0c66fce1daaf38eb71148f
2017-02-08 14:36:24 -08:00
Shawn Pearce 12c8462602 Merge "Reintroduce garbage pack coalescing when ttl > 0." 2017-02-08 00:23:40 -05:00
Thirumala Reddy Mutchukota 006f4d4d29 Reintroduce garbage pack coalescing when ttl > 0.
Disabling the garbage pack coalescing when garbageTtl > 0 can result in
a lot of garbage packs if they are created within the garbageTtl time.

To avoid a large number of garbage packs, re-introduce garbage pack
coalescing for the packs that are created within a single calendar day
(when the garbageTtl is more than one day) or within one third of the
garbageTtl.

Change-Id: If969716aeb55fb4fd0ff71d75f41a07638cd5a69
Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>
2017-02-07 20:34:31 -08:00
David Pursehouse 5336a07386 Merge "Branch normalizer should not normalize already valid branch names" 2017-02-07 07:31:06 -05:00
Matthias Sohn 08480c948c [infer] Fix ObjectWalk leak in PackWriter.preparePack()
Change-Id: I5d2455404e507faa717e9d916e9b6cd80aa91473
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-07 00:50:09 +01:00
Matthias Sohn f8d232213c Branch normalizer should not normalize already valid branch names
Change-Id: Ib746655e32a37c4ad323f1d12ac0817de8fa56cf
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-07 00:24:39 +01:00
Bo Zhang d4bd09b78d Follow redirects in transport
Bug: 465167
Change-Id: I6da19c8106201c2a1ac69002bd633b7387f25d96
Signed-off-by: Bo Zhang <zhangbodut@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-02 21:20:23 -04:00
Matthias Sohn 566794d001 Merge branch 'stable-4.6'
* stable-4.6:
  GC: delete empty directories after purging loose objects
  GC.prune(Set<ObjectId>): return early if objects directory is empty

Change-Id: I3d6cacf80d3b4c69ba108e970855963bd9f6ee78
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-02 23:36:28 +01:00
Matthias Sohn 18cda3888c GC: delete empty directories after purging loose objects
In order to limit the number of directories we check for emptiness, only
consider fanout directories which contained unreferenced loose objects
we deleted in the same gc run.

Change-Id: Idf8d512867ee1c8ed40bd55752122ce83a98ffa2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
2017-02-01 23:44:07 +01:00
David Pursehouse b20f7d610e Organize imports
Change-Id: I97044f69d220fc2d3f9fe890fdfec542454f02d2
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
2017-02-01 14:31:44 +09:00
Hongkai Liu a33663fd4e Detect stale-file-handle error in causal chain
Cover the case where the exception is wrapped up as a
cause, e.g., PackIndex#open(File).

Change-Id: I0df5b1e9c2ff886bdd84dee3658b6a50866699d1
Signed-off-by: Hongkai Liu <hongkai.liu@ericsson.com>
2017-01-30 22:36:59 -04:00