We observed the following issue:
$ git tag -a v1.0 -m v1.0
$ git push origin tag v1.0
$ git ls-remote origin v1.0^{}
... empty result ...
On the server (Gerrit) side, run git gc to pack the refs:
$ git gc
Repeat the ls-remote from the client and the result is correct:
$ git ls-remote origin v1.0^{}
7ad85c810de3ae922903d4bdd17c53cd627260ba refs/tags/v1.0^{}
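For context, "v1.0^{}" asks for the peeled object the annotated tag
points to. A minimal sketch of that lookup with JGit's API, assuming
repo is an opened FileRepository:

    Ref tag = repo.exactRef("refs/tags/v1.0");
    Ref peeled = repo.getRefDatabase().peel(tag);
    // For an annotated tag, getPeeledObjectId() is the tagged object;
    // this is the value advertised on the "refs/tags/v1.0^{}" line.
    System.out.println(peeled.getPeeledObjectId().name()
        + " refs/tags/v1.0^{}");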
Unfortunately, the existing UploadPackTest didn't reveal this issue
although it provided the test case needed to do so: testV2LsRefsPeel.
This is because the UploadPackTest uses InMemoryRepository, which
internally uses Dfs* implementations. The issue is only reproducible
when using the FileRepository.
Refactoring the UploadPackTest to work against both InMemoryRepository
and FileRepository is a non-trivial task, and this change does not
attempt it. Instead, it creates a new test,
UploadPackLsRefsFileRepositoryTest, and copies the necessary code from
the UploadPackTest.
Change-Id: Icfc7d0ca63f1524bafe24c9626ce12ea72aa3718
Signed-off-by: Saša Živkov <sasa.zivkov@sap.com>
* stable-5.8:
reftable: drop code for truncated reads
reftable: pass on invalid object ID in conversion
Update eclipse-jarsigner-plugin to 1.3.2
Change-Id: I9e66ef90dc9a65bac47b35705d679bf992bd72b9
* stable-5.7:
reftable: drop code for truncated reads
reftable: pass on invalid object ID in conversion
Update eclipse-jarsigner-plugin to 1.3.2
Change-Id: I88c47ff57f4829baec5b19aad3d8d6bd21f31a86
* stable-5.6:
reftable: drop code for truncated reads
reftable: pass on invalid object ID in conversion
Update eclipse-jarsigner-plugin to 1.3.2
Change-Id: I1c18f5f435f4a4a86e0548a310dbfc74191e1ed5
The reftable format is a block-based format, but allows for variably
sized blocks. This obviously happens for reflog blocks (which are zlib
compressed), but is also allowed for index blocks. The spec motivates
this as follows:
To achieve constant O(1) disk seeks for lookups the index must be
a single level, which is permitted to exceed the file's
configured block size, but not the format's max block size of
15.99 MiB.
Hence, when parsing a block, one cannot be sure of its exact size:
after reading a default-size block (e.g. 4 KiB), the block header may
state that the block is in fact larger.
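For illustration, a sketch of reading the true size from the 4-byte
block header (a 1-byte block type followed by a uint24 length, hence
the 15.99 MiB cap), assuming buf holds the bytes of the default-size
read and ignoring the file header embedded in the first block:

    byte type = buf[0];                     // 'r', 'o', 'i', 'g', ...
    int blockLen = ((buf[1] & 0xff) << 16)  // uint24: the block's real
        | ((buf[2] & 0xff) << 8)            // size, which may exceed
        | (buf[3] & 0xff);                  // the bytes read so far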
Before, the code would mark the block as `truncated`, noting
// Its OK during sequential scan for an index block to have been
// partially read and be truncated in-memory. This happens when
// the index block is larger than the file's blockSize. Caller
// will break out of its scan loop once it sees the blockType.
This looks like either
* a remnant of never-implemented functionality (there is no reason to
ever sequentially scan an index block), or
* an allusion to the sequential scan of the data blocks before the
index blocks (e.g. scanning refs, which ends when we find the first
ref index block, and we can then ignore the index block).
This comment is followed by code that populates the
restartTbl/restartCnt fields relative to the (possibly truncated)
buffer. If the buffer is truncated, this essentially reads garbage,
leading to OOB array access when using the index block.
Fix this by dropping the truncated logic and issuing a second read if
the first read was short.
Add a test.
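A minimal sketch of the fixed read path, with hypothetical readAt
(positional read) and decodeBlockLen (the uint24 decode shown earlier)
helpers; the real change is in JGit's reftable block reader:

    byte[] block = readAt(pos, blockSize);  // first, default-size read
    int blockLen = decodeBlockLen(block);
    if (blockLen > block.length) {
      // The header declares more bytes than we have: issue a second
      // read for the remainder instead of marking the block truncated.
      byte[] full = java.util.Arrays.copyOf(block, blockLen);
      byte[] rest = readAt(pos + block.length, blockLen - block.length);
      System.arraycopy(rest, 0, full, block.length, rest.length);
      block = full;
    }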
We have never observed this failure scenario at Google. We use a
64 KiB block size, which means we need fewer index entries. The
reftable spec mentions an Android repo of size 36M. With 64 KiB
blocks, that is just 562 index entries. Even with historical growth,
we are far from requiring an index whose size exceeds a single block.
When adding the analogous test for seeking refs, there was no failure.
This points to another possibility: the code may try to avoid writing
large index blocks for refs. I did not investigate which one it is.
Fixes https://bugs.eclipse.org/bugs/show_bug.cgi?id=576250
Bug: 576250
Change-Id: I41ec21fac9e526ef57b3d6fb57b988bd353ee338
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Before, while trying to determine whether an object ID was a tag, the
reftable conversion would throw an exception.
Change-Id: I3688a0ffa9e774ba27f320e3840ff8cada21ecf0
* stable-5.8:
[test] Create keystore with the keytool of the running JDK
ReachabilityCheckerTestCase: fix reachable from self test case
Change-Id: If09cbb877c674f15261715cecef7a2393bf66fa3
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* stable-5.7:
[test] Create keystore with the keytool of the running JDK
ReachabilityCheckerTestCase: fix reachable from self test case
Change-Id: I32010c6bf45d5138e17143d6c284ac56434eade1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* stable-5.6:
[test] Create keystore with the keytool of the running JDK
ReachabilityCheckerTestCase: fix reachable from self test case
Change-Id: I1f6b4fc26f6ee6f22cc0aacd032c1e73ba246dbc
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
When reading loose objects over NFS, it is possible that the OS
syscall fails with an ESTALE error: this happens when the open file
descriptor no longer refers to a valid file.
Notoriously, this scenario can be hit when git data is shared among
multiple clients, for example by multiple Gerrit instances in HA. If
one of the clients performs a GC operation that packs and then prunes
loose objects, the other client might still hold a reference to those
objects, which would cause an exception to bubble up the stack.
The Linux NFS FAQ[1] (at point A.10), suggests that the proper way to
handle such ESTALE scenarios is to:
"[...] close the file or directory where the error occurred, and reopen
it so the NFS client can resolve the pathname again and retrieve the new
file handle."
In case of a stale file handle exception, we now attempt to read the
loose object again (up to 5 times), until we either succeed or
encounter a FileNotFoundException, in which case the search can
continue in packfiles and alternates.
The limit of 5 provides an arbitrary upper bound that is consistent
with the one chosen when handling stale file handles for packed-refs
files (see [2] for context).
[1] http://nfs.sourceforge.net/
[2] https://git.eclipse.org/r/c/jgit/jgit/+/54350
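A minimal sketch of the retry loop (the names and the message-based
ESTALE check are illustrative; the real implementation lives in JGit's
loose-object reading code):

    final class LooseObjectReads {
      private static final int MAX_STALE_RETRIES = 5;

      static byte[] read(java.nio.file.Path path)
          throws java.io.IOException {
        for (int attempt = 0; ; attempt++) {
          try {
            // Each attempt reopens the file, letting the NFS client
            // resolve the pathname again and get a fresh file handle.
            return java.nio.file.Files.readAllBytes(path);
          } catch (java.nio.file.NoSuchFileException gone) {
            // Really gone: the caller moves on to packs/alternates.
            throw gone;
          } catch (java.io.IOException e) {
            if (!isStaleFileHandle(e) || attempt >= MAX_STALE_RETRIES)
              throw e;
            // ESTALE: loop and reopen, up to the bound above.
          }
        }
      }

      private static boolean isStaleFileHandle(java.io.IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Stale file handle");
      }
    }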
Bug: 573791
Change-Id: I9950002f772bbd8afeb9c6108391923be9d0ef51
Update tests to record the number of events fired post-setup and only
assert on events fired during BatchRefUpdate.execute. For tests which
use writeLooseRef to set up refs, create new tests which assert the
number of RefsChangedEvent(s) rather than updating the existing ones
to call RefDirectory.exactRef, as that changes the code path.
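A sketch of the shape of such an assertion, using JGit's listener API
(diskRepo, batchUpdate and the expected count are illustrative):

    AtomicInteger refsChangedEvents = new AtomicInteger();
    ListenerHandle handle = diskRepo.getListenerList()
        .addRefsChangedListener(
            event -> refsChangedEvents.incrementAndGet());
    try {
      // Only events fired while execute() runs are counted.
      batchUpdate.execute(new RevWalk(diskRepo),
          NullProgressMonitor.INSTANCE);
    } finally {
      handle.remove();
    }
    assertEquals(1, refsChangedEvents.get());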
Change-Id: I0187811628d179d9c7e874c9bb8a7ddb44dd9df4
Signed-off-by: Kaushik Lingarkar <quic_kaushikl@quicinc.com>
Don't create the stream eagerly in lock(); that may cause JGit to
exceed OS or JVM limits on open file descriptors if many locks need to
be created, for instance when creating many refs. Instead, create the
output stream only when something actually needs to be written.
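A minimal sketch of the pattern, with hypothetical names (the real
class is JGit's LockFile):

    final class LazyLock {
      private final java.io.File lck;
      private java.io.FileOutputStream os; // not opened by lock()

      LazyLock(java.io.File lck) {
        this.lck = lck;
      }

      boolean lock() throws java.io.IOException {
        // Creating the lock file atomically takes the lock but keeps
        // no descriptor open, so creating many locks stays cheap.
        return lck.createNewFile();
      }

      java.io.OutputStream getOutputStream()
          throws java.io.IOException {
        if (os == null)
          os = new java.io.FileOutputStream(lck); // on first write
        return os;
      }
    }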
Bug: 573328
Change-Id: If9441ed40494d46f594a896d34a5c4f56f91ebf4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
This would allow other JGit users to access and reuse the constants.
Change-Id: I1608802f45586af5f8582afa592e26679e9cebe3
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If pack or index files are guarded by a pack lock (.keep file),
deleteOrphans() should not touch the files protected by that lock.
Otherwise it may interfere with PackInserter concurrently inserting a
new pack file and its index.
The problem was caused by the following race.
All mentioned files are located in "objects/pack/".
File endings relevant in "pack" dir:
.pack
.keep
.idx
.bitmap
When ReceivePack receives a pack file, it executes the following steps:
ReceivePack.service():
receivePackAndCheckConnectivity():
receivePack():
receive the pack
parse the pack, returns packLock (.keep file)
PackInserter.flush():
write tmpPack file: "insert_<random>.pack"
write tmpIdx file: "insert_<random>.idx"
real pack name: "pack-<SHA1>.pack"
real index name: "pack-<SHA1>.idx"
atomic rename tmpPack to realPack
atomic rename tmpIdx to realIdx
execute commands
unlock pack by removing .keep file
trigger auto gc if enabled
When PackInserter.flush() renames the temporary pack to the final
"pack-xxx.pack" file, the temporary pack index file "insert_xxx.idx"
has no matching .pack file with the same base name for a short
interval. If deleteOrphans() ran during that interval, it deduced that
the pack index file was orphaned and deleted it. Subsequently, the
missing pack index caused MissingObjectExceptions since objects
contained in the pack couldn't be looked up anymore.
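A sketch of the resulting guard, with simplified names: collect the
base names of all .keep files first and skip protected siblings when
deleting orphans:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;

    static void deleteOrphans(Path packDir) throws IOException {
      Set<String> kept = new HashSet<>();
      List<Path> all = new ArrayList<>();
      try (DirectoryStream<Path> dir =
          Files.newDirectoryStream(packDir)) {
        for (Path p : dir) {
          String name = p.getFileName().toString();
          all.add(p);
          if (name.endsWith(".keep"))
            kept.add(name.substring(0, name.length() - 5));
        }
      }
      for (Path p : all) {
        String name = p.getFileName().toString();
        if (name.endsWith(".pack") || name.endsWith(".keep"))
          continue; // packs and locks are never deleted here
        int dot = name.lastIndexOf('.');
        if (dot < 0)
          continue;
        String base = name.substring(0, dot);
        if (kept.contains(base))
          continue; // pack lock present: leave .idx/.bitmap alone
        if (!Files.exists(packDir.resolve(base + ".pack")))
          Files.deleteIfExists(p); // truly orphaned .idx/.bitmap
      }
    }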
Bug: https://bugs.chromium.org/p/gerrit/issues/detail?id=13544
Change-Id: I559c81e4b1d7c487f92a751bd78b987d32c98719
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>