When fetching GPG-signed commits over plain HTTP, a MitM attacker can
fill up a target's drive simply by returning an enormous stream for
the commit object.
Related to this, an attacker can also cause OSTree to perform large
memory allocations by returning enormous GVariants in the metadata.
This helps close that attack by limiting all metadata objects to 10
MiB, so the initial fetch will be truncated.
That said, the attack only becomes slightly more difficult: the
attacker now has to return a correctly formed commit object, then a
large stream of dirmeta/dirtree objects that are each under 10 MiB.
https://bugzilla.gnome.org/show_bug.cgi?id=725921
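
As an illustration of the size cap, here is a minimal C sketch. This is not
OSTree's actual pull code; the constant, function name, and plain-stdio I/O
are assumptions made only to show the idea of refusing any metadata object
whose stream exceeds the 10 MiB limit instead of buffering it all.

/* Sketch only: not OSTree's actual pull code.  Read a metadata object from
 * a stream, refusing anything larger than a fixed cap so a malicious server
 * cannot force unbounded disk or memory usage. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_METADATA_SIZE (10 * 1024 * 1024)   /* the 10 MiB cap */

/* Returns a malloc'd buffer (caller frees) and sets *out_len,
 * or NULL if the object is too large or cannot be read. */
static unsigned char *
read_metadata_capped (FILE *in, size_t *out_len)
{
  unsigned char *buf = malloc (MAX_METADATA_SIZE);
  if (buf == NULL)
    return NULL;

  size_t len = fread (buf, 1, MAX_METADATA_SIZE, in);

  /* If the stream still has data after the cap, treat the object as
   * malicious (or corrupt) and abort rather than keep downloading. */
  if (len == MAX_METADATA_SIZE && fgetc (in) != EOF)
    {
      fprintf (stderr, "metadata object exceeds %u bytes; rejecting\n",
               (unsigned) MAX_METADATA_SIZE);
      free (buf);
      return NULL;
    }

  *out_len = len;
  return buf;
}

int
main (void)
{
  size_t len = 0;
  unsigned char *data = read_metadata_capped (stdin, &len);
  if (data == NULL)
    return 1;
  printf ("read %zu bytes of metadata\n", len);
  free (data);
  return 0;
}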
This test was nondeterministic: we chose a random object to corrupt,
but because there were multiple commits, it was possible to pick an
object that was not being pulled at all.
Fix this by writing some custom GJS code that picks a random object
actually reachable from a given ref and changes a byte at a random
offset within it.
This adds a lot more randomness to the testing too.
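
As a rough illustration of the corruption step, here is a minimal C sketch.
The real test is written in GJS and first resolves an object that is actually
reachable from the ref being pulled; the path argument here is only a
stand-in for that object file.

/* Sketch only: flip one byte at a random offset inside the given file. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int
corrupt_random_byte (const char *object_path)
{
  FILE *f = fopen (object_path, "r+b");
  if (f == NULL)
    return -1;

  /* Pick a random offset within the file. */
  fseek (f, 0, SEEK_END);
  long size = ftell (f);
  if (size <= 0)
    {
      fclose (f);
      return -1;
    }
  long offset = rand () % size;

  /* Read the byte at that offset, then XOR it so it is guaranteed to change. */
  fseek (f, offset, SEEK_SET);
  int byte = fgetc (f);
  fseek (f, offset, SEEK_SET);
  fputc (byte ^ 0x01, f);

  fclose (f);
  return 0;
}

int
main (int argc, char **argv)
{
  srand ((unsigned) time (NULL));
  if (argc < 2)
    return 1;
  return corrupt_random_byte (argv[1]) == 0 ? 0 : 1;
}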
This uses gpgv for verification against DATADIR/ostree/pubring.gpg by
default. The keyring can be overridden by specifying OSTREE_GPG_HOME.
Add a unit test for signing a commit with a GPG key and verifying it
on pull; to implement this, we ship a test GPG key, generated with no
password, for Ostree Tester <test@test.com>.
Change all of the existing tests to disable GPG verification.
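
A minimal sketch of the keyring selection described above, assuming only
what the message states (OSTREE_GPG_HOME overrides a default location under
DATADIR); the DATADIR value and helper name are illustrative, not OSTree's
actual code.

/* Sketch only: pick the directory whose pubring.gpg is used for verification. */
#include <stdio.h>
#include <stdlib.h>

#define DATADIR "/usr/share"   /* placeholder; normally set at build time */

static const char *
get_gpg_home (char *buf, size_t buflen)
{
  /* OSTREE_GPG_HOME, if set, overrides the default keyring directory. */
  const char *env = getenv ("OSTREE_GPG_HOME");
  if (env != NULL && *env != '\0')
    return env;

  snprintf (buf, buflen, "%s/ostree", DATADIR);
  return buf;
}

int
main (void)
{
  char buf[256];
  printf ("GPG keyring directory: %s\n", get_gpg_home (buf, sizeof buf));
  return 0;
}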
These corruption tests could be a lot better: for example, randomly
trying single-bit flips or range flips, or, better still, content-aware
fuzzing. But this is useful for now.