Otherwise the `umount()` will always fail. This hasn't been a problem
so far while running in an external container (docker/systemd-nspawn),
but it is when running in `mock`, because mock doesn't set its mount
namespace to private.
This should help Fedora's Bodhi, which uses rpm-ostree inside mock.
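A minimal sketch of the underlying technique, assuming plain Linux
syscalls rather than the project's actual code: unshare the mount
namespace and mark it private so a later `umount()` isn't affected by
shared mount propagation from the host.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <sys/mount.h>
#include <stdio.h>
#include <stdlib.h>

static void
ensure_private_mount_namespace (void)
{
  /* Detach from the parent's mount namespace... */
  if (unshare (CLONE_NEWNS) < 0)
    {
      perror ("unshare(CLONE_NEWNS)");
      exit (EXIT_FAILURE);
    }
  /* ...and mark everything private so mount/umount events no longer
   * propagate to or from the original namespace. */
  if (mount (NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) < 0)
    {
      perror ("mount(MS_REC|MS_PRIVATE)");
      exit (EXIT_FAILURE);
    }
}
```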
Allows clients to see version, timestamp
and other detailed information along with
the rpm diffs for cached updates and rebases.
This will be used by the Cockpit interface.
Adds a CachedUpdate property that allows clients
to see version, timestamp and other detailed
information for pending updates. Additionally
changes to this property signal clients that a
new rpm-diff can be fetched with GetCachedUpdateRpmDiff.
This will be used by the Cockpit interface.
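A hedged client-side sketch of reading such a property with GDBus; the
bus name, object path and interface string below are assumptions for
illustration, not verified against the installed introspection data.

```c
#include <gio/gio.h>

int
main (void)
{
  g_autoptr(GError) error = NULL;
  g_autoptr(GDBusProxy) os_proxy =
    g_dbus_proxy_new_for_bus_sync (G_BUS_TYPE_SYSTEM,
                                   G_DBUS_PROXY_FLAGS_NONE,
                                   NULL,
                                   "org.projectatomic.rpmostree1",          /* assumed bus name */
                                   "/org/projectatomic/rpmostree1/example", /* assumed object path */
                                   "org.projectatomic.rpmostree1.OS",       /* assumed interface */
                                   NULL, &error);
  if (os_proxy == NULL)
    {
      g_printerr ("%s\n", error->message);
      return 1;
    }

  /* Per the description above: version, timestamp and other details
   * for the pending update (assumed to be an a{sv} dictionary). */
  g_autoptr(GVariant) cached_update =
    g_dbus_proxy_get_cached_property (os_proxy, "CachedUpdate");
  if (cached_update != NULL)
    {
      g_autofree char *dump = g_variant_print (cached_update, TRUE);
      g_print ("CachedUpdate: %s\n", dump);
    }
  return 0;
}
```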
The assert[_not]_file_has_content functions used grep to check the
pattern against the content, which meant that '.' characters were
interpreted as "any char". Yesterday the date was 20150910, so the
otheros tree's version was labelled as such. That date also happens to
match the 1.0.10 pattern, which caused the test to fail.
This patch makes sure this doesn't happen again by escaping all the dots
to make them literal.
During autoreconf, automake would emit many warnings regarding the
option 'subdir-objects' being disabled. We squash those warnings by
enabling the option.
We also fix Makefile.am so that it includes the patched libglnx Makefile
rather than the original one, which would cause libglnx output to be
placed in the literal dir './$(libglnx_srcpath)'.
During autoreconf, we would get a warning due to privdatadir being
defined in both Makefile-rpm-ostree.am and Makefile.am. Remove the
instance in Makefile.am.
With the errexit bash option turned on, these conditionals would never
actually be reached since the failure from `which` would cause the
script to exit.
As a result, if autoreconf was not installed, all the user would see
is the error message from `which`, not the pretty error we have for
them. Similarly, even though gtk-doc should be optional, the script
would fail if gtkdocize wasn't installed.
Also fix a minor typo.
Various OS "diff" methods can run concurrently with whatever else is
going on since they don't have to obtain the system root lock.
Just to make sure there are no conflicts when writing deployments or
downloading RPM package details, use an internal reader/writer lock
to protect the critical sections of upgrade, rebase, rollback, etc.
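A minimal sketch of the reader/writer idea using GLib's GRWLock; the
function names are illustrative, not the daemon's actual code.

```c
#include <glib.h>

static GRWLock sysroot_rwlock;

static void
query_rpm_diff (void)
{
  /* "diff" style queries may run concurrently with each other. */
  g_rw_lock_reader_lock (&sysroot_rwlock);
  /* ... read-only inspection of the deployments ... */
  g_rw_lock_reader_unlock (&sysroot_rwlock);
}

static void
deploy_upgrade (void)
{
  /* upgrade/rebase/rollback take the lock exclusively. */
  g_rw_lock_writer_lock (&sysroot_rwlock);
  /* ... write a new deployment; no readers observe partial state ... */
  g_rw_lock_writer_unlock (&sysroot_rwlock);
}
```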
Unfortunately RHEL 7 has an older version of dbus, and I use it as a
workstation. It's not a lot of code and only used for tests. We can
make it build time conditional down the line or something.
Create and load a new OstreeSysroot and OstreeRepo instance as needed.
This ensures its internal state is up-to-date, since several ostree
commands can alter stored state without the daemon's knowledge.
I would prefer keeping persistent instances if these issues can be
addressed, as it would eliminate some inconvenient error handling.
But this way is safer for now.
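Roughly, the "fresh instances" approach looks like the sketch below,
using the standard libostree entry points; the wrapper function name is
hypothetical and error handling is abbreviated.

```c
#include <ostree.h>

static gboolean
load_fresh_sysroot (GFile *sysroot_path,
                    OstreeSysroot **out_sysroot,
                    OstreeRepo **out_repo,
                    GCancellable *cancellable,
                    GError **error)
{
  /* Build a brand new OstreeSysroot rather than reusing a cached one,
   * so state changed by external `ostree` invocations is picked up. */
  g_autoptr(OstreeSysroot) sysroot = ostree_sysroot_new (sysroot_path);

  if (!ostree_sysroot_load (sysroot, cancellable, error))
    return FALSE;

  g_autoptr(OstreeRepo) repo = NULL;
  if (!ostree_sysroot_get_repo (sysroot, &repo, cancellable, error))
    return FALSE;

  *out_sysroot = g_steal_pointer (&sysroot);
  *out_repo = g_steal_pointer (&repo);
  return TRUE;
}
```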
Having the OS.Upgrade() and OS.Rebase() logic flows conflated in the
daemon had me nervous. A day's worth of debugging a failing test case
proved that nervousness well-founded. Split them into distinct backend
operations.
So the client side can read it back.
This replaces the GObject "sysroot-path" property in the wrapper class,
which required some additional refactoring in the daemon.
This closes a race condition where the objects might not be exported
by the time clients call methods.
Also delete the code in the "on name lost" handler - losing the name is
not going to happen in practice (we don't allow replacement), and the
handler causes issues since it may run before we get the notification
that the name is owned. github.com/cockpit-project/storaged has some
better code here which we could copy later.
This in turn allows us to delete the "hold"/"release" infrastructure.
Basically the daemon will live forever in the process.
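An illustrative sketch of the ordering that avoids the race, using
plain GDBus calls: export the interface skeleton in the bus-acquired
callback, which runs before the well-known name is acquired, so a
client that sees the name can already call methods. The function names
and object path here are assumptions, not the daemon's exact code.

```c
#include <gio/gio.h>

static void
on_bus_acquired (GDBusConnection *connection,
                 const char *name,
                 gpointer user_data)
{
  GDBusInterfaceSkeleton *sysroot_skeleton = user_data;
  g_autoptr(GError) error = NULL;

  /* Export before the name is announced, so no client can race us. */
  if (!g_dbus_interface_skeleton_export (sysroot_skeleton, connection,
                                         "/org/projectatomic/rpmostree1/Sysroot",
                                         &error))
    g_warning ("Failed to export Sysroot: %s", error->message);
}

static guint
start_daemon (GDBusInterfaceSkeleton *sysroot_skeleton)
{
  /* No name-lost handler: we do not allow replacement, so losing the
   * name is not expected in practice. */
  return g_bus_own_name (G_BUS_TYPE_SYSTEM,
                         "org.projectatomic.rpmostree1",
                         G_BUS_NAME_OWNER_FLAGS_NONE,
                         on_bus_acquired,
                         NULL,   /* name acquired */
                         NULL,   /* name lost */
                         sysroot_skeleton, NULL);
}
```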
If a client makes a request that is identical (that is, same method name
and same parameters) to an ongoing transaction, return the bus address of
that transaction. The client can then "tune in" to the progress messages.
(Remember the Transaction.Start() method returns a boolean to distinguish
a newly-started transaction from an ongoing transaction.)
The driving use case for this is a dropped ssh connection during a long
running transaction -- like "upgrade" -- and being able to reattach to
the transaction's progress messages mid-stream.
A few things to note:
- Cancelling a transaction no longer immediately destroys it.
- Transaction is destroyed when finished (or cancelled) and has
no client connections.
- If a client attaches to a finished transaction and calls Start(),
the transaction will re-emit the Finished signal to that client.
- The transaction bus address is not yet shared across multiple
clients, so multiple connections don't actually work from the
outside yet. It's just supported internally.
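A hedged sketch of the "identical request" check described above:
compare the method name and parameters, and hand back the existing
transaction's bus address on a match. The struct and field names are
illustrative, not the daemon's actual types.

```c
#include <gio/gio.h>

typedef struct {
  char     *method_name;   /* e.g. "Upgrade" */
  GVariant *parameters;    /* parameters the transaction was started with */
  char     *bus_address;   /* private address clients connect to */
} ActiveTransactionInfo;

static const char *
maybe_reuse_transaction (ActiveTransactionInfo *active,
                         const char *method_name,
                         GVariant *parameters)
{
  if (active != NULL &&
      g_strcmp0 (active->method_name, method_name) == 0 &&
      g_variant_equal (active->parameters, parameters))
    {
      /* Same request: let the caller "tune in" to the ongoing
       * transaction's progress messages. */
      return active->bus_address;
    }
  return NULL;  /* a new transaction must be created */
}
```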
Change the ActiveTransaction property from the bus address of the active
transaction to a string tuple: (method name, sender name)
The bus address was only a placeholder, and not very useful since each
transaction only accepts one connection (presumably the method caller).
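For illustration, the property value as described is just a
(method name, sender name) string tuple built with GVariant; the helper
name is hypothetical.

```c
#include <gio/gio.h>

static GVariant *
make_active_transaction_value (const char *method_name,
                               const char *sender_name)
{
  /* e.g. ("Upgrade", ":1.42"); empty strings when nothing is active. */
  return g_variant_new ("(ss)", method_name, sender_name);
}
```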
The current policy is to only allow the root user access to the Sysroot
and OS interfaces, but this can be expressed in the static bus config.
The long-term intention is to integrate with PolicyKit. Leave comments
in the code stating so but remove the unnecessary authorization handler
for the time being, just so there's less code to review.
Since the daemon can detect when the client closes its peer-to-peer
connection, simplify the API by converting the Finish() method to a
Finished signal that is only emitted once.
Internally, add a "closed" signal to transactions (triggered by a
closed GDBusConnection), and have the transaction monitor use that
instead of "finished" to know when to dispose of the transaction.
Because peer-to-peer endpoints don't get assigned unique names, the
sender == owner check is rendered useless. But I'm not sure we even
need a check since the transaction *is* peer-to-peer now.
One way to secure the returned bus address from prying eyes would be
to employ GcrSecretExchange, but this would only complicate the
handshake further and (imo) necessitate a public client-side function
to implement the handshake correctly.
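A sketch of the "closed" mechanism, assuming the transaction monitor
watches the transaction's peer-to-peer GDBusConnection; the function
names are illustrative, only the GDBusConnection "closed" signal itself
is GDBus API.

```c
#include <gio/gio.h>

static void
on_peer_connection_closed (GDBusConnection *connection,
                           gboolean remote_peer_vanished,
                           GError *error,
                           gpointer user_data)
{
  /* The invoking client went away (cleanly or not); if the transaction
   * is already finished, it can now be disposed of. */
  g_debug ("transaction peer closed (vanished=%d)", remote_peer_vanished);
}

static void
watch_transaction_connection (GDBusConnection *peer_connection,
                              gpointer transaction)
{
  g_signal_connect (peer_connection, "closed",
                    G_CALLBACK (on_peer_connection_closed), transaction);
}
```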
Transaction progress and message signals are really only intended for
one recipient: the client that invoked the method. Use a peer-to-peer
connection for transactions so we're not spamming the system bus.
This entails returning a bus address rather than an object path in
methods that use transactions. The client opens a connection to the
bus address, connects handlers to the Transaction interface (on path
"/"), and then invokes the Start() method.
To finish a transaction, the client need only close the connection,
either explicitly or by terminating. The server will detect this
and clean up resources for that transaction.
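A hedged client-side sketch under these assumptions: `address` is the
bus address returned by the method call, the Transaction interface name
is a guess, and the object lives at path "/" as described.

```c
#include <gio/gio.h>

static void
on_transaction_signal (GDBusConnection *connection,
                       const char *sender, const char *object_path,
                       const char *interface_name, const char *signal_name,
                       GVariant *parameters, gpointer user_data)
{
  g_print ("signal: %s\n", signal_name);  /* progress / message signals */
}

static gboolean
attach_to_transaction (const char *address, GError **error)
{
  /* Open a private, peer-to-peer connection to the transaction. */
  g_autoptr(GDBusConnection) conn =
    g_dbus_connection_new_for_address_sync (address,
                                            G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT,
                                            NULL, NULL, error);
  if (conn == NULL)
    return FALSE;

  /* Listen for everything the Transaction interface emits on "/". */
  g_dbus_connection_signal_subscribe (conn, NULL,
                                      "org.projectatomic.rpmostree1.Transaction", /* assumed */
                                      NULL, "/", NULL,
                                      G_DBUS_SIGNAL_FLAGS_NONE,
                                      on_transaction_signal, NULL, NULL);

  /* Kick off the work; closing `conn` later is what detaches us. */
  g_autoptr(GVariant) reply =
    g_dbus_connection_call_sync (conn, NULL, "/",
                                 "org.projectatomic.rpmostree1.Transaction",
                                 "Start", NULL, NULL,
                                 G_DBUS_CALL_FLAGS_NONE, -1, NULL, error);
  return reply != NULL;
}
```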
Implementing a template pattern for transactions.
The TransactionClass is now abstract, and transaction_new() is replaced
with various method-specific functions like transaction_new_upgrade().
These custom subclasses live in a new file transaction-types.[ch].
Further, transaction_monitor_new_transaction() is replaced with
transaction_monitor_add(). So the handlers for "OS" interface methods
need only create an appropriate transaction instance and hand it off to
the transaction monitor.
Move as much as possible out of transaction_new() for the benefit of
subclasses. The GDBusMethodInvocation is now a property, used for
setting up D-Bus properties and name watching in the constructed()
method.
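A condensed sketch of how the template pattern typically looks in
GObject C; the Example* names are illustrative stand-ins, not the
actual rpm-ostree types. The abstract base class owns the shared flow
and each concrete transaction fills in one vfunc.

```c
#include <gio/gio.h>

/* --- abstract base --- */
#define EXAMPLE_TYPE_TRANSACTION (example_transaction_get_type ())
G_DECLARE_DERIVABLE_TYPE (ExampleTransaction, example_transaction,
                          EXAMPLE, TRANSACTION, GObject)

struct _ExampleTransactionClass
{
  GObjectClass parent_class;
  /* The step subclasses must provide (upgrade, rebase, rollback, ...). */
  gboolean (*execute) (ExampleTransaction *self,
                       GCancellable *cancellable,
                       GError **error);
};

G_DEFINE_ABSTRACT_TYPE (ExampleTransaction, example_transaction, G_TYPE_OBJECT)

static void example_transaction_class_init (ExampleTransactionClass *klass) { }
static void example_transaction_init (ExampleTransaction *self) { }

/* Template method: the base class runs the common setup/teardown and
 * defers the real work to the subclass vfunc. */
gboolean
example_transaction_run (ExampleTransaction *self,
                         GCancellable *cancellable,
                         GError **error)
{
  ExampleTransactionClass *klass = EXAMPLE_TRANSACTION_GET_CLASS (self);
  g_return_val_if_fail (klass->execute != NULL, FALSE);
  /* ... shared setup (locking, progress reporting) ... */
  gboolean ok = klass->execute (self, cancellable, error);
  /* ... shared teardown (emit Finished, release resources) ... */
  return ok;
}

/* --- one concrete subclass, e.g. the upgrade flavour --- */
#define EXAMPLE_TYPE_UPGRADE_TRANSACTION (example_upgrade_transaction_get_type ())
G_DECLARE_FINAL_TYPE (ExampleUpgradeTransaction, example_upgrade_transaction,
                      EXAMPLE, UPGRADE_TRANSACTION, ExampleTransaction)

struct _ExampleUpgradeTransaction { ExampleTransaction parent; };

G_DEFINE_TYPE (ExampleUpgradeTransaction, example_upgrade_transaction,
               EXAMPLE_TYPE_TRANSACTION)

static gboolean
example_upgrade_transaction_execute (ExampleTransaction *tx,
                                     GCancellable *cancellable,
                                     GError **error)
{
  /* ... pull and deploy the new tree ... */
  return TRUE;
}

static void
example_upgrade_transaction_class_init (ExampleUpgradeTransactionClass *klass)
{
  EXAMPLE_TRANSACTION_CLASS (klass)->execute = example_upgrade_transaction_execute;
}

static void example_upgrade_transaction_init (ExampleUpgradeTransaction *self) { }
```

A corresponding constructor (for example a hypothetical
transaction_new_upgrade()) would simply g_object_new() the subclass and
hand it to the transaction monitor, as described above.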