981 Commits

Author SHA1 Message Date
286c55d5b4 bump proxmox-apt to 0.9.4-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-09 11:05:20 +02:00
6ec35bdbbd apt: repositories: also detect repository with next suite as configured
This avoids the situation where no standard Proxmox repository can be detected
during an upgrade anymore. There already is an 'ignore-pre-upgrade-warning'
about the suite, which the frontend can display when upgrading is not
allowed yet.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-06-09 10:46:04 +02:00
82417de8a8 import proxmox-apt crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-24 08:39:45 +02:00
cc553060e0 import proxmox-openid crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-24 08:12:09 +02:00
8f8d52f148 update d/copyright files to debian copyright-format 1.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-23 13:02:39 +02:00
392290ec6c buildsys: improve clean target
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-23 10:50:33 +02:00
77e8db8649 buildsys: add dsc and %-dsc targets
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-23 10:50:27 +02:00
76ac1a3903 bump proxmox-tfa to 4.0.0-1, auth-api to 0.1.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-10 10:43:21 +02:00
4324aea004 auth-api: update to new tfa crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-10 10:43:21 +02:00
39017fa334 tfa: add functions to unlock totp and tfa
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-10 10:35:54 +02:00
a3448feb1a tfa: log all tfa verify errors and treat as failure, count
Use a custom result type to return both success/failure and
whether the user data needs to be saved to the caller, while
logging the error messages rather than returning them.

We count general TFA failures and also TOTP specifically,
and lock the user out of their 2nd factors on too many
failures.

To this end, all errors are now treated as failures.
While technically we can have crypto errors the user might
not be able to cause, we can't always know, and not all
errors are guaranteed to be a host-side configuration issue,
so instead, all errors (since they are rare) are now counted
as a regular TFA error.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-10 10:35:54 +02:00
50b793db8d tfa: add data for rate limiting and blocking
TfaUserData uses `#[serde(deny_unknown_fields)]`, so we add
this now, but using it will require explicitly enabling it.

If the TOTP count is high, the user should be locked out of
TOTP entirely until they use a recovery key to reset the
count.

If a user's TFA try count is too high, they should get rate
limited.

In both cases they should receive some kind of notification.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-10 10:26:51 +02:00
8d968274f1 tfa: make 'anyhow' optional, enable with the 'api' feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
3224f42ff5 tfa: fix warning with types feature w/o api feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
5c39559cad tfa: drop anyhow from totp module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
c45620b447 tfa: drop anyhow from u2f module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
0d942e81a3 tfa: add a 'types' feature to get TfaInfo and TfaType
without adding the entire API as well, so API clients can
actually use the types used by the api methods without
requiring the backend implementation to be built in as
well.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
b6840e95ad tfa: make failing to generate a webauthn challenge non-fatal
If WA or U2F fails to produce a challenge, the user may still
log in with their other factors, and the challenge is still
considered non-empty.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
4b3d171b2d tfa: don't return a challenge if all 2nd factors are disabled
Instead, this should allow the user to login without them.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
ea1d023a61 tfa: don't automatically drop empty recovery
This should only ever be explicitly removed.

Similarly, include an empty array of recovery keys in the
tfa challenge, so that clients know about empty recoveries
rather than getting an empty challenge when there are no
other factors available.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-08 10:32:26 +02:00
b66ceaede0 proxmox-login: allow access to RecoveryState keys (make it pub)
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2023-05-08 10:26:54 +02:00
a41c8481e2 proxmox-login: pass body as &str to response()
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2023-05-08 08:23:10 +02:00
be169e25ae add new proxmox-login to workspace members 2023-05-05 09:29:50 +02:00
26f586d5eb new proxmox-login package
Author: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-04 09:09:08 +02:00
12674a37e0 api-macro: support non-idents in serde(rename)
For PVE we'll have enum variants like /dev/urandom...

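A rough illustration of what this enables (the enum, variant names and doc
comments below are made up for illustration; actual PVE types may differ):

    use proxmox_schema::api;
    use serde::{Deserialize, Serialize};

    #[api]
    /// Entropy source (illustrative only).
    #[derive(Serialize, Deserialize)]
    enum RngSource {
        /// Non-blocking kernel entropy pool.
        #[serde(rename = "/dev/urandom")]
        Urandom,
        /// Blocking kernel entropy pool.
        #[serde(rename = "/dev/random")]
        Random,
    }
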
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-04-03 10:01:44 +02:00
39453abb8f http: sync: drop unused &self parameter
these are just internal helpers, changing their signature is fine.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-03-07 09:30:13 +01:00
6a1be173a6 http: sync: derive default user-agent from crate version
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-03-07 09:30:13 +01:00
5ba9d9b2c2 http: sync: remove redundant calls for setting User-Agent
the requests are all created via the agent that already contains the user
agent, so this internal helper isn't needed anymore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-03-07 09:30:13 +01:00
d69fee254a http: sync: set user-agent via ureq agent
this allows us to slim down our code, and once
https://github.com/algesten/ureq/pull/597 is merged upstream (and/or we update
to a version containing the fix) it also means the custom user agent is used
for requests to the proxy host, if one is configured.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-03-07 09:30:13 +01:00
5df815f660 proxmox-tfa: update generated d/control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-03-02 16:54:59 +01:00
32e7d3ccdf bump proxmox-auth-api to 0.1.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:44:35 +01:00
1bccff7e68 auth-api: make example require pam-authenticator
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:44:35 +01:00
82e212e33a bump schema dependency to 1.3.7 for auth-api
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:44:35 +01:00
2f5b1f26cc bump proxmox-schema to 1.3.7-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:44:35 +01:00
bca9c6dbaf bump proxmox-tfa to 3.0.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:44:35 +01:00
5349ae208b add proxmox-auth-api crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:44:35 +01:00
a8bd8fca15 schema: add basic api types feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
f813e8d866 sort workspace members
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
8a90efba68 bump proxmox-metrics to 0.2.2
to update proxmox-http dep to 0.8

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
71794901c7 bump proxmox-subscription to 0.3.1
to update proxmox-http dependency

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
89eaf83755 bump proxmox-rest-server to 0.3.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
d422852f51 bump proxmox-http to 0.8.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
dcd6e85ab2 rest-server: update example to new ApiConfig
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
1f373b9276 rest-server: add wasm content type
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
b4bb3feef3 rest-server: tls-acceptor: allow setting cipher suite and list
just pass the strings to openssl

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
6873926dea rest-server: generic certificate path types
to not require a PathBuf on the caller side

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
2f2f5cfcd8 rest-server: more convenient alias-list for ApiConfig
To the existing `.alias(item)`, add a
`.aliases(into-item-iter)` similar to how `Extend` works.

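A hedged usage sketch (the alias names and directories below are placeholders,
and the exact item type is an assumption):

    // `config` is an ApiConfig under construction; items assumed to be (name, path) pairs
    let config = config.aliases([
        ("novnc", "/usr/share/novnc-pve"),
        ("extjs", "/usr/share/javascript/extjs"),
        ("fontawesome", "/usr/share/fonts-font-awesome"),
    ]);
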
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
310310c650 rest-server: make all ApiConfig methods builder-style
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
e2ac53e3de rest-server: add AcceptorBuilder
The connection submodule now allows building an "acceptor"
for hyper connections, which can either take an explicit SSL
acceptor or build a default one with a self-signed
certificate.

The rate-limited-stream feature enables a method to
lookup/update rate limiters for connections.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
666f920291 rest-server: impl PeerAddress for RateLimitedStream via feature
rest-server can now optionally provide a PeerAddress
implementation for RateLimitedStream by activating its
'rate-limited-stream' feature

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
d7ed04f8e5 http: add RateLimitedStream::inner, drop peer_addr
instead of implementing 'peer_addr' specifically for
RateLimitedStream<tokio::net::TcpStream>, just provide
.inner() and .inner_mut() so the user can reach the inner
stream directly.

This way we can drop the tokio/net feature as well

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
b2c26f74a6 http: lower hyper feature requirements for client feature
instead of 'full', we only need 'tcp+http1+http2'

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
10a3ab222b http: move rate-limiting out of client feature
this can now be used separately

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
b62d76e80c http: start 0.8.0 refactoring
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
726bf413f5 rest-handler: more convenient auth/index handler setters
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
4639542fce rest-server: PeerAddress for Pin<Box<T>>
since this is how tokio-openssl's SslStream is used in
practice

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
515cc729d0 rest-server: drop ServerAdapter, move AuthError
Instead of a ServerAdapter for the index page and
authentication checking (which don't relate to each other),
provide a `.with_auth_handler` and `.with_index_handler`
builder for ApiConfig separately.

Both are optional. Without an index handler, it'll produce a
404. Without an auth handler, an `AuthError::NoData` is
returned.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
6904dcf4e6 rest-server: make adapter optional
when no user information or index needs to be defined

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:14:04 +01:00
4a5360aef4 rest-server: drop Router from ApiConfig
instead, allow attaching routers to path prefixes and also
add an optional non-formatting router

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:13:55 +01:00
258e2399a6 rest-server: make handlebars optional as 'templates' feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 16:07:50 +01:00
28ba2016e5 rest-server: cleanup unreadable code
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
a1119a3e63 rest-server: use BAD_REQUEST for non-GET on file-paths
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
93c027f5cc rest-server: make handle_request a method of ApiConfig
This is what actually defines the API server after all.
The ApiService trait in between is a hyper impl detail.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
5fe0777318 rest-server: drop allocation in Service impl
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
e377909bee rest-server: PeerAddr trait, drop proxmox-http dep
We pulled in proxmox-http with the client feature solely to
implement the `Service` trait on
`SslStream<RateLimitedStream<TcpStream>>`.

All those `Service` impls are the same: provide a peer
address and return an `ApiService`.
Let's put the `peer_addr()` call into a trait and build from
there.

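The trait boils down to something along these lines (a sketch; the exact name
and return type in the crate may differ):

    use std::net::SocketAddr;

    // Sketch: anything that can report its peer address can be wrapped into an ApiService.
    trait PeerAddress {
        fn peer_addr(&self) -> std::io::Result<SocketAddr>;
    }
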
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
01436ae30f rest-server: make socketpair private
`proxmox_rest_server::socketpair` doesn't make sense as an
external API

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
ccc70bc95f rest-server: start 0.3 api refactoring
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:35 +01:00
1a14696a5c ldap: test fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-02 15:44:11 +01:00
7e12788c60 sys: drop sortable and identity macros
We should not use the sys crate to pull in the sortable
macro, just depend on its crate instead...
And the identity macro used to be required by the sortable
macro, but is not anymore and has been deprecated for a
while, so we can now drop it.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-03-01 13:40:40 +01:00
2cf54dcf2e router: make format&print generic
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-28 14:57:35 +01:00
46e803256e release proxmox-ldap to 0.1.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-10 10:42:40 +01:00
6dcdbd2bd1 bump proxmox-rest-server to 0.2.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-10 10:42:40 +01:00
d696ad5bd1 rest-server: add handle_worker from backup debug cli
The function now has multiple users, so it is moved here.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-10 10:23:41 +01:00
e8e8f83723 ldap: fixup d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-08 14:30:52 +01:00
870be885ed ldap: drop Ldap prefix from types that have it
for a bit more consistency and since we tend to repeat stuff
too much

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-08 14:29:12 +01:00
4ff5c59559 fix 'default-features = false' for ldap3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-08 14:26:34 +01:00
cd61c8741c ldap: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-02-08 14:15:44 +01:00
1db057e189 ldap: add debian packaging
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-08 14:11:24 +01:00
582e994cca ldap: tests: add LDAP integration tests
This commit adds integration tests to ensure that the crate works as intended.
The tests are executed against a real LDAP server, namely `glauth`. `glauth` was
chosen because it ships as a single, statically compiled binary and can
be configured with a single configuration file.

The tests are written as off-the-shelf unit tests. However, they are
 #[ignore]d by default, as they have some special requirements:
   * They require the GLAUTH_BIN environment variable to be set,
     pointing to the location of the `glauth` binary. `glauth` will be
     started and stopped automatically by the test suite.
   * Tests have to be executed sequentially (`--test-threads 1`),
     otherwise multiple instances of the glauth server might bind to the
     same port.

The `run_integration_tests.sh` script checks whether GLAUTH_BIN is set, or if
not, attempts to find `glauth` on PATH. The script also ensures that the
tests are run sequentially.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-08 14:11:21 +01:00
4488256cb1 ldap: allow searching for LDAP entities
This commit adds the search_entities function, which allows searching for
LDAP entities matching the provided criteria.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-08 14:11:18 +01:00
b9ab0ba4fa ldap: add helpers for constructing LDAP filters
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-08 14:11:15 +01:00
6fd77c9a5e ldap: add basic user auth functionality
In the LDAP world, authentication is done using the bind operation, where
users are authenticated with the tuple (dn, password). Since we only know
the user's username, it is first necessary to look up the user's
distinguished name (dn).

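A conceptual sketch of that flow using the synchronous `ldap3` API (the
crate's real code is async and configurable; the `uid` attribute and the
error handling here are assumptions):

    use ldap3::{LdapConn, Scope, SearchEntry};

    fn authenticate(url: &str, base_dn: &str, username: &str, password: &str) -> ldap3::result::Result<()> {
        let mut ldap = LdapConn::new(url)?;
        // 1. look up the user's distinguished name (dn) from the username
        let (entries, _res) = ldap
            .search(base_dn, Scope::Subtree, &format!("(uid={username})"), vec!["dn"])?
            .success()?;
        let entry = SearchEntry::construct(entries.into_iter().next().expect("no such user"));
        // 2. authenticate by binding with the (dn, password) tuple
        ldap.simple_bind(&entry.dn, password)?.success()?;
        Ok(())
    }
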
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-08 14:11:12 +01:00
0e2f88ccf3 ldap: create new proxmox-ldap crate
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-02-08 14:11:08 +01:00
fbac2f0a0c sys: fixup error types handling
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-02-02 16:32:37 +01:00
ce389914ff sys: cope with unavailable KSM sharing info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-02-02 16:12:20 +01:00
2cebe420c1 bump proxmox-time to 1.1.5-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-01-12 14:23:11 +01:00
fadf7f7bd8 re-add proxmox-uuid d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-01-12 14:22:52 +01:00
78d9b156a8 bump proxmox-uuid to 1.0.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-01-12 13:49:18 +01:00
9c44e9b410 update d/control files
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:17:00 +01:00
3046e2f285 bump proxmox-rest-server to 0.2.1-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:15:14 +01:00
30ae33a31d bump proxmox-shared-memory to 0.2.3-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:13:53 +01:00
40cb468bef bump proxmox-router to 1.3.1-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:10:00 +01:00
d0c1958f86 bump proxmox-schema to 1.3.6-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:08:58 +01:00
01e9b3affc bump proxmox-sys to 0.4.2-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:07:45 +01:00
ff9aa2012e update nix to 0.26
it's the version currently shipped by bookworm, so let's unify this widely-used
dependency to make bootstrapping easier.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 12:07:16 +01:00
6953154254 update d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-01-05 11:49:03 +01:00
78e86f3261 re-add epoch_to_rfc3339_utc on wasm target
This was lost in commit 980d6b26df.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2022-12-15 13:35:53 +01:00
acaf55c437 clippy fix
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-13 14:56:42 +01:00
6eb638c806 section-config: silence clippy
these two functions don't actually use the `type_name` parameter, but the
interface, including custom formatters, requires it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-13 14:53:28 +01:00
77ac0bd5fe section-config: make ReST dump reproducible
HashMaps are not ordered, so each package build containing a section config
dump would have the documentation ordered randomly.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-13 14:51:50 +01:00
cb2646c7b4 section config: fix handling array schema in unknown sections
Mostly relevant when the config is written out again after parsing it
with unknown sections. Previously, with duplicate keys, only the last
value would be saved. Now, duplicate keys are assumed to be part of
an array schema and handled as such.

Because the unknown section parsing does not know if a certain
property does actually have an array schema, it's not possible to
detect duplicate keys for non-array-schema properties, and if a
property with an array schema shows up only once, it will not be saved as
a Value::Array, but as a Value::String.

Writing, or to be precise the format_section_content methods, already
handle Value::Array, so they don't need to be adapted.

Fixes: 0cd0d16 ("section config: support allowing unknown section types")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-12-12 14:03:26 +01:00
e97f41e290 section config: add test for array schema
where duplicate keys are allowed.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-12-12 14:03:23 +01:00
aaf4b72839 deps: bump api-macro to current version
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-12 11:51:08 +01:00
7bc85c05c9 bump proxmox-api-macro to 1.0.4-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-12 11:34:18 +01:00
38a60d3acb api: support #[default] attribute
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-12 11:34:18 +01:00
0719e1db1c update/extend README.rst
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-12 11:05:30 +01:00
ee8419cf2d workspace: switch remaining dependencies
while these are (currently) only used by a single member each, having *all*
dependency versions specified in the top level Cargo.toml only makes the whole
process of managing them less error-prone.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-12 11:05:30 +01:00
1380182538 update d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-12 11:05:30 +01:00
2610208794 io: add boxed module for boxed bytes like vec::zeroed...
- proxmox_io::boxed::uninitialized(len) -> Box<[u8]>
  same as vec::uninitialized, but as a box

- proxmox_io::boxed::zeroed(len) -> Box<[u8]>
  same as vec::zeroed, but as a box

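A small usage sketch (assuming the functions behave like their
`proxmox_io::vec` counterparts):

    // zeroed(): allocate a zero-filled boxed slice of the given length
    let mut buf: Box<[u8]> = proxmox_io::boxed::zeroed(4096);
    buf[0] = 0xff;

    // uninitialized(): skips the zero-fill; the caller must write before reading
    let scratch: Box<[u8]> = proxmox_io::boxed::uninitialized(4096);
    assert_eq!(scratch.len(), 4096);
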
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-12 11:00:22 +01:00
a7d84effc5 io: deny unsafe_op_in_unsafe_fn
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-12 11:00:22 +01:00
8316fd3899 more workspace dependencies
regex was missed in the first pass, and two intra-workspace dev-dependencies as
well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-12 09:30:30 +01:00
485ed1a2a2 switch exclude to workspace in README
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-09 13:31:51 +01:00
d3f2a86f80 buildsys: get crate list via cargo metadata in Makefile
so we don't have to keep this in sync manually

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-09 13:18:23 +01:00
e6d1e6440d add bump.sh
for bumping crates in this workspace (it requires cargo-edit to be installed).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 10:36:27 +01:00
de6a59289a proxmox-time: drop TryFrom use statement
no longer needed with edition 2021

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:47 +01:00
46a675830d update d/control files
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:47 +01:00
48abc5afa3 update outdated workspace dependencies
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:47 +01:00
bdca6de588 update d/control files
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:47 +01:00
e5abc0590e define workspace dependencies in workspace
so that we no longer have to (or forget to) bump the version in multiple places.

notable changes:
- outdated versions have been unified
- proxmox-metrics -> proxmox-async no longer uses explicit empty features
  (proxmox-async doesn't provide any anyway)
- proxmox-subscription -> proxmox-http no longer uses explicit default_features
  = false (proxmox-http has an empty default feature anyway)
- missing path dependencies added (mainly proxmox-rest-server)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:38 +01:00
6c161bd5ab update d/control files
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:38 +01:00
4189221470 inherit shared, external dependencies
noteworthy changes:
- proxmox-http had a `default-features = false` dep on hyper, which is dropped (the
  default feature is empty anyway)
- hyper, libc, nix, tokio and url versions are unified
- missing (cosmetic) bindgen feature on zstd enabled everywhere

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:38 +01:00
64959d9ae0 move common metadata to workspace
and switch all crates to 2021 edition as well as a unified "authors" value.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-07 09:48:25 +01:00
5ec765f842 update d/control files
debcargo 2.6 changed some minor details

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-06 11:21:43 +01:00
6a3a3f0413 use statement fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-12-05 14:11:16 +01:00
538578c558 clippy 1.65 fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-05 11:17:37 +01:00
50aa62b764 proxmox-schema: bump to 1.3.5
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-01 11:10:01 +01:00
30388b7256 schema: update to textwrap 0.16
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-12-01 11:06:41 +01:00
d513ef7836 bump proxmox-section-config to 1.0.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-11-28 09:30:03 +01:00
c39d04c33e minor doc fixup
an 'ignore' block assumes rust syntax, a 'text' block should
just be plain text

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-11-28 09:27:28 +01:00
0cd0d16c1e section config: support allowing unknown section types
Similar to commit c9ede1c ("support unknown types in section config")
in pve-common.

Unknown sections are parsed as key-value pairs of plain Strings to JSON String
values without any additional checks, and are also written out as-is.

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-11-28 09:19:05 +01:00
bb7519af3b subscription: recognize 'Suspended' status
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-11-25 09:53:58 +01:00
208d4ffac6 bump proxmox-section-config to 1.0.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-24 14:07:30 +02:00
c8f26ee0ea section-config: expose order
we already expose the raw sections, which are sometimes
easier to use, but that way the order itself is not
exposed at all

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-24 13:45:58 +02:00
b4a798f0a7 section-config: use Vec for section order
We use none of the additional functionality provided by
VecDeque.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-24 13:44:23 +02:00
3b2b1214b3 section config: parse additional properties when schema allows it
Additional properties will be parsed according to the default string
schema.

This is relevant for use cases when the full schema is not known for
some reason or another. In particular this allows support for parsing
older/newer versions of configuration files. One example of this is
the proposed proxmox-mail-forward helper binary, which currently
doesn't have access to the PBS API types for dependency reasons and
is only interested in the email field for the root user. If it can
only use a minimal schema with additional_properties set to true, it
will be robust against changes.

Writing already works, because the ObjectSchema's verify_json()
already handles additional_properties correctly and
format_section_content() handles them like all other properties
(method doesn't depend on the schema).

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2022-10-24 13:35:50 +02:00
cfa77e0e88 sys: impl AsFd for PTY
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 14:36:45 +02:00
34688a6d74 sys: impl AsFd for PidFd
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 14:35:07 +02:00
34f47339d5 bump sys to 0.4.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 14:21:08 +02:00
7b350d3e43 proxmox-http: fix last changelog entry
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-10-19 14:17:42 +02:00
ab17fdb4d9 sys: deprecate RawFdNum
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 13:55:47 +02:00
88677d8955 sys: add From<OwnedFd/Fd> for Fd/OwnedFd temporarily
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 13:50:17 +02:00
8bd961acdc rest-server: update to OwnedFd
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 13:25:40 +02:00
8420c266af sys: deprecate Fd, add its methods as module functions
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 13:25:40 +02:00
85c260f6b5 sys: deprecate BorrowedFd
std has this now, stable since 1.63

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-19 10:55:16 +02:00
b039bf011b bump edition in rustfmt.toml
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-10-13 15:00:28 +02:00
7c7e2f886c rest-server: add packaging and bump to 0.2.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-10-11 15:09:50 +02:00
bd00e2f317 cargo: rest-server: set license property
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-10-11 15:09:44 +02:00
a7122eb99e Merge remote-tracking branch 'proxmox-rest-merge/master'
split out from proxmox-backup using `git filter-repo` including
history with the following --paths-from-file:

```
proxmox-rest-server
src/api/server.rs
src/server/command_socket.rs
src/server/config.rs
src/server/environment.rs
src/server/formatter.rs
src/server/h2service.rs
src/server/rest.rs
src/server/state.rs
src/tools/compression.rs
src/tools/daemon.rs
src/tools/file_logger.rs
src/worker_task.rs
```
2022-10-11 15:09:28 +02:00
95f7232188 cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-10-11 09:48:11 +02:00
28e30719e8 clippy fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-10-11 09:48:04 +02:00
2ae95b5f4e metrics: bump to 0.2.1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 14:49:35 +02:00
2502d691b6 subscription: bump to 0.3.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 14:49:35 +02:00
916aa8a2db update proxmox-router to 1.3.0
no real change for PBS usage - the ApiHandler enum is marked
non_exhaustive now because it has extra values if the new (enabled by
default) "server" feature is enabled.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 14:17:12 +02:00
e8d199d51c update to proxmox-http 0.7
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 14:17:00 +02:00
a6e03dfe42 subscription: properly forward verification error
when verifying the server response used for offline mirror keys.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 13:06:02 +02:00
d55816e9dd subscription: use lowercase for Display-ing status 2022-09-07 13:05:42 +02:00
31f1bbbf40 subscription: properly alias 'notfound'
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 13:05:42 +02:00
f908f216ae subscription: conditionalize checks
signed subscription info files should always be checked to catch
attempts with invalid signatures, but the age and serverid checks only
need to apply to "active" files, else the status might switch from a
more meaningful one to "invalid" by accident.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 13:05:42 +02:00
4beac11b34 subscription: add Expired status
this can be returned by the shop when checking an online subscription.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 13:05:42 +02:00
5b90667d05 http: bump to 0.7.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:35:51 +02:00
08a6d56eae http: client_trait: make request body generic
like the response body, instead of hard-coding Read.
2022-09-07 09:25:47 +02:00
891dcfda2f http: add extra_headers to post
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
ec77785578 http: sync: add HttpClient for Box<dyn Read>
for use cases where the full request body is not available from the
start, or the response doesn't need to be fully read in one go.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
00f5eca155 http: make post() take Read, not &str
for more flexibility.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
7863eff2a5 http: fix typo
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
da49b98d15 http: rename SimpleHttp to Client
so we have proxmox_http::client::Client for the async, hyper-based
client and proxmox_http::client::sync::Client for the sync, ureq-based
one.

this is a breaking change.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
7ffb895062 http: add "raw" sync client
and switch the String-based one over to use it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
9c444d7a94 http: add ureq-based sync client
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
f429fcb592 http: extend HttpClient trait
to allow get requests with extra headers (such as `Authorization`) and a
generic `request` fn to increase flexibility even more.

this is a breaking change.

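Roughly, the extended trait has this shape (a sketch, not the exact
proxmox-http definition; the error type and response wrapper are simplified):

    use std::collections::HashMap;

    // Sketch: GET with optional extra headers plus a fully generic request entry point.
    trait HttpClient<RequestBody, ResponseBody> {
        fn get(
            &self,
            uri: &str,
            extra_headers: Option<&HashMap<String, String>>,
        ) -> Result<http::Response<ResponseBody>, anyhow::Error>;

        fn request(
            &self,
            request: http::Request<RequestBody>,
        ) -> Result<http::Response<ResponseBody>, anyhow::Error>;
    }
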
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
ab5d5b39f6 http: move SimpleHttpOptions to http-helpers feature
and rename it to HttpOptions, since it's not specific to the "Simple"
client at all.

this is a breaking change.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
e8d02db689 router: bump to 1.3.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
66ace63618 router: make hyper/http optional
but enable it by default.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-07 09:17:45 +02:00
0376c3b50b build: more missing features
these would cause failures when building the sub-crates directly from
their sub-directory.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-09-05 12:55:33 +02:00
f92c8f92cc api-macro: track d/control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-08-26 12:57:57 +02:00
056a5eb581 git: ignore top level *-deb make target files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-08-26 12:18:44 +02:00
52a8eb6ace d/control: tree wide update after switch to weak/namespaced dependencies
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-08-26 12:07:36 +02:00
289d297c7d build: use weak and namespaced features
to reduce the creep of optional dependencies being exposed as features.

this requires cargo 0.63 (and debcargo built against at least that
version), but greatly reduces the number of binary packages and `Provides`
entries generated, while still allowing sensible selection of optional
dependencies via the explicit feature meant for pulling them in.

diff stat for running `make deb` after this change:
 proxmox-http/debian/control         | 226 ++++--------------------------------
 proxmox-router/debian/control       |  74 +-----------
 proxmox-schema/debian/control       |  53 ++-------
 proxmox-subscription/debian/control |  17 +--
 proxmox-sys/debian/control          |  51 +++-----
 proxmox-tfa/debian/control          | 110 ++----------------
 6 files changed, 72 insertions(+), 459 deletions(-)

the 'dep:' prefix marks something on the RHS inside the features section
as a dependency. it's only allowed if the string after it is an optional
dependency and no explicit feature of the same name exists. if all
pointers to the optional dependency in the features section are marked
as such, the optional dependency itself will not be exposed as a feature
(either on the cargo or debian/control level).

the '?' suffix marks dependencies as "weak", which only enables the
optional dependency + its feature(s) if the optional dependency itself
is also enabled. it has no effect on d/control since such a relationship
is not encodable in Debian package relations, but it does affect cargo
dependency resolution and allows skipping the build of unneeded optional
dependencies in some cases.

no packages/crates depend on the automatically generated features/packages
that are no longer exposed, so these are safe to remove even though
it's technically a breaking change.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-26 10:35:00 +02:00
1cd6a842f7 subscription: add missing path dependencies
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-19 14:20:58 +02:00
d7082b037d make: add proxmox-metrics to crate list
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-19 14:20:02 +02:00
9478ae2bed fixup! time: update to nom 7 2022-08-19 14:19:39 +02:00
12da49b5ec schema: bump to 1.3.4
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-19 12:41:15 +02:00
1349f24d49 schema: update to textwrap 0.15
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-19 12:26:42 +02:00
1344ffdd94 time: bump to 1.1.4
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-19 12:22:45 +02:00
552f14e916 time: update to nom 7
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-08-19 12:22:27 +02:00
5ac4e0fcae more stable clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-08-17 09:22:32 +02:00
505e28d8a3 bump proxmox-sys dep to 0.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:47:37 +02:00
b3e2a1f574 bump d/control files
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:39:21 +02:00
d52a1b7889 bump proxmox-subscription to 0.2.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:35:47 +02:00
7540ebd238 bump proxmox-shared-memory to 0.2.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:35:47 +02:00
2eacdbe090 bump proxmox-http to 0.6.5-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:35:47 +02:00
1d3f4a4bbd http, shared-memory, subscription: bump proxmox-sys dependency to 0.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:35:47 +02:00
6dc2393625 sys: drop comment from Cargo.toml
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:29:42 +02:00
6e857c6090 bump proxmox-sys to 0.4.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-28 13:28:21 +02:00
00f16b4e94 rest-server: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-27 14:54:44 +02:00
c5382d1b20 sys: doc fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-27 10:55:10 +02:00
28ee8bc6d0 http: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-27 10:52:50 +02:00
31a569b425 api-macro: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-27 10:52:07 +02:00
c78c47cff2 uuid: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-27 10:51:22 +02:00
6cac8d5cbe sys: another Arc::clone
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 14:45:39 +02:00
51746a0d45 sys: explicit Arc::clone
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 14:44:49 +02:00
6e247b9593 sys: use Iterator::min instead of a manual version
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 13:21:36 +02:00
36903bf2fe sys: bump edition to 2021
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 13:04:09 +02:00
4de145aaef lang: update c_str
drop the let binding, easier to use in const context since
CStr::from_bytes_with_nul_unchecked is const since 1.59

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 12:27:44 +02:00
34d2c91118 sys: drop proxmox-borrow dependency
nix now has an owning directory iterator, we don't need it
anymore

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 12:24:14 +02:00
36625fb92c tfa: bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 12:24:01 +02:00
d0b4f0bf2f tfa: docs fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-26 12:23:50 +02:00
df0d30a106 bump proxmox-tfa to 2.1.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-25 13:38:04 +02:00
a7f808d43b tfa: bump edition to 2021
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-25 13:35:58 +02:00
d396c3ea31 tfa: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-25 13:35:57 +02:00
ea34292850 tfa: expose 'allow_subdomains' property
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-25 13:25:52 +02:00
b84446a030 sys: enable CreateOptions::group_root/root_only
nix now has the required const fns

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-25 10:31:49 +02:00
d3364e07fb sys: file locking depends on timer feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-22 11:58:18 +02:00
e2e7ea6d62 sys: introduce 'acl', 'crypt' and 'timer' features
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-21 15:42:11 +02:00
f6c7d46d04 bump proxmox-subscription to 0.2.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-21 14:36:49 +02:00
6ff1c96021 add default signing key path
for use in dependent modules. this file should be shipped via
proxmox-archive-keyring.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-07-21 13:30:11 +02:00
5391f5313b subscription: make key optional and support multiple keys
this is a breaking change requiring updates in proxmox-perl-rs and
proxmox-backup.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-07-21 13:29:15 +02:00
2e929cc386 bump proxmox-http dep to 0.6.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-21 13:01:14 +02:00
d228ef6e20 http: bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-21 12:58:41 +02:00
dc57115703 bump proxmox-http to 0.6.4-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-21 12:58:11 +02:00
a8a838754d http: fix proxy authorization header to include type
and encode the username:password string as base64 [0]. This fixes the
error 407 issue when using proxy authentication [1].

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Proxy-Authorization
[1] https://forum.proxmox.com/threads/checking-the-subscription-behind-a-proxy-fails.112063/

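For illustration, the header value now has the form
`Basic <base64(username:password)>`; a minimal sketch (using the `base64`
crate's plain encode helper, which is not necessarily the code path used here):

    // Sketch: build a Proxy-Authorization value that includes the auth type.
    fn proxy_authorization(user: &str, password: &str) -> String {
        format!("Basic {}", base64::encode(format!("{user}:{password}")))
    }
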
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2022-07-21 12:54:59 +02:00
760c49be6e subscription: check in d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-21 12:54:59 +02:00
6794989b2a proxmox-subscription: initial bump to 0.1.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-20 13:37:08 +02:00
757031ef33 sys: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-20 13:35:58 +02:00
94d388b988 http: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-20 13:31:58 +02:00
5e630472ec subscription: clippy fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-20 13:28:31 +02:00
ab17e16664 subscription: line-wrap test data
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-20 13:28:31 +02:00
0cd02a0d2b subscription: doc comment fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-20 13:28:31 +02:00
3f694b5481 subscription: clippy lints
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-07-20 13:28:31 +02:00
baf31dc2d8 subscription: properly case status enum values
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-07-20 13:28:31 +02:00
38492bde83 check signature when reading subscription
and handle signed keys differently w.r.t. age checks, since they will be
refreshed less frequently.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-07-19 15:20:36 +02:00
4ec9a8183d add new proxmox-subscription crate
taking over slightly generified helpers and types:
- subscription info and status
- checking subscription status with shop
- reading/writing local subscription-related files

the perl-based code uses base64 with newlines for the data, and base64
without padding for the checksum. accordingly, calculate the checksum
with and without newlines, and compare the decoded checksum instead of
the encoded one.

furthermore, the perl-based code encodes the subscription status using
Capitalized values instead of lowercase, so alias those for the time
being.

PVE also stores the serverid as 'validdirectory', so add that as alias
as well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-07-19 15:20:36 +02:00
6e4a43d683 bump proxmox-sys to 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-07 11:49:57 +02:00
67fbf2b2ee sys: make escape_unit() more flexible, add unescape_unit_path
This adds the ability to use these functions with non-utf8
strings as well.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-07 11:45:26 +02:00
c7224c5f67 bump proxmox-compression to 0.1.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-05 13:46:38 +02:00
ff86aa5f8a compression: more cleanups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-05 13:23:14 +02:00
2d37cd92e0 compression: indentation cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-05 12:17:04 +02:00
cc6e5d7372 compression: minor cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-07-05 12:13:38 +02:00
6e989a1c29 proxmox-compression: add 'tar_directory'
similar to 'zip_directory', this is intended to tar a local directory,
e.g. when we're in a restore vm.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-07-05 12:12:02 +02:00
38db37dc5f proxmox-compression: make ZstdEncoder stream a bit more generic
by not requiring the source's error type to be 'anyhow::Error', but only
that it implements 'Into<Error>'. This way, we can also accept a stream
that produces e.g. an io::Error.

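The relaxed bound is essentially of this shape (illustrative names, not the
exact generics on ZstdEncoder):

    use anyhow::Error;
    use bytes::Bytes;
    use futures::Stream;

    // Any stream whose error type converts into anyhow::Error is accepted,
    // so e.g. a stream yielding io::Error now works as input too.
    fn accepts_stream<S, E>(_input: S)
    where
        S: Stream<Item = Result<Bytes, E>>,
        E: Into<Error>,
    {
    }
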
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-07-05 12:11:57 +02:00
b5accff750 bump proxmox-http to 0.6.3-1 2022-06-30 12:42:18 +02:00
bd1f9f103e http: implement HttpClient for SimpleHttp
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-30 12:42:17 +02:00
3c0486be50 http: add HttpClient trait
gated behind feature "client-trait"

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-30 12:42:12 +02:00
94456ee4b1 http: move TLS helper to client feature
it's only used there and pulls in hyper and tokio, which we don't
want/need in http-helpers.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-29 10:32:44 +02:00
210d4fdb68 http: take over json_object_to_query 2022-06-29 10:32:44 +02:00
4393217633 bump proxmox-serde to 0.1.1 2022-06-29 10:32:44 +02:00
b239b56999 serde: take over to/write_canonical_json
from PBS' tools module, and feature-guard via optional `serde_json`
dependency.
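
Canonical JSON here means a deterministic serialization, most importantly
object keys in a stable order, so that hashed or signed output is
reproducible; a minimal illustration of the idea (not the crate's actual
helper, and objects nested inside arrays are left to plain serialization
for brevity):

    use serde_json::Value;

    // Minimal sketch: write JSON with object keys sorted so the output is deterministic.
    fn write_canonical_json(value: &Value, out: &mut Vec<u8>) -> Result<(), serde_json::Error> {
        match value {
            Value::Object(map) => {
                out.push(b'{');
                let mut entries: Vec<_> = map.iter().collect();
                entries.sort_by(|a, b| a.0.cmp(b.0));
                for (i, (key, val)) in entries.into_iter().enumerate() {
                    if i > 0 {
                        out.push(b',');
                    }
                    out.extend_from_slice(serde_json::to_string(key)?.as_bytes());
                    out.push(b':');
                    write_canonical_json(val, out)?;
                }
                out.push(b'}');
            }
            other => out.extend_from_slice(serde_json::to_string(other)?.as_bytes()),
        }
        Ok(())
    }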
2022-06-29 10:32:44 +02:00
f505240065 cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-29 10:32:29 +02:00
0d30720907 http: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 10:14:33 +02:00
064791e565 section-config: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 10:11:28 +02:00
7b1aad429f api-macro: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 10:10:07 +02:00
30901c60f5 compression: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 10:08:49 +02:00
b06b4c7426 lang: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 10:06:57 +02:00
1fca7b715d time: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 10:06:20 +02:00
b1a5daef61 sys: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 09:58:34 +02:00
2f82a04734 bump proxmox-sys dep to 0.3.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 09:45:11 +02:00
8f769a3996 bump proxmox-sys to 0.3.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 09:41:36 +02:00
1e2aa3eea4 partially fix #2915: proxmox-sys: scandir: stat if file_type is unknown
when using readdir/getdents the file type might be 'DT_UNKNOWN'
(depending on the filesystem). Do an fstatat in that case to
get the file type. Since the callback might want to do a stat
anyway, pass the stat result along to it (if one was done)

adds two new helpers:
'file_type_from_file_stat': uses a FileStat struct to get the file type
'get_file_type': calls fstatat to determine the file_type

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-29 09:34:34 +02:00
8e06108d10 proxmox-rest-server: replace print with log macro
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-21 10:43:19 +02:00
a6f9cf3d73 bump proxmox-router dep to 1.2.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-21 10:43:19 +02:00
09032fb88c bump proxmox-router to 1.2.4-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-21 10:40:05 +02:00
d934c79d4c router: restrict 'env_logger' dep to 'cli' feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-21 10:38:17 +02:00
32b8ae982f router: add init_cli_logger helper function
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-21 10:37:07 +02:00
41d8747de3 metrics: check in d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-13 10:03:48 +02:00
b446456aa5 bump proxmox-metrics to 0.2.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-13 09:56:05 +02:00
e12230d543 proxmox-metrics: send_data_to_channels: change from slice to IntoIterator
which is a bit more generic and allows e.g. a map result to be
passed here directly

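The gist of the change, as a simplified standalone sketch (not the actual
function signature in proxmox-metrics):

    // Accepting IntoIterator instead of a slice lets callers pass arrays, slices,
    // or adaptor results such as `.map(..)` directly, without collecting first.
    fn send_data_to_channels<I, T>(values: I)
    where
        I: IntoIterator<Item = T>,
    {
        for _value in values {
            // forward each item to the metrics channels ...
        }
    }
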
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-13 09:53:47 +02:00
be8f24ff5d tree wide: typo fixes through codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-07 14:08:09 +02:00
917f5f73af tree wide: clippy lint fixes
most (not all) were done automatically

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-06-02 15:59:55 +02:00
4faf81dc69 update to nix 0.24 / rustyline 9 / proxmox-sys 0.3
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:33 +02:00
7667e549a5 bump proxmox-http to 0.6.2
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
05d3c3f412 http: bump proxmox-sys to 0.3.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
980d6b26df proxmox-time: add missing 1.1.3 change
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
29c051f5f8 bump proxmox-router to 1.2.3
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
8cb1a99934 bump proxmox-schema to 1.3.3
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
f7c9738ead bump proxmox-shared-memory to 0.2.1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
e3232f2fe3 shared-memory: bump proxmox-sys to 0.3.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
9dd4134052 bump proxmox-sys to 0.3.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:33:12 +02:00
97fd3a0a14 sys: feature-gate logrotate (and zstd)
it's not needed everywhere we pull in proxmox-sys.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 14:08:37 +02:00
a3efe0b3dc bump rustyline to 9
it works with nix 0.24

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 13:34:09 +02:00
f8b19c2e22 proxmox-shared-memory: fix nix 0.24 compat in test
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-06-02 13:34:09 +02:00
abf5aedc54 update proxmox-router to nix 0.24.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 13:34:09 +02:00
92069b224d update proxmox-schema to nix 0.24.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 13:34:09 +02:00
bd0a7cb223 update proxmox-shared-memory to nix 0.24.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 13:34:09 +02:00
376af82be7 update proxmox-sys to nix 0.24.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 13:34:09 +02:00
28c85a828e proxmox-serde: move serde_json to dev-dependencies
it's only used in doctests

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 13:18:03 +02:00
f7da6ee7d4 bump proxmox-schema to 1.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 10:14:23 +02:00
d68a755536 schema: drop some commented out lines
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 10:14:23 +02:00
3facb7b455 router: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-06-02 10:10:01 +02:00
532140de86 schema: bump api macro dep to 1.0.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-31 09:43:59 +02:00
9079afc831 schema: clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-31 09:42:25 +02:00
17805f9791 api-macro: doc update
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-23 10:34:57 +02:00
b49f3554c7 bump proxmox-api-macro to 1.0.3-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-19 12:04:03 +02:00
43b4440ef0 sys: bump log dependency
to ensure format strings with "{var}" work properly.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-05-16 15:50:50 +02:00
fbbab0d8e0 build: bump required log version
else logging using "{var}" in format strings doesn't work properly.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-05-16 15:02:07 +02:00
d51475123e rest: example: fix comment width
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-12 11:57:51 +02:00
457ccc9bb3 rest server: daemon: update PID file before sending MAINPID notification
There is a race upon reload, where it can happen that:
1. systemd forks off /bin/kill -HUP $MAINPID
2. Current instance forks off new one and notifies systemd with the
   new MAINPID.
3. systemd sets new MAINPID.
4. systemd receives SIGCHLD for the kill process (which is the current
   control process for the service) and reads the PID of the old
   instance from the PID file, resetting MAINPID to the PID of the old
   instance.
5. Old instance exits.
6. systemd receives SIGCHLD for the old instance, reads the PID of the
   old instance from the PID file once more. systemd sees that the
   MAINPID matches the child PID and considers the service exited.
7. systemd receives the notification from the new PID and is confused.
   The service won't become active, because the notification wasn't
   handled.

To fix it, update the PID file before sending the MAINPID
notification, similar to what a comment in systemd's
src/core/service.c suggests:
> /* Forking services may occasionally move to a new PID.
>  * As long as they update the PID file before exiting the old
>  * PID, they're fine. */
but for our Type=notify "before sending the notification" rather than
"before exiting", because otherwise, the mix-up in 4. could still
happen (although it might not actually be problematic without the
mix-up in 6., it still seems better to avoid).

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-05-12 11:53:54 +02:00
7fe84f8e15 api-macro: allow overriding field attributes in the updater
This allows fixing up things such as `skip_serializing_if`
attributes like so:

    #[derive(Updater)]
    struct Foo {
        #[serde(skip_serializing_if = "MyType::is_special")]
        #[updater(serde(skip_serializing_if = "Option::is_none"))]
        field: MyType,
    }

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-05-11 16:00:42 +02:00
44735fe5d6 schema: doc comment format/slight-expansion
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-05 11:03:21 +02:00
42dd95aa6f http: bump version to 0.6.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-05 10:42:52 +02:00
9c0e9dca59 tree wide update of generated control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-05 10:22:50 +02:00
e48568a7f1 router: bump version to 1.2.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-05 09:32:46 +02:00
de5d5f7618 router: format doc comment, use full text width
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-05 08:54:14 +02:00
43b5f1ae3e router: permissions: allow to pass partial-collapsed acl path components
This would allow the following components:

* all in one: &["system/network"]
* mixed: &["system/network", "dns"]
* with templates: &["datastore/{store}"]
* with the value of a template being a path, e.g. with ns = "foo/bar":
  &["/datastore/{store}/{ns}"]

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-05-05 08:54:14 +02:00
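
An illustrative sketch of what "partially collapsed" components mean (not the proxmox-router implementation): each element of the slice may contain several '/'-separated segments, and a template value substituted for "{store}" or "{ns}" may itself be a path, so everything is split again after substitution. The function name and the substitution map below are made up for the example.

    use std::collections::HashMap;

    // Expand partially collapsed ACL path components into single segments,
    // substituting "{name}" templates whose values may themselves be paths.
    fn expand_acl_path(components: &[&str], values: &HashMap<&str, &str>) -> Vec<String> {
        let mut segments = Vec::new();
        for component in components {
            for part in component.split('/').filter(|p| !p.is_empty()) {
                let resolved = if part.starts_with('{') && part.ends_with('}') {
                    values.get(&part[1..part.len() - 1]).copied().unwrap_or(part)
                } else {
                    part
                };
                // a substituted value may again contain '/', e.g. ns = "foo/bar"
                segments.extend(
                    resolved.split('/').filter(|p| !p.is_empty()).map(String::from),
                );
            }
        }
        segments
    }

    fn main() {
        let values = HashMap::from([("store", "tank"), ("ns", "foo/bar")]);
        assert_eq!(
            expand_acl_path(&["/datastore/{store}/{ns}"], &values),
            vec!["datastore", "tank", "foo", "bar"]
        );
        println!("{:?}", expand_acl_path(&["system/network", "dns"], &values));
    }
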
e98ca77777 permissions: fix doc comment text width
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-28 11:45:21 +02:00
47acc8dc8f router: drop Index impls for references
these should not be required as the use cases should all be
covered by the non-reference impls

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-21 14:04:51 +02:00
16daad64c7 bump proxmox-router to 1.2.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-21 13:48:57 +02:00
39956b5d09 router: fix impl Index for dyn RpcEnvironment
implement Index and IndexMut on `dyn RpcEnvironment` rather
than on a reference to it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-21 13:45:26 +02:00
47c9bed30d impl epoch_to_rfc3339_utc on wasm target 2022-04-20 09:10:47 +02:00
169a91c332 bump proxmox-compression dependency to 0.1.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:37:20 +02:00
c01b08fea9 bump proxmox-compression to 0.1.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:35:31 +02:00
f8fe8f59a6 compression: limit ZstdEncoder constructors to usable ones
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:31:57 +02:00
79ac8d7344 compression: don't use tokio::main in doctest
because we have no rt feature enabled

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:31:26 +02:00
99add1733c compression: style changes
use where clauses where the parameter list is short enough
to become a single line

easier to read

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:31:10 +02:00
d4a09de520 compression: fmt
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:08:44 +02:00
fa5373c5c0 compression: clone_from_slice -> copy_from_slice
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:07:33 +02:00
e7e4411f44 proxmox-compression: add streaming zstd encoder
similar to our DeflateEncoder, it takes a Stream and implements Stream
itself, so that we can use it as an adapter for async API calls

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:02:54 +02:00
d107d0b2eb proxmox-compression: add async tar builder
inspired by tar::Builder, but limited to the things we need and using
AsyncRead+AsyncWrite instead of the sync variants.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 09:02:51 +02:00
ac50b068de bump proxmox-schema to 1.3.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 08:20:35 +02:00
e734143380 bump proxmox-schema dependency to 1.3.1 for streaming attribute
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 08:20:27 +02:00
8291d9ed81 schema: bump api macro to 1.0.2 for the streaming attribute
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 08:19:00 +02:00
97c5095486 bump proxmox-router dependency to 1.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 08:17:08 +02:00
922d61d276 adapt to the new ApiHandler variants
namely 'StreamingSync' and 'StreamingAsync'
in rest-server by using the new formatter function,
and in the debug binary by using 'to_value'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 08:13:40 +02:00
cd4e485600 proxmox-rest-server: OutputFormatter: add new format_data_streaming method
that takes the data in form of a `Box<dyn SerializableReturn + Send>`
instead of a Value.

Implement it in the json and extjs formatters by starting a thread that
streams the serialized data via a `BufWriter<SenderWriter>`, and use
the Receiver side as a stream for the response body.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-13 08:13:36 +02:00
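
A rough, self-contained sketch of the pattern described in the commit above (not the actual proxmox-rest-server or SenderWriter code): serialization runs on a separate thread into a Write adapter backed by a channel, while the receiving side consumes the produced chunks as they arrive, e.g. as a response body. std::sync::mpsc and serde_json are only used for the illustration here.

    use std::io::Write;
    use std::sync::mpsc::{sync_channel, SyncSender};

    // Minimal stand-in for a SenderWriter: a blocking `Write` impl that
    // pushes buffers into a channel.
    struct ChannelWriter(SyncSender<Vec<u8>>);

    impl Write for ChannelWriter {
        fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
            self.0
                .send(buf.to_vec())
                .map_err(|_| std::io::Error::new(std::io::ErrorKind::BrokenPipe, "receiver hung up"))?;
            Ok(buf.len())
        }
        fn flush(&mut self) -> std::io::Result<()> {
            Ok(())
        }
    }

    fn main() {
        let (tx, rx) = sync_channel::<Vec<u8>>(16);

        // Serialize on a separate thread; the data never has to exist as one
        // in-memory Value or String.
        let producer = std::thread::spawn(move || {
            let data: Vec<u64> = (0..1_000).collect();
            let writer = std::io::BufWriter::new(ChannelWriter(tx));
            serde_json::to_writer(writer, &data).expect("serialization failed");
        });

        // The receiver side would normally become the response body stream;
        // here we just count the bytes as they arrive.
        let total: usize = rx.iter().map(|chunk| chunk.len()).sum();
        println!("streamed {} bytes", total);

        producer.join().unwrap();
    }
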
bcf6abaa0d bump proxmox-api-macro to 1.0.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:27:11 +02:00
6ce3a73681 bump proxmox-router to 1.2.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:26:07 +02:00
930bb59d84 proxmox-router: depend on proxmox-async 0.4.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:24:27 +02:00
61d6541ce2 router: deduplicate some code
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:23:05 +02:00
ca3b25869c proxmox-api-macro: add 'streaming' option
to generate the `Streaming` variants of the ApiHandler

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:23:05 +02:00
f585722aad proxmox-router: add new ApiHandler variants for streaming serialization
they should behave like their normal variants, but return a
`Box<dyn SerializableReturn + Send>` instead of a value. This is useful
since we do not have to generate the `Value` in-memory, but can
stream the serialization to the client.

We cannot simply use a `Box<dyn serde::Serialize>`, because that trait
is not object-safe and thus cannot be used as a trait-object.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:23:05 +02:00
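
To illustrate the object-safety point from the commit above, here is a small, self-contained sketch (not the actual SerializableReturn trait) of the usual workaround: define an object-safe trait with a concrete serialization method and blanket-implement it for every serde::Serialize type, so values can be returned as boxed trait objects. serde/serde_json and the trait name ErasedSerialize are assumptions for the example.

    use serde::Serialize;

    // `serde::Serialize` itself is not object-safe (its `serialize` method is
    // generic over the serializer), so we wrap it in a trait that is.
    trait ErasedSerialize {
        fn to_json(&self, out: &mut dyn std::io::Write) -> serde_json::Result<()>;
    }

    impl<T: Serialize> ErasedSerialize for T {
        fn to_json(&self, out: &mut dyn std::io::Write) -> serde_json::Result<()> {
            serde_json::to_writer(out, self)
        }
    }

    fn api_call() -> Box<dyn ErasedSerialize + Send> {
        // Any Serialize + Send value can be returned without building a Value first.
        Box::new(vec![1u32, 2, 3])
    }

    fn main() -> serde_json::Result<()> {
        let result = api_call();
        let mut body = Vec::new();
        result.to_json(&mut body)?;
        println!("{}", String::from_utf8_lossy(&body));
        Ok(())
    }
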
2c9272945e proxmox-router: add SerializableReturn Trait
this will be useful as a generic return type for api calls which
must implement Serialize.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:23:05 +02:00
27c8106d7b bump proxmox-async to 0.4.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:22:59 +02:00
9471ba9969 proxmox-async: add SenderWriter helper
this wraps around a tokio Sender for Vec<u8>, but implements a blocking
write. We can use this as an adapter for something that only takes a
writer, and can read from it asynchronously

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-04-12 14:05:14 +02:00
9661defb0f tree wide: some stylistic clippy fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-11 08:14:28 +02:00
4cdeee64dc sys: make acl constants rustfmt safe
there's not much better one can do here..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 17:39:31 +02:00
0a651e00a9 sys: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 17:39:31 +02:00
3a3dd296cc schema: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:42:09 +02:00
ae9d6e255c lang: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:41:44 +02:00
0eeb0dd17c http: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:41:21 +02:00
05cad8926b router: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:40:39 +02:00
0ec1c684ae shared memory: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:37:38 +02:00
6f8173f67a tfa: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:34:41 +02:00
4554034d32 time: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:34:04 +02:00
800cf63a8a uuid: rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:34:01 +02:00
d3b387f1a7 update gitignore
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-10 12:29:03 +02:00
6b06cc6839 rest server: log rotation: refactor and comment improvements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-07 14:04:18 +02:00
5eaab7864a rest server: log rotation: fix off-by-one for max_days
The entries in a file go from the oldest end-time in the first line to
the newest end-time in the last line. So, just because the first line
is older than the cut-off time, the remaining ones don't necessarily
have to be old enough too. What we can know for sure is that rotations
older than the currently checked one of the task archive are
definitely up for deletion.

Another possibility would be to check the last line, but scanning
backwards is more expensive/complex while only being an actual
improvement in a very specific edge case (it's more likely to have a
mixed time-cutoff vs. task-log-file boundary than that the two are
aligned).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-07 12:58:32 +02:00
bd8fd62de4 rest-server: add option to rotate task logs by 'max_days' instead of 'max_files'
and use it with the configurable: 'task_log_max_days' of the node config

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-06 17:12:49 +02:00
6c856eed5e rest-server: cleanup_old_tasks: improve error handling
by not bubbling up most errors and continuing on instead. This avoids
stopping the whole cleanup just because e.g. one directory was missing.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-04-06 17:10:02 +02:00
2bda552b55 rest server: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-04-06 16:55:39 +02:00
f04eb949d1 tfa: serde tools: improve variance and dropck
`FoldSeqVisitor` doesn't actually own a `T` and therefore
cannot drop a `T`, we only use it via the `Fn(&mut Out, T)`,
so use `fn(T)` in the `PhantomData` to keep `T`
contravariant.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-22 12:31:54 +01:00
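
A self-contained sketch of the PhantomData trick from the commit above (the struct and method names below are made up, not the proxmox-tfa serde tools): a visitor that folds items of type T but never owns one, so `PhantomData<fn(T)>` is used to keep T contravariant and to tell the drop checker that no T is dropped here.

    use std::marker::PhantomData;

    // Visitor that folds items of type `T` into `Out` but never owns a `T`:
    // `PhantomData<fn(T)>` keeps `T` contravariant and does not make the
    // drop checker assume a `T` is dropped here (unlike `PhantomData<T>`).
    struct FoldVisitor<T, Out, F: Fn(&mut Out, T)> {
        acc: Out,
        fold: F,
        _phantom: PhantomData<fn(T)>,
    }

    impl<T, Out, F: Fn(&mut Out, T)> FoldVisitor<T, Out, F> {
        fn new(acc: Out, fold: F) -> Self {
            Self { acc, fold, _phantom: PhantomData }
        }

        fn push(&mut self, item: T) {
            (self.fold)(&mut self.acc, item);
        }

        fn finish(self) -> Out {
            self.acc
        }
    }

    fn main() {
        let mut visitor =
            FoldVisitor::new(Vec::new(), |out: &mut Vec<u32>, item: u32| out.push(item));
        visitor.push(1);
        visitor.push(2);
        println!("{:?}", visitor.finish());
    }
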
6221d86c64 schema: add another test case
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-09 14:29:53 +01:00
d7283d5aeb schema: don't accept unterminated quoted strings
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-09 13:41:09 +01:00
cdf4326e43 schema: factor out string verify fn and improve docs
We'll need to name the type when we add more perl bindings.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-04 14:51:16 +01:00
c01fbafae0 bump proxmox-schema dep to 1.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-04 09:50:21 +01:00
6e4a912bea bump schema to 1.3.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-04 09:47:40 +01:00
1e0c04a4ba drop param_bail test
it's not actually testing anything useful

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-04 09:44:51 +01:00
7c7e99dca1 proxmox-schema: add convenience macros for ParameterError
with two variants:

(expr, expr) => assumes that the second is an 'Error'
(expr, (tt)+) => passes the tt through anyhow::format_err

also added tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-03-04 09:43:01 +01:00
ec5ff23d70 make property_string module public
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-03-01 14:57:45 +01:00
fca5218749 schema: add const fn unwrap_*_schema/format methods
'unwrap_' because they will panic, and as `const fn` since panicking
in const fn is now possible

Note that const evaluation will only be triggered when
actually used in const context, so to ensure *compile time*
checks, use something like this:

    const FOO_SCHEMA: &AllOfSchema =
        SomeType::API_SCHEMA.unwrap_all_of_schema();
    then_use(FOO_SCHEMA);

or to use the list of enum values of an enum string type
with compile time checks:

    const LIST: &'static [EnumEntry] =
        AnEnumStringType::API_SCHEMA
            .unwrap_string_schema()
            .unwrap_format()
            .unwrap_enum_format();
    for values in LIST {
        ...
    }

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-22 09:35:06 +01:00
98811ba9f1 bump proxmox-metrics to 0.1.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-22 09:00:39 +01:00
77ea32cc5a metrics: bump async dep to 0.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-22 08:59:37 +01:00
53aa06e411 bump proxmox-async dep to 0.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:25:37 +01:00
6fe5357ce9 bump proxmox-lang dep to 1.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:24:24 +01:00
f1681d4b83 use io_format_err, io_bail, io_err_other from proxmox-lang
and move the comment from the local io_bail in pbs-client/src/pxar/fuse.rs
to the only use

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:24:13 +01:00
13408babad depend on new 'proxmox-compression' crate
the compression utilities live there now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:23:43 +01:00
1e75baecb0 bump proxmox-async to 0.4.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:20:13 +01:00
04b3bdb6cd proxmox-compression: update d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:17:01 +01:00
b8bf6a5c81 split out compression code into new crate 'proxmox-compression'
this removes quite a few dependencies of proxmox-async

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[set proxmox-lang dep to 1.1]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 14:10:53 +01:00
d663ff328a formatting fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 13:49:48 +01:00
57d052af36 workspace: set proxmox-lang dep version to 1.1
to ensure the error macros are available

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 13:45:46 +01:00
d8ecb87358 bump proxmox-lang to 1.1.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 13:45:10 +01:00
fc4e02a34f lang: remove io_assert
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 13:41:59 +01:00
d4b4115400 move io error helpers to proxmox-lang
this removes proxmox_sys as a dependency for proxmox-async

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-21 13:35:14 +01:00
46580ee3e9 bump proxmox-schema to 1.2.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-17 12:06:15 +01:00
46541feecf support quoted strings in property strings
This allows free form text to exist within property strings,
quoted, like:
    key="A value with \"quotes, also commas",key2=value2
or also:
    "the value for a default_key",key2=value2

And drop ';' as a key=value separator since those are meant
for arrays inside property strings...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-17 09:36:43 +01:00
39eac6280f schema: FromIterator lifetime fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-14 11:20:48 +01:00
c43ac0a64c schema: impl FromIterator for ParameterError
for where we also have Extend impls

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-14 09:46:20 +01:00
706d966c87 schema: bump edition to 2021
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-14 09:45:38 +01:00
fb27e132e7 rest-server: bump schema to 1.2 and use convenience methods
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 14:09:45 +01:00
ae0ee80f43 bump proxmox-schema to 1.2.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 14:08:30 +01:00
8a60e6c8d0 schema: rustfmt
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 14:05:05 +01:00
a5ba444a5b schema: ParameterError convenience
for ease of use implement these traits for ParameterError:
    * From<(&str, Error)>
    * From<(String, Error)>
    * Extend<(String, Error)>
    * Extend<(&str, Error)>
    * IntoIterator<Item = (String, Error)>

and add the following methods:
    * fn into_inner(self) -> Vec<(String, Error)>;

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 14:01:57 +01:00
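
A short usage sketch of the conversions listed in the commit above, assuming `anyhow` and `proxmox-schema` as dependencies and that `ParameterError` is re-exported at the crate root; the field names are made up.

    use anyhow::format_err;
    use proxmox_schema::ParameterError;

    fn main() {
        // From<(&str, Error)>
        let mut err = ParameterError::from(("port", format_err!("not a number")));

        // Extend<(&str, Error)>
        err.extend(vec![("host", format_err!("must not be empty"))]);

        // into_inner() gives back the collected (field, error) pairs
        for (field, cause) in err.into_inner() {
            eprintln!("parameter '{}': {}", field, cause);
        }
    }
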
68d22d4888 proxmox-rest-server: add missing 'derive' feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 13:57:48 +01:00
70142e607d proxmox-http: drop 'mut' on specialized request methods
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 08:56:34 +01:00
86f3c90763 proxmox-tfa: fully deserialize TfaChallenge
otherwise clients cannot use this...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 08:56:34 +01:00
e5a43afe10 proxmox-tfa: make TfaChallenge members public
rust based *clients* may want/need access to it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-11 08:56:34 +01:00
bb7018e183 misc clippy fixes
the trivial ones ;)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-08 14:57:16 +01:00
9ba2092d1b proxmox-async: rustfmt (again)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-08 14:52:42 +01:00
09d1344d61 proxmox-async: another clippy fixup
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-08 14:33:50 +01:00
ca563a8cfd misc clippy fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2022-02-08 14:28:44 +01:00
9539fbde1c proxmox-async: clippy fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-08 14:26:46 +01:00
0b90f8d802 api-macro: fix "Forgerty" typo
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
2022-02-07 15:38:39 +01:00
5cc4ce3b4d http: websocket: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:16:36 +01:00
1edb52411e http: websocket: drop Text frame auto-detection from docs
was forgotten in commit 232d87531e when
dropping the bogus frame auto detection

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:15:49 +01:00
42b6f4331f http: websocket: avoid modulo for power of 2
even for the small cases this can matter

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:12:07 +01:00
c70d98c90c tfa: fix hyperlink in doc comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:06:02 +01:00
425b52586e http: websocket: rustfmt and small cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:05:45 +01:00
170564dd77 http: websocket: doc wording cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-04 17:04:50 +01:00
4826ff99d8 proxmox-http: websocket: fix comment about callback
this was once a callback in an early version, but it changed to a
channel and the comment was never updated

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-02-04 16:10:31 +01:00
138f32e360 metrics: cleanup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-03 13:40:07 +01:00
645b2ae89b rest: add cookie_from_header helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-03 13:12:02 +01:00
48e047cefc proxmox-metrics: re-bump version for first upload
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-02-03 13:11:38 +01:00
3745f36ab1 metrics: use builder pattern for adding tags
Rather than going from a list of `(&str, &str)` tuples to a
`HashMap<String, String>`, add a `.tag()` builder method
and use `Cow` behind the scenes to efficiently allow the
caller to choose between a static literal and a `String`
value.

Previously the methods forced `&str` slices and then
always-copied those into `String`s even if the caller could
just *move* it.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 15:23:22 +01:00
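
A stripped-down sketch of the builder pattern with Cow described in the commit above (not the proxmox-metrics types; the struct name is made up): a `.tag()` method that accepts both `&'static str` literals and owned `String`s without forcing a copy.

    use std::borrow::Cow;

    #[derive(Default)]
    struct MeasurementBuilder {
        tags: Vec<(Cow<'static, str>, Cow<'static, str>)>,
    }

    impl MeasurementBuilder {
        // Accepts &'static str (no allocation) as well as String (moved, not copied).
        fn tag(
            mut self,
            name: impl Into<Cow<'static, str>>,
            value: impl Into<Cow<'static, str>>,
        ) -> Self {
            self.tags.push((name.into(), value.into()));
            self
        }
    }

    fn main() {
        let host = String::from("pve-node-1"); // owned value, moved into the builder
        let m = MeasurementBuilder::default()
            .tag("object", "host") // static literals, borrowed
            .tag("host", host);
        for (k, v) in &m.tags {
            println!("{}={}", k, v);
        }
    }
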
e325f4a0d8 metrics: more doc fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 14:30:14 +01:00
c609a58086 doc fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 14:14:09 +01:00
41862eeb95 bump proxmox-async to 0.3.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 13:21:55 +01:00
8bf293bfc5 proxmox-metrics: implement metrics server client code
influxdb (udp + http(s)) only for now

general architecture looks as follows:

the helper functions influxdb_http/udp start a tokio task and return
a Metrics struct that can be used to send data and wait for the tokio
task. If the struct is dropped, the task is canceled.

so it would look like this:
  let metrics = influxdb_http(..params..)?;
  metrics.send_data(...).await?;
  metrics.send_data(...).await?;
  metrics.join?;

on join, the sending part of the channel will be dropped, thus
flushing the remaining data to the server

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[renamed proxmox_async::io::udp -> proxmox_async::net::udp]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 12:56:21 +01:00
de891b1f76 proxmox_async: rustfmt
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 12:55:01 +01:00
807a70cecc proxmox_async: move io::udp to net::udp
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 12:54:39 +01:00
9ebf24b4f8 proxmox-async: add udp::connect() helper
so that we do not always have to check the target IP address family manually

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-02-02 12:43:48 +01:00
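
An illustrative sketch of what such a helper does (using std's blocking UdpSocket rather than the tokio one in proxmox-async; the target address and payload are made up): pick the local wildcard address matching the target's address family before connecting.

    use std::net::{SocketAddr, UdpSocket};

    // Bind to the wildcard address of the matching family, then connect.
    fn udp_connect(target: SocketAddr) -> std::io::Result<UdpSocket> {
        let local: SocketAddr = if target.is_ipv4() {
            "0.0.0.0:0".parse().unwrap()
        } else {
            "[::]:0".parse().unwrap()
        };
        let socket = UdpSocket::bind(local)?;
        socket.connect(target)?;
        Ok(socket)
    }

    fn main() -> std::io::Result<()> {
        let socket = udp_connect("127.0.0.1:8089".parse().unwrap())?;
        socket.send(b"example,tag=1 value=1")?;
        Ok(())
    }
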
e428920a15 proxmox-sys: add FileSystemInformation struct and helper
code mostly copied from proxmox-backup's 'disk_usage'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-02-01 12:26:14 +01:00
131f3d9471 proxmox-sys: make some structs serializable
we already depend on serde anyway, and this makes gathering structs a
bit more comfortable

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-02-01 12:26:14 +01:00
ff132e93c6 rustfmt
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-01-20 10:12:02 +01:00
175648763d bump proxmox-async to 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-01-20 10:10:16 +01:00
d0a3e38006 drop RawWaker usage
this was also leaking a refcount before; that is fixed now

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2022-01-20 10:08:05 +01:00
1d72829310 proxmox-async: bump version to 0.3.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2022-01-12 15:54:06 +01:00
e4974b9891 fix #3618: proxmox-async: zip: add conditional EFS flag to zip files
this flag marks the file names as 'UTF-8' encoded if they are valid UTF-8.

By default, the encoding of file names in zips is defined as code page
437, but we save the filenames as raw bytes (like in a Linux file
system).

For Linux systems this would not be a problem since most tools
simply use the filenames as bytes, but for the zip utility under
Windows it's important since NTFS uses UTF-16 for file names.

Filenames that are valid UTF-8 are decoded correctly everywhere
(Linux as UTF-8 bytes, Windows as the correct UTF-16 sequence); for
other filenames with a high bit set, it depends on the OS/software
what exactly happens. Some cases below:

* Windows + Built-in/7zip: decoded as CP437
* Debian + zip: Bytes taken as-is
* Debian + 7z: interpreted as Windows1252, decoded as UTF-8

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2022-01-11 06:31:58 +01:00
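
A small sketch of the conditional flag from the commit above: general purpose bit 11 is the "EFS"/language-encoding flag in the zip format, set only when the stored file name bytes are valid UTF-8. This shows only the flag computation, not the proxmox-async zip writer; the helper name is made up.

    /// General purpose bit 11: file name and comment are encoded in UTF-8.
    const EFS_FLAG: u16 = 1 << 11;

    fn general_purpose_flags(filename: &[u8]) -> u16 {
        let mut flags = 0u16;
        // Only claim UTF-8 when the raw bytes really are valid UTF-8;
        // otherwise readers fall back to their default (historically CP437).
        if std::str::from_utf8(filename).is_ok() {
            flags |= EFS_FLAG;
        }
        flags
    }

    fn main() {
        assert_eq!(general_purpose_flags("übersicht.txt".as_bytes()), EFS_FLAG);
        assert_eq!(general_purpose_flags(b"caf\xe9.txt"), 0); // Latin-1 bytes, not UTF-8
    }
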
6ad9248cf3 tree-wide: drop redundant clones
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 15:02:07 +01:00
647a0db882 tree-wide: fix needless borrows
found and fixed via clippy

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 13:55:33 +01:00
1cc9b91c4f async: track d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
121af8a06f proxmox-serde: track d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
bbc635375e shared-memory: track d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
3ee175c798 tfa: ignore uncompilable doctest
the doctest code uses non-public `fold`, up for re-evaluation if this
gets moved to proxmox-serde and made public..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
2fa269fb05 shared-memory: make tests integration tests
same as with sys/xattr: these touch files, so they should use a tmpdir
provided by cargo, which requires them to be integration tests.

if the tmpdir doesn't support O_TMPFILE (like overlayfs), the test is
not run (unfortunately, there is no way to indicate this via the test
result like with other test frameworks).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
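
For context, a minimal integration-test sketch of the cargo-provided temp directory mentioned in the commits above: the `CARGO_TARGET_TMPDIR` environment variable is only set when compiling integration tests and benchmarks, which is why such tests have to live under `tests/`. The file name and contents are made up.

    // tests/tmpfile.rs  (integration test; unit tests would not see this env var)
    use std::io::Write;

    #[test]
    fn writes_into_cargo_tmpdir() {
        // Provided by cargo for integration tests and benchmarks only.
        let dir = std::path::Path::new(env!("CARGO_TARGET_TMPDIR"));
        let path = dir.join("example.txt");

        let mut file = std::fs::File::create(&path).expect("create test file");
        file.write_all(b"hello").expect("write test file");

        assert_eq!(std::fs::read(&path).expect("read back"), b"hello".to_vec());
    }
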
540fb905c2 sys: make xattr tests integration tests
these touch files, so should use the cargo-provided tmp dir, but that is
only available to benchmarks and integration tests, not unit tests.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
e8cb382442 add missing library dependencies
without these, the generated d/control files are incomplete and builds
fail on clean systems.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
d363fb2bee switch to new schema verify methods
the deprecated ones only forward to the new ones anyway..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-30 11:51:08 +01:00
2d9fbc02ab schema: fix deprecation warnings in tests
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-17 08:07:02 +01:00
b28f0d820b time: fix tests
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-17 08:04:12 +01:00
8393bcb268 bump regex dep to 1.5
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 11:25:02 +01:00
049972844e cleanup schema function calls
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 11:25:02 +01:00
88b56894c7 bump proxmox-schema to 1.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 11:25:02 +01:00
241dbcff16 schema: bump regex dep to 1.5
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 11:16:00 +01:00
e865ac59f3 bump proxmox-schema to 1.1.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 11:07:30 +01:00
28c0ede638 schema: deny unsafe_op_in_unsafe_fn
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 10:17:41 +01:00
efe492034e schema: make verification functions methods
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-16 10:16:55 +01:00
fbd82c81d1 proxmox-router: fix glob-import of anyhow
will break usage of `Result::Ok()` with anyhow 1.0.49+, as that
added a new `Ok` helper, so a glob-import would make it shadow the
core one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-12-13 08:13:13 +01:00
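
A small illustration of the shadowing issue (the anyhow items are real, the surrounding functions are made up): with `use anyhow::*;`, a plain `Ok(...)` in a function returning a non-anyhow `Result` suddenly resolves to `anyhow::Ok` and no longer type-checks, so explicit imports are used instead.

    // With `use anyhow::*;` the free function `anyhow::Ok` (added in 1.0.49)
    // would shadow the prelude's `Result::Ok` below, turning the return value
    // of parse_port into `anyhow::Result<u16>` and breaking its signature.
    // Importing only what is needed avoids that:
    use anyhow::{bail, Error};

    fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
        let port = s.parse::<u16>()?;
        Ok(port) // still the plain `Result::Ok` variant
    }

    fn check_port(s: &str) -> Result<u16, Error> {
        let port = parse_port(s)?;
        if port == 0 {
            bail!("port must not be 0");
        }
        Ok(port)
    }

    fn main() -> Result<(), Error> {
        println!("{}", check_port("8080")?);
        Ok(())
    }
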
61cd0ac2ba proxmox-sys: fix glob-import of anyhow
will break usage of `Result::Ok()` with anyhow 1.0.49+, as that
added a new `Ok` helper, so a glob-import would make it shadow the
core one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-12-13 07:46:34 +01:00
5f75b37301 schema: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:53:18 +01:00
fd39f876dc shared-memory: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:52:58 +01:00
1185458719 serde: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:52:39 +01:00
dddfa1164b tfa: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:51:22 +01:00
a774958239 io: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:51:15 +01:00
d871d6849b api-macro: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:51:09 +01:00
5e490dd7a0 uuid: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:50:27 +01:00
d851078eae shared-memory: clippy fixes (docs)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:49:23 +01:00
179515c5b2 http: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:45:52 +01:00
59986f1195 sys: another minor clippy fix
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:43:28 +01:00
b213dbb7c8 sys: formatting
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:42:53 +01:00
c280f73793 sys: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-12-07 11:42:53 +01:00
e888fa5181 proxmox-uuid: fix wasm32 build
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-12-03 09:33:02 +01:00
b39e6ac669 bump proxmox-uuid to version 1.0.1-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-12-03 09:21:17 +01:00
0dc3bcd1a5 proxmox-uuid: implement uuid on target wasm 2021-12-03 09:17:20 +01:00
165fa05290 Allow to compile on wasm32 target
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-12-03 08:52:54 +01:00
259e4b1441 update proxmox-time to version 1.1.2-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-12-02 10:38:06 +01:00
6bdf5085dc proxmox-time: calendar-events: parse 'UTC' timezone into calendarevent
like we do in PVE. This breaks the (newly added) compute_next_event
method of CalendarEvent, but we can keep compatibility with the
deprecated function

adapt the tests so that they add a ' UTC' by default
(so they can run reliably on any machine)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-02 10:32:53 +01:00
50dc2daddb fixup changelog entry
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 09:12:30 +01:00
6bb932e604 use nix::unistd::fchown
instead of re-implementing it, now that we depend on >=0.19

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 09:04:55 +01:00
1db9a5bc0e clippy: allow manual_range_contains
we use it quite often in this module, and it's more readable when split.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 09:04:55 +01:00
9c9b5c02b4 clippy: collapse match/if let/..
best viewed with `-w` ;)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 09:01:52 +01:00
e303ad8605 clippy: misc fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 09:00:52 +01:00
b1c2250000 clippy: use matches!
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 08:59:27 +01:00
a81b2672d8 clippy: remove unnecessary reference taking
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-02 08:58:10 +01:00
0a0f3906b5 proxmox-router: bump to 1.1.1
for current anyhow compat

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-12-01 13:22:26 +01:00
63cecf8a69 bump proxmox-time version to 1.1.1-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-12-01 07:18:44 +01:00
c77ab2c7e5 proxmox-time: time-span: implement FromStr
and deprecate 'parse_time_span'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:14:33 +01:00
42420d3a5e proxmox-time: calendar_events: implement FromStr
and deprecate 'parse_calendar_event'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:14:20 +01:00
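
The two commits above replace the free-standing parse functions with standard FromStr impls. A short usage sketch, assuming proxmox-time as a dependency; the spec strings "daily" and "2h30m" are only meant as examples of the syntax, not a definitive reference.

    use proxmox_time::{CalendarEvent, TimeSpan};

    fn main() {
        // previously: parse_calendar_event("daily")
        let event = "daily".parse::<CalendarEvent>();
        // previously: parse_time_span("2h30m")
        let span = "2h30m".parse::<TimeSpan>();
        println!("event ok: {}, span ok: {}", event.is_ok(), span.is_ok());
    }
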
a96e9fb724 proxmox-time: calendar-events: make compute_next_event a method
and deprecated the standalone function

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:14:06 +01:00
032787a6a3 proxmox-time: lib.rs: rustfmt
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:13:45 +01:00
676146fd90 proxmox-time: move tests from time.rs to test.rs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:13:36 +01:00
22b3388500 proxmox-time: move TimeSpan into time_span.rs
and related parsing functions

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:13:23 +01:00
6a680aac55 proxmox-time: move CalendarEvent into calendar_events.rs
and all relevant parsing functions as well

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:13:08 +01:00
83cf350f04 proxmox-time: daily_duration.rs: rustfmt
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:12:51 +01:00
78e1e8ce09 proxmox-time: move parse_daily_duration to daily_duration.rs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 07:12:35 +01:00
f61ee1372f proxmox-time: split DateTimeValue into own file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 06:55:51 +01:00
07cc21bd5a proxmox-time: move WeekDays into own file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 06:55:32 +01:00
a104c8fc41 proxmox-time: move common parse functions to parse_helpers
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 06:55:04 +01:00
8e0fc66dfe proxmox-time: calendar-events: make hour optional
to be compatible with our perl calendar events, we have to make the hour
optional; in that case we select every hour, so 'X' is the same as writing '*:X'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 06:54:32 +01:00
8480b7b4ff proxmox-time: calendar-events: implement repeated ranges
so that we can have e.g. '7..17/2:00' as timespec

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-12-01 06:53:55 +01:00
fc80f519f4 api-macro: add #[updater(type = "...")]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-30 17:21:20 +01:00
e461be1c9f api-macro: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-30 17:02:25 +01:00
60fa521095 time: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 14:56:31 +01:00
36e064d73a io: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 14:51:26 +01:00
eac7ebfc55 sys: add back d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 11:35:39 +01:00
30e99fef5b bump proxmox-sys to 0.2.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 11:33:42 +01:00
35ecf9a551 sys: deprecate the identity macro
to be removed with the next major version bump

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 11:31:39 +01:00
9dcc229491 sys: depend on sortable-macro 0.1.2
drops the requirement for the identity macro

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 11:30:06 +01:00
3891953724 proxmox-sys: fix a warning in io_bail_last macro
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-29 11:29:31 +01:00
6679005b4f bump proxmox-tfa to 2.0.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-28 17:00:29 +01:00
d85ebbb464 tfa: clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-28 17:00:29 +01:00
637188d4ba tfa: make configured webauthn origin optional
and add a webauthn origin override parameter to all methods
accessing it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-28 17:00:29 +01:00
508c1e7c85 tfa: let OriginUrl deref to its inner Url, add FromStr impl
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-28 17:00:29 +01:00
df3e1c53d5 tfa: add WebauthnConfig::digest method
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-28 17:00:29 +01:00
21b56f0c79 tfa: fix typo in docs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-28 17:00:29 +01:00
248e888ae7 cleanup: avoid use anyhow::*
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-28 12:50:59 +01:00
0bb298b262 bump proxmox-sortable-macro to 0.1.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-26 14:11:30 +01:00
2ce2136744 sortable-macro: drop anyhow dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-26 14:10:14 +01:00
be7b330d8f sortable-macro: remove need for 'identity' macro
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-26 14:10:14 +01:00
cdf8220676 bump proxmox-io version to 1.0.1-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-25 12:16:03 +01:00
10ad340322 bump proxmox-sys version to 0.2.1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-25 12:12:35 +01:00
41d3df8950 proxmox-io: imported pbs-tools/src/sync/std_channel_writer.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-25 11:21:31 +01:00
80df41a887 proxmox-sys: import pipe() function from pbs-tools/src/io.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-25 10:34:33 +01:00
93625e4f87 update to proxmox-sys 0.2 crate
- imported pbs-api-types/src/common_regex.rs from old proxmox crate
- use hex crate to generate/parse hex digest
- remove all reference to proxmox crate (use proxmox-sys and
  proxmox-serde instead)

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:32:27 +01:00
d357ce2070 remove proxmox crate (no longer used)
Most functionality is now in proxmox-sys. The common-regex.rs is
moved back into proxmox-backup-server, because it is only used there.
Serde code is now in new proxmox-serde crate.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
82245339b8 use new proxmox-sys 0.2.0 for all crates
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
dace74a556 bump proxmox-sys version to 0.2.0
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
194b028789 proxmox-serde: add new crate with code from proxmox/src/tools/serde.rs 2021-11-24 10:00:38 +01:00
b06807532e proxmox-sys: add sortable-macro feature and remove it from proxmox
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
ee4e56e372 proxmox-sys: moved nodename from proxmox crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
ec3965fdad proxmox-sys: fix regression tests 2021-11-24 10:00:38 +01:00
21686c99e6 proxmox-sys: fixup worker task log macros 2021-11-24 10:00:38 +01:00
6efbe4e6e8 proxmox-sys: imported proxmox/src/tools/systemd.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
ba1f59c098 proxmox-sys: imported proxmox/src/tools/email.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
0806020ebf proxmox-sys: improve dev docs
And move WorkerTaskContext to top level.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
e011964f81 proxmox-sys: imported pbs-tools/src/command.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
4c7bd0ee50 proxmox-sys: imported pbs-tools/src/acl.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
66aea897b6 proxmox-sys: imported pbs-tools/src/xattr.rs 2021-11-24 10:00:38 +01:00
d98ed51fa8 proxmox-sys: move file_get_non_comment_lines to src/fs/file.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
92caf51634 proxmox-sys: split xattr code into extra file
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
d0b7e1e299 proxmox-sys: imported pbs-tools/src/fs.rs to src/fs/read_dir.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
32b69176dd proxmox-sys: imported proxmox tools/sys
And split some files into smaller parts.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-24 10:00:38 +01:00
4b1cb9f9b3 bump proxmox-tfa to 1.3.2-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-22 13:30:51 +01:00
54e97d35c1 fix u2f context instantiation
don't use the appid for the origin if an origin was
specified

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-22 13:25:49 +01:00
4c66ea2789 d/control and Cargo.toml bumps
* pin-utils isn't used anymore
* proxmox-sys version should also be tracked in Cargo.toml

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-22 10:56:36 +01:00
c0312f3717 bump proxmox-sys version to 0.1.2
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 10:35:45 +01:00
4d158ec1b3 proxmox-sys: fix test for wrong logrotate path
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-22 10:35:45 +01:00
19c29ab9b2 clippy fixes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-22 09:07:29 +01:00
9da1062f82 add Mmap::assume_init
to convert Mmap<MaybeUninit<T>> to Mmap<T>

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-22 09:06:46 +01:00
15819cdcfc depend on proxmox-async 0.2 2021-11-20 17:14:02 +01:00
ffbf58cad3 bump proxmox-async version to 0.2.0 2021-11-20 17:07:52 +01:00
cab125297b proxmox-async: improve dev docs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-20 17:07:52 +01:00
5bd54b4d9b proxmox-async: move AsyncChannelWriter to src/io
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-20 17:07:52 +01:00
7b7247fa80 proxmox-async: move TokioWriterAdapter to blocking
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-20 16:38:36 +01:00
6c4982bd7c proxmox-async: remove duplicate src/stream/wrapped_reader_stream.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-20 16:15:05 +01:00
781b5161bd proxmox-async: split stream.rs into separate files
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-20 16:12:55 +01:00
fa2032c7aa proxmox-async: split blocking.rs into separate files 2021-11-20 15:58:04 +01:00
4a07f14565 proxmox-rest-server: remove pbs-tools dependency 2021-11-19 18:06:54 +01:00
66b1f90f97 use new proxmox-async crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 18:03:22 +01:00
4413002f22 proxmox-async: add copyright file
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:43:19 +01:00
b63229bf1d proxmox-async: imported pbs-tools/src/zip.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:42:11 +01:00
64dca3c869 proxmox-async: imported pbs-tools/src/compression.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:42:11 +01:00
112b239d50 proxmox-async: imported pbs-tools/src/tokio/tokio_writer_adapter.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:42:11 +01:00
743f7df2a5 proxmox-async: imported pbs-tools/src/stream.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:42:11 +01:00
a2d62a2555 proxmox-async: imported pbs-tools/src/broadcast_future.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:42:11 +01:00
e1f0eb4aec proxmox-async: start new crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 16:42:11 +01:00
45645d9aee proxmox-sys: add copyright file
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 15:48:18 +01:00
c08d4a173d tfa: remove unnecessary bound attribute
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-19 12:45:23 +01:00
5ecc7724e2 sys: update d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 12:12:01 +01:00
f3872d0a69 bump proxmox-tfa to 1.3.1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 12:05:48 +01:00
91932da15c tfa: bump webauthn-rs to 0.3
switch WebauthnConfig to use Url for the origin field, via a wrapper
type to make Updater and ApiType happy.

the two new Credential fields `verified` and `registration_policy` are
always set to `false` and `Discouraged`, to get the same behaviour as
before.

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 12:05:48 +01:00
148950fd17 tfa: properly wrap webauthn credentials
this (external) struct gets new fields in webauthn-rs 0.3, so let's
properly wrap / convert it instead of just aliasing, else deserializing
will fail.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 12:05:48 +01:00
d49e6a362e bump proxmox-sys to 0.1.1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 12:05:48 +01:00
aedc5197f5 sys: add missing file
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 12:05:48 +01:00
6b2d0e7427 bump proxmox-http to 0.5.6
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 11:46:21 +01:00
57d31a8683 bump proxmox to 0.15.4
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 11:46:21 +01:00
dc14d03171 all crates: bump base64 dep to 0.13
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-19 11:46:21 +01:00
ef69d1aeb9 use new proxmox-sys crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 11:06:35 +01:00
20661f014d proxmox-sys: imported pbs-tools/src/task.rs (as worker_task_context.rs)
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 10:05:06 +01:00
202641757e proxmox-sys: imported pbs-tools/src/logrotate.rs (with various cleanups)
- new: CreateOptions instead of owner string
- new: returns Result instead of Option
- new: add max_files option
- remove new_with_user()

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 09:28:59 +01:00
8ac4c949bb proxmox-sys: imported pbs-tools/src/process_locker.rs
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 09:28:52 +01:00
840154f61b proxmox-sys: add new crate
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-19 09:28:32 +01:00
246ce4e801 bump proxmox version to 0.15.3-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-18 16:54:44 +01:00
a24b72c4de use proxmox::tools::fd::fd_change_cloexec from proxmox 0.15.3
Depend on proxmox 0.15.3

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-18 13:43:41 +01:00
9f4c20f3d2 proxmox: add fd_change_cloexec 2021-11-18 13:06:34 +01:00
3a378a34bb bump proxmox-time version to 1.1.0 2021-11-17 13:03:36 +01:00
6871232791 proxmox-time: remove custom error type
None of the functions we call returns a reasonable error number anyway.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-17 13:03:36 +01:00
4a5dbd2129 proxmox-time: added time-related functions from the proxmox-systemd crate.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-17 13:03:36 +01:00
5b2e8b4c66 Revert "lang: get offsetof const fn ready"
This reverts commit 8f89f9ad60.

generates broken code in release builds on current rustc
1.55
2021-11-17 10:07:08 +01:00
bbdfd8ede9 bump proxmox-tfa to 1.3.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 09:30:27 +01:00
313d0a6b88 proxmox-tfa: import tfa api from proxmox-perl-rs as api feature
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-17 08:39:56 +01:00
41d0cef377 bump proxmox-shared-memory to version 0.1.1-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 11:32:16 +01:00
ece92bde29 proxmox-shared-memory: depend on libc 0.2.107
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 11:29:53 +01:00
e3a14098f7 bump proxmox-http to 0.5.5-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 11:05:55 +01:00
81e959548b proxmox-http: impl RateLimiterVec
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 11:01:03 +01:00
c5d396cdb9 bump proxmox-shared-memory to version 0.1.0-2
And add missing debian files.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 07:29:13 +01:00
42002bb5c9 proxmox-shared-memory: avoid compiler warnings
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 07:29:13 +01:00
a1728a72c8 proxmox-shared-memory: remove debug println
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-15 07:14:49 +01:00
b8724f4952 bump proxmox-http version to 0.5.4-1 2021-11-14 08:19:15 +01:00
b9a1d62e47 proxmox-http: RateLimit - remove average_rate
Instead, add a method to return overall traffic.
2021-11-14 08:15:42 +01:00
100848de10 bump proxmox-http version to 0.5.3-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-13 17:38:41 +01:00
13276cc619 proxmox-http: use repr(C) for RateLimiter
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-13 17:38:18 +01:00
8734d0c2f9 proxmox-http: use SharedRateLimit trait object for RateLimitedStream
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-13 17:38:10 +01:00
937d1a6095 proxmox-http: define a RateLimit trait
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-13 17:38:00 +01:00
564703b195 proxmox-shared-memory: improve regression tests
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 18:20:44 +01:00
09dc3a4abc proxmox-shared-memory: create debian package
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 18:20:23 +01:00
0ef72957a6 proxmox-shared-memory: implement helper to init subtypes
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 18:20:06 +01:00
c8251d4d24 proxmox-shared-memory: add magic number test
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 18:19:49 +01:00
9828acd2ef proxmox-shared-memory: shared memory and shared mutex implementation 2021-11-12 18:19:36 +01:00
956e7041fe bump proxmox version to 0.15.2-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 17:39:38 +01:00
c4cff1278f rest: make successful-ticket auth log a debug one to avoid syslog
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-12 11:10:12 +01:00
8bcd8e5357 impl proxmox::tools::fs::CreateOptions::apply_to()
Split out code to apply CreateOptions.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-12 09:24:27 +01:00
28e1d4c342 bump proxmox-http version to 0.5.2-1 2021-11-12 09:24:17 +01:00
4b3e0e331c implement Service for RateLimitedStream
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 10:15:40 +01:00
745c4f37dd bump proxmox-schema version to 1.0.1-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 09:58:32 +01:00
2f221df863 RateLimiter: add update_rate method 2021-11-10 09:51:08 +01:00
0c27d5da17 RateLimitedStream: implement peer_addr 2021-11-10 09:51:08 +01:00
e0a9982dd1 RateLimiter: avoid panic in time computations 2021-11-10 09:51:08 +01:00
e0305f724b RateLimitedStream: allow periodic limiter updates 2021-11-10 09:51:08 +01:00
00ca0b7fae HttpsConnector: use RateLimitedStream
So that we can limit used bandwidth.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 09:51:08 +01:00
ded24b3f4c RateLimitedStream: implement poll_write_vectored
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 09:51:08 +01:00
c94ad247b1 Implement a rate limiting stream (AsyncRead, AsyncWrite)
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-11-10 09:51:08 +01:00
e848148f5c websocket: adapt for client connection
previously, this was only used for the server side handling of web
sockets. by making the mask part of the WebSocket struct and making some
of the fns associated, we can re-use this for client-side connections
such as in proxmox-websocket-tunnel.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 15:45:56 +01:00
e0df53e793 bump proxmox-tfa to 1.2.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-09 13:27:59 +01:00
0156b3fe03 proxmox-tfa: add version field to u2f::AuthChallenge
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-09 13:27:59 +01:00
83934e59e6 proxmox-tfa: make u2f::AuthChallenge Clone + Debug
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-11-09 13:27:59 +01:00
4128c5fdb5 updater: impl UpdaterType for Vec
by replacing the whole Vec.

if we ever want to support adding/removing/modifying elements of a Vec
via the Updater, we'd need to extend it anyway (or use a custom
updater).

Suggested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-09 09:18:08 +01:00
bc38ff7878 bump proxmox-tfa to 1.1.0-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-29 15:07:09 +02:00
1554465d45 proxmox-tfa: add Totp::digits
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-29 14:31:39 +02:00
4f4fa80f2f bump proxmox to 0.15.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-21 12:44:12 +02:00
bdc7e9d145 proxmox: cleanup files on fsync errors
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-21 12:42:23 +02:00
a0cfb9c20d bump proxmox-http version to 0.5.1-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:37:30 +02:00
e9bea7b7ed use new fsync parameter to replace_file and atomic_open_or_create
Depend on proxmox 0.15.0 and proxmox-openid 0.8.1

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:28:32 +02:00
8ae297c8d2 proxmox: bump d/control
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:27:43 +02:00
bb089965c9 bump proxmox version to 0.15.0-1
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:08:34 +02:00
b960bc3a4a add fsync parameter to replace_file and atomic_open_or_create
The fsync is required for consistency after power failure, so it should
be set when writing config files or otherwise important data.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2021-10-21 07:06:04 +02:00
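
A self-contained sketch of why the fsync matters for config files (std-only; not the proxmox replace_file implementation, and the file path is made up): write to a temporary file, fsync it, then atomically rename it over the target, so a power failure leaves either the old or the complete new contents.

    use std::io::Write;
    use std::path::Path;

    fn replace_file_sketch(path: &Path, data: &[u8], fsync: bool) -> std::io::Result<()> {
        let tmp_path = path.with_extension("tmp");

        let mut file = std::fs::File::create(&tmp_path)?;
        file.write_all(data)?;
        if fsync {
            // make sure the data hit the disk before the rename makes it visible
            file.sync_all()?;
        }
        drop(file);

        // atomic on POSIX file systems: readers see either the old or the new file
        // (full durability of the rename would also need an fsync of the directory)
        std::fs::rename(&tmp_path, path)?;
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        replace_file_sketch(Path::new("/tmp/example.cfg"), b"keep = true\n", true)
    }
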
b87aa76b64 rest-server: use hashmap for parameter errors
our UI expects a map here with 'field: "error"'. This way it can mark
the relevant field as invalid and correctly show the complete error
message

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-21 06:32:23 +02:00
ffd1d5f378 uuid: bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-20 12:50:26 +02:00
1f03763c3b uuid: fixup debcargo.toml to include uuid-dev dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-20 12:49:40 +02:00
8398620669 tfa: u2f: bytes_as_base64{,url} weren't meant to be public
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-18 14:31:04 +02:00
0cf4129204 use complete_file_name from proxmox-router 1.1 2021-10-13 14:10:02 +02:00
48ef839043 bump proxmox-router version to 1.1.0-1 2021-10-13 12:28:16 +02:00
417b7159d2 add filename completions helper (moved from pbs-tools)
Depend on 'nix' now.
2021-10-13 12:28:16 +02:00
17adc570db bump proxmox-borrow to 1.0.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-13 10:41:01 +02:00
087bf31567 borrow: update to ManuallyDrop::take
and fixup into_boxed_inner along the way

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-13 10:39:12 +02:00
d18292192d bump proxmox-api-macro to 1.0.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 14:51:35 +02:00
5988a1adf1 drop automatically_derived attribute for now
new rustc seems to *sometimes* complain about it

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 14:49:11 +02:00
f63ce12b66 lang: prepare c_str for const fns with 1.56
provides an API-compatible, const-fn-compatible c_str alternative
working on 1.56

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 14:29:03 +02:00
8f89f9ad60 lang: get offsetof const fn ready
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 14:14:25 +02:00
af33a97547 lang: deprecate ops::ControlFlow
as we now have rustc 1.55

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-12 14:13:59 +02:00
4ccd6256a8 update proxmox-http to 0.5 for the split
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 12:34:14 +02:00
336dab0177 update proxmox crate to the current split
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 12:34:14 +02:00
09046671ed update to first proxmox crate split
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 11:58:49 +02:00
1d24555b28 drop u2f-api file
used to be used by examples at some point

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 11:39:59 +02:00
f35dbbd651 add proxmox-section-config crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 11:39:59 +02:00
41f3fdfeb9 add proxmox-schema and proxmox-router crates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 11:39:59 +02:00
01a8b6f1bf add proxmox-io crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-11 10:07:53 +02:00
91f59f9f59 add proxmox-lang crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 15:22:17 +02:00
9b6fe4aceb add proxmox-time crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 15:22:17 +02:00
7db0a3c6df add proxmox-borrow crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 15:22:17 +02:00
bd67ccc1b3 add proxmox-uuid crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 15:22:17 +02:00
77dc52c047 add proxmox-tfa crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 15:22:17 +02:00
2859858f59 fix systemd::escape_unit's hex encoding
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 15:22:17 +02:00
6ad1bcaf89 bump proxmox dependency to 0.14.0 and proxmox-http to 0.5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-08 11:18:22 +02:00
fb6823b54b rest-server: add cleanup_old_tasks
this is a helper that removes task log files that are not referenced
by the task archive anymore

it gets the oldest task archive file, gets the first endtime (the
oldest) and removes all files in the taskdir where the mtime is older
than that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-08 06:38:52 +02:00
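A rough std-only sketch of that cleanup; in the real helper the cutoff comes from parsing the oldest task archive entry:

    use std::fs;
    use std::path::Path;
    use std::time::SystemTime;

    // Remove task log files whose mtime is older than the oldest end-time still
    // referenced by the task archive (`cutoff` stands in for that value here).
    fn cleanup_old_tasks(taskdir: &Path, cutoff: SystemTime) -> std::io::Result<()> {
        for entry in fs::read_dir(taskdir)? {
            let entry = entry?;
            if entry.metadata()?.modified()? < cutoff {
                fs::remove_file(entry.path())?;
            }
        }
        Ok(())
    }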
b89c56b96e start checklist for adding crates in README.rst
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-06 10:37:17 +02:00
3ffe2ebc64 proxmox-rest-server: use new ServerAdapter trait instead of callbacks
Async callbacks are a PITA, so we now pass a single trait object which
implements check_auth and get_index.
2021-10-05 11:13:10 +02:00
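A simplified sketch of such an adapter trait; the actual trait in proxmox-rest-server takes more parameters and has different return types:

    use std::future::Future;
    use std::pin::Pin;

    use anyhow::Error;
    use hyper::header::HeaderMap;
    use hyper::{Body, Response};

    // Boxed futures instead of async trait methods (not stable at the time),
    // so the adapter can be passed around as a single `dyn ServerAdapter` object.
    pub trait ServerAdapter: Send + Sync {
        fn check_auth<'a>(
            &'a self,
            headers: &'a HeaderMap,
        ) -> Pin<Box<dyn Future<Output = Result<String, Error>> + Send + 'a>>;

        fn get_index(&self) -> Pin<Box<dyn Future<Output = Response<Body>> + Send>>;
    }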
2c09017045 proxmox-rest-server: pass owned RestEnvironment to get_index
This way we avoid pointers with lifetimes.
2021-10-05 11:12:53 +02:00
591a32ecd4 proxmox-rest-server: cleanup, access api_auth using a method 2021-10-05 11:12:53 +02:00
f189895cef fix deprecated use of std::u16 module
integer primitive type modules are deprecated, use
associated constants instead

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-10-04 15:03:50 +02:00
4348c807f7 rest: daemon: group systemd FFI together
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:45:34 +02:00
62b226e9c4 rest: daemon: sd notify: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:45:34 +02:00
7fac98519c rest: daemon: sd notify barrier: avoid barging in between SystemdNotify enum and systemd_notify
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:45:34 +02:00
83f15413fd rest: daemon: sd notify barrier: allow caller to set timeout
else it's rather too subtle and not a nice interface considering that
we only want to have a thin wrapper for sd_notify_barrier..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:44:20 +02:00
947f4c78a7 rest: daemon: comment why using a systemd barrier is important for main PID handover
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:44:20 +02:00
5027727fc5 rest-server/daemon: use sd_notify_barrier for service reloading
until now, we manually polled the systemd service state during a reload
so that the sd_notify messages get processed in the correct order
(RELOAD(old) -> MAINPID(old) -> READY(new))

with systemd >= 246 there is now 'sd_notify_barrier' which
blocks until systemd processed all prior messages

with that change, the daemon does not need to know the service name anymore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-02 11:44:20 +02:00
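For illustration, a minimal NOTIFY_SOCKET sender and the message ordering the reload relies on; the actual barrier needs file-descriptor passing and is only indicated in the comments:

    use std::os::unix::net::UnixDatagram;

    // Minimal NOTIFY_SOCKET sender; ignores abstract-namespace sockets for brevity.
    fn sd_notify(state: &str) -> std::io::Result<()> {
        let path = std::env::var("NOTIFY_SOCKET")
            .map_err(|_| std::io::Error::new(std::io::ErrorKind::NotFound, "no NOTIFY_SOCKET"))?;
        let sock = UnixDatagram::unbound()?;
        sock.send_to(state.as_bytes(), path)?;
        Ok(())
    }

    fn reload_handover(child_pid: u32) -> std::io::Result<()> {
        // Old process: announce the reload and the upcoming main PID change.
        sd_notify("RELOADING=1")?;
        sd_notify(&format!("MAINPID={}", child_pid))?;
        // Here the real code calls its sd_notify_barrier wrapper (systemd >= 246),
        // so the child's READY=1 cannot overtake the MAINPID message above.
        Ok(())
    }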
89766c4f95 proxmox-rest-server: make get_index async 2021-10-01 09:38:10 +02:00
58a6e5f512 proxmox-rest-server: add comment why ApiService needs to be 'pub' 2021-10-01 08:35:51 +02:00
2b023101f7 proxmox-rest-server: make check_auth async 2021-10-01 07:53:59 +02:00
a6c0ec35a3 proxmox-rest-server: fix spelling errors 2021-10-01 06:43:30 +02:00
be98d3156d proxmox-rest-server: improve ApiService docs 2021-09-30 17:18:47 +02:00
58eba821e6 proxmox-rest-server: start module docs 2021-09-30 13:49:29 +02:00
ad449a5780 rename CommandoSocket to CommandSocket 2021-09-30 12:52:35 +02:00
249aae1f05 drop fd_change_cloexec from proxmox-rest-server
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-09-30 12:43:22 +02:00
6d4e47fb09 proxmox-rest-server: improve docs
And rename enable_file_log to enable_access_log.
2021-09-30 12:29:15 +02:00
9cb2c97c77 proxmox-rest-server: improve docs
And renames abort_worker_async to abort_worker_nowait (avoid confusion,
because the function itself is not async).
2021-09-30 10:51:41 +02:00
50c62be82c proxmox-rest-server: cleanup FileLogger docs 2021-09-30 10:51:31 +02:00
f23aeff910 cleanup: move use clause to top 2021-09-30 08:42:37 +02:00
2ed2c0334c proxmox-rest-server: allow to catch SIGINT and SIGHUP separately
And make ServerState private.
2021-09-30 08:41:30 +02:00
93802ec2ef proxmox-rest-server: make Reloader and Reloadable private 2021-09-30 07:44:19 +02:00
abfac6738c proxmox-rest-server: improve logging
And rename server_state_init() into catch_shutdown_and_reload_signals().
2021-09-29 14:48:46 +02:00
5b72478077 proxmox-rest-server: avoid useless call to request_shutdown
Also avoid unsafe code.
2021-09-29 14:37:07 +02:00
aedc1db9e2 daemon: simplify code (make it easier to use) 2021-09-29 12:04:48 +02:00
a8c75df695 cleanup: make BoxedStoreFunc private
There is no need to export this type.
2021-09-29 09:55:43 +02:00
15dcfbf162 examples: add example for a simple rest server with a small api
show how to generally start a daemon that serves a rest api + index page

api calls are (prefixed with either /api2/json or /api2/extjs):
/		GET	listing
/ping		GET	returns "pong"
/items		GET	lists existing items
		POST	lets user create new items
/items/{id}	GET	returns the content of a single item
		PUT	updates an item
		DELETE	deletes an item

Contains a small dummy user/authinfo

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-29 09:48:47 +02:00
2ce3e5fb78 rest-server: use hypers AddrIncoming for proxmox-backup-api
this has a 'from_listener' (tokio::net::TcpListener) since hyper 0.14.5 in
the 'tcp' feature (we use 'full', which includes that; since 0.14.13
it is not behind a feature flag anymore).

this makes it possible to create a hyper server without our
'StaticIncoming' wrapper and thus makes it unnecessary.

The only other thing we have to do is to change the Service impl from
tokio::net::TcpStream to hyper::server::conn::AddrStream to fulfill the trait
requirements.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-29 09:38:40 +02:00
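A small sketch of the hyper 0.14 setup this enables; the bind address and handler are placeholders:

    use std::convert::Infallible;

    use hyper::server::conn::{AddrIncoming, AddrStream};
    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Response, Server};

    async fn serve() -> Result<(), Box<dyn std::error::Error>> {
        // Create the listener ourselves (e.g. one inherited across a daemon reload)...
        let listener = tokio::net::TcpListener::bind("127.0.0.1:8007").await?;
        // ...and hand it to hyper directly, no custom accept wrapper needed.
        let incoming = AddrIncoming::from_listener(listener)?;

        let make_svc = make_service_fn(|_conn: &AddrStream| async {
            Ok::<_, Infallible>(service_fn(|_req| async {
                Ok::<_, Infallible>(Response::new(Body::from("pong")))
            }))
        });

        Server::builder(incoming).serve(make_svc).await?;
        Ok(())
    }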
5bff2a1d4b add test for property string verification errors 2021-09-29 08:59:28 +02:00
d62d1e5707 move api schema tests into separate file 2021-09-29 08:25:23 +02:00
8e1d573844 add tests for schema verification errors 2021-09-29 08:17:53 +02:00
9ec9d1f9e6 ParameterError: construct XPath like string to identify nested properties 2021-09-28 12:34:08 +02:00
48ba0a2dd5 ExtJsFormatter: use ParameterError to correctly compute 'errors'
By default, 'errors' is now empty.

Depend on proxmox 0.13.5.
2021-09-28 10:19:55 +02:00
b5ea4f9bb2 bump proxmox version to 0.13.5-1 2021-09-28 09:52:27 +02:00
51db0d0f12 ParameterError: record parameter names 2021-09-28 09:52:27 +02:00
5a88aaf074 cli/text_table: calculate correct column width for unicode characters
When printing unicode text, a glyph can take up more (or less) space than
a single column. To handle that, use the 'unicode-width' crate which
calculates the width by the unicode standard.

This makes the text tables correctly aligned when printing unicode
characters (e.g. in a datastore/user/syncjob comment).

'unicode-width' is used itself in the rust compiler to format errors
(see e.g. the Cargo.toml in /compiler/rustc_errors of the rust git)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-28 08:05:28 +02:00
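The core of it, sketched with the unicode-width crate:

    use unicode_width::UnicodeWidthStr;

    // Display width of a table cell: counts terminal columns, not chars,
    // so wide glyphs (e.g. CJK) are padded correctly.
    fn display_width(cell: &str) -> usize {
        UnicodeWidthStr::width(cell)
    }

    fn pad_cell(cell: &str, column_width: usize) -> String {
        let padding = column_width.saturating_sub(display_width(cell));
        format!("{}{}", cell, " ".repeat(padding))
    }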
962553d252 proxmox-rest-server: cleanup formatter, improve docs
Use trait for OutputFormatter. This is functionally equivalent,
but more rust-like...
2021-09-28 07:45:50 +02:00
6e50c7aac3 WorkerTaskContext: add shutdown_requested() and fail_on_shutdown() 2021-09-24 12:04:31 +02:00
42dae7e1fb cleanup WorkerTaskContext 2021-09-24 11:39:30 +02:00
5a37cfd4c0 upid: remove arbitrary 128 max length for UPID
we can easily go beyond that with long datastore/remote names,
also because we 'systemd-encode' them, which means that every special
char takes up 4 bytes (e.g. '-' => '\x2d')

while we could just increase the length to say 256 or 512, I do not
really see the benefit of limiting this at all, since users cannot create
tasks with arbitrary names, and all other fields are generated from
other valid types (username, datastore, remote, etc.)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-24 11:08:59 +02:00
020c8e6980 cleanup worker task logging
In order to avoid name conflicts with WorkerTaskContext

- renamed WorkerTask::log to WorkerTask::log_message

Note: Methods have different function signatures

Also renamed WorkerTask::warn to WorkerTask::log_warning for
consistency reasons.

Use the task_log!() and task_warn!() macros more often.
2021-09-24 10:34:11 +02:00
7a4bb6000e rename TaskState to WorkerTaskContext 2021-09-24 10:33:49 +02:00
85ec987a48 move src/server/h2service.rs into proxmox-rest-server crate 2021-09-24 10:28:17 +02:00
e8c124fe1b move worker_task.rs into proxmox-rest-server crate
Also moved pbs-datastore/src/task.rs to pbs-tools, which now depends on 'log'.
2021-09-24 10:28:17 +02:00
3cec879463 bump proxmox version to 0.13.4-1 2021-09-23 12:06:48 +02:00
ed9bdab576 add UPID api type 2021-09-23 12:04:15 +02:00
fc2253b3e8 add systemd escape_unit and unescape_unit 2021-09-23 12:04:15 +02:00
eb1f23c588 use UPID and systemd helpers from proxmox 0.13.4 2021-09-23 12:01:43 +02:00
0999494564 schema: add extra info to array parameters
it's not immediately obvious that they can be specified more than once
otherwise.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 07:58:52 +02:00
5f5a5194de schema: print item type-text instead of <array>
this is only used for CLI synopsis/usage strings, the API viewer already
prints the full type text in a correct format. the old variant was also
rather misleading, since on the CLI we don't pass in an array, but each
item as its own parameter.

noticed this while working on the pull/sync filtering series, but it
affects quite a lot of stuff, among other things the Updater and
Deleteable CLI, e.g. from `man proxmox-backup-manager`:

>       --delete <array>
>                     List of properties to delete.

vs.

>       --delete disable|validation-delay
>                     List of properties to delete.

But some of them might only have <string> as the item type text,
which is not much nicer but also not really worse.

The whole "List of .." is confusing anyway, but not easily solvable,
since the description is used for
- API dump/viewer (where it is a list/array of ..)
- usage message/man pages (where it's a parameter that gives a single
  element, but it might be passed in multiple times to construct an
  array)

Also, for some common occurrences, the item description is too
generic, and it's not possible to override the description for
external types with the current api macro.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
 [ Thomas: Added more context that was in the diffstat of the patch ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 07:56:09 +02:00
1d60abf9f1 move src/server/rest.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
7c1bd58a8a rest server: cleanup auth-log handling
Handle auth logs the same way as access log.
- Configure with ApiConfig
- CommandoSocket command to reload auth-logs "api-auth-log-reopen"

Inside API calls, we now access the ApiConfig using the RestEnvironment.

The openid_login api now also logs failed logins and returns http_err!(UNAUTHORIZED, ..)
on failed logins.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
48b7a61a21 rest server: do not use pbs_api_types::Authid
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
2ea6f8d01d rest server: return UserInformation from ApiAuth::check_auth
This needs impl UserInformation for Arc<CachedUserInfo>, which is implemented
in proxmox 0.13.2

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
efeccc11cc make get_index an ApiConfig property (callback)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
3d73529460 rest server: simplify get_index() method signature
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
cc67441662 move normalize_uri_path and extract_cookie to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
fbe0de85d0 move src/tools/compression.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
dc28aa1ae7 move src/server/formatter.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
ba04dfb9b2 move src/server/environment.rs to proxmox-rest-server crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
51d84f9847 move src/tools/daemon.rs to proxmox-rest-server workspace
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
ca7a26166f move ApiConfig, FileLogger and CommandoSocket to proxmox-rest-server workspace
ApiConfig: avoid using pbs_config::backup_user()
CommandoSocket: avoid using pbs_config::backup_user()
FileLogger: avoid using pbs_config::backup_user()
- use atomic_open_or_create_file()

Auth Trait: moved definitions to proxmox-rest-server/src/lib.rs
- removed CachedUserInfo parameter
- return user as String (not Authid)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
2e426f9df2 start new proxmox-rest-server workspace
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-21 08:46:41 +02:00
3cd221c163 bump proxmox version to 0.13.3-1 2021-09-21 06:45:28 +02:00
b6b9118ce3 atomic_open_or_create_file: add support for OFlag::O_EXCL 2021-09-20 08:24:44 +02:00
b46a0720ae atomic_open_or_create_file: catch unsupported flag OFlag::O_DIRECTORY 2021-09-20 08:13:28 +02:00
c7e1a1f3c8 update proxmox/debian/control 2021-09-20 08:12:18 +02:00
297a076478 bump proxmox version to 0.13.2-1 2021-09-16 11:03:27 +02:00
fb2b7a4e93 impl <T: UserInformation> UserInformation for std::sync::Arc<T> 2021-09-16 10:21:50 +02:00
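A sketch of such a delegating blanket impl; the trait shape is reduced to a single method here:

    use std::sync::Arc;

    // Simplified stand-in for the real UserInformation trait.
    pub trait UserInformation {
        fn is_superuser(&self, userid: &str) -> bool;
    }

    // Arc<T> simply forwards to the inner T, so an Arc<CachedUserInfo> can be
    // used wherever an `impl UserInformation` is expected.
    impl<T: UserInformation> UserInformation for Arc<T> {
        fn is_superuser(&self, userid: &str) -> bool {
            (**self).is_superuser(userid)
        }
    }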
254ef7ff82 proxmox: generate_usage_str: don't require static lifetimes
this prevents us from using it under certain conditions and it's
actually not necessary, so drop them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-09-15 08:09:24 +02:00
c983a1d623 avoid type re-exports 2021-09-14 08:35:43 +02:00
908d87f6cb more api type cleanups: avoid re-exports 2021-09-10 12:25:32 +02:00
bc6ad1dfce move user configuration to pbs_config workspace
Also moved memcom.rs and cached_user_info.rs
2021-09-10 07:09:04 +02:00
93dfd7d8e9 start new pbs-config workspace
moved src/config/domains.rs
2021-09-02 12:58:20 +02:00
6a52b3b8e7 proxmox: bump d/control for api-macro 0.5.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 10:49:27 +02:00
36ed21c548 bump proxmox to 0.13.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 10:47:28 +02:00
cb96de1c4d bump proxmox-api-macro to 0.5.1-1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 10:46:05 +02:00
14bd81fbc4 doc: update UpdaterType documentation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-30 08:21:08 +02:00
76641b0a72 api-macro: allow external schemas in 'returns' specification
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-26 13:34:23 +02:00
bf84e75603 api: move ReturnType from router to schema
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-26 11:39:55 +02:00
0f2caafc4e bump proxmox to 0.13.0-1 and proxmox-api-macro to 0.5.0
and proxmox-http to 0.4.0... urgh

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 09:41:17 +02:00
2df0b8efb3 tools::serde: support Option<String> in string_as_base64
This will make Updater derivations go more smoothly.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-25 09:41:17 +02:00
8c125364e4 websocket: fix doc test
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-24 15:19:47 +02:00
917ce00dd6 rustfmt
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-13 13:31:57 +02:00
744d69c2ab more updater cleanups
* Updatable is now named UpdaterType
* UPDATER_IS_OPTION is now assumed to always be true
    While an updater can be a non-optional struct, being an
    updater, all its fields are also Updaters, so after
    expanding all levels of nesting, the resulting list of
    fields can only contain optional values.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-13 11:11:27 +02:00
cb9d57b453 put API_SCHEMA variable into ApiType trait
This way we can assign `API_SCHEMA` constants to `Option`
types.

Here's why:

The api-macro generated code uses `T::API_SCHEMA` when
building ObjectSchemas.

For Updaters we replace `T` with
  `<T as Updatable>::Updater`

This means for "simple" wrappers like our `Authid` or
`Userid`, the ObjectSchema will try to refer to
  `<Authid as Updatable>::Updater::API_SCHEMA`
which resolves to:
  `Option<Authid>::API_SCHEMA`
which does not exist, for which we cannot add a normal
`impl` block to add the schema variable, since `Option` is
not "ours".

But we now have a blanket implementation of `ApiType` for
`Option<T> where T: ApiType` which just points to the
original `T::API_SCHEMA`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-12 10:15:13 +02:00
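In code, the blanket implementation described above looks roughly like this (Schema reduced to a stand-in type):

    // Stand-in for the real proxmox-schema Schema type.
    pub struct Schema;

    pub trait ApiType {
        const API_SCHEMA: Schema;
    }

    pub struct Authid;

    impl ApiType for Authid {
        const API_SCHEMA: Schema = Schema;
    }

    // An Option<T> has no schema of its own; it simply reuses T's schema, so
    // `<Option<Authid> as ApiType>::API_SCHEMA` resolves even though Option is not "ours".
    impl<T: ApiType> ApiType for Option<T> {
        const API_SCHEMA: Schema = T::API_SCHEMA;
    }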
783cbcb499 fixup schema entry for updaters with explicit types
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-10 14:17:20 +02:00
34020ea3d6 change updater derivation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-10 12:03:29 +02:00
017b81712e api macro: assume that field types are api types by default
#[api]
    struct Foo {
        field: Bar,
    }

does not require the use of
    #[api(
        properties: {
            field: {
                type: Bar,
            },
        },
    )]

anymore

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-10 11:42:24 +02:00
8ebcd68a2c refactor serde parsing for later reuse
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-10 11:42:24 +02:00
cb04553d47 proxmox: d/control: commit version update changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 09:53:38 +02:00
907a9f344d proxmox: bump version to 0.12.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 09:48:04 +02:00
3fd900d2b2 tools fs: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-22 09:44:17 +02:00
11dccc40b5 fs: link fallback for atomic create
Some file systems don't support renameat2's RENAME_NOREPLACE
flag (eg. ZFS); at the same time, some other file systems
don't support hardlinks via link (eg. vfat, cifs), so we now
try both: first the rename (since it's more efficient), then
link+unlink for the rest.

If both fail, the file system is simply not supported for
our purposes anyway...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-22 09:41:04 +02:00
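A sketch of that fallback chain, written against nix 0.24-era wrappers; the matched error codes and flags are illustrative, the crate's real code differs in detail:

    use std::path::Path;

    use nix::errno::Errno;
    use nix::fcntl::{renameat2, RenameFlags};
    use nix::unistd::{linkat, unlinkat, LinkatFlags, UnlinkatFlags};

    // Move a fully written temp file to its final name without replacing an
    // existing file, falling back to link + unlink where RENAME_NOREPLACE is
    // unsupported by the file system.
    fn atomic_create(tmp: &Path, target: &Path) -> nix::Result<()> {
        match renameat2(None, tmp, None, target, RenameFlags::RENAME_NOREPLACE) {
            Ok(()) => Ok(()),
            Err(err) if err == Errno::EINVAL || err == Errno::EOPNOTSUPP => {
                // link() fails with EEXIST if the target already exists, which
                // gives the same "create only" semantics as RENAME_NOREPLACE.
                linkat(None, tmp, None, target, LinkatFlags::NoSymlinkFollow)?;
                unlinkat(None, tmp, UnlinkatFlags::NoRemoveDir)
            }
            Err(other) => Err(other),
        }
    }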
2d9a018854 proxmox: bump api-macro dependency
cyclic stuff is annoying...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 17:45:58 +02:00
c75e706006 proxmox-api-macro: bump version to 0.4.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 17:45:29 +02:00
44d571a8e0 proxmox-http: bump version to 0.3.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 17:45:29 +02:00
5cbddad172 update versions in generated control files 2021-07-20 15:45:45 +02:00
82d7a1cdbf bump proxmox version to 0.12.0-1 2021-07-20 15:45:45 +02:00
08c2fc4acb open_file_locked: add options parameter (CreateOptions)
To be able to set file permissions and ownership.

This is a breaking change.
2021-07-20 15:45:45 +02:00
a818e74a23 new helper atomic_open_or_create_file() 2021-07-20 15:45:45 +02:00
716101d660 move channel/stream helpers to pbs-tools
pbs_tools
  ::blocking: std/async wrapping with block_in_place
  ::stream: stream <-> AsyncRead/AsyncWrite wrapping

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-20 11:27:40 +02:00
893cc0455d buildsys: drop buster from upload target
we are not really compatible with pbs1, we need a stable-1 branch if we
need to backport something

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-20 08:59:59 +02:00
e8349f3d2e move client to pbs-client subcrate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-19 12:58:43 +02:00
1c81fc4cc1 rest: log response: avoid unnecessary mut on variable
a match expresses the fallback a bit more nicely and needs no mut,
which is always nice to avoid.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-11 13:05:19 +02:00
223db419a4 test fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-07 12:17:10 +02:00
6759d3ebaf split out pbs-buildcfg module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-07-06 12:00:14 +02:00
7bed6d5c31 REST: set error message extension for bad-request response log
We already send it to the user via the response body, but the
log_response does not have, nor wants to have FWIW, access to the
async body stream, so pass it through the ErrorMessageExtension
mechanism like we do elsewhere.

Note that this is not only useful for PBS API proxy/daemon but also
the REST server of the file-restore daemon running inside the restore
VM, and it really is *very* helpful to debug things there..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
3c2492f0ff REST: rust fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-03 21:34:03 +02:00
fa8194e0bb refactor send_command
- refactor the combinators,
- make it take a `&T: Serialize` instead of a Value, and
  allow sending the raw string via `send_raw_command`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-05-11 16:28:08 +02:00
610b147ba7 server/rest: fix new type ambiguity
basically the same as commit 5fb15b781e
Will be required once we get to use a newer rustc, at least the
client build for archlinux was broken due to this.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-22 21:24:44 +02:00
ac49fbd472 enable tape backup by default
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 12:31:56 +02:00
c80251ecca server/rest: add ApiAuth trait to make user auth generic
This allows switching the base user identification/authentication method
in the rest server. Will initially be used for single file restore VMs,
where authentication is based on a ticket file, not the PBS user
backend (PAM/local).

To avoid putting generic types into the RestServer type for this, we
merge the two calls "extract_auth_data" and "check_auth" into a single
one, which can use whatever type it wants internally.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-08 13:57:57 +02:00
1b2a34166d server: rest: collapse nested if for less indentation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-07 17:57:46 +02:00
27fca61f1b server: rest: switch from fastest to default deflate compression level
I made some comparisons with bombardier[0]; the ones listed here are
30s looped requests with two concurrent clients:

[ static download of ext-all.js ]:
  lvl                              avg /   stdev  / max
 none        1.98 MiB  100 %    5.17ms /  1.30ms / 32.38ms
 fastest   813.14 KiB   42 %   20.53ms /  2.85ms / 58.71ms
 default   626.35 KiB   30 %   39.70ms /  3.98ms / 85.47ms

[ deterministic (pre-defined data), but real API call ]:
  lvl                              avg /   stdev  / max
 none      129.09 KiB  100 %    2.70ms / 471.58us / 26.93ms
 fastest    42.12 KiB   33 %    3.47ms / 606.46us / 32.42ms
 default    34.82 KiB   27 %    4.28ms / 737.99us / 33.75ms

The reduction is quite a bit better with default, but it's also slower, though
only when testing over an unconstrained network. For real world
scenarios where compression actually matters, e.g., when using a
spotty train connection, we will be faster again with better
compression.

A GPRS limited connection (Firefox developer console) requires the
following load (until the DOMContentLoaded event triggered) times:
  lvl        t      x faster
 none      9m 18.6s   x 1.0
 fastest   3m 20.0s   x 2.8
 default   2m 30.0s   x 3.7

So for the worst case, using slightly more CPU time on the server has a
tremendous effect on the client load time.

Using a more realistic example and limiting to "Good 2G" gives:

 none      1m  1.8s   x 1.0
 fastest      22.6s   x 2.7
 default      16.6s   x 3.7

16s is somewhat OK, >1m just isn't...

So, use default level to ensure we get bearable load times on
clients, and if we want to improve transmission size AND speed then
we could always use an in-memory cache; only a few MiB would be
required for the compressible static files we serve.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-07 17:57:42 +02:00
692de3a903 server/rest: compress static files
compress them on the fly
and refactor the size limit for chunking files

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
376d90679c server/rest: compress api calls
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
c2a1c87039 server/rest: add helper to extract compression headers
for now we only extract 'deflate'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
a06d3892fe tools/compression: add DeflateEncoder and helpers
implements a deflate encoder that can compress anything that implements
AsyncRead + Unpin into a file with the helper 'compress'

if the inner type is a Stream, it implements Stream itself, this way
some streaming data can be streamed compressed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
d28249c290 tools: add compression module
only contains a basic enum for the different compression methods

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-07 12:34:31 +02:00
7334107fae server/rest: drop now unused imports
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-01 11:53:13 +02:00
25a14e8eb6 server/rest: extract auth to separate module
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 11:26:28 +02:00
5fb15b781e server/rest: fix type ambiguity
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 12:02:30 +02:00
45a652c26f server/rest: rust format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-29 08:17:26 +02:00
9dbc5f47ca ui: enable experimental tape UI if tape.cfg exists 2021-03-03 09:02:02 +01:00
8635b165b2 rest: implement tower service for UnixStream
This allows anything that can be represented as a UnixStream to be used
as transport for an API server (e.g. virtio sockets).

A tower service expects an IP address as it's peer, which we can't
reliably provide for unix socket based transports, so just fake one.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-02-17 07:50:35 +01:00
2a16daafdd allow complex Futures in tower_service impl
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-26 09:53:55 +01:00
38df61a621 clippy: fix/allow needless_range_loop
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
ba81db7848 clippy: use *_or_else with function calls
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:23:54 +01:00
724e2f47f9 clippy: use strip_prefix instead of manual stripping
it's less error-prone (off-by-one!)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
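For example:

    // strip_prefix returns None when the prefix is absent; no manual index math.
    fn api_path(path: &str) -> Option<&str> {
        // before: if path.starts_with("/api2/") { Some(&path["/api2/".len()..]) } else { None }
        path.strip_prefix("/api2/")
    }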
28f3b0df9e clippy: remove unnecessary clones
and from::<T>(T)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-20 16:22:59 +01:00
d6874957e3 proxmox 0.10: adapt to moved ParameterSchema
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
e2e0339966 cleanup: remove unnecessary 'mut' and '.clone()'
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
c99f950310 tokio 1.0: update to new tokio-openssl interface
connect/accept are now happening on pinned SslStreams

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
d94119efc8 tokio 1.0: delay -> sleep
almost the same thing, new name(s), no longer Unpin

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-01-14 16:01:33 +01:00
33839410b9 api: tfa management and login
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-01-11 10:22:32 +01:00
8bb221663e adaptions for proxmox 0.9 and proxmox-api-macro 0.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-22 07:31:05 +01:00
f9378cad13 tools/daemon: improve reload behaviour
it seems that sometimes, the child process signal gets handled
before the parent process signal. Systemd then ignores the
child's signal (finished reloading) and only afterwards goes into
the reloading state because of the parent; this will never finish.

Instead, wait for the state to change to 'reloading' after sending
that signal in the parent, and only fork afterwards. This way
we ensure that systemd knows about the reloading before actually trying
to do it.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
2020-12-18 10:30:37 +01:00
6b0dabefd4 file logger: remove test.log after test as well
and a doc formatting fixup

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-30 14:13:21 +01:00
85b5be8133 rest: check for disabled token (user)
when authenticating a token, and not just when authenticating a
user/ticket.

Reported-By: Dominik Jäger <d.jaeger@proxmox.com>

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-11 12:21:29 +01:00
e6edbb5c3b daemon: rename method, endless loop, bail on exec error
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-11 10:14:01 +01:00
d75579790c daemon: add hack for sd_notify
sd_notify is not synchronous, iow. it only waits until the message
reaches the queue not until it is processed by systemd

when the process that sent such a message exits before systemd could
process it, it cannot be associated with the correct pid

so in case of reloading, we send a message with 'MAINPID=<newpid>'
to signal that it will change. If the old process now exits before
systemd knows this, it will not accept the 'READY=1' message from the
child, since it rejects the MAINPID change

since there is no (AFAICS) library interface to check the unit status,
we use 'systemctl is-active <SERVICE_NAME>' to check the state until
it is not 'reloading' anymore.

on newer systemd versions, there is 'sd_notify_barrier' which would
allow us to wait for systemd to have all messages from the current
pid to be processed before acknowledging to the child, but on buster
the systemd version is too old...

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-11 09:43:00 +01:00
df9b42db3f tools/daemon: fix reload with open connections
instead of await'ing the result of 'create_service' directly,
poll it together with the shutdown_future

if we reached that, fork_restart the new daemon, and await
the open future from 'create_service'

this way the old process still handles open connections until they finish,
while we already start a new process that handles new incoming connections

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-05 11:14:56 +01:00
5d7ae1f38c api: factor out auth logger and use for all API authentication failures
we have information here not available in the access log, especially
if the /api2/extjs formatter is used, which encapsulates errors in a
200 response.

So keep the auth log for now, but extend its use from ticket creation
calls to all authentication failures for API calls; this ensures one
can also fail2ban tokens.

Do that logging in a central place, which makes it simple but means
that we do not have the user ID information available to include in
the log.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
4031710b36 server: implement access log rotation with re-open via command socket
re-use the future we already have for task log rotation to trigger
it.

Move the FileLogger in ApiConfig into an Arc, so that we can actually
update it and have REST use the new one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
6e2e7e66c5 command socket: make create_control_socket private
this is internal for now, use the CommandoSocket struct
implementation, and ideally not a new one but the existing ones
created in the proxy and api daemons.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
3dd23fd3ba server: add CommandoSocket where multiple users can register commands
This is a preparatory step to replace the task control socket with it
and provide a "reopen log file" command for the rest server.

Kept it simple by disallowing registering new commands after the
socket gets spawned, this avoids the need for locking.

If we really need that we can always wrap it in a Arc<RWLock<..>> or
something like that, or even nicer, register at compile time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
67769843da tools: file logger: avoid some possible unwraps in log method
writing to a file can explode quite easily.
time formatting to rfc3339 should be more robust, but it has a few
conditions where it could fail, so catch that too (and only really
do it if required).

The writes to stdout are left as is, it normally is redirected to
journal which is in memory, and thus breaks later than most stuff,
and at that point we probably do not care anymore anyway.

It could make sense to actually return a result here..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
66b9170dda file logger: allow reopening file
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
95ac6d87c3 server/rest: accept also = as token separator
Like we do in Proxmox VE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:34:26 +01:00
7b2f5672c5 server/rest: use constants for HTTP headers
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:33:36 +01:00
e57660691d api tokens: add authorization method
and properly decode secret (which is a no-op with the current scheme).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 13:15:14 +01:00
971dc0e6bc REST: extract and handle API tokens
and refactor handling of headers in the REST server while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
ed512bc26f replace Userid with Authid
in most generic places. this is accompanied by a change in
RpcEnvironment to purposefully break existing call sites.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
9af79677b2 REST: rename token to csrf_token
for easier differentiation with (future) api_token

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 14:02:19 +02:00
32bb67cf60 build: bump nix dependency
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 12:12:33 +02:00
f10a722eda file logger: add option to make the backup user the log file owner
and use that in ApiConfig to avoid that it is owned by root if the
proxmox-backup-api process creates it first.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-19 10:37:26 +02:00
98f91a4d24 server/rest: also log user agent
makes it easy to see if a request is from a browser or the proxmox-backup-client
CLI

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:23:49 +02:00
287659dcda server/rest: implement request access log
reuse the FileLogger module in append mode.
As it implements write, which is not thread safe (mutable self), and
we use it in an async context, we need to serialize access using a
mutex.

Try to use the same format we do in pveproxy, namely the one which is
also used in apache or nginx by default.

Use the response extensions to pass up the userid, if we extract it
from a ticket.

The privileged and unprivileged daemons both log to the same file, to
have a unified view and avoid the need to handle more log files.
We avoid extra intra-process locking by reusing the fact that a write
smaller than PIPE_BUF (4k on linux) is atomic for files opened with
the 'O_APPEND' flag. For now the logged request path is not yet
guaranteed to be smaller than that, this will be improved in a future
patch.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:23:49 +02:00
1d07c62e74 tools file logger: fix example and comments
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:16:29 +02:00
3b2aaa1878 tools: file logger: use option struct to control behavior
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:48:36 +02:00
43ab000e92 server: rest: also log the query part of URL
As it is part of the request and we do so in our other products

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:41:05 +02:00
da8439b25e server: rest: implement max URI path and query length request limits
Add a generous limit now and return the correct error (414 URI Too
Long). Otherwise we could do pretty large GET requests, 64 KiB and
possibly bigger (at 64 KiB my simple curl test failed due to
shell/curl limitations).

For now allow 3072 characters as the combined length of URI path and
query.

This conforms with the HTTP/1.1 RFCs (e.g., RFC 7231, 6.5.12 and
RFC 2616, 3.2.1), which do not specify any limits, upper or lower, but
require that all server-accessible resources must be reachable without
getting 414, which is normally fulfilled as we have various length
limits in place for stuff which could be in a URI, e.g.:
 * user id: max. 64 chars
 * datastore: max. 32 chars

The only known problematic API endpoint is the catalog one, used in
the GUI's pxar file browser:
GET /api2/json/admin/datastore/<id>/catalog?..&filepath=<path>

The <path> is the encoded archive path, and can be arbitrarily long.

But, this is a flawed design, as even without this new limit one can
easily generate archives which cannot be browsed anymore, as hyper
only accepts requests with max. 64 KiB in the URI.
So rather, we should move that to a GET-as-POST call, which has no
such limitations (and would not need to base32 encode the path).

Note: This change was inspired by adding a request access log, which
profits from such limits as we can then rely on certain atomicity
guarantees when writing requests to the log.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:40:39 +02:00
e4bece494c server/rest: forward real client IP on proxied request
needs new proxmox dependency to get the RpcEnvironment changes,
adding client_ip getter and setter.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:36:54 +02:00
e30fd48080 server: rest: refactor code to avoid multiple log_response calls
The 'Ok::<_, Self::Error>(res)' type annotation was from a time where
we could not use async, and had a combinator here which needed
explicity type information. We switched over to async in commit
df52ba5e45 and, as the type annotation
is already included in the Future type, we can safely drop it.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-15 13:58:47 +02:00
009737844c code cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-15 13:58:47 +02:00
649ff6f67f server/REST: check auth: code cleanup, better variable names
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-12 18:39:45 +02:00
1fa5b1108d server/REST: make handle_request private
it's not used anywhere else, so do not suggest that it is

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-12 18:39:00 +02:00
4becd202c5 server: get index: make content-type non mutable
feels more idiomatic

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-12 13:36:45 +02:00
a049949d14 server/rest: code cleanup: use async
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-12 13:36:45 +02:00
16f05d6649 REST: don't print CSRF token
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-08 15:57:22 +02:00
d2b63c504a REST server: avoid hard coding world readable API endpoints
while we probably do not add much more to them, it still looks ugly.

If this was made so that adding a World readable API call is "hard"
and not done by accident, it rather should be done as a check at build
time. But, IMO, the API permission schema definitions are easy to
review, and not often changed/added - so any wrong World readable API
call will normally still be caught.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-05 08:29:43 +02:00
c6ab240333 rest server: cleanup use statements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-02 13:04:08 +02:00
dc12cf44aa avoid chrono dependency, depend on proxmox 0.3.8
- remove chrono dependency

- depend on proxmox 0.3.8

- remove epoch_now, epoch_now_u64 and epoch_now_f64

- remove tm_editor (moved to proxmox crate)

- use new helpers from proxmox 0.3.8
  * epoch_i64 and epoch_f64
  * parse_rfc3339
  * epoch_to_rfc3339_utc
  * strftime_local

- BackupDir changes:
  * store epoch and rfc3339 string instead of DateTime
  * backup_time_to_string now return a Result
  * remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir

- DynamicIndexHeader: change ctime to i64

- FixedIndexHeader: change ctime to i64
2020-09-15 07:12:57 +02:00
28aef354ca don't truncate DateTime nanoseconds
where we don't care about them anyway..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:10 +02:00
1b79d5d5a5 ui: add translation support
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
7917e89426 tools: rename extract_auth_cookie to extract_cookie
It does nothing specific to authentication..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
3c8cb5129e replace and remove old ticket functions
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:28:21 +02:00
906ea7a45b introduce Username, Realm and Userid api types
and begin splitting up types.rs as it has grown quite large
already

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-10 12:05:01 +02:00
df9109d493 bump proxmox to 0.3, cleanup http_err macro usage
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-29 09:38:36 +02:00
67294d4796 followup: server/state: rename task_count to internal_task_count
so that the relation with spawn_internal_task is made more clear

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-24 12:11:39 +02:00
8588c41f3c server/state: add spawn_internal_task and use it for websockets
is a helper to spawn an internal tokio task without it showing up
in the task list

it is still tracked for reload and notifies the last_worker_listeners

this enables the console to survive a reload of proxmox-backup-proxy

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-24 11:17:33 +02:00
054951682d server/rest: add console to index
register the console template and render it when the 'console' parameter
is given

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 12:06:38 +02:00
9993f4d099 server/config: add mechanism to update template
instead of exposing handlebars itself, offer a register_template and
a render_template ourselves.

render_template checks if the template file was modified since
the last render and reloads it when necessary

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-23 11:55:00 +02:00
ab08bd7f39 server: add path value to NOT_FOUND http error
Especially helpful for requests not coming from browsers (where the
URL is normally easy to find out).

Makes it easier to detect if one triggered a request with an old
client, or so..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-16 12:46:51 +02:00
c6f8eaf481 src/server/rest.rs: avoid compiler warning 2020-07-10 09:13:52 +02:00
991cc982c7 src/server/rest.rs: disable debug logs 2020-07-09 16:18:14 +02:00
52148f892d improve 'debug' parameter
instead of checking for '1' or 'true', check that it is there and not
'0' or 'false'. This allows simply using

https://foo:8007/?debug

instead of

https://foo:8007/?debug=1

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-26 09:12:14 +02:00
fa9323b5c4 src/tools/daemon.rs: reopen STDOUT/STDERR journald streams to get correct PID in logs 2020-06-22 13:06:53 +02:00
ef783bdbef tools::daemon: sync with child after MainPid message
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-22 10:58:04 +02:00
93686a9ad7 tools::daemon: fetch exe name in the beginning
We get the path to our executable via a readlink() on
"/proc/self/exe", which appends a " (deleted)" during
package reloads.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-22 10:31:54 +02:00
4e33373ca1 typo fixes all over the place
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-30 16:39:08 +02:00
3f633844fa depend on proxmox 0.1.31 - use Value to store result metadata 2020-05-18 09:57:35 +02:00
813aacde7e src/server/command_socket.rs: do not abort loop on client errors, allow backup gid 2020-05-07 09:27:33 +02:00
d0815e5e71 change index to templates using handlebars
using a handlebars instance in ApiConfig, to cache the templates
as long as possible, this is currently ok, as the index template
can only change when the whole package changes

if we split this in the future, we have to trigger a reload of
the daemon on gui package upgrade (so that the template gets reloaded)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-04-29 17:05:53 +02:00
d9d3da2b68 src/config/cached_user_info.rs: cache it up to 5 seconds 2020-04-18 08:49:20 +02:00
803e71103a switch from failure to anyhow
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-04-17 18:43:30 +02:00
5bbd4eef2d src/server/rest.rs: reduce delay for permission error to 500ms 2020-04-16 12:56:34 +02:00
a73a7c33b2 start impl. access permissions 2020-04-16 12:47:16 +02:00
d967838214 api: add list_domains 2020-04-09 11:36:45 +02:00
7726d660b6 src/server/rest.rs: use correct formatter 2020-03-26 12:54:20 +01:00
6b5dc96c7d bump proxmox crate to 0.1.7
The -sys, -tools and -api crates have now been merged into
the proxmox crate directly. Only macro crates are separate
(but still reexported by the proxmox crate in their
designated locations).

When we need to depend on "parts" of the crate later on
we'll just have to use features.

The reason is mostly that these modules had
inter-dependencies which really make them not independent
enough to be their own crates.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-01-21 13:48:37 +01:00
a185257e80 update to nix 0.16 2019-12-19 09:29:44 +01:00
4a08490b81 add CSS file for PBS ExtJS6 basic ui
some fitting rules copied over from PVE's ext6-pve.css file.
simply place it in the css subfolder where the proxmox-backup-gui.js
file is hosted and add a "css/" alias for that directory; the
formatter then uses the right content type with that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-12-17 11:20:32 +01:00
26fff0806a handle_static_file_download: move from and_then to await
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-12-17 08:56:55 +01:00
3e2c83471a api2: update for latest proxmox-api changes
- rename ApiFuture into ApiResponseFuture
- impl. ApiHandler::Async
2019-12-16 10:01:51 +01:00
9717c1b4eb update a chunk of stuff to the hyper release
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-12-13 11:24:41 +01:00
4e1e7b8738 src/server/formatter.rs: impl. new result attribute "active" 2019-12-07 15:29:42 +01:00
51ea723de3 rename ApiHandler::Async into ApiHandler::AsyncHttp 2019-11-23 09:03:21 +01:00
764bdf54cf src/server/rest.rs: simplify code 2019-11-22 18:44:14 +01:00
fe73df5be1 src/server/rest.rs: rename get_request_parameters_async to get_request_parameters 2019-11-22 17:24:16 +01:00
ac5b1c2805 src/server/rest.rs - only pass ObjectSchema to get_request_parameters_async() 2019-11-22 17:22:07 +01:00
b0e1e693d9 src/server/rest.rs: cleanup async code 2019-11-22 13:02:05 +01:00
4759ecac59 move src/api_schema/config.rs -> src/server/config.rs 2019-11-22 09:23:03 +01:00
ac21864dcf api/compat: drop more compat imports from api_schema.rs
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-11-21 14:36:28 +01:00
13f6a30f52 api/compat: drop api_handler submodule
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-11-21 14:18:41 +01:00
92ffe68022 api: BoxFut -> ApiFuture
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-11-21 14:16:37 +01:00
4da0705cc4 move api schema into proxmox::api crate
And leave some compat imports in api_schema.rs to get it to
build with minimal changes.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-11-21 14:14:54 +01:00
3de9361d12 use const api definitions 2019-11-21 13:32:09 +01:00
40acfdf04c avoid some clippy warnings 2019-10-26 11:42:05 +02:00
d26fde6986 avoid some clippy warnings 2019-10-25 18:44:51 +02:00
00d669295f clippy: use write_all in file logger
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-11 13:56:09 +02:00
5d6ebfb8dc update to tokio 0.2.0-alpha.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
149b9d997f src/server/state.rs: update to tokio alpha.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
a19a6f1b37 src/server/rest.rs: use tokio::timer::delay
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
9b7434ba15 src/tools/daemon.rs: switch to async
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
6e35cda060 src/server/state.rs: switch to async
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
df52ba5e45 src/server/rest.rs: switch to async
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
3a1a7cbdfc src/server/h2service.rs: switch to async
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
a079f39b40 src/server/command_socket.rs: switch to async
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-09-02 15:21:26 +02:00
0c3f5449d7 use new proxmox::tools::nodename 2019-08-03 17:06:23 +02:00
465990e139 update to nix 0.14, use code from proxmox:tools 2019-08-03 13:05:38 +02:00
fd40d69ae0 src/server/rest.rs: avoid unwrap 2019-07-03 12:00:43 +02:00
4ca8acb083 src/server/rest.rs: log peer address, use hyper MakeService 2019-07-03 11:54:35 +02:00
744903f874 daemon: remove last use of tools::read/write
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-07-01 10:39:13 +02:00
206d63a20b file download: avoid unnecessary copy 2019-06-28 07:07:52 +02:00
319e42552b src/server/h2service.rs: implement generic h2 service 2019-06-26 17:38:33 +02:00
a99f7ec987 tree-wide: use 'dyn' for all trait objects
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-06-07 13:13:48 +02:00
e8a799cf06 src/server/rest.rs: correctly verify json parameters 2019-05-24 09:10:59 +02:00
4dd886d9a7 src/server/rest.rs: improve error handling 2019-05-23 08:15:32 +02:00
27c2183ef4 src/server/rest.rs: do not log 1xx status codes as errors 2019-05-14 06:23:22 +02:00
3bbbece6a2 handle_async_api_request: put rpcenv into a Box
So that we can pass rpcenv into futures.
2019-05-09 18:01:24 +02:00
d5901112de src/server/formatter.rs: further cleanups and renaming ... 2019-05-09 13:28:26 +02:00
5b91995837 src/server/formatter.rs: rename format_result to format_data
To avoid confusion with the Rust Result type.
2019-05-09 13:15:15 +02:00
1b61b80482 src/api2/admin/datastore/backup.rs: implement upload chunk 2019-05-09 13:06:09 +02:00
66f849d272 rc/api2/admin/datastore/h2upload.rs: implement BackupEnvironment
To pass arbitrary information/state to api methods.
2019-05-08 11:26:54 +02:00
010e7b80a8 src/server/rest.rs: use generics to pass RpcEnvironment 2019-05-08 11:09:01 +02:00
19b33e55af src/server/rest.rs: make handle_(a)sync_api_request public 2019-05-07 11:23:52 +02:00
16b5c3c80b RestEnvironment: derive Clone 2019-05-07 11:09:18 +02:00
e53d4dadaa move normalize_path to tools::normalize_uri_path 2019-05-07 09:44:34 +02:00
83f663b7a3 src/server/state.rs: use new BroadcastData helper 2019-04-30 10:21:48 +02:00
88aaa1841a use double-fork for reload
To ensure the new process' parent is pid 1, so systemd won't
complain about supervising a process it does not own.

Fixes the following log spam on reloads:
Apr 25 10:50:54 deb-dev systemd[1]: proxmox-backup.service: Supervising process 1625 which is not our child. We'll most likely not notice when it exits.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-04-25 11:02:12 +00:00
30150eef0f use service Type=notify
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-04-25 11:01:33 +00:00
efd898a71c tools/daemon: add sd_notify wrapper
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-04-25 11:01:28 +00:00
e6bdfe0674 api_schema: allow generic api handler functions
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-04-16 11:22:23 +02:00
3e4290e956 src/server/command_socket.rs: check control socket permissions 2019-04-11 10:51:59 +02:00
04f7276b1a tools/daemon: dup the TcpListener file descriptor
Now that we let hyper shutdown gracefully we need an owned
version of the listening socket to prevent it from closing
before running the reload preparations.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-04-10 15:20:10 +02:00
dfb73ee286 src/server/worker_task.rs: implement abort_worker (via command_socket) 2019-04-10 12:42:24 +02:00
1662975b70 src/server/command_socket.rs: correctly handle/spawn handle parallel connections 2019-04-10 11:05:00 +02:00
116990f264 src/server/worker_task.rs: use abstract socket 2019-04-10 09:03:17 +02:00
203b64ee92 start hyper server using with_graceful_shutdown()
Without it, hyper keeps some futures running, and the server does not
shut down correctly. 2019-04-10 08:24:32 +02:00
2019-04-10 08:24:32 +02:00
230d6ebc2a src/server/command_socket.rs: code cleanup - fix error message 2019-04-09 12:47:42 +02:00
1432561044 src/server/command_socket.rs: implement auto_remove flag
Remove the socket file on close.
2019-04-09 11:47:23 +02:00
8c4656ea04 src/server/command_socket.rs: simple command socket 2019-04-08 17:59:39 +02:00
b9e9f05aaf src/tools/daemon.rs: use new ServerState handler 2019-04-08 14:00:23 +02:00
9761b81b84 implement server state/signal handling, depend on tokio-signal 2019-04-08 13:59:07 +02:00
71d03f1ef4 src/tools/file_logger.rs: fix test 2019-04-06 11:24:37 +02:00
ff995ce0e1 src/server.rs: improve crate layout 2019-04-06 09:17:25 +02:00
e3e5ef3929 src/tools/file_logger.rs: new - accept AsRef<Path> 2019-04-03 14:13:33 +02:00
1ee4442d87 src/tools/file_logger.rs: change timestamp format to rfc3339 2019-04-03 08:58:43 +02:00
edc588857e add global var to indicate server shutdown requests 2019-04-01 12:05:11 +02:00
c76ceea941 src/server/rest.rs: use formatter to encode errors 2019-04-01 08:04:12 +02:00
24c023fe47 src/server/rest.rs: generate csrf token if we have a valid ticket
This is important if the user reloads the browser page.
2019-04-01 07:52:30 +02:00
022b626bc0 src/server/rest.rs: correctly extract content type 2019-03-19 12:50:15 +01:00
50e95f7c39 daemon: simplify daemon creation
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-03-19 12:12:54 +01:00
3aa2dbc857 tools: daemon: rename some structs
Reloadable resources are now 'Reloadable' instead of
'ReexecContinue'.

The struct handling the reload is a 'Reloader', not a
'ReexecStore'.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-03-19 11:09:46 +01:00
14ed3eb57c tools: implement ReexecContinue for tokio's TcpListener
This is the only thing we currently need to keep alive for
reloads.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-03-18 12:29:18 +01:00
66c138a51a tools: daemon: add a default signalfd helper
Proxy and daemon for now just want to handle reload via
`SIGHUP`, so provide a helper creating the signalfd stream
doing exactly that. This is simply a filtered stream which
passes the remaining signals through, so it can be used just
like the plain signalfd stream to add more signals.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-03-18 12:29:18 +01:00
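The shape of such a helper, shown with today's tokio signal API as a simplified stand-in for the original signalfd stream: react to `SIGHUP` in a loop and leave other signals untouched.

use tokio::signal::unix::{signal, SignalKind};

/// Run `reload` every time the process receives SIGHUP.
async fn handle_reload_signals(mut reload: impl FnMut()) -> std::io::Result<()> {
    let mut hangup = signal(SignalKind::hangup())?;
    while hangup.recv().await.is_some() {
        reload();
    }
    Ok(())
}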
946995d984 tools: add daemon helpers
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-03-18 12:29:18 +01:00
6dd8bfb84b src/tools/ticket.rs: define const TICKET_LIFETIME 2019-03-05 12:56:21 +01:00
b9febc5f1c src/tools/file_logger.rs: class to log into files 2019-03-01 09:34:29 +01:00
9b4e1de1c0 src/server/rest.rs: allow passing parameters as application/json 2019-02-27 12:37:53 +01:00
3dc99a5049 cleanup
Error::from is already a function taking 1 parameter,
there's no need to wrap it with `|e| Error::from(e)`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-02-18 13:21:27 +01:00
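A tiny illustration of the cleanup (example function invented): `Error::from` is already a one-argument function, so it can be handed to `map_err` directly.

use anyhow::Error;

fn parse_port(value: &str) -> Result<u16, Error> {
    // Instead of: value.parse::<u16>().map_err(|e| Error::from(e))
    value.parse::<u16>().map_err(Error::from)
}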
435615a34a src/server/rest.rs: correctly insert NoLogExtension() 2019-02-18 06:54:12 +01:00
bc6fa1684e src/server/rest.rs: get_index() include username and CSRF token
When we have a valid ticket. Also delay get_index() if called with
an invalid ticket.
2019-02-17 19:28:32 +01:00
c4c7466024 src/server/rest.rs: factor out normalize_path() 2019-02-17 17:31:53 +01:00
fce8be6fe1 src/server/rest.rs: improve logs for unauthorized request 2019-02-17 17:18:44 +01:00
b1c1c468ee improve api_schema module structure 2019-02-17 10:16:33 +01:00
304bfa59a8 rename src/api to src/api_schema 2019-02-17 09:59:20 +01:00
124b26b892 cleanup auth code, verify CSRF prevention token 2019-02-16 15:52:55 +01:00
1aff635a23 server/rest.rs: add method to log message 2019-02-15 10:16:12 +01:00
1314000db7 server/rest.rs: log full error messages 2019-02-15 09:55:12 +01:00
8daf9fd839 server/rest.rs: use a protocol extension to avoid double log
Instead of modifying the response header itself.
2019-02-14 16:04:24 +01:00
9bbd574fba avoid double logging of proxied requests 2019-02-14 13:28:41 +01:00
e683d9ccb7 src/server/rest.rs: log failed requests 2019-02-14 13:07:34 +01:00
50ff21da59 src/client/http_client.rs: try to login
use an environment variable to store the passphrase (PBS_PASSWORD)
2019-02-13 14:31:43 +01:00
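Sketch of the idea (function name invented): read the passphrase from the environment so scripts do not have to enter it interactively.

/// Illustration only: prefer a passphrase from the environment, if set.
fn password_from_env() -> Option<String> {
    std::env::var("PBS_PASSWORD").ok().filter(|pw| !pw.is_empty())
}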
fe3b25029b remove some rather inconvenient debug output
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-02-04 15:34:38 +01:00
9707fdadd7 implement reload_timezone flag 2019-02-01 10:04:46 +01:00
5d63509787 delay unauthorized request (rate limit) 2019-01-31 14:34:21 +01:00
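In its simplest form the rate limit is just a delay before answering a failed authentication; the sketch below uses an arbitrary three-second penalty.

use std::time::Duration;

async fn delay_unauthorized_response() {
    // Make a failed authentication pay a small time penalty before the
    // error response goes out; the value here is only an example.
    tokio::time::sleep(Duration::from_secs(3)).await;
}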
8f75d998be move http error class to router.rs 2019-01-31 13:22:30 +01:00
0ef7c190e1 server/rest.rs: verify auth cookie 2019-01-31 12:22:00 +01:00
200b5b87ea Utils.js: fix cookie handling
Use an insecure cookie for testing.
2019-01-31 10:08:08 +01:00
1701fd9bd4 api2/access.rs: add ticket api 2019-01-30 15:16:10 +01:00
c4f2b212c5 server/rest.rs: simplify proxy code
Only pass necessary parameters.
2019-01-28 18:22:16 +01:00
8ec1299ab3 server/rest.rs: implement proxy_sync_api_request 2019-01-28 18:06:42 +01:00
1aa3b197a6 server/rest.rs: add proxy_sync_api_request() dummy 2019-01-28 17:30:39 +01:00
4e5a5728cb server/formatter.rs: fix extjs error format 2019-01-28 13:44:48 +01:00
08e45e3573 src/bin/proxmox-backup-proxy.rs: implement unprivileged server
We want to run the public server as user www-data. Requests needing
root privileges need to be proxied to the proxmox-backup.service, which
now listens on 127.0.0.1:82.
2019-01-28 13:29:58 +01:00
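Roughly what the proxying amounts to, sketched with a plain hyper 0.14 client (error handling and header fixups omitted; the upstream address is the one named above): rewrite the request URI and forward it to the privileged daemon.

use hyper::{Body, Client, Request, Response, Uri};

async fn proxy_to_privileged(mut req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let path_and_query = req
        .uri()
        .path_and_query()
        .map(|pq| pq.as_str())
        .unwrap_or("/");
    let uri: Uri = format!("http://127.0.0.1:82{}", path_and_query)
        .parse()
        .expect("valid upstream uri");
    *req.uri_mut() = uri;
    // Forward the (possibly rewritten) request to the privileged service.
    Client::new().request(req).await
}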
42e06fc5ca RpcEnvironment: implement set_user() and get_user() 2019-01-27 10:52:26 +01:00
23db39488f RpcEnvironment: add environment type enum RpcEnvironmentType 2019-01-27 10:33:42 +01:00
084ccdd590 also pass rpcenv to async handlers 2019-01-27 10:18:52 +01:00
a0a545c720 move rpc environment implementation to separate files 2019-01-26 15:08:02 +01:00
32f3db27bd api: pass RpcEnvironment to api handlers 2019-01-26 14:50:37 +01:00
b1be01218a server/rest.rs: fake login cookie 2019-01-23 12:49:10 +01:00
c643065864 rename api3 back to api2
There is no real need to change the path, so using api2 we can reuse
all helpers (like tools from the proxmox widget toolkit).
2019-01-22 12:10:38 +01:00
e35404deb7 remove crate tokio-codec (seems to be part of tokio now) 2019-01-20 14:28:06 +01:00
85722a8492 api/router.rs: rename ApiUploadMethod to ApiAsyncMethod
We can use this for uploads and downloads ...
2019-01-19 16:42:43 +01:00
6e219aefd3 api3/admin/datastore/upload_catar.rs: verify content type ("application/x-proxmox-backup-catar") 2019-01-17 12:43:29 +01:00
90e1d858e0 api/router.rs: return Result in upload handler 2019-01-17 12:03:38 +01:00
148b327e63 server/rest.rs: correctly pass query/url parameters 2019-01-16 13:58:36 +01:00
c36fa61287 api3/admin/datastore/upload_catar.rs: implement upload future 2019-01-15 11:38:26 +01:00
c1582dcf39 api/router.rs: allow different types of api methods
Added a prototype for file/backup uploads.
2019-01-14 12:26:04 +01:00
ac1397dedb rest: rename utf-8-checked 'bytes' to 'utf8'
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-01-08 14:22:43 +01:00
3cd4bb8a63 rest: don't copy the body
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2019-01-08 14:21:54 +01:00
2ad4db5d13 simplify formatter code 2018-12-05 18:22:56 +01:00
53acf7490b add output formatter 2018-12-05 12:43:22 +01:00
0f30b2b4c4 move src/api/server.rs -> src/server/rest.rs 2018-12-05 10:16:23 +01:00
fc45b741cb start the GUI 2018-12-04 17:53:10 +01:00
185f4301dc set content type for static file download 2018-12-02 11:00:52 +01:00
4892b32829 fix file download, listen to 0.0.0.0 2018-12-01 15:21:25 +01:00
b53007523d remove www/pbs-index.html.tt, hardcode into rust for now 2018-12-01 13:37:49 +01:00
dd4b1a797b router: no need to use Fn (fn also works for static closures) 2018-11-16 11:12:00 +01:00
1716112285 handle uri parameters correctly 2018-11-16 09:15:33 +01:00
5106bbc70e allow closure handlers 2018-11-15 17:47:59 +01:00
1ac1f7fd24 cleanup module names 2018-11-15 17:07:10 +01:00
406 changed files with 1180500 additions and 6312 deletions

.gitignore

@@ -2,3 +2,7 @@
 /*/target
 Cargo.lock
 **/*.rs.bk
+/*.buildinfo
+/*.changes
+/build
+/*-deb


@@ -1,10 +1,100 @@
 [workspace]
 members = [
-    "proxmox",
     "proxmox-api-macro",
+    "proxmox-apt",
+    "proxmox-async",
+    "proxmox-auth-api",
+    "proxmox-borrow",
+    "proxmox-compression",
     "proxmox-http",
+    "proxmox-io",
+    "proxmox-lang",
+    "proxmox-ldap",
+    "proxmox-login",
+    "proxmox-metrics",
+    "proxmox-openid",
+    "proxmox-rest-server",
+    "proxmox-router",
+    "proxmox-schema",
+    "proxmox-section-config",
+    "proxmox-serde",
+    "proxmox-shared-memory",
     "proxmox-sortable-macro",
+    "proxmox-subscription",
+    "proxmox-sys",
+    "proxmox-tfa",
+    "proxmox-time",
+    "proxmox-uuid",
 ]
 exclude = [
     "build",
 ]
[workspace.package]
authors = ["Proxmox Support Team <support@proxmox.com>"]
edition = "2021"
license = "AGPL-3"
repository = "https://git.proxmox.com/?p=proxmox.git"
exclude = [ "debian" ]
homepage = "https://www.proxmox.com"
[workspace.dependencies]
# any features enabled here are enabled on all members using 'workspace = true'!
# external dependencies
anyhow = "1.0"
base32 = "0.4"
base64 = "0.13"
bytes = "1.0"
crc32fast = "1"
endian_trait = "0.6"
flate2 = "1.0"
futures = "0.3"
handlebars = "3.0"
hex = "0.4"
http = "0.2"
hyper = "0.14.5"
lazy_static = "1.4"
ldap3 = { version = "0.11", default-features = false }
libc = "0.2.107"
log = "0.4.17"
native-tls = "0.2"
nix = "0.26.1"
once_cell = "1.3.1"
openssl = "0.10"
pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
proc-macro2 = "1.0"
quote = "1.0"
regex = "1.5"
serde = "1.0"
serde_json = "1.0"
serde_plain = "1.0"
syn = { version = "1.0", features = [ "full", "visit-mut" ] }
tar = "0.4"
tokio = "1.6"
tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tower-service = "0.3.0"
url = "2.2"
walkdir = "2"
webauthn-rs = "0.3"
zstd = { version = "0.6", features = [ "bindgen" ] }
# workspace dependencies
proxmox-api-macro = { version = "1.0.4", path = "proxmox-api-macro" }
proxmox-async = { version = "0.4.1", path = "proxmox-async" }
proxmox-compression = { version = "0.1.1", path = "proxmox-compression" }
proxmox-http = { version = "0.8.0", path = "proxmox-http" }
proxmox-io = { version = "1.0.0", path = "proxmox-io" }
proxmox-lang = { version = "1.1", path = "proxmox-lang" }
proxmox-rest-server = { version = "0.3.0", path = "proxmox-rest-server" }
proxmox-router = { version = "1.3.1", path = "proxmox-router" }
proxmox-schema = { version = "1.3.7", path = "proxmox-schema" }
proxmox-serde = { version = "0.1.1", path = "proxmox-serde", features = [ "serde_json" ] }
proxmox-sortable-macro = { version = "0.1.2", path = "proxmox-sortable-macro" }
proxmox-sys = { version = "0.4.2", path = "proxmox-sys" }
proxmox-tfa = { version = "4.0.0", path = "proxmox-tfa" }
proxmox-time = { version = "1.1.4", path = "proxmox-time" }
proxmox-uuid = { version = "1.0.1", path = "proxmox-uuid" }


@@ -1,6 +1,6 @@
 # Shortcut for common operations:
 
-CRATES=proxmox proxmox-api-macro proxmox-http proxmox-sortable-macro
+CRATES != cargo metadata --format-version=1 | jq -r .workspace_members'[]' | awk '{ print $$1 }'
 
 # By default we just run checks:
 .PHONY: all
@@ -11,6 +11,11 @@ deb: $(foreach c,$(CRATES), $c-deb)
 	echo $(foreach c,$(CRATES), $c-deb)
 	lintian build/*.deb
 
+.PHONY: dsc
+dsc: $(foreach c,$(CRATES), $c-dsc)
+	echo $(foreach c,$(CRATES), $c-dsc)
+	lintian build/*.dsc
+
 .PHONY: autopkgtest
 autopkgtest: $(foreach c,$(CRATES), $c-autopkgtest)
 
@@ -24,6 +29,10 @@ dinstall:
 	./build.sh $*
 	touch $@
 
+%-dsc:
+	BUILDCMD='dpkg-buildpackage -S -us -uc -d' ./build.sh $*
+	touch $@
+
 %-autopkgtest:
 	autopkgtest build/$* build/*.deb -- null
 	touch $@
 
@@ -51,7 +60,8 @@ doc:
 .PHONY: clean
 clean:
 	cargo clean
-	rm -rf build *-deb *-autopkgtest
+	rm -rf build/
+	rm -f -- *-deb *-dsc *-autopkgtest *.buildinfo *.changes
 
 .PHONY: update
 update:
@@ -62,5 +72,4 @@ update:
 	dcmd --deb rust-$*_*.changes \
 	    | grep -v '.changes$$' \
 	    | tar -cf "$@.tar" -T-; \
-	cat "$@.tar" | ssh -X repoman@repo.proxmox.com upload --product devel --dist buster; \
 	cat "$@.tar" | ssh -X repoman@repo.proxmox.com upload --product devel --dist bullseye


@@ -14,9 +14,66 @@ the dependency needs to point directly to a path or git source.
 Steps for Releases
 ==================
 
-- Cargo.toml updates:
-  - Bump all modified crate versions.
-  - Update all the other crates' Cargo.toml to depend on the new versions if
-    required, then bump their version as well if not already done.
-- Update debian/changelog files in all the crates updated above.
+- Run ./bump.sh <CRATE> [patch|minor|major|<VERSION>]
+-- Fill out changelog
+-- Confirm bump commit
 - Build packages with `make deb`.
+-- Don't forget to commit updated d/control!
Adding Crates
=============
1) At the top level:
- Generate the crate: ``cargo new --lib the-name``
- Sort the crate into ``Cargo.toml``'s ``workspace.members``
2) In the new crate's ``Cargo.toml``:
- In ``[package]`` set:
authors.workspace = true
license.workspace = true
edition.workspace = true
exclude.workspace = true
- Add a meaningful ``description``
- Copy ``debian/copyright`` and ``debian/debcargo.toml`` from another subcrate.
Adding a new Dependency
=======================
1) At the top level:
- Add it to ``[workspace.dependencies]`` specifying the version and any
features that should be enabled throughout the workspace
2) In each member's ``Cargo.toml``:
- Add it to the desired dependencies section with ``workspace = true`` and no
version specified.
- If this member requires additional features, add only the extra features to
the member dependency.
Updating a Dependency's Version
===============================
1) At the top level:
- Bump the version in ``[workspace.dependencies]`` as desired.
- Check for deprecations or breakage throughout the workspace.
Notes on Workspace Inheritance
==============================
Common metadata (like authors, license, ..) are inherited throughout the
workspace. If new fields are added that are identical for all crates, they
should be defined in the top-level ``Cargo.toml`` file's
``[workspace.package]`` section, and inherited in all members explicitly by
setting ``FIELD.workspace = true`` in the member's ``[package]`` section.
Dependency information is also inherited throughout the workspace, allowing a
single dependency specification in the top-level Cargo.toml file to be used by
all members.
Some restrictions apply:
- features can only be added in members, never removed (this includes
``default_features = false``!)
- the base feature set at the workspace level should be the minimum (possibly
empty!) set required by all members
- workspace dependency specifications cannot include ``optional``
- if needed, the ``optional`` flag needs to be set at the member level when
using a workspace dependency

bump.sh (new executable file)

@ -0,0 +1,44 @@
#!/bin/bash
package=$1
if [[ -z "$package" ]]; then
echo "USAGE:"
echo -e "\t bump.sh <crate> [patch|minor|major|<version>]"
echo ""
echo "Defaults to bumping patch version by 1"
exit 0
fi
cargo_set_version="$(command -v cargo-set-version)"
if [[ -z "$cargo_set_version" || ! -x "$cargo_set_version" ]]; then
echo 'bump.sh requires "cargo set-version", provided by "cargo-edit".'
exit 1
fi
if [[ ! -e "$package/Cargo.toml" ]]; then
echo "Invalid crate '$package'"
exit 1
fi
version=$2
if [[ -z "$version" ]]; then
version="patch"
fi
case "$version" in
patch|minor|major)
bump="--bump"
;;
*)
bump=
;;
esac
cargo_toml="$package/Cargo.toml"
changelog="$package/debian/changelog"
cargo set-version -p "$package" $bump "$version"
version="$(cargo metadata --format-version=1 | jq ".packages[] | select(.name == \"$package\").version" | sed -e 's/\"//g')"
DEBFULLNAME="Proxmox Support Team" DEBEMAIL="support@proxmox.com" dch --no-conf --changelog "$changelog" --newversion "$version-1" --distribution stable
git commit --edit -sm "bump $package to $version-1" Cargo.toml "$cargo_toml" "$changelog"


@@ -1,28 +1,35 @@
 [package]
 name = "proxmox-api-macro"
-edition = "2018"
-version = "0.3.4"
-authors = [ "Wolfgang Bumiller <w.bumiller@proxmox.com>" ]
-license = "AGPL-3"
+edition.workspace = true
+version = "1.0.4"
+authors.workspace = true
+license.workspace = true
+repository.workspace = true
 description = "Proxmox API macro"
-exclude = [ "debian" ]
+exclude.workspace = true
 
 [lib]
 proc-macro = true
 
 [dependencies]
-anyhow = "1.0"
-proc-macro2 = "1.0"
-quote = "1.0"
-syn = { version = "1.0", features = [ "extra-traits", "full", "visit-mut" ] }
+anyhow.workspace = true
+proc-macro2.workspace = true
+quote.workspace = true
+syn = { workspace = true , features = [ "extra-traits" ] }
 
 [dev-dependencies]
-futures = "0.3"
-proxmox = { version = "0.11.0", path = "../proxmox", features = [ "test-harness", "api-macro" ] }
-serde = "1.0"
-serde_derive = "1.0"
-serde_json = "1.0"
+futures.workspace = true
+serde = { workspace = true, features = [ "derive" ] }
+serde_json.workspace = true
+
+[dev-dependencies.proxmox-schema]
+workspace = true
+features = [ "test-harness", "api-macro" ]
+
+[dev-dependencies.proxmox-router]
+workspace = true
+features = [ "test-harness" ]
 
 # [features]
 # # Used to quickly filter out the serde derive noise when using `cargo expand` for debugging!


@ -1,3 +1,61 @@
rust-proxmox-api-macro (1.0.4-1) stable; urgency=medium
* support #[default] attribute for types which derive Default
* documentation updates
-- Proxmox Support Team <support@proxmox.com> Mon, 12 Dec 2022 11:31:34 +0100
rust-proxmox-api-macro (1.0.3-1) stable; urgency=medium
* allow overriding field attributes when deriving an updater
-- Proxmox Support Team <support@proxmox.com> Thu, 19 May 2022 12:03:36 +0200
rust-proxmox-api-macro (1.0.2-1) stable; urgency=medium
* support streaming api handlers
-- Proxmox Support Team <support@proxmox.com> Tue, 12 Apr 2022 14:26:46 +0200
rust-proxmox-api-macro (1.0.1-1) stable; urgency=medium
* stop adding automatically_derived to derived output to please new rustc
-- Proxmox Support Team <support@proxmox.com> Tue, 12 Oct 2021 14:49:35 +0200
rust-proxmox-api-macro (1.0.0-1) stable; urgency=medium
* schema was split out of proxmox into a new proxmox-schema crate
-- Proxmox Support Team <support@proxmox.com> Thu, 07 Oct 2021 14:28:14 +0200
rust-proxmox-api-macro (0.5.1-1) stable; urgency=medium
* allow external `returns` specification on methods, referencing a
`ReturnType`.
-- Proxmox Support Team <support@proxmox.com> Mon, 30 Aug 2021 10:44:21 +0200
rust-proxmox-api-macro (0.5.0-1) stable; urgency=medium
* for non structs without Updater types and methods, `type: Foo` can now be
omitted for api types
* Adapt to the changes to Updatable in the proxmox crate
* Updaters have no try_build_from or update_from method anymore for now
* #[api] types automatically implement the new ApiType trait
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Aug 2021 15:22:05 +0200
rust-proxmox-api-macro (0.4.0-1) stable; urgency=medium
* update proxmox to 0.12.0
-- Proxmox Support Team <support@proxmox.com> Tue, 20 Jul 2021 17:09:40 +0200
rust-proxmox-api-macro (0.3.4-1) unstable; urgency=medium
* fix path in generated Updatable derive entry to not require explicit


@@ -1,8 +1,8 @@
 Source: rust-proxmox-api-macro
 Section: rust
 Priority: optional
-Build-Depends: debhelper (>= 11),
- dh-cargo (>= 18),
+Build-Depends: debhelper (>= 12),
+ dh-cargo (>= 25),
  cargo:native <!nocheck>,
  rustc:native <!nocheck>,
  libstd-rust-dev <!nocheck>,
@@ -14,9 +14,11 @@ Build-Depends: debhelper (>= 11),
  librust-syn-1+full-dev <!nocheck>,
  librust-syn-1+visit-mut-dev <!nocheck>
 Maintainer: Proxmox Support Team <support@proxmox.com>
-Standards-Version: 4.4.1
+Standards-Version: 4.6.1
 Vcs-Git: git://git.proxmox.com/git/proxmox.git
 Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+X-Cargo-Crate: proxmox-api-macro
+Rules-Requires-Root: no
 
 Package: librust-proxmox-api-macro-dev
 Architecture: any
@@ -32,12 +34,12 @@ Depends:
  librust-syn-1+visit-mut-dev
 Provides:
  librust-proxmox-api-macro+default-dev (= ${binary:Version}),
- librust-proxmox-api-macro-0-dev (= ${binary:Version}),
- librust-proxmox-api-macro-0+default-dev (= ${binary:Version}),
- librust-proxmox-api-macro-0.3-dev (= ${binary:Version}),
- librust-proxmox-api-macro-0.3+default-dev (= ${binary:Version}),
- librust-proxmox-api-macro-0.3.4-dev (= ${binary:Version}),
- librust-proxmox-api-macro-0.3.4+default-dev (= ${binary:Version})
+ librust-proxmox-api-macro-1-dev (= ${binary:Version}),
+ librust-proxmox-api-macro-1+default-dev (= ${binary:Version}),
+ librust-proxmox-api-macro-1.0-dev (= ${binary:Version}),
+ librust-proxmox-api-macro-1.0+default-dev (= ${binary:Version}),
+ librust-proxmox-api-macro-1.0.4-dev (= ${binary:Version}),
+ librust-proxmox-api-macro-1.0.4+default-dev (= ${binary:Version})
 Description: Proxmox API macro - Rust source code
  This package contains the source for the Rust proxmox-api-macro crate, packaged
  by debcargo for use with cargo and dh-cargo.


@@ -1,16 +1,18 @@
-Copyright (C) 2019 Proxmox Server Solutions GmbH
-
-This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>
-
-This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU Affero General Public License as published by
-the Free Software Foundation, either version 3 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU Affero General Public License for more details.
-
-You should have received a copy of the GNU Affero General Public License
-along with this program.  If not, see <http://www.gnu.org/licenses/>.
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2019 - 2023 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program.  If not, see <https://www.gnu.org/licenses/>.


@ -0,0 +1,75 @@
use proc_macro2::TokenStream;
use quote::ToTokens;
use syn::{Meta, NestedMeta};
use crate::util::{self, default_false, parse_str_value_to_option, set_bool};
#[derive(Default)]
pub struct UpdaterFieldAttributes {
/// Skip this field in the updater.
skip: Option<syn::LitBool>,
/// Change the type for the updater.
ty: Option<syn::TypePath>,
/// Replace any `#[serde]` attributes on the field with these (accumulates).
serde: Vec<syn::Attribute>,
}
impl UpdaterFieldAttributes {
pub fn from_attributes(input: &mut Vec<syn::Attribute>) -> Self {
let mut this = Self::default();
util::extract_attributes(input, "updater", |attr, meta| this.parse(attr, meta));
this
}
fn parse(&mut self, attr: &syn::Attribute, input: NestedMeta) -> Result<(), syn::Error> {
match input {
NestedMeta::Lit(lit) => bail!(lit => "unexpected literal"),
NestedMeta::Meta(meta) => self.parse_meta(attr, meta),
}
}
fn parse_meta(&mut self, attr: &syn::Attribute, meta: Meta) -> Result<(), syn::Error> {
match meta {
Meta::Path(ref path) if path.is_ident("skip") => {
set_bool(&mut self.skip, path, true);
}
Meta::NameValue(ref nv) if nv.path.is_ident("type") => {
parse_str_value_to_option(&mut self.ty, nv)
}
Meta::NameValue(m) => bail!(&m => "invalid updater attribute: {:?}", m.path),
Meta::List(m) if m.path.is_ident("serde") => {
let mut tokens = TokenStream::new();
m.paren_token
.surround(&mut tokens, |tokens| m.nested.to_tokens(tokens));
self.serde.push(syn::Attribute {
path: m.path,
tokens,
..attr.clone()
});
}
Meta::List(m) => bail!(&m => "invalid updater attribute: {:?}", m.path),
Meta::Path(m) => bail!(&m => "invalid updater attribute: {:?}", m),
}
Ok(())
}
pub fn skip(&self) -> bool {
default_false(self.skip.as_ref())
}
pub fn ty(&self) -> Option<&syn::TypePath> {
self.ty.as_ref()
}
pub fn replace_serde_attributes(&self, attrs: &mut Vec<syn::Attribute>) {
if !self.serde.is_empty() {
attrs.retain(|attr| !attr.path.is_ident("serde"));
attrs.extend(self.serde.iter().cloned())
}
}
}


@@ -25,6 +25,8 @@ pub fn handle_enum(
         error!(fmt.span(), "illegal key 'format', will be autogenerated");
     }
 
+    let has_default_attrib = attribs.get("default").map(|def| def.span());
+
     let schema = {
         let mut schema: Schema = attribs.try_into()?;
 
@@ -39,6 +41,8 @@ pub fn handle_enum(
     };
 
     let container_attrs = serde::ContainerAttrib::try_from(&enum_ty.attrs[..])?;
+    let derives_default = util::derives_trait(&enum_ty.attrs, "Default");
+    let mut default_value = None;
 
     let mut variants = TokenStream::new();
     for variant in &mut enum_ty.variants {
@@ -55,7 +59,7 @@ pub fn handle_enum(
         let attrs = serde::SerdeAttrib::try_from(&variant.attrs[..])?;
         let variant_string = if let Some(renamed) = attrs.rename {
-            renamed.into_lit_str()
+            renamed
         } else if let Some(rename_all) = container_attrs.rename_all {
             let name = rename_all.apply_to_variant(&variant.ident.to_string());
             syn::LitStr::new(&name, variant.ident.span())
@@ -64,8 +68,23 @@ pub fn handle_enum(
             syn::LitStr::new(&name.to_string(), name.span())
         };
 
+        if derives_default {
+            if let Some(attr) = variant.attrs.iter().find(|a| a.path.is_ident("default")) {
+                if let Some(default_value) = &default_value {
+                    error!(attr => "multiple default values defined");
+                    error!(default_value => "default previously defined here");
+                } else {
+                    default_value = Some(variant_string.clone());
+                    if let Some(span) = has_default_attrib {
+                        error!(attr => "#[default] attribute in use with 'default' #[api] key");
+                        error!(span, "'default' also defined here");
+                    }
+                }
+            }
+        }
+
         variants.extend(quote_spanned! { variant.ident.span() =>
-            ::proxmox::api::schema::EnumEntry {
+            ::proxmox_schema::EnumEntry {
                 value: #variant_string,
                 description: #comment,
             },
@@ -74,13 +93,24 @@ pub fn handle_enum(
 
     let name = &enum_ty.ident;
 
+    let default_value = match default_value {
+        Some(value) => quote_spanned!(value.span() => .default(#value)),
+        None => TokenStream::new(),
+    };
+
     Ok(quote_spanned! { name.span() =>
         #enum_ty
-        impl #name {
-            pub const API_SCHEMA: ::proxmox::api::schema::Schema =
+        impl ::proxmox_schema::ApiType for #name {
+            const API_SCHEMA: ::proxmox_schema::Schema =
                 #schema
-                .format(&::proxmox::api::schema::ApiStringFormat::Enum(&[#variants]))
+                .format(&::proxmox_schema::ApiStringFormat::Enum(&[#variants]))
+                #default_value
                 .schema();
         }
+
+        impl ::proxmox_schema::UpdaterType for #name {
+            type Updater = Option<Self>;
+        }
     })
 }
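Illustration of what the new `#[default]` handling enables for API enums (the exact macro invocation is assumed here, not taken from the crate's tests): when the enum also derives `Default`, the marked variant becomes the schema's default value.

use proxmox_api_macro::api;
use serde::{Deserialize, Serialize};

#[api]
/// The verbosity of a hypothetical task log.
#[derive(Clone, Copy, Debug, Default, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
pub enum Verbosity {
    /// Only warnings and errors.
    Quiet,
    /// The usual amount of output.
    #[default]
    Normal,
    /// Everything, including debug messages.
    Verbose,
}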


@ -11,7 +11,7 @@ use std::mem;
use anyhow::Error; use anyhow::Error;
use proc_macro2::{Span, TokenStream}; use proc_macro2::{Span, TokenStream};
use quote::{quote, quote_spanned}; use quote::{quote, quote_spanned, ToTokens};
use syn::ext::IdentExt; use syn::ext::IdentExt;
use syn::spanned::Spanned; use syn::spanned::Spanned;
use syn::visit_mut::{self, VisitMut}; use syn::visit_mut::{self, VisitMut};
@ -20,16 +20,53 @@ use syn::Ident;
use super::{ObjectEntry, Schema, SchemaItem, SchemaObject}; use super::{ObjectEntry, Schema, SchemaItem, SchemaObject};
use crate::util::{self, FieldName, JSONObject, JSONValue, Maybe}; use crate::util::{self, FieldName, JSONObject, JSONValue, Maybe};
/// A return type in a schema can have an `optional` flag. Other than that it is just a regular
/// schema, but we also want to be able to reference external `ReturnType` values for this.
pub enum ReturnType {
Explicit(ReturnSchema),
Extern(syn::Expr),
}
impl ReturnType {
fn as_mut_schema(&mut self) -> Option<&mut Schema> {
match self {
ReturnType::Explicit(ReturnSchema { ref mut schema, .. }) => Some(schema),
_ => None,
}
}
}
impl TryFrom<JSONValue> for ReturnType {
type Error = syn::Error;
fn try_from(value: JSONValue) -> Result<Self, syn::Error> {
Ok(match value {
JSONValue::Object(obj) => ReturnType::Explicit(obj.try_into()?),
JSONValue::Expr(ext) => ReturnType::Extern(ext),
})
}
}
impl ReturnType {
fn to_schema(&self, ts: &mut TokenStream) -> Result<(), Error> {
match self {
ReturnType::Explicit(exp) => exp.to_schema(ts)?,
ReturnType::Extern(exp) => exp.to_tokens(ts),
}
Ok(())
}
}
/// A return type in a schema can have an `optional` flag. Other than that it is just a regular /// A return type in a schema can have an `optional` flag. Other than that it is just a regular
/// schema. /// schema.
pub struct ReturnType { pub struct ReturnSchema {
/// If optional, we store `Some(span)`, otherwise `None`. /// If optional, we store `Some(span)`, otherwise `None`.
optional: Option<Span>, optional: Option<Span>,
schema: Schema, schema: Schema,
} }
impl ReturnType { impl ReturnSchema {
fn to_schema(&self, ts: &mut TokenStream) -> Result<(), Error> { fn to_schema(&self, ts: &mut TokenStream) -> Result<(), Error> {
let optional = match self.optional { let optional = match self.optional {
Some(span) => quote_spanned! { span => true }, Some(span) => quote_spanned! { span => true },
@ -40,23 +77,15 @@ impl ReturnType {
self.schema.to_schema(&mut out)?; self.schema.to_schema(&mut out)?;
ts.extend(quote! { ts.extend(quote! {
::proxmox::api::router::ReturnType::new( #optional , &#out ) ::proxmox_schema::ReturnType::new( #optional , &#out )
}); });
Ok(()) Ok(())
} }
} }
impl TryFrom<JSONValue> for ReturnType { /// To go from a `JSONObject` to a `ReturnSchema` we first extract the `optional` flag, then forward
type Error = syn::Error;
fn try_from(value: JSONValue) -> Result<Self, syn::Error> {
Self::try_from(value.into_object("a return type definition")?)
}
}
/// To go from a `JSONObject` to a `ReturnType` we first extract the `optional` flag, then forward
/// to the `Schema` parser. /// to the `Schema` parser.
impl TryFrom<JSONObject> for ReturnType { impl TryFrom<JSONObject> for ReturnSchema {
type Error = syn::Error; type Error = syn::Error;
fn try_from(mut obj: JSONObject) -> Result<Self, syn::Error> { fn try_from(mut obj: JSONObject) -> Result<Self, syn::Error> {
@ -110,7 +139,7 @@ pub fn handle_method(mut attribs: JSONObject, mut func: syn::ItemFn) -> Result<T
let mut return_type: Option<ReturnType> = attribs let mut return_type: Option<ReturnType> = attribs
.remove("returns") .remove("returns")
.map(|ret| ret.into_object("return schema definition")?.try_into()) .map(|ret| ret.try_into())
.transpose()?; .transpose()?;
let access_setter = match attribs.remove("access") { let access_setter = match attribs.remove("access") {
@ -140,6 +169,12 @@ pub fn handle_method(mut attribs: JSONObject, mut func: syn::ItemFn) -> Result<T
.transpose()? .transpose()?
.unwrap_or(false); .unwrap_or(false);
let streaming: bool = attribs
.remove("streaming")
.map(TryFrom::try_from)
.transpose()?
.unwrap_or(false);
if !attribs.is_empty() { if !attribs.is_empty() {
error!( error!(
attribs.span(), attribs.span(),
@ -151,7 +186,7 @@ pub fn handle_method(mut attribs: JSONObject, mut func: syn::ItemFn) -> Result<T
let (doc_comment, doc_span) = util::get_doc_comments(&func.attrs)?; let (doc_comment, doc_span) = util::get_doc_comments(&func.attrs)?;
util::derive_descriptions( util::derive_descriptions(
&mut input_schema, &mut input_schema,
return_type.as_mut().map(|rs| &mut rs.schema), return_type.as_mut().and_then(ReturnType::as_mut_schema),
&doc_comment, &doc_comment,
doc_span, doc_span,
)?; )?;
@ -166,6 +201,7 @@ pub fn handle_method(mut attribs: JSONObject, mut func: syn::ItemFn) -> Result<T
&mut func, &mut func,
&mut wrapper_ts, &mut wrapper_ts,
&mut default_consts, &mut default_consts,
streaming,
)?; )?;
// input schema is done, let's give the method body a chance to extract default parameters: // input schema is done, let's give the method body a chance to extract default parameters:
@ -188,17 +224,18 @@ pub fn handle_method(mut attribs: JSONObject, mut func: syn::ItemFn) -> Result<T
returns_schema_setter = quote! { .returns(#inner) }; returns_schema_setter = quote! { .returns(#inner) };
} }
let api_handler = if is_async { let api_handler = match (streaming, is_async) {
quote! { ::proxmox::api::ApiHandler::Async(&#api_func_name) } (true, true) => quote! { ::proxmox_router::ApiHandler::StreamingAsync(&#api_func_name) },
} else { (true, false) => quote! { ::proxmox_router::ApiHandler::StreamingSync(&#api_func_name) },
quote! { ::proxmox::api::ApiHandler::Sync(&#api_func_name) } (false, true) => quote! { ::proxmox_router::ApiHandler::Async(&#api_func_name) },
(false, false) => quote! { ::proxmox_router::ApiHandler::Sync(&#api_func_name) },
}; };
Ok(quote_spanned! { func.sig.span() => Ok(quote_spanned! { func.sig.span() =>
#input_schema_code #input_schema_code
#vis const #api_method_name: ::proxmox::api::ApiMethod = #vis const #api_method_name: ::proxmox_router::ApiMethod =
::proxmox::api::ApiMethod::new_full( ::proxmox_router::ApiMethod::new_full(
&#api_handler, &#api_handler,
#input_schema_parameter, #input_schema_parameter,
) )
@ -246,10 +283,11 @@ fn check_input_type(input: &syn::FnArg) -> Result<(&syn::PatType, &syn::PatIdent
fn handle_function_signature( fn handle_function_signature(
input_schema: &mut Schema, input_schema: &mut Schema,
return_type: &mut Option<ReturnType>, _return_type: &mut Option<ReturnType>,
func: &mut syn::ItemFn, func: &mut syn::ItemFn,
wrapper_ts: &mut TokenStream, wrapper_ts: &mut TokenStream,
default_consts: &mut TokenStream, default_consts: &mut TokenStream,
streaming: bool,
) -> Result<Ident, Error> { ) -> Result<Ident, Error> {
let sig = &func.sig; let sig = &func.sig;
let is_async = sig.asyncness.is_some(); let is_async = sig.asyncness.is_some();
@ -272,7 +310,7 @@ fn handle_function_signature(
// For any named type which exists on the function signature... // For any named type which exists on the function signature...
if let Some(entry) = input_schema.find_obj_property_by_ident_mut(&pat.ident.to_string()) { if let Some(entry) = input_schema.find_obj_property_by_ident_mut(&pat.ident.to_string()) {
// try to infer the type in the schema if it is not specified explicitly: // try to infer the type in the schema if it is not specified explicitly:
let is_option = util::infer_type(&mut entry.schema, &*pat_type.ty)?; let is_option = util::infer_type(&mut entry.schema, &pat_type.ty)?;
let has_default = entry.schema.find_schema_property("default").is_some(); let has_default = entry.schema.find_schema_property("default").is_some();
if !is_option && entry.optional.expect_bool() && !has_default { if !is_option && entry.optional.expect_bool() && !has_default {
error!(pat_type => "optional types need a default or be an Option<T>"); error!(pat_type => "optional types need a default or be an Option<T>");
@ -327,7 +365,7 @@ fn handle_function_signature(
// Found an explicit parameter: extract it: // Found an explicit parameter: extract it:
ParameterType::Normal(NormalParameter { ParameterType::Normal(NormalParameter {
ty: &pat_type.ty, ty: &pat_type.ty,
entry: &entry, entry,
}) })
} else if is_api_method_type(&pat_type.ty) { } else if is_api_method_type(&pat_type.ty) {
if api_method_param.is_some() { if api_method_param.is_some() {
@ -378,13 +416,14 @@ fn handle_function_signature(
*/ */
create_wrapper_function( create_wrapper_function(
input_schema, //input_schema,
return_type, //return_type,
param_list, param_list,
func, func,
wrapper_ts, wrapper_ts,
default_consts, default_consts,
is_async, is_async,
streaming,
) )
} }
@ -435,13 +474,14 @@ fn is_value_type(ty: &syn::Type) -> bool {
} }
fn create_wrapper_function( fn create_wrapper_function(
_input_schema: &Schema, //_input_schema: &Schema,
_returns_schema: &Option<ReturnType>, //_returns_schema: &Option<ReturnType>,
param_list: Vec<(FieldName, ParameterType)>, param_list: Vec<(FieldName, ParameterType)>,
func: &syn::ItemFn, func: &syn::ItemFn,
wrapper_ts: &mut TokenStream, wrapper_ts: &mut TokenStream,
default_consts: &mut TokenStream, default_consts: &mut TokenStream,
is_async: bool, is_async: bool,
streaming: bool,
) -> Result<Ident, Error> { ) -> Result<Ident, Error> {
let api_func_name = Ident::new( let api_func_name = Ident::new(
&format!("api_function_{}", &func.sig.ident), &format!("api_function_{}", &func.sig.ident),
@ -483,45 +523,83 @@ fn create_wrapper_function(
_ => Some(quote!(?)), _ => Some(quote!(?)),
}; };
let body = quote! { let body = if streaming {
if let ::serde_json::Value::Object(ref mut input_map) = &mut input_params { quote! {
#body if let ::serde_json::Value::Object(ref mut input_map) = &mut input_params {
Ok(::serde_json::to_value(#func_name(#args) #await_keyword #question_mark)?) #body
} else { let res = #func_name(#args) #await_keyword #question_mark;
::anyhow::bail!("api function wrapper called with a non-object json value"); let res: ::std::boxed::Box<dyn ::proxmox_router::SerializableReturn + Send> = ::std::boxed::Box::new(res);
Ok(res)
} else {
::anyhow::bail!("api function wrapper called with a non-object json value");
}
}
} else {
quote! {
if let ::serde_json::Value::Object(ref mut input_map) = &mut input_params {
#body
Ok(::serde_json::to_value(#func_name(#args) #await_keyword #question_mark)?)
} else {
::anyhow::bail!("api function wrapper called with a non-object json value");
}
} }
}; };
if is_async { match (streaming, is_async) {
wrapper_ts.extend(quote! { (true, true) => {
fn #api_func_name<'a>( wrapper_ts.extend(quote! {
mut input_params: ::serde_json::Value, fn #api_func_name<'a>(
api_method_param: &'static ::proxmox::api::ApiMethod, mut input_params: ::serde_json::Value,
rpc_env_param: &'a mut dyn ::proxmox::api::RpcEnvironment, api_method_param: &'static ::proxmox_router::ApiMethod,
) -> ::proxmox::api::ApiFuture<'a> { rpc_env_param: &'a mut dyn ::proxmox_router::RpcEnvironment,
//async fn func<'a>( ) -> ::proxmox_router::StreamingApiFuture<'a> {
// mut input_params: ::serde_json::Value, ::std::boxed::Box::pin(async move { #body })
// api_method_param: &'static ::proxmox::api::ApiMethod, }
// rpc_env_param: &'a mut dyn ::proxmox::api::RpcEnvironment, });
//) -> ::std::result::Result<::serde_json::Value, ::anyhow::Error> { }
// #body (true, false) => {
//} wrapper_ts.extend(quote! {
//::std::boxed::Box::pin(async move { fn #api_func_name(
// func(input_params, api_method_param, rpc_env_param).await mut input_params: ::serde_json::Value,
//}) api_method_param: &::proxmox_router::ApiMethod,
::std::boxed::Box::pin(async move { #body }) rpc_env_param: &mut dyn ::proxmox_router::RpcEnvironment,
} ) -> ::std::result::Result<::std::boxed::Box<dyn ::proxmox_router::SerializableReturn + Send>, ::anyhow::Error> {
}); #body
} else { }
wrapper_ts.extend(quote! { });
fn #api_func_name( }
mut input_params: ::serde_json::Value, (false, true) => {
api_method_param: &::proxmox::api::ApiMethod, wrapper_ts.extend(quote! {
rpc_env_param: &mut dyn ::proxmox::api::RpcEnvironment, fn #api_func_name<'a>(
) -> ::std::result::Result<::serde_json::Value, ::anyhow::Error> { mut input_params: ::serde_json::Value,
#body api_method_param: &'static ::proxmox_router::ApiMethod,
} rpc_env_param: &'a mut dyn ::proxmox_router::RpcEnvironment,
}); ) -> ::proxmox_router::ApiFuture<'a> {
//async fn func<'a>(
// mut input_params: ::serde_json::Value,
// api_method_param: &'static ::proxmox_router::ApiMethod,
// rpc_env_param: &'a mut dyn ::proxmox_router::RpcEnvironment,
//) -> ::std::result::Result<::serde_json::Value, ::anyhow::Error> {
// #body
//}
//::std::boxed::Box::pin(async move {
// func(input_params, api_method_param, rpc_env_param).await
//})
::std::boxed::Box::pin(async move { #body })
}
});
}
(false, false) => {
wrapper_ts.extend(quote! {
fn #api_func_name(
mut input_params: ::serde_json::Value,
api_method_param: &::proxmox_router::ApiMethod,
rpc_env_param: &mut dyn ::proxmox_router::RpcEnvironment,
) -> ::std::result::Result<::serde_json::Value, ::anyhow::Error> {
#body
}
});
}
} }
Ok(api_func_name) Ok(api_func_name)
@ -538,7 +616,7 @@ fn extract_normal_parameter(
) -> Result<(), Error> { ) -> Result<(), Error> {
let span = name_span; // renamed during refactorization let span = name_span; // renamed during refactorization
let name_str = syn::LitStr::new(name.as_str(), span); let name_str = syn::LitStr::new(name.as_str(), span);
let arg_name = Ident::new(&format!("input_arg_{}", name.as_ident().to_string()), span); let arg_name = Ident::new(&format!("input_arg_{}", name.as_ident()), span);
let default_value = param.entry.schema.find_schema_property("default"); let default_value = param.entry.schema.find_schema_property("default");
@ -621,7 +699,7 @@ fn extract_normal_parameter(
let ty = param.ty; let ty = param.ty;
body.extend(quote_spanned! { span => body.extend(quote_spanned! { span =>
let #arg_name = <#ty as ::serde::Deserialize>::deserialize( let #arg_name = <#ty as ::serde::Deserialize>::deserialize(
::proxmox::api::de::ExtractValueDeserializer::try_new( ::proxmox_schema::de::ExtractValueDeserializer::try_new(
input_map, input_map,
#schema_ref, #schema_ref,
) )
@ -674,10 +752,10 @@ fn serialize_input_schema(
input_schema.to_typed_schema(&mut ts)?; input_schema.to_typed_schema(&mut ts)?;
return Ok(( return Ok((
quote_spanned! { func_sig_span => quote_spanned! { func_sig_span =>
pub const #input_schema_name: ::proxmox::api::schema::ObjectSchema = #ts; pub const #input_schema_name: ::proxmox_schema::ObjectSchema = #ts;
}, },
quote_spanned! { func_sig_span => quote_spanned! { func_sig_span =>
::proxmox::api::schema::ParameterSchema::Object(&#input_schema_name) ::proxmox_schema::ParameterSchema::Object(&#input_schema_name)
}, },
)); ));
} }
@ -729,7 +807,7 @@ fn serialize_input_schema(
( (
quote_spanned!(func_sig_span => quote_spanned!(func_sig_span =>
const #inner_schema_name: ::proxmox::api::schema::Schema = #obj_schema; const #inner_schema_name: ::proxmox_schema::Schema = #obj_schema;
), ),
quote_spanned!(func_sig_span => &#inner_schema_name,), quote_spanned!(func_sig_span => &#inner_schema_name,),
) )
@ -742,8 +820,8 @@ fn serialize_input_schema(
quote_spanned! { func_sig_span => quote_spanned! { func_sig_span =>
#inner_schema #inner_schema
pub const #input_schema_name: ::proxmox::api::schema::AllOfSchema = pub const #input_schema_name: ::proxmox_schema::AllOfSchema =
::proxmox::api::schema::AllOfSchema::new( ::proxmox_schema::AllOfSchema::new(
#description, #description,
&[ &[
#inner_schema_ref #inner_schema_ref
@ -752,7 +830,7 @@ fn serialize_input_schema(
); );
}, },
quote_spanned! { func_sig_span => quote_spanned! { func_sig_span =>
::proxmox::api::schema::ParameterSchema::AllOf(&#input_schema_name) ::proxmox_schema::ParameterSchema::AllOf(&#input_schema_name)
}, },
)) ))
} }


@ -19,6 +19,7 @@ use syn::{Expr, ExprPath, Ident};
use crate::util::{FieldName, JSONObject, JSONValue, Maybe}; use crate::util::{FieldName, JSONObject, JSONValue, Maybe};
mod attributes;
mod enums; mod enums;
mod method; mod method;
mod structs; mod structs;
@ -146,9 +147,9 @@ impl Schema {
/// Create the token stream for a reference schema (`ExternType` or `ExternSchema`). /// Create the token stream for a reference schema (`ExternType` or `ExternSchema`).
fn to_schema_reference(&self) -> Option<TokenStream> { fn to_schema_reference(&self) -> Option<TokenStream> {
match &self.item { match &self.item {
SchemaItem::ExternType(path) => { SchemaItem::ExternType(path) => Some(
Some(quote_spanned! { path.span() => &#path::API_SCHEMA }) quote_spanned! { path.span() => &<#path as ::proxmox_schema::ApiType>::API_SCHEMA },
} ),
SchemaItem::ExternSchema(path) => Some(quote_spanned! { path.span() => &#path }), SchemaItem::ExternSchema(path) => Some(quote_spanned! { path.span() => &#path }),
_ => None, _ => None,
} }
@ -198,6 +199,12 @@ impl Schema {
.and_then(|obj| obj.find_property_by_ident_mut(key)) .and_then(|obj| obj.find_property_by_ident_mut(key))
} }
fn remove_obj_property_by_ident(&mut self, key: &str) -> bool {
self.as_object_mut()
.expect("tried to remove object property on non-object schema")
.remove_property_by_ident(key)
}
// FIXME: Should we turn the property list into a map? We used to have no need to find keys in // FIXME: Should we turn the property list into a map? We used to have no need to find keys in
// it, but we do now... // it, but we do now...
fn find_schema_property(&self, key: &str) -> Option<&syn::Expr> { fn find_schema_property(&self, key: &str) -> Option<&syn::Expr> {
@ -316,31 +323,31 @@ impl SchemaItem {
SchemaItem::Null(span) => { SchemaItem::Null(span) => {
let description = check_description()?; let description = check_description()?;
ts.extend(quote_spanned! { *span => ts.extend(quote_spanned! { *span =>
::proxmox::api::schema::NullSchema::new(#description) ::proxmox_schema::NullSchema::new(#description)
}); });
} }
SchemaItem::Boolean(span) => { SchemaItem::Boolean(span) => {
let description = check_description()?; let description = check_description()?;
ts.extend(quote_spanned! { *span => ts.extend(quote_spanned! { *span =>
::proxmox::api::schema::BooleanSchema::new(#description) ::proxmox_schema::BooleanSchema::new(#description)
}); });
} }
SchemaItem::Integer(span) => { SchemaItem::Integer(span) => {
let description = check_description()?; let description = check_description()?;
ts.extend(quote_spanned! { *span => ts.extend(quote_spanned! { *span =>
::proxmox::api::schema::IntegerSchema::new(#description) ::proxmox_schema::IntegerSchema::new(#description)
}); });
} }
SchemaItem::Number(span) => { SchemaItem::Number(span) => {
let description = check_description()?; let description = check_description()?;
ts.extend(quote_spanned! { *span => ts.extend(quote_spanned! { *span =>
::proxmox::api::schema::NumberSchema::new(#description) ::proxmox_schema::NumberSchema::new(#description)
}); });
} }
SchemaItem::String(span) => { SchemaItem::String(span) => {
let description = check_description()?; let description = check_description()?;
ts.extend(quote_spanned! { *span => ts.extend(quote_spanned! { *span =>
::proxmox::api::schema::StringSchema::new(#description) ::proxmox_schema::StringSchema::new(#description)
}); });
} }
SchemaItem::Object(obj) => { SchemaItem::Object(obj) => {
@ -348,7 +355,7 @@ impl SchemaItem {
let mut elems = TokenStream::new(); let mut elems = TokenStream::new();
obj.to_schema_inner(&mut elems)?; obj.to_schema_inner(&mut elems)?;
ts.extend(quote_spanned! { obj.span => ts.extend(quote_spanned! { obj.span =>
::proxmox::api::schema::ObjectSchema::new(#description, &[#elems]) ::proxmox_schema::ObjectSchema::new(#description, &[#elems])
}); });
} }
SchemaItem::Array(array) => { SchemaItem::Array(array) => {
@ -356,7 +363,7 @@ impl SchemaItem {
let mut items = TokenStream::new(); let mut items = TokenStream::new();
array.to_schema(&mut items)?; array.to_schema(&mut items)?;
ts.extend(quote_spanned! { array.span => ts.extend(quote_spanned! { array.span =>
::proxmox::api::schema::ArraySchema::new(#description, &#items) ::proxmox_schema::ArraySchema::new(#description, &#items)
}); });
} }
SchemaItem::ExternType(path) => { SchemaItem::ExternType(path) => {
@ -368,7 +375,7 @@ impl SchemaItem {
error!(description => "description not allowed on external type"); error!(description => "description not allowed on external type");
} }
ts.extend(quote_spanned! { path.span() => #path::API_SCHEMA }); ts.extend(quote_spanned! { path.span() => <#path as ::proxmox_schema::ApiType>::API_SCHEMA });
return Ok(true); return Ok(true);
} }
SchemaItem::ExternSchema(path) => { SchemaItem::ExternSchema(path) => {
@ -436,7 +443,7 @@ pub enum OptionType {
/// An updater type uses its "base" type's field's updaters to determine whether the field is /// An updater type uses its "base" type's field's updaters to determine whether the field is
/// supposed to be an option. /// supposed to be an option.
Updater(syn::Type), Updater(Box<syn::Type>),
} }
impl OptionType { impl OptionType {
@ -458,7 +465,7 @@ impl From<bool> for OptionType {
impl From<syn::Type> for OptionType { impl From<syn::Type> for OptionType {
fn from(ty: syn::Type) -> Self { fn from(ty: syn::Type) -> Self {
OptionType::Updater(ty) OptionType::Updater(Box::new(ty))
} }
} }
@ -466,9 +473,7 @@ impl ToTokens for OptionType {
fn to_tokens(&self, tokens: &mut TokenStream) { fn to_tokens(&self, tokens: &mut TokenStream) {
match self { match self {
OptionType::Bool(b) => b.to_tokens(tokens), OptionType::Bool(b) => b.to_tokens(tokens),
OptionType::Updater(ty) => tokens.extend(quote! { OptionType::Updater(_) => tokens.extend(quote! { true }),
<#ty as ::proxmox::api::schema::Updatable>::UPDATER_IS_OPTION
}),
} }
} }
} }
@ -644,6 +649,20 @@ impl SchemaObject {
.find(|p| p.name.as_ident_str() == key) .find(|p| p.name.as_ident_str() == key)
} }
fn remove_property_by_ident(&mut self, key: &str) -> bool {
match self
.properties_
.iter()
.position(|p| p.name.as_ident_str() == key)
{
Some(idx) => {
self.properties_.remove(idx);
true
}
None => false,
}
}
fn extend_properties(&mut self, new_fields: Vec<ObjectEntry>) { fn extend_properties(&mut self, new_fields: Vec<ObjectEntry>) {
self.properties_.extend(new_fields); self.properties_.extend(new_fields);
self.sort_properties(); self.sort_properties();


@ -18,6 +18,7 @@ use anyhow::Error;
use proc_macro2::{Ident, Span, TokenStream}; use proc_macro2::{Ident, Span, TokenStream};
use quote::quote_spanned; use quote::quote_spanned;
use super::attributes::UpdaterFieldAttributes;
use super::Schema; use super::Schema;
use crate::api::{self, ObjectEntry, SchemaItem}; use crate::api::{self, ObjectEntry, SchemaItem};
use crate::serde; use crate::serde;
@ -58,7 +59,15 @@ fn handle_unit_struct(attribs: JSONObject, stru: syn::ItemStruct) -> Result<Toke
get_struct_description(&mut schema, &stru)?; get_struct_description(&mut schema, &stru)?;
finish_schema(schema, &stru, &stru.ident) let name = &stru.ident;
let mut schema = finish_schema(schema, &stru, name)?;
schema.extend(quote_spanned! { name.span() =>
impl ::proxmox_schema::UpdaterType for #name {
type Updater = Option<Self>;
}
});
Ok(schema)
} }
fn finish_schema( fn finish_schema(
@ -74,8 +83,9 @@ fn finish_schema(
Ok(quote_spanned! { name.span() => Ok(quote_spanned! { name.span() =>
#stru #stru
impl #name {
pub const API_SCHEMA: ::proxmox::api::schema::Schema = #schema; impl ::proxmox_schema::ApiType for #name {
const API_SCHEMA: ::proxmox_schema::Schema = #schema;
} }
}) })
} }
@ -159,7 +169,7 @@ fn handle_regular_struct(
.ok_or_else(|| format_err!(field => "field without name?"))?; .ok_or_else(|| format_err!(field => "field without name?"))?;
if let Some(renamed) = attrs.rename { if let Some(renamed) = attrs.rename {
(renamed.into_str(), ident.span()) (renamed.value(), ident.span())
} else if let Some(rename_all) = container_attrs.rename_all { } else if let Some(rename_all) = container_attrs.rename_all {
let name = rename_all.apply_to_field(&ident.to_string()); let name = rename_all.apply_to_field(&ident.to_string());
(name, ident.span()) (name, ident.span())
@ -261,7 +271,16 @@ fn handle_regular_struct(
} }
}); });
if derive { if derive {
derive_updater(stru.clone(), schema.clone(), &mut stru)? let updater = derive_updater(stru.clone(), schema.clone(), &mut stru)?;
// make sure we don't leave #[updater] attributes on the original struct:
if let syn::Fields::Named(fields) = &mut stru.fields {
for field in &mut fields.named {
let _ = UpdaterFieldAttributes::from_attributes(&mut field.attrs);
}
}
updater
} else { } else {
TokenStream::new() TokenStream::new()
} }
@ -317,7 +336,7 @@ fn finish_all_of_struct(
( (
quote_spanned!(name.span() => quote_spanned!(name.span() =>
const INNER_API_SCHEMA: ::proxmox::api::schema::Schema = #obj_schema; const INNER_API_SCHEMA: ::proxmox_schema::Schema = #obj_schema;
), ),
quote_spanned!(name.span() => &Self::INNER_API_SCHEMA,), quote_spanned!(name.span() => &Self::INNER_API_SCHEMA,),
) )
@ -328,10 +347,14 @@ fn finish_all_of_struct(
Ok(quote_spanned!(name.span() => Ok(quote_spanned!(name.span() =>
#stru #stru
impl #name { impl #name {
#inner_schema #inner_schema
pub const API_SCHEMA: ::proxmox::api::schema::Schema = }
::proxmox::api::schema::AllOfSchema::new(
impl ::proxmox_schema::ApiType for #name {
const API_SCHEMA: ::proxmox_schema::Schema =
::proxmox_schema::AllOfSchema::new(
#description, #description,
&[ &[
#inner_schema_ref #inner_schema_ref
@ -378,77 +401,35 @@ fn derive_updater(
mut schema: Schema, mut schema: Schema,
original_struct: &mut syn::ItemStruct, original_struct: &mut syn::ItemStruct,
) -> Result<TokenStream, Error> { ) -> Result<TokenStream, Error> {
let original_name = &original_struct.ident;
stru.ident = Ident::new(&format!("{}Updater", stru.ident), stru.ident.span()); stru.ident = Ident::new(&format!("{}Updater", stru.ident), stru.ident.span());
if !util::derived_items(&original_struct.attrs).any(|p| p.is_ident("Default")) { if !util::derives_trait(&original_struct.attrs, "Default") {
original_struct.attrs.push(util::make_derive_attribute( stru.attrs.push(util::make_derive_attribute(
Span::call_site(), Span::call_site(),
quote::quote! { Default }, quote::quote! { Default },
)); ));
} }
original_struct.attrs.push(util::make_derive_attribute(
Span::call_site(),
quote::quote! { ::proxmox::api::schema::Updatable },
));
let updater_name = &stru.ident; let updater_name = &stru.ident;
let updater_name_str = syn::LitStr::new(&updater_name.to_string(), updater_name.span());
original_struct.attrs.push(util::make_attribute(
Span::call_site(),
util::make_path(Span::call_site(), false, &["updatable"]),
quote::quote! { (updater = #updater_name_str) },
));
let mut all_of_schemas = TokenStream::new(); let mut all_of_schemas = TokenStream::new();
let mut is_empty_impl = TokenStream::new(); let mut is_empty_impl = TokenStream::new();
if let syn::Fields::Named(fields) = &mut stru.fields { if let syn::Fields::Named(fields) = &mut stru.fields {
for field in &mut fields.named { for mut field in std::mem::take(&mut fields.named) {
let field_name = field.ident.as_ref().expect("unnamed field in FieldsNamed"); match handle_updater_field(
let field_name_string = field_name.to_string(); &mut field,
&mut schema,
let field_schema = match schema.find_obj_property_by_ident_mut(&field_name_string) { &mut all_of_schemas,
Some(obj) => obj, &mut is_empty_impl,
None => { ) {
error!( Ok(FieldAction::Keep) => fields.named.push(field),
field_name.span(), Ok(FieldAction::Skip) => (),
"failed to find schema entry for {:?}", field_name_string, Err(err) => {
); crate::add_error(err);
continue; fields.named.push(field);
} }
};
field_schema.optional = field.ty.clone().into();
let span = Span::call_site();
let updater = syn::TypePath {
qself: Some(syn::QSelf {
lt_token: syn::token::Lt { spans: [span] },
ty: Box::new(field.ty.clone()),
position: 4, // 'Updater' is the 4th item in the 'segments' below
as_token: Some(syn::token::As { span }),
gt_token: syn::token::Gt { spans: [span] },
}),
path: util::make_path(
span,
true,
&["proxmox", "api", "schema", "Updatable", "Updater"],
),
};
field.ty = syn::Type::Path(updater);
if field_schema.flatten_in_struct {
let updater_ty = &field.ty;
all_of_schemas.extend(quote::quote! {&#updater_ty::API_SCHEMA,});
} }
if !is_empty_impl.is_empty() {
is_empty_impl.extend(quote::quote! { && });
}
is_empty_impl.extend(quote::quote! {
self.#field_name.is_empty()
});
} }
} }
@ -459,15 +440,105 @@ fn derive_updater(
}; };
if !is_empty_impl.is_empty() { if !is_empty_impl.is_empty() {
output = quote::quote!( output.extend(quote::quote!(
#output impl ::proxmox_schema::Updater for #updater_name {
impl ::proxmox::api::schema::Updater for #updater_name {
fn is_empty(&self) -> bool { fn is_empty(&self) -> bool {
#is_empty_impl #is_empty_impl
} }
} }
); ));
} }
output.extend(quote::quote!(
impl ::proxmox_schema::UpdaterType for #original_name {
type Updater = #updater_name;
}
));
Ok(output) Ok(output)
} }
enum FieldAction {
Keep,
Skip,
}
fn handle_updater_field(
field: &mut syn::Field,
schema: &mut Schema,
all_of_schemas: &mut TokenStream,
is_empty_impl: &mut TokenStream,
) -> Result<FieldAction, syn::Error> {
let updater_attrs = UpdaterFieldAttributes::from_attributes(&mut field.attrs);
let field_name = field.ident.as_ref().expect("unnamed field in FieldsNamed");
let field_name_string = field_name.to_string();
if updater_attrs.skip() {
if !schema.remove_obj_property_by_ident(&field_name_string) {
bail!(
field_name.span(),
"failed to find schema entry for {:?}",
field_name_string,
);
}
return Ok(FieldAction::Skip);
}
let field_schema = match schema.find_obj_property_by_ident_mut(&field_name_string) {
Some(obj) => obj,
None => {
bail!(
field_name.span(),
"failed to find schema entry for {:?}",
field_name_string,
);
}
};
let span = Span::call_site();
field_schema.optional = field.ty.clone().into();
let updater = match updater_attrs.ty() {
Some(ty) => ty.clone(),
None => {
syn::TypePath {
qself: Some(syn::QSelf {
lt_token: syn::token::Lt { spans: [span] },
ty: Box::new(field.ty.clone()),
position: 2, // 'Updater' is item index 2 in the 'segments' below
as_token: Some(syn::token::As { span }),
gt_token: syn::token::Gt { spans: [span] },
}),
path: util::make_path(span, true, &["proxmox_schema", "UpdaterType", "Updater"]),
}
}
};
updater_attrs.replace_serde_attributes(&mut field.attrs);
// we also need to update the schema to point to the updater's schema for `type: Foo` entries
if let SchemaItem::ExternType(path) = &mut field_schema.schema.item {
*path = syn::ExprPath {
attrs: Vec::new(),
qself: updater.qself.clone(),
path: updater.path.clone(),
};
}
field.ty = syn::Type::Path(updater);
if field_schema.flatten_in_struct {
let updater_ty = &field.ty;
all_of_schemas
.extend(quote::quote! {&<#updater_ty as ::proxmox_schema::ApiType>::API_SCHEMA,});
}
if !is_empty_impl.is_empty() {
is_empty_impl.extend(quote::quote! { && });
}
is_empty_impl.extend(quote::quote! {
self.#field_name.is_empty()
});
Ok(FieldAction::Keep)
}

@ -69,7 +69,7 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
``` ```
# use proxmox_api_macro::api; # use proxmox_api_macro::api;
# use proxmox::api::{ApiMethod, RpcEnvironment}; # use proxmox_router::{ApiMethod, RpcEnvironment};
use anyhow::Error; use anyhow::Error;
use serde_json::Value; use serde_json::Value;
@ -103,7 +103,7 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
}, },
"CSRFPreventionToken": { "CSRFPreventionToken": {
type: String, type: String,
description: "Cross Site Request Forgerty Prevention Token.", description: "Cross Site Request Forgery Prevention Token.",
}, },
}, },
}, },
@ -178,19 +178,19 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
```no_run ```no_run
# struct RenamedStruct; # struct RenamedStruct;
impl RenamedStruct { impl RenamedStruct {
pub const API_SCHEMA: &'static ::proxmox::api::schema::Schema = pub const API_SCHEMA: &'static ::proxmox_schema::Schema =
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"An example of a struct with renamed fields.", "An example of a struct with renamed fields.",
&[ &[
( (
"test-string", "test-string",
false, false,
&::proxmox::api::schema::StringSchema::new("A test string.").schema(), &::proxmox_schema::StringSchema::new("A test string.").schema(),
), ),
( (
"SomeOther", "SomeOther",
true, true,
&::proxmox::api::schema::StringSchema::new( &::proxmox_schema::StringSchema::new(
"An optional auto-derived value for testing:", "An optional auto-derived value for testing:",
) )
.schema(), .schema(),
@ -231,7 +231,7 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
# Deriving an `Updater`: # Deriving an `Updater`:
An "Updater" struct can be generated automatically for a type. This affects the `Updatable` An "Updater" struct can be generated automatically for a type. This affects the `UpdaterType`
trait implementation generated, as it will set the associated trait implementation generated, as it will set the associated
`type Updater = TheDerivedUpdater`. `type Updater = TheDerivedUpdater`.
@ -239,9 +239,13 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
This is only supported for `struct`s with named fields and will generate a new `struct` whose This is only supported for `struct`s with named fields and will generate a new `struct` whose
name is suffixed with `Updater` containing the `Updater` types of each field as a member. name is suffixed with `Updater` containing the `Updater` types of each field as a member.
Additionally the `#[updater(fixed)]` option is available to make it illegal for an updater to For the updater, the following additional attributes can be used:
modify a field (generating an error if it is set), while still allowing it to be used to create
a new object via the `build_from()` method. - `#[updater(skip)]`: skip the field entirely in the updater
- `#[updater(type = "DifferentType")]`: use `DifferentType` instead of `Option<OriginalType>`
for the updater field.
- `#[updater(serde(<content>))]`: *replace* the `#[serde]` attributes in the generated updater
with `<content>`. This can be used to have different `skip_serializing_if` serde attributes.
```ignore ```ignore
#[api] #[api]
@ -265,10 +269,10 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
#[api] #[api]
/// An example of a simple struct type. /// An example of a simple struct type.
pub struct MyTypeUpdater { pub struct MyTypeUpdater {
one: Option<String>, // really <String as Updatable>::Updater one: Option<String>, // really <String as UpdaterType>::Updater
#[serde(skip_serializing_if = "Option::is_none")] #[serde(skip_serializing_if = "Option::is_none")]
opt: Option<String>, // really <Option<String> as Updatable>::Updater opt: Option<String>, // really <Option<String> as UpdaterType>::Updater
} }
impl Updater for MyTypeUpdater { impl Updater for MyTypeUpdater {
@ -277,36 +281,8 @@ fn router_do(item: TokenStream) -> Result<TokenStream, Error> {
} }
} }
impl Updatable for MyType { impl UpdaterType for MyType {
type Updater = MyTypeUpdater; type Updater = MyTypeUpdater;
fn update_from<T>(&mut self, from: MyTypeUpdater, delete: &[T]) -> Result<(), Error>
where
T: AsRef<str>,
{
for delete in delete {
match delete.as_ref() {
"opt" => { self.opt = None; }
_ => (),
}
}
self.one.update_from(from.one)?;
self.opt.update_from(from.opt)?;
Ok(())
}
fn try_build_from(from: MyTypeUpdater) -> Result<Self, Error> {
Ok(Self {
// This amounts to `from.one.ok_or_else("cannot build from None value")?`
one: Updatable::try_build_from(from.one)
.map_err(|err| format_err!("failed to build value for field 'one': {}", err))?,
// This amounts to `from.opt`
opt: Updatable::try_build_from(from.opt)
.map_err(|err| format_err!("failed to build value for field 'opt': {}", err))?,
})
}
} }
``` ```
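The attribute list documented above is easiest to read next to a concrete struct. The following is a hedged sketch, not taken from this repository (the `Account` type and its fields are made up), combining `#[updater(skip)]` and `#[updater(serde(...))]` under the new `proxmox_schema` layout; `#[updater(type = "...")]` is omitted since the default `<T as UpdaterType>::Updater` already covers the common case.

```
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, Updater};

#[api]
/// Example account configuration (illustration only).
#[derive(Deserialize, Serialize, Updater)]
#[serde(rename_all = "kebab-case")]
pub struct Account {
    /// Immutable identifier: left out of the generated `AccountUpdater` entirely.
    #[updater(skip)]
    account_id: String,

    /// Free-form comment.
    #[serde(skip_serializing_if = "String::is_empty", default)]
    #[updater(serde(skip_serializing_if = "Option::is_none"))]
    comment: String,
}
```

With this, `AccountUpdater` would contain only the `comment` field (as an `Option<String>`), carrying the replacement serde attribute instead of the original one.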
@ -320,17 +296,21 @@ pub fn api(attr: TokenStream_1, item: TokenStream_1) -> TokenStream_1 {
/// This is a dummy derive macro actually handled by `#[api]`! /// This is a dummy derive macro actually handled by `#[api]`!
#[doc(hidden)] #[doc(hidden)]
#[proc_macro_derive(Updater, attributes(updater, updatable, serde))] #[proc_macro_derive(Updater, attributes(updater, serde))]
pub fn derive_updater(_item: TokenStream_1) -> TokenStream_1 { pub fn derive_updater(_item: TokenStream_1) -> TokenStream_1 {
TokenStream_1::new() TokenStream_1::new()
} }
/// Create the default `Updatable` implementation from an `Option<Self>`. /// Create the default `UpdaterType` implementation as an `Option<Self>`.
#[proc_macro_derive(Updatable, attributes(updatable, serde))] #[proc_macro_derive(UpdaterType, attributes(updater_type, serde))]
pub fn derive_updatable(item: TokenStream_1) -> TokenStream_1 { pub fn derive_updater_type(item: TokenStream_1) -> TokenStream_1 {
let _error_guard = init_local_error(); let _error_guard = init_local_error();
let item: TokenStream = item.into(); let item: TokenStream = item.into();
handle_error(item.clone(), updater::updatable(item).map_err(Error::from)).into() handle_error(
item.clone(),
updater::updater_type(item).map_err(Error::from),
)
.into()
} }
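For contrast with the field-level `Updater` derive, the `UpdaterType` derive above only pins the associated type to `Option<Self>`. A minimal hedged sketch (mirroring, not copied from, the crate's own tests; `Hostname` is a made-up example type):

```
use proxmox_schema::UpdaterType;

/// A plain newtype whose updater is simply an optional value.
#[derive(UpdaterType)]
pub struct Hostname(String);

// The derive expands to `impl UpdaterType for Hostname { type Updater = Option<Self>; }`,
// so this function compiles only if the associated type really is `Option<Hostname>`.
fn _assert_updater(u: <Hostname as UpdaterType>::Updater) -> Option<Hostname> {
    u
}
```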
thread_local!(static NON_FATAL_ERRORS: RefCell<Option<TokenStream>> = RefCell::new(None)); thread_local!(static NON_FATAL_ERRORS: RefCell<Option<TokenStream>> = RefCell::new(None));

@ -5,9 +5,10 @@
use std::convert::TryFrom; use std::convert::TryFrom;
use crate::util::{AttrArgs, FieldName}; use crate::util::AttrArgs;
/// Serde name types. /// Serde name types.
#[allow(clippy::enum_variant_names)]
#[derive(Clone, Copy, Debug, Eq, PartialEq)] #[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum RenameAll { pub enum RenameAll {
LowerCase, LowerCase,
@ -158,44 +159,51 @@ impl TryFrom<&[syn::Attribute]> for ContainerAttrib {
/// `serde` field/variant attributes we support /// `serde` field/variant attributes we support
#[derive(Default)] #[derive(Default)]
pub struct SerdeAttrib { pub struct SerdeAttrib {
pub rename: Option<FieldName>, pub rename: Option<syn::LitStr>,
pub flatten: bool, pub flatten: bool,
} }
impl SerdeAttrib {
pub fn parse_attribute(&mut self, attrib: &syn::Attribute) -> Result<(), syn::Error> {
use syn::{Meta, NestedMeta};
if !attrib.path.is_ident("serde") {
return Ok(());
}
let args: AttrArgs = syn::parse2(attrib.tokens.clone())?;
for arg in args.args {
match arg {
NestedMeta::Meta(Meta::NameValue(var)) if var.path.is_ident("rename") => {
match var.lit {
syn::Lit::Str(rename) => {
if self.rename.is_some() && self.rename.as_ref() != Some(&rename) {
error!(&rename => "multiple conflicting 'rename' attributes");
}
self.rename = Some(rename);
}
_ => error!(var.lit => "'rename' value must be a string literal"),
}
}
NestedMeta::Meta(Meta::Path(path)) if path.is_ident("flatten") => {
self.flatten = true;
}
_ => continue,
}
}
Ok(())
}
}
impl TryFrom<&[syn::Attribute]> for SerdeAttrib { impl TryFrom<&[syn::Attribute]> for SerdeAttrib {
type Error = syn::Error; type Error = syn::Error;
fn try_from(attributes: &[syn::Attribute]) -> Result<Self, syn::Error> { fn try_from(attributes: &[syn::Attribute]) -> Result<Self, syn::Error> {
use syn::{Meta, NestedMeta};
let mut this: Self = Default::default(); let mut this: Self = Default::default();
for attrib in attributes { for attrib in attributes {
if !attrib.path.is_ident("serde") { this.parse_attribute(attrib)?;
continue;
}
let args: AttrArgs = syn::parse2(attrib.tokens.clone())?;
for arg in args.args {
match arg {
NestedMeta::Meta(Meta::NameValue(var)) if var.path.is_ident("rename") => {
match var.lit {
syn::Lit::Str(lit) => {
let rename = FieldName::from(&lit);
if this.rename.is_some() && this.rename.as_ref() != Some(&rename) {
error!(lit => "multiple conflicting 'rename' attributes");
}
this.rename = Some(rename);
}
_ => error!(var.lit => "'rename' value must be a string literal"),
}
}
NestedMeta::Meta(Meta::Path(path)) if path.is_ident("flatten") => {
this.flatten = true;
}
_ => continue,
}
}
} }
Ok(this) Ok(this)
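In practice the parser above only reacts to two serde arguments, `rename` and `flatten`; everything else is passed over. A hedged illustration of the attribute shapes it is meant to pick up (the `Example` struct is illustrative, not from this repository):

```
use std::collections::HashMap;

use serde::{Deserialize, Serialize};

#[derive(Deserialize, Serialize)]
pub struct Example {
    /// Picked up: the schema entry uses the literal "test-string" as its name.
    #[serde(rename = "test-string")]
    test_string: String,

    /// Picked up: flattened fields contribute to an AllOf-style schema.
    #[serde(flatten)]
    extra: HashMap<String, String>,

    /// Not recognized here: `default` is simply passed over by this parser.
    #[serde(default)]
    count: u64,
}
```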

@ -1,302 +1,35 @@
use std::convert::TryFrom;
use proc_macro2::{Ident, Span, TokenStream}; use proc_macro2::{Ident, Span, TokenStream};
use quote::{quote, quote_spanned}; use quote::quote_spanned;
use syn::spanned::Spanned; use syn::spanned::Spanned;
use crate::serde; pub(crate) fn updater_type(item: TokenStream) -> Result<TokenStream, syn::Error> {
use crate::util;
pub(crate) fn updatable(item: TokenStream) -> Result<TokenStream, syn::Error> {
let item: syn::Item = syn::parse2(item)?; let item: syn::Item = syn::parse2(item)?;
let full_span = item.span(); let full_span = item.span();
match item { Ok(match item {
syn::Item::Struct(syn::ItemStruct { syn::Item::Struct(syn::ItemStruct {
fields: syn::Fields::Named(named), ident, generics, ..
attrs, }) => derive_updater_type(full_span, ident, generics),
ident,
generics,
..
}) => derive_named_struct_updatable(attrs, full_span, ident, generics, named),
syn::Item::Struct(syn::ItemStruct {
attrs,
ident,
generics,
..
}) => derive_default_updatable(attrs, full_span, ident, generics),
syn::Item::Enum(syn::ItemEnum { syn::Item::Enum(syn::ItemEnum {
attrs, ident, generics, ..
ident, }) => derive_updater_type(full_span, ident, generics),
generics, _ => bail!(item => "`UpdaterType` cannot be derived for this type"),
.. })
}) => derive_default_updatable(attrs, full_span, ident, generics),
_ => bail!(item => "`Updatable` can only be derived for structs"),
}
} }
fn no_generics(generics: syn::Generics) { fn no_generics(generics: syn::Generics) {
if let Some(lt) = generics.lt_token { if let Some(lt) = generics.lt_token {
error!(lt => "deriving `Updatable` for a generic enum is not supported"); error!(lt => "deriving `UpdaterType` for a generic enum is not supported");
} else if let Some(wh) = generics.where_clause { } else if let Some(wh) = generics.where_clause {
error!(wh => "deriving `Updatable` on enums with generic bounds is not supported"); error!(wh => "deriving `UpdaterType` on enums with generic bounds is not supported");
} }
} }
fn derive_default_updatable( fn derive_updater_type(full_span: Span, ident: Ident, generics: syn::Generics) -> TokenStream {
attrs: Vec<syn::Attribute>,
full_span: Span,
ident: Ident,
generics: syn::Generics,
) -> Result<TokenStream, syn::Error> {
no_generics(generics); no_generics(generics);
let args = UpdatableArgs::from_attributes(attrs);
if let Some(updater) = args.updater {
error!(updater => "`updater` updater attribute not supported for this type");
}
Ok(default_updatable(full_span, ident))
}
fn default_updatable(full_span: Span, ident: Ident) -> TokenStream {
quote_spanned! { full_span => quote_spanned! { full_span =>
#[automatically_derived] impl ::proxmox_schema::UpdaterType for #ident {
impl ::proxmox::api::schema::Updatable for #ident { type Updater = Option<Self>;
type Updater = Option<#ident>;
const UPDATER_IS_OPTION: bool = true;
fn update_from<T: AsRef<str>>(
&mut self,
from: Option<#ident>,
_delete: &[T],
) -> Result<(), ::anyhow::Error> {
if let Some(val) = from {
*self = val;
}
Ok(())
}
fn try_build_from(from: Option<#ident>) -> Result<Self, ::anyhow::Error> {
from.ok_or_else(|| ::anyhow::format_err!("cannot build from None value"))
}
} }
} }
} }
fn derive_named_struct_updatable(
attrs: Vec<syn::Attribute>,
full_span: Span,
ident: Ident,
generics: syn::Generics,
fields: syn::FieldsNamed,
) -> Result<TokenStream, syn::Error> {
no_generics(generics);
let serde_container_attrs = serde::ContainerAttrib::try_from(&attrs[..])?;
let args = UpdatableArgs::from_attributes(attrs);
let updater = match args.updater {
Some(updater) => updater,
None => return Ok(default_updatable(full_span, ident)),
};
let mut delete = TokenStream::new();
let mut apply = TokenStream::new();
let mut build = TokenStream::new();
for field in fields.named {
let serde_attrs = serde::SerdeAttrib::try_from(&field.attrs[..])?;
let attrs = UpdaterFieldArgs::from_attributes(field.attrs);
let field_ident = field
.ident
.as_ref()
.expect("unnamed field in named struct?");
let field_name_string = if let Some(renamed) = serde_attrs.rename {
renamed.into_str()
} else if let Some(rename_all) = serde_container_attrs.rename_all {
let name = rename_all.apply_to_field(&field_ident.to_string());
name
} else {
field_ident.to_string()
};
let build_err = format!(
"failed to build value for field '{}': {{}}",
field_name_string
);
if util::is_option_type(&field.ty).is_some() {
delete.extend(quote! {
#field_name_string => { self.#field_ident = None; }
});
build.extend(quote! {
#field_ident: ::proxmox::api::schema::Updatable::try_build_from(
from.#field_ident
)
.map_err(|err| ::anyhow::format_err!(#build_err, err))?,
});
} else {
build.extend(quote! {
#field_ident: ::proxmox::api::schema::Updatable::try_build_from(
from.#field_ident
)
.map_err(|err| ::anyhow::format_err!(#build_err, err))?,
});
}
if attrs.fixed {
let error = format!(
"field '{}' must not be set when updating existing data",
field_ident
);
apply.extend(quote! {
if from.#field_ident.is_some() {
::anyhow::bail!(#error);
}
});
} else {
apply.extend(quote! {
::proxmox::api::schema::Updatable::update_from(
&mut self.#field_ident,
from.#field_ident,
delete,
)?;
});
}
}
if !delete.is_empty() {
delete = quote! {
for delete in delete {
match delete.as_ref() {
#delete
_ => continue,
}
}
};
}
Ok(quote! {
#[automatically_derived]
impl ::proxmox::api::schema::Updatable for #ident {
type Updater = #updater;
const UPDATER_IS_OPTION: bool = false;
fn update_from<T: AsRef<str>>(
&mut self,
from: Self::Updater,
delete: &[T],
) -> Result<(), ::anyhow::Error> {
#delete
#apply
Ok(())
}
fn try_build_from(from: Self::Updater) -> Result<Self, ::anyhow::Error> {
Ok(Self {
#build
})
}
}
})
}
#[derive(Default)]
struct UpdatableArgs {
updater: Option<syn::Type>,
}
impl UpdatableArgs {
fn from_attributes(attributes: Vec<syn::Attribute>) -> Self {
let mut this = Self::default();
for_attributes(attributes, "updatable", |meta| this.parse_nested(meta));
this
}
fn parse_nested(&mut self, meta: syn::NestedMeta) -> Result<(), syn::Error> {
match meta {
syn::NestedMeta::Meta(syn::Meta::NameValue(nv)) => self.parse_name_value(nv),
other => bail!(other => "invalid updater argument"),
}
}
fn parse_name_value(&mut self, nv: syn::MetaNameValue) -> Result<(), syn::Error> {
if nv.path.is_ident("updater") {
let updater: syn::Type = match nv.lit {
// we could use `s.parse()` but it doesn't make sense to put the original struct
// name as spanning info here, so instead, we use the call site:
syn::Lit::Str(s) => syn::parse_str(&s.value())?,
other => bail!(other => "updater argument must be a string literal"),
};
if self.updater.is_some() {
error!(updater.span(), "multiple 'updater' attributes not allowed");
}
self.updater = Some(updater);
Ok(())
} else {
bail!(nv.path => "unrecognized updater argument");
}
}
}
#[derive(Default)]
struct UpdaterFieldArgs {
/// A fixed field must not be set in the `Updater` when the data is updated via `update_from`,
/// but is still required for the `build()` method.
fixed: bool,
}
impl UpdaterFieldArgs {
fn from_attributes(attributes: Vec<syn::Attribute>) -> Self {
let mut this = Self::default();
for_attributes(attributes, "updater", |meta| this.parse_nested(meta));
this
}
fn parse_nested(&mut self, meta: syn::NestedMeta) -> Result<(), syn::Error> {
match meta {
syn::NestedMeta::Meta(syn::Meta::Path(path)) if path.is_ident("fixed") => {
self.fixed = true;
}
other => bail!(other => "invalid updater argument"),
}
Ok(())
}
}
/// Non-fatally go through all `updater` attributes.
fn for_attributes<F>(attributes: Vec<syn::Attribute>, attr_name: &str, mut func: F)
where
F: FnMut(syn::NestedMeta) -> Result<(), syn::Error>,
{
for meta in meta_iter(attributes) {
let list = match meta {
syn::Meta::List(list) if list.path.is_ident(attr_name) => list,
_ => continue,
};
for entry in list.nested {
match func(entry) {
Ok(()) => (),
Err(err) => crate::add_error(err),
}
}
}
}
fn meta_iter(
attributes: impl IntoIterator<Item = syn::Attribute>,
) -> impl Iterator<Item = syn::Meta> {
attributes.into_iter().filter_map(|attr| {
if attr.style != syn::AttrStyle::Outer {
return None;
}
attr.parse_meta().ok()
})
}

@ -2,7 +2,7 @@ use std::borrow::Borrow;
use std::collections::HashMap; use std::collections::HashMap;
use std::convert::TryFrom; use std::convert::TryFrom;
use proc_macro2::{Ident, Span, TokenStream}; use proc_macro2::{Ident, Span, TokenStream, TokenTree};
use quote::ToTokens; use quote::ToTokens;
use syn::parse::{Parse, ParseStream}; use syn::parse::{Parse, ParseStream};
use syn::punctuated::Punctuated; use syn::punctuated::Punctuated;
@ -376,7 +376,7 @@ impl IntoIterator for JSONObject {
/// An element in a json style map. /// An element in a json style map.
struct JSONMapEntry { struct JSONMapEntry {
pub key: FieldName, pub key: FieldName,
pub colon_token: Token![:], _colon_token: Token![:],
pub value: JSONValue, pub value: JSONValue,
} }
@ -384,7 +384,7 @@ impl Parse for JSONMapEntry {
fn parse(input: ParseStream) -> syn::Result<Self> { fn parse(input: ParseStream) -> syn::Result<Self> {
Ok(Self { Ok(Self {
key: input.parse()?, key: input.parse()?,
colon_token: input.parse()?, _colon_token: input.parse()?,
value: input.parse()?, value: input.parse()?,
}) })
} }
@ -496,7 +496,12 @@ pub fn infer_type(schema: &mut Schema, ty: &syn::Type) -> Result<bool, syn::Erro
} else if api::NUMBERNAMES.iter().any(|n| path.path.is_ident(n)) { } else if api::NUMBERNAMES.iter().any(|n| path.path.is_ident(n)) {
schema.item = SchemaItem::Number(ty.span()); schema.item = SchemaItem::Number(ty.span());
} else { } else {
bail!(ty => "cannot infer parameter type from this rust type"); // bail!(ty => "cannot infer parameter type from this rust type");
schema.item = SchemaItem::ExternType(syn::ExprPath {
attrs: Vec::new(),
qself: path.qself.clone(),
path: path.path.clone(),
});
} }
} }
_ => (), _ => (),
@ -556,7 +561,7 @@ pub fn make_path(span: Span, leading_colon: bool, path: &[&str]) -> syn::Path {
None None
}, },
segments: path segments: path
.into_iter() .iter()
.map(|entry| syn::PathSegment { .map(|entry| syn::PathSegment {
ident: Ident::new(entry, span), ident: Ident::new(entry, span),
arguments: syn::PathArguments::None, arguments: syn::PathArguments::None,
@ -677,9 +682,9 @@ impl<T> Maybe<T> {
} }
} }
impl<T> Into<Option<T>> for Maybe<T> { impl<T> From<Maybe<T>> for Option<T> {
fn into(self) -> Option<T> { fn from(maybe: Maybe<T>) -> Option<T> {
match self { match maybe {
Maybe::Explicit(t) | Maybe::Derived(t) => Some(t), Maybe::Explicit(t) | Maybe::Derived(t) => Some(t),
Maybe::None => None, Maybe::None => None,
} }
@ -689,11 +694,16 @@ impl<T> Into<Option<T>> for Maybe<T> {
/// Helper to iterate over all the `#[derive(...)]` types found in an attribute list. /// Helper to iterate over all the `#[derive(...)]` types found in an attribute list.
pub fn derived_items(attributes: &[syn::Attribute]) -> DerivedItems { pub fn derived_items(attributes: &[syn::Attribute]) -> DerivedItems {
DerivedItems { DerivedItems {
attributes: attributes.into_iter(), attributes: attributes.iter(),
current: None, current: None,
} }
} }
/// Helper to check if a certain trait is being derived.
pub fn derives_trait(attributes: &[syn::Attribute], ident: &str) -> bool {
derived_items(attributes).any(|p| p.is_ident(ident))
}
/// Iterator over the types found in `#[derive(...)]` attributes. /// Iterator over the types found in `#[derive(...)]` attributes.
pub struct DerivedItems<'a> { pub struct DerivedItems<'a> {
current: Option<<Punctuated<syn::NestedMeta, Token![,]> as IntoIterator>::IntoIter>, current: Option<<Punctuated<syn::NestedMeta, Token![,]> as IntoIterator>::IntoIter>,
@ -811,3 +821,116 @@ pub fn make_derive_attribute(span: Span, content: TokenStream) -> syn::Attribute
quote::quote! { (#content) }, quote::quote! { (#content) },
) )
} }
/// Extract (remove) an attribute from a list and run a callback on its parameters.
pub fn extract_attributes(
attributes: &mut Vec<syn::Attribute>,
attr_name: &str,
mut func_matching: impl FnMut(&syn::Attribute, syn::NestedMeta) -> Result<(), syn::Error>,
) {
for attr in std::mem::take(attributes) {
if attr.style != syn::AttrStyle::Outer {
attributes.push(attr);
continue;
}
let meta = match attr.parse_meta() {
Ok(meta) => meta,
Err(err) => {
crate::add_error(err);
attributes.push(attr);
continue;
}
};
let list = match meta {
syn::Meta::List(list) if list.path.is_ident(attr_name) => list,
_ => {
attributes.push(attr);
continue;
}
};
for entry in list.nested {
match func_matching(&attr, entry) {
Ok(()) => (),
Err(err) => crate::add_error(err),
}
}
}
}
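The helper above drains the attribute list, keeps everything it does not recognize, and hands each nested argument to a callback. A standalone sketch of the same take-and-filter pattern, assuming syn 1 with default features (the version this crate targets, given `parse_meta` and `NestedMeta`); the `take_updater_args` function is a hypothetical illustration, not part of the crate:

```
use syn::parse_quote;

// Drain the attribute list, keep what we do not recognize, and collect the
// nested arguments of every `#[updater(...)]` attribute.
fn take_updater_args(attrs: &mut Vec<syn::Attribute>) -> Vec<syn::NestedMeta> {
    let mut found = Vec::new();
    for attr in std::mem::take(attrs) {
        match attr.parse_meta() {
            Ok(syn::Meta::List(list)) if list.path.is_ident("updater") => {
                found.extend(list.nested);
            }
            _ => attrs.push(attr), // not ours (or unparsable): keep it untouched
        }
    }
    found
}

fn main() {
    let mut attrs: Vec<syn::Attribute> = vec![
        parse_quote!(#[updater(skip)]),
        parse_quote!(#[serde(default)]),
    ];
    let args = take_updater_args(&mut attrs);
    assert_eq!(args.len(), 1); // the `skip` argument was extracted
    assert_eq!(attrs.len(), 1); // the serde attribute survives untouched
}
```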
/// Helper to create an error about some duplicate attribute.
pub fn duplicate<T>(prev: &Option<T>, attr: &syn::Path) {
if prev.is_some() {
error!(attr => "duplicate attribute: '{:?}'", attr)
}
}
/// Set a boolean attribute to a value, producing a "duplication" error if it has already been set.
pub fn set_bool(b: &mut Option<syn::LitBool>, attr: &syn::Path, value: bool) {
duplicate(&*b, attr);
*b = Some(syn::LitBool::new(value, attr.span()));
}
pub fn default_false(o: Option<&syn::LitBool>) -> bool {
o.as_ref().map(|b| b.value).unwrap_or(false)
}
/// Parse the contents of a `LitStr`, preserving its span.
pub fn parse_lit_str<T: Parse>(s: &syn::LitStr) -> syn::parse::Result<T> {
parse_str(&s.value(), s.span())
}
/// Parse a literal string, giving the entire output the specified span.
pub fn parse_str<T: Parse>(s: &str, span: Span) -> syn::parse::Result<T> {
syn::parse2(respan_tokens(syn::parse_str(s)?, span))
}
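For the common case, syn itself offers a similar convenience via `LitStr::parse`, which parses the literal's contents with the literal's span; the crate-local helpers above additionally let the caller pick an arbitrary span. A tiny hedged sketch of the underlying idea (standalone, assuming only a syn 1 dependency):

```
use syn::LitStr;

fn main() -> syn::Result<()> {
    // A literal as it might appear in `#[updater(type = "Vec<String>")]`.
    let lit: LitStr = syn::parse_str(r#""Vec<String>""#)?;

    // Parse the *contents* of the literal as a type. The parsed tokens (and any
    // error) carry the span of the literal, so compiler diagnostics point at the
    // attribute in the user's code rather than at generated code.
    let ty: syn::Type = lit.parse()?;
    assert!(matches!(ty, syn::Type::Path(_)));
    Ok(())
}
```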
/// Apply a `Span` to an entire `TokenStream`.
pub fn respan_tokens(stream: TokenStream, span: Span) -> TokenStream {
stream
.into_iter()
.map(|token| respan(token, span))
.collect()
}
/// Apply a `Span` to a `TokenTree`, recursively if it is a `Group`.
pub fn respan(mut token: TokenTree, span: Span) -> TokenTree {
use proc_macro2::Group;
match &mut token {
TokenTree::Group(g) => {
*g = Group::new(g.delimiter(), respan_tokens(g.stream(), span));
}
other => other.set_span(span),
}
token
}
/// Parse a string attribute into a value, producing a duplication error if it has already been
/// set.
pub fn parse_str_value_to_option<T: Parse>(target: &mut Option<T>, nv: &syn::MetaNameValue) {
duplicate(&*target, &nv.path);
match &nv.lit {
syn::Lit::Str(s) => match parse_lit_str(s) {
Ok(value) => *target = Some(value),
Err(err) => crate::add_error(err),
},
other => error!(other => "bad value for '{:?}' attribute", nv.path),
}
}
/*
pub fn parse_str_value<T: Parse>(nv: &syn::MetaNameValue) -> Result<T, syn::Error> {
match &nv.lit {
syn::Lit::Str(s) => super::parse_lit_str(s),
other => bail!(other => "bad value for '{:?}' attribute", nv.path),
}
}
pub fn default_true(o: Option<&syn::LitBool>) -> bool {
o.as_ref().map(|b| b.value).unwrap_or(true)
}
*/

View File

@ -4,8 +4,9 @@ use anyhow::Error;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::api::schema;
use proxmox_api_macro::api; use proxmox_api_macro::api;
use proxmox_schema as schema;
use proxmox_schema::ApiType;
pub const NAME_SCHEMA: schema::Schema = schema::StringSchema::new("Name.").schema(); pub const NAME_SCHEMA: schema::Schema = schema::StringSchema::new("Name.").schema();
pub const VALUE_SCHEMA: schema::Schema = schema::IntegerSchema::new("Value.").schema(); pub const VALUE_SCHEMA: schema::Schema = schema::IntegerSchema::new("Value.").schema();
@ -56,17 +57,16 @@ pub struct Nvit {
#[test] #[test]
fn test_nvit() { fn test_nvit() {
const TEST_NAME_VALUE_SCHEMA: ::proxmox::api::schema::Schema = const TEST_NAME_VALUE_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
::proxmox::api::schema::ObjectSchema::new( "Name and value.",
"Name and value.", &[
&[ ("name", false, &NAME_SCHEMA),
("name", false, &NAME_SCHEMA), ("value", false, &VALUE_SCHEMA),
("value", false, &VALUE_SCHEMA), ],
], )
) .schema();
.schema();
const TEST_SCHEMA: ::proxmox::api::schema::Schema = ::proxmox::api::schema::AllOfSchema::new( const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::AllOfSchema::new(
"Name, value, index and text.", "Name, value, index and text.",
&[&TEST_NAME_VALUE_SCHEMA, &IndexText::API_SCHEMA], &[&TEST_NAME_VALUE_SCHEMA, &IndexText::API_SCHEMA],
) )
@ -96,17 +96,17 @@ struct WithExtra {
#[test] #[test]
fn test_extra() { fn test_extra() {
const INNER_SCHEMA: ::proxmox::api::schema::Schema = ::proxmox::api::schema::ObjectSchema::new( const INNER_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
"<INNER: Extra Schema>", "<INNER: Extra Schema>",
&[( &[(
"extra", "extra",
false, false,
&::proxmox::api::schema::StringSchema::new("Extra field.").schema(), &::proxmox_schema::StringSchema::new("Extra field.").schema(),
)], )],
) )
.schema(); .schema();
const TEST_SCHEMA: ::proxmox::api::schema::Schema = ::proxmox::api::schema::AllOfSchema::new( const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::AllOfSchema::new(
"Extra Schema", "Extra Schema",
&[ &[
&INNER_SCHEMA, &INNER_SCHEMA,
@ -134,9 +134,9 @@ pub fn hello(it: IndexText, nv: NameValue) -> Result<(NameValue, IndexText), Err
#[test] #[test]
fn hello_schema_check() { fn hello_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new_full( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new_full(
&::proxmox::api::ApiHandler::Sync(&api_function_hello), &::proxmox_router::ApiHandler::Sync(&api_function_hello),
::proxmox::api::schema::ParameterSchema::AllOf(&::proxmox::api::schema::AllOfSchema::new( ::proxmox_schema::ParameterSchema::AllOf(&::proxmox_schema::AllOfSchema::new(
"Hello method.", "Hello method.",
&[&IndexText::API_SCHEMA, &NameValue::API_SCHEMA], &[&IndexText::API_SCHEMA, &NameValue::API_SCHEMA],
)), )),
@ -164,19 +164,19 @@ pub fn with_extra(
#[test] #[test]
fn with_extra_schema_check() { fn with_extra_schema_check() {
const INNER_SCHEMA: ::proxmox::api::schema::Schema = ::proxmox::api::schema::ObjectSchema::new( const INNER_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
"<INNER: Extra method.>", "<INNER: Extra method.>",
&[( &[(
"extra", "extra",
false, false,
&::proxmox::api::schema::StringSchema::new("An extra field.").schema(), &::proxmox_schema::StringSchema::new("An extra field.").schema(),
)], )],
) )
.schema(); .schema();
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new_full( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new_full(
&::proxmox::api::ApiHandler::Sync(&api_function_with_extra), &::proxmox_router::ApiHandler::Sync(&api_function_with_extra),
::proxmox::api::schema::ParameterSchema::AllOf(&::proxmox::api::schema::AllOfSchema::new( ::proxmox_schema::ParameterSchema::AllOf(&::proxmox_schema::AllOfSchema::new(
"Extra method.", "Extra method.",
&[ &[
&INNER_SCHEMA, &INNER_SCHEMA,
@ -189,7 +189,7 @@ fn with_extra_schema_check() {
} }
struct RpcEnv; struct RpcEnv;
impl proxmox::api::RpcEnvironment for RpcEnv { impl proxmox_router::RpcEnvironment for RpcEnv {
fn result_attrib_mut(&mut self) -> &mut Value { fn result_attrib_mut(&mut self) -> &mut Value {
panic!("result_attrib_mut called"); panic!("result_attrib_mut called");
} }
@ -199,7 +199,7 @@ impl proxmox::api::RpcEnvironment for RpcEnv {
} }
/// The environment type /// The environment type
fn env_type(&self) -> proxmox::api::RpcEnvironmentType { fn env_type(&self) -> proxmox_router::RpcEnvironmentType {
panic!("env_type called"); panic!("env_type called");
} }

@ -3,7 +3,7 @@ use proxmox_api_macro::api;
use anyhow::Error; use anyhow::Error;
use serde_json::{json, Value}; use serde_json::{json, Value};
use proxmox::api::Permission; use proxmox_router::Permission;
#[api( #[api(
input: { input: {
@ -31,7 +31,7 @@ use proxmox::api::Permission;
}, },
"CSRFPreventionToken": { "CSRFPreventionToken": {
type: String, type: String,
description: "Cross Site Request Forgerty Prevention Token.", description: "Cross Site Request Forgery Prevention Token.",
}, },
}, },
}, },
@ -59,51 +59,49 @@ pub fn create_ticket(param: Value) -> Result<Value, Error> {
#[test] #[test]
fn create_ticket_schema_check() { fn create_ticket_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_create_ticket), &::proxmox_router::ApiHandler::Sync(&api_function_create_ticket),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Create or verify authentication ticket.", "Create or verify authentication ticket.",
&[ &[
( (
"password", "password",
false, false,
&::proxmox::api::schema::StringSchema::new( &::proxmox_schema::StringSchema::new("The secret password or a valid ticket.")
"The secret password or a valid ticket.", .schema(),
)
.schema(),
), ),
( (
"username", "username",
false, false,
&::proxmox::api::schema::StringSchema::new("User name") &::proxmox_schema::StringSchema::new("User name")
.max_length(64) .max_length(64)
.schema(), .schema(),
), ),
], ],
), ),
) )
.returns(::proxmox::api::router::ReturnType::new( .returns(::proxmox_schema::ReturnType::new(
false, false,
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"A ticket.", "A ticket.",
&[ &[
( (
"CSRFPreventionToken", "CSRFPreventionToken",
false, false,
&::proxmox::api::schema::StringSchema::new( &::proxmox_schema::StringSchema::new(
"Cross Site Request Forgerty Prevention Token.", "Cross Site Request Forgery Prevention Token.",
) )
.schema(), .schema(),
), ),
( (
"ticket", "ticket",
false, false,
&::proxmox::api::schema::StringSchema::new("Auth ticket.").schema(), &::proxmox_schema::StringSchema::new("Auth ticket.").schema(),
), ),
( (
"username", "username",
false, false,
&::proxmox::api::schema::StringSchema::new("User name.").schema(), &::proxmox_schema::StringSchema::new("User name.").schema(),
), ),
], ],
) )
@ -140,7 +138,7 @@ fn create_ticket_schema_check() {
}, },
"CSRFPreventionToken": { "CSRFPreventionToken": {
type: String, type: String,
description: "Cross Site Request Forgerty Prevention Token.", description: "Cross Site Request Forgery Prevention Token.",
}, },
}, },
}, },
@ -162,51 +160,49 @@ pub fn create_ticket_direct(username: String, password: String) -> Result<&'stat
#[test] #[test]
fn create_ticket_direct_schema_check() { fn create_ticket_direct_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_create_ticket_direct), &::proxmox_router::ApiHandler::Sync(&api_function_create_ticket_direct),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Create or verify authentication ticket.", "Create or verify authentication ticket.",
&[ &[
( (
"password", "password",
false, false,
&::proxmox::api::schema::StringSchema::new( &::proxmox_schema::StringSchema::new("The secret password or a valid ticket.")
"The secret password or a valid ticket.", .schema(),
)
.schema(),
), ),
( (
"username", "username",
false, false,
&::proxmox::api::schema::StringSchema::new("User name") &::proxmox_schema::StringSchema::new("User name")
.max_length(64) .max_length(64)
.schema(), .schema(),
), ),
], ],
), ),
) )
.returns(::proxmox::api::router::ReturnType::new( .returns(::proxmox_schema::ReturnType::new(
false, false,
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"A ticket.", "A ticket.",
&[ &[
( (
"CSRFPreventionToken", "CSRFPreventionToken",
false, false,
&::proxmox::api::schema::StringSchema::new( &::proxmox_schema::StringSchema::new(
"Cross Site Request Forgerty Prevention Token.", "Cross Site Request Forgery Prevention Token.",
) )
.schema(), .schema(),
), ),
( (
"ticket", "ticket",
false, false,
&::proxmox::api::schema::StringSchema::new("Auth ticket.").schema(), &::proxmox_schema::StringSchema::new("Auth ticket.").schema(),
), ),
( (
"username", "username",
false, false,
&::proxmox::api::schema::StringSchema::new("User name.").schema(), &::proxmox_schema::StringSchema::new("User name.").schema(),
), ),
], ],
) )
@ -239,6 +235,22 @@ pub fn basic_function() -> Result<(), Error> {
Ok(()) Ok(())
} }
#[api(
streaming: true,
)]
/// streaming async call
pub async fn streaming_async_call() -> Result<(), Error> {
Ok(())
}
#[api(
streaming: true,
)]
/// streaming sync call
pub fn streaming_sync_call() -> Result<(), Error> {
Ok(())
}
#[api( #[api(
input: { input: {
properties: { properties: {
@ -258,14 +270,14 @@ pub fn func_with_option(verbose: Option<bool>) -> Result<(), Error> {
#[test] #[test]
fn func_with_option_schema_check() { fn func_with_option_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_func_with_option), &::proxmox_router::ApiHandler::Sync(&api_function_func_with_option),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Optional parameter", "Optional parameter",
&[( &[(
"verbose", "verbose",
true, true,
&::proxmox::api::schema::BooleanSchema::new("Verbose output.").schema(), &::proxmox_schema::BooleanSchema::new("Verbose output.").schema(),
)], )],
), ),
) )
@ -275,7 +287,7 @@ fn func_with_option_schema_check() {
} }
struct RpcEnv; struct RpcEnv;
impl proxmox::api::RpcEnvironment for RpcEnv { impl proxmox_router::RpcEnvironment for RpcEnv {
fn result_attrib_mut(&mut self) -> &mut Value { fn result_attrib_mut(&mut self) -> &mut Value {
panic!("result_attrib_mut called"); panic!("result_attrib_mut called");
} }
@ -285,7 +297,7 @@ impl proxmox::api::RpcEnvironment for RpcEnv {
} }
/// The environment type /// The environment type
fn env_type(&self) -> proxmox::api::RpcEnvironmentType { fn env_type(&self) -> proxmox_router::RpcEnvironmentType {
panic!("env_type called"); panic!("env_type called");
} }

@ -34,14 +34,14 @@ pub async fn number(num: u32) -> Result<u32, Error> {
#[test] #[test]
fn number_schema_check() { fn number_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Async(&api_function_number), &::proxmox_router::ApiHandler::Async(&api_function_number),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Return the number...", "Return the number...",
&[( &[(
"num", "num",
false, false,
&::proxmox::api::schema::IntegerSchema::new("The version to upgrade to") &::proxmox_schema::IntegerSchema::new("The version to upgrade to")
.minimum(0) .minimum(0)
.maximum(0xffffffff) .maximum(0xffffffff)
.schema(), .schema(),
@ -75,20 +75,20 @@ pub async fn more_async_params(param: Value) -> Result<(), Error> {
#[test] #[test]
fn more_async_params_schema_check() { fn more_async_params_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Async(&api_function_more_async_params), &::proxmox_router::ApiHandler::Async(&api_function_more_async_params),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Return the number...", "Return the number...",
&[ &[
( (
"bar", "bar",
false, false,
&::proxmox::api::schema::StringSchema::new("The great Bar").schema(), &::proxmox_schema::StringSchema::new("The great Bar").schema(),
), ),
( (
"foo", "foo",
false, false,
&::proxmox::api::schema::StringSchema::new("The great Foo").schema(), &::proxmox_schema::StringSchema::new("The great Foo").schema(),
), ),
], ],
), ),
@ -116,14 +116,14 @@ pub async fn keyword_named_parameters(r#type: String) -> Result<(), Error> {
#[test] #[test]
fn keyword_named_parameters_check() { fn keyword_named_parameters_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Async(&api_function_keyword_named_parameters), &::proxmox_router::ApiHandler::Async(&api_function_keyword_named_parameters),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Returns nothing.", "Returns nothing.",
&[( &[(
"type", "type",
false, false,
&::proxmox::api::schema::StringSchema::new("The great Foo").schema(), &::proxmox_schema::StringSchema::new("The great Foo").schema(),
)], )],
), ),
) )

@ -1,8 +1,9 @@
//! This should test the usage of "external" schemas. If a property is declared with a path instead //! This should test the usage of "external" schemas. If a property is declared with a path instead
//! of an object, we expect the path to lead to a schema. //! of an object, we expect the path to lead to a schema.
use proxmox::api::{schema, RpcEnvironment};
use proxmox_api_macro::api; use proxmox_api_macro::api;
use proxmox_router::RpcEnvironment;
use proxmox_schema as schema;
use anyhow::Error; use anyhow::Error;
use serde_json::{json, Value}; use serde_json::{json, Value};
@ -27,9 +28,9 @@ pub fn get_archive(archive_name: String) {
#[test] #[test]
fn get_archive_schema_check() { fn get_archive_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_get_archive), &::proxmox_router::ApiHandler::Sync(&api_function_get_archive),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Get an archive.", "Get an archive.",
&[("archive-name", false, &NAME_SCHEMA)], &[("archive-name", false, &NAME_SCHEMA)],
), ),
@ -56,9 +57,9 @@ pub fn get_archive_2(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Va
#[test] #[test]
fn get_archive_2_schema_check() { fn get_archive_2_schema_check() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_get_archive_2), &::proxmox_router::ApiHandler::Sync(&api_function_get_archive_2),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Get an archive.", "Get an archive.",
&[("archive-name", false, &NAME_SCHEMA)], &[("archive-name", false, &NAME_SCHEMA)],
), ),
@ -88,14 +89,14 @@ pub fn get_data(param: Value) -> Result<(), Error> {
#[test] #[test]
fn get_data_schema_test() { fn get_data_schema_test() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_get_data), &::proxmox_router::ApiHandler::Sync(&api_function_get_data),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Get data.", "Get data.",
&[( &[(
"data", "data",
false, false,
&::proxmox::api::schema::ArraySchema::new("The data", &NAME_SCHEMA).schema(), &::proxmox_schema::ArraySchema::new("The data", &NAME_SCHEMA).schema(),
)], )],
), ),
) )

@ -1,6 +1,7 @@
//! Test the automatic addition of integer limits. //! Test the automatic addition of integer limits.
use proxmox_api_macro::api; use proxmox_api_macro::api;
use proxmox_schema::ApiType;
/// An i16: -32768 to 32767. /// An i16: -32768 to 32767.
#[api] #[api]
@ -8,8 +9,8 @@ pub struct AnI16(i16);
#[test] #[test]
fn test_an_i16_schema() { fn test_an_i16_schema() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema =
::proxmox::api::schema::IntegerSchema::new("An i16: -32768 to 32767.") ::proxmox_schema::IntegerSchema::new("An i16: -32768 to 32767.")
.minimum(-32768) .minimum(-32768)
.maximum(32767) .maximum(32767)
.schema(); .schema();
@ -23,8 +24,8 @@ pub struct I16G50(i16);
#[test] #[test]
fn test_i16g50_schema() { fn test_i16g50_schema() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema =
::proxmox::api::schema::IntegerSchema::new("Already limited on one side.") ::proxmox_schema::IntegerSchema::new("Already limited on one side.")
.minimum(-50) .minimum(-50)
.maximum(32767) .maximum(32767)
.schema(); .schema();
@ -38,8 +39,8 @@ pub struct AnI32(i32);
#[test] #[test]
fn test_an_i32_schema() { fn test_an_i32_schema() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema =
::proxmox::api::schema::IntegerSchema::new("An i32: -0x8000_0000 to 0x7fff_ffff.") ::proxmox_schema::IntegerSchema::new("An i32: -0x8000_0000 to 0x7fff_ffff.")
.minimum(-0x8000_0000) .minimum(-0x8000_0000)
.maximum(0x7fff_ffff) .maximum(0x7fff_ffff)
.schema(); .schema();
@ -53,8 +54,8 @@ pub struct AnU32(u32);
#[test] #[test]
fn test_an_u32_schema() { fn test_an_u32_schema() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema =
::proxmox::api::schema::IntegerSchema::new("Unsigned implies a minimum of zero.") ::proxmox_schema::IntegerSchema::new("Unsigned implies a minimum of zero.")
.minimum(0) .minimum(0)
.maximum(0xffff_ffff) .maximum(0xffff_ffff)
.schema(); .schema();
@ -68,8 +69,8 @@ pub struct AnI64(i64);
#[test] #[test]
fn test_an_i64_schema() { fn test_an_i64_schema() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema =
::proxmox::api::schema::IntegerSchema::new("An i64: this is left unlimited.").schema(); ::proxmox_schema::IntegerSchema::new("An i64: this is left unlimited.").schema();
assert_eq!(TEST_SCHEMA, AnI64::API_SCHEMA); assert_eq!(TEST_SCHEMA, AnI64::API_SCHEMA);
} }
@ -80,8 +81,8 @@ pub struct AnU64(u64);
#[test] #[test]
fn test_an_u64_schema() { fn test_an_u64_schema() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema =
::proxmox::api::schema::IntegerSchema::new("Unsigned implies a minimum of zero.") ::proxmox_schema::IntegerSchema::new("Unsigned implies a minimum of zero.")
.minimum(0) .minimum(0)
.schema(); .schema();

@ -40,7 +40,7 @@ pub fn test_default_macro(value: Option<isize>) -> Result<isize, Error> {
} }
struct RpcEnv; struct RpcEnv;
impl proxmox::api::RpcEnvironment for RpcEnv { impl proxmox_router::RpcEnvironment for RpcEnv {
fn result_attrib_mut(&mut self) -> &mut Value { fn result_attrib_mut(&mut self) -> &mut Value {
panic!("result_attrib_mut called"); panic!("result_attrib_mut called");
} }
@ -50,7 +50,7 @@ impl proxmox::api::RpcEnvironment for RpcEnv {
} }
/// The environment type /// The environment type
fn env_type(&self) -> proxmox::api::RpcEnvironmentType { fn env_type(&self) -> proxmox_router::RpcEnvironmentType {
panic!("env_type called"); panic!("env_type called");
} }

@ -3,8 +3,9 @@
#![allow(dead_code)] #![allow(dead_code)]
use proxmox::api::schema::{self, EnumEntry};
use proxmox_api_macro::api; use proxmox_api_macro::api;
use proxmox_schema as schema;
use proxmox_schema::{ApiType, EnumEntry};
use anyhow::Error; use anyhow::Error;
use serde::Deserialize; use serde::Deserialize;
@ -23,13 +24,12 @@ pub struct OkString(String);
#[test] #[test]
fn ok_string() { fn ok_string() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::StringSchema::new("A string")
::proxmox::api::schema::StringSchema::new("A string") .format(&schema::ApiStringFormat::Enum(&[
.format(&schema::ApiStringFormat::Enum(&[ EnumEntry::new("ok", "Ok"),
EnumEntry::new("ok", "Ok"), EnumEntry::new("not-ok", "Not OK"),
EnumEntry::new("not-ok", "Not OK"), ]))
])) .schema();
.schema();
assert_eq!(TEST_SCHEMA, OkString::API_SCHEMA); assert_eq!(TEST_SCHEMA, OkString::API_SCHEMA);
} }
@ -45,26 +45,23 @@ pub struct TestStruct {
#[test] #[test]
fn test_struct() { fn test_struct() {
pub const TEST_SCHEMA: ::proxmox::api::schema::Schema = pub const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
::proxmox::api::schema::ObjectSchema::new( "An example of a simple struct type.",
"An example of a simple struct type.", &[
&[ (
( "another",
"another", true,
true, &::proxmox_schema::StringSchema::new("An optional auto-derived value for testing:")
&::proxmox::api::schema::StringSchema::new(
"An optional auto-derived value for testing:",
)
.schema(), .schema(),
), ),
( (
"test_string", "test_string",
false, false,
&::proxmox::api::schema::StringSchema::new("A test string.").schema(), &::proxmox_schema::StringSchema::new("A test string.").schema(),
), ),
], ],
) )
.schema(); .schema();
assert_eq!(TEST_SCHEMA, TestStruct::API_SCHEMA); assert_eq!(TEST_SCHEMA, TestStruct::API_SCHEMA);
} }
@ -84,21 +81,19 @@ pub struct RenamedStruct {
#[test] #[test]
fn renamed_struct() { fn renamed_struct() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = ::proxmox::api::schema::ObjectSchema::new( const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
"An example of a struct with renamed fields.", "An example of a struct with renamed fields.",
&[ &[
( (
"SomeOther", "SomeOther",
true, true,
&::proxmox::api::schema::StringSchema::new( &::proxmox_schema::StringSchema::new("An optional auto-derived value for testing:")
"An optional auto-derived value for testing:", .schema(),
)
.schema(),
), ),
( (
"test-string", "test-string",
false, false,
&::proxmox::api::schema::StringSchema::new("A test string.").schema(), &::proxmox_schema::StringSchema::new("A test string.").schema(),
), ),
], ],
) )
@ -108,7 +103,7 @@ fn renamed_struct() {
} }
#[api] #[api]
#[derive(Deserialize)] #[derive(Default, Deserialize)]
#[serde(rename_all = "kebab-case")] #[serde(rename_all = "kebab-case")]
/// A selection of either 'onekind', 'another-kind' or 'selection-number-three'. /// A selection of either 'onekind', 'another-kind' or 'selection-number-three'.
pub enum Selection { pub enum Selection {
@ -116,6 +111,7 @@ pub enum Selection {
#[serde(rename = "onekind")] #[serde(rename = "onekind")]
OneKind, OneKind,
/// Some other kind. /// Some other kind.
#[default]
AnotherKind, AnotherKind,
/// And yet another. /// And yet another.
SelectionNumberThree, SelectionNumberThree,
@ -123,14 +119,15 @@ pub enum Selection {
#[test] #[test]
fn selection_test() { fn selection_test() {
const TEST_SCHEMA: ::proxmox::api::schema::Schema = ::proxmox::api::schema::StringSchema::new( const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::StringSchema::new(
"A selection of either \'onekind\', \'another-kind\' or \'selection-number-three\'.", "A selection of either \'onekind\', \'another-kind\' or \'selection-number-three\'.",
) )
.format(&::proxmox::api::schema::ApiStringFormat::Enum(&[ .format(&::proxmox_schema::ApiStringFormat::Enum(&[
EnumEntry::new("onekind", "The first kind."), EnumEntry::new("onekind", "The first kind."),
EnumEntry::new("another-kind", "Some other kind."), EnumEntry::new("another-kind", "Some other kind."),
EnumEntry::new("selection-number-three", "And yet another."), EnumEntry::new("selection-number-three", "And yet another."),
])) ]))
.default("another-kind")
.schema(); .schema();
assert_eq!(TEST_SCHEMA, Selection::API_SCHEMA); assert_eq!(TEST_SCHEMA, Selection::API_SCHEMA);
@ -157,9 +154,9 @@ pub fn string_check(arg: Value, selection: Selection) -> Result<bool, Error> {
#[test] #[test]
fn string_check_schema_test() { fn string_check_schema_test() {
const TEST_METHOD: ::proxmox::api::ApiMethod = ::proxmox::api::ApiMethod::new( const TEST_METHOD: ::proxmox_router::ApiMethod = ::proxmox_router::ApiMethod::new(
&::proxmox::api::ApiHandler::Sync(&api_function_string_check), &::proxmox_router::ApiHandler::Sync(&api_function_string_check),
&::proxmox::api::schema::ObjectSchema::new( &::proxmox_schema::ObjectSchema::new(
"Check a string.", "Check a string.",
&[ &[
("arg", false, &OkString::API_SCHEMA), ("arg", false, &OkString::API_SCHEMA),
@ -167,9 +164,9 @@ fn string_check_schema_test() {
], ],
), ),
) )
.returns(::proxmox::api::router::ReturnType::new( .returns(::proxmox_schema::ReturnType::new(
true, true,
&::proxmox::api::schema::BooleanSchema::new("Whether the string was \"ok\".").schema(), &::proxmox_schema::BooleanSchema::new("Whether the string was \"ok\".").schema(),
)) ))
.protected(false); .protected(false);

@ -1,32 +1,74 @@
#[cfg(not(feature = "noserde"))] #![allow(dead_code)]
use serde::{Deserialize, Serialize};
use serde_json::Value;
use proxmox::api::api; use serde::{Deserialize, Serialize};
use proxmox::api::schema::Updater;
use proxmox_schema::{api, ApiType, Updater, UpdaterType};
// Helpers for type checks:
struct AssertTypeEq<T>(T);
macro_rules! assert_type_eq {
($what:ident, $a:ty, $b:ty) => {
#[allow(dead_code, unreachable_patterns)]
fn $what(have: AssertTypeEq<$a>) {
match have {
AssertTypeEq::<$b>(_) => (),
}
}
};
}
#[api(min_length: 3, max_length: 64)]
#[derive(UpdaterType)]
/// Custom String.
pub struct Custom(String);
assert_type_eq!(
custom_type,
<Custom as UpdaterType>::Updater,
Option<Custom>
);
#[api] #[api]
/// An example of a simple struct type. /// An example of a simple struct type.
#[cfg_attr(not(feature = "noserde"), derive(Deserialize, Serialize))] #[derive(Updater)]
#[derive(Debug, PartialEq, Updater)]
#[serde(rename_all = "kebab-case")] #[serde(rename_all = "kebab-case")]
pub struct Simple { pub struct Simple {
/// A test string. /// A test string.
one_field: String, one_field: String,
/// An optional auto-derived value for testing: /// Another test value.
#[serde(skip_serializing_if = "Option::is_empty")] #[serde(skip_serializing_if = "Option::is_empty")]
opt: Option<String>, opt: Option<String>,
} }
#[test]
fn test_simple() {
pub const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
"An example of a simple struct type.",
&[
(
"one-field",
true,
&::proxmox_schema::StringSchema::new("A test string.").schema(),
),
(
"opt",
true,
&::proxmox_schema::StringSchema::new("Another test value.").schema(),
),
],
)
.schema();
assert_eq!(TEST_SCHEMA, SimpleUpdater::API_SCHEMA);
}
#[api( #[api(
properties: { properties: {
simple: { type: Simple }, simple: { type: Simple },
}, },
)] )]
/// A second struct so we can test flattening. /// A second struct so we can test flattening.
#[cfg_attr(not(feature = "noserde"), derive(Deserialize, Serialize))] #[derive(Updater)]
#[derive(Debug, PartialEq, Updater)]
pub struct Complex { pub struct Complex {
/// An extra field not part of the flattened struct. /// An extra field not part of the flattened struct.
extra: String, extra: String,
@ -44,288 +86,72 @@ pub struct Complex {
}, },
)] )]
/// One of the baaaad cases. /// One of the baaaad cases.
#[cfg_attr(not(feature = "noserde"), derive(Deserialize, Serialize))] #[derive(Updater)]
#[derive(Debug, PartialEq, Updater)] #[serde(rename_all = "kebab-case")]
pub struct SuperComplex { pub struct SuperComplex {
/// An extra field not part of the flattened struct. /// An extra field.
extra: String, extra: String,
#[serde(skip_serializing_if = "Updater::is_empty")] simple: Simple,
simple: Option<Simple>,
/// A field which should not appear in the updater.
#[updater(skip)]
not_in_updater: String,
/// A custom type with an Updatable implementation.
custom: Custom,
}
#[test]
fn test_super_complex() {
pub const TEST_SCHEMA: ::proxmox_schema::Schema = ::proxmox_schema::ObjectSchema::new(
"One of the baaaad cases.",
&[
("custom", true, &<Option<Custom> as ApiType>::API_SCHEMA),
(
"extra",
true,
&::proxmox_schema::StringSchema::new("An extra field.").schema(),
),
(
"simple",
true,
//&<<Simple as UpdaterType>::Updater as ApiType>::API_SCHEMA,
&SimpleUpdater::API_SCHEMA,
),
],
)
.schema();
assert_eq!(TEST_SCHEMA, SuperComplexUpdater::API_SCHEMA);
}
#[api]
/// A simple string wrapper.
#[derive(Default, Deserialize, Serialize, Updater)]
struct MyType(String);
impl proxmox_schema::UpdaterType for MyType {
type Updater = Option<Self>;
}
impl MyType {
fn should_skip(&self) -> bool {
self.0 == "skipme"
}
} }
#[api( #[api(
properties: { properties: {
complex: { type: Complex }, more: { type: MyType },
}, },
)] )]
/// Something with "fixed" values we cannot update but require for creation. /// A struct where we replace serde attributes.
#[cfg_attr(not(feature = "noserde"), derive(Deserialize, Serialize))] #[derive(Deserialize, Serialize, Updater)]
#[derive(Debug, PartialEq, Updater)] pub struct WithSerde {
pub struct Creatable { /// Simple string.
/// An ID which cannot be changed later. data: String,
#[updater(fixed)]
id: String,
/// Some parameter we're allowed to change with an updater. #[serde(skip_serializing_if = "MyType::should_skip", default)]
name: String, #[updater(serde(skip_serializing_if = "Option::is_none"))]
more: MyType,
/// Optional additional information.
#[serde(skip_serializing_if = "Updater::is_empty", default)]
info: Option<String>,
/// Optional additional information 2.
#[serde(skip_serializing_if = "Updater::is_empty", default)]
info2: Option<String>,
/// Super complex additional data
#[serde(flatten)]
complex: Complex,
}
struct RpcEnv;
impl proxmox::api::RpcEnvironment for RpcEnv {
fn result_attrib_mut(&mut self) -> &mut Value {
panic!("result_attrib_mut called");
}
fn result_attrib(&self) -> &Value {
panic!("result_attrib called");
}
/// The environment type
fn env_type(&self) -> proxmox::api::RpcEnvironmentType {
panic!("env_type called");
}
/// Set authentication id
fn set_auth_id(&mut self, user: Option<String>) {
let _ = user;
panic!("set_auth_id called");
}
/// Get authentication id
fn get_auth_id(&self) -> Option<String> {
panic!("get_auth_id called");
}
}
mod test_creatable {
use anyhow::{bail, Error};
use serde_json::json;
use proxmox::api::schema::Updatable;
use proxmox_api_macro::api;
use super::*;
static mut TEST_OBJECT: Option<Creatable> = None;
#[api(
input: {
properties: {
thing: { flatten: true, type: CreatableUpdater },
},
},
)]
/// Test method to create an object.
///
/// Returns: the object's ID.
pub fn create_thing(thing: CreatableUpdater) -> Result<String, Error> {
if unsafe { TEST_OBJECT.is_some() } {
bail!("object exists");
}
let obj = Creatable::try_build_from(thing)?;
let id = obj.id.clone();
unsafe {
TEST_OBJECT = Some(obj);
}
Ok(id)
}
#[api(
input: {
properties: {
thing: { flatten: true, type: CreatableUpdater },
delete: {
optional: true,
description: "list of properties to delete",
type: Array,
items: {
description: "field name to delete",
type: String,
},
},
},
},
)]
/// Test method to update an object.
pub fn update_thing(thing: CreatableUpdater, delete: Option<Vec<String>>) -> Result<(), Error> {
let delete = delete.unwrap_or_default();
match unsafe { &mut TEST_OBJECT } {
Some(obj) => obj.update_from(thing, &delete),
None => bail!("object has not been created yet"),
}
}
#[test]
fn test() {
let _ = api_function_create_thing(
json!({ "name": "The Name" }),
&API_METHOD_CREATE_THING,
&mut RpcEnv,
)
.expect_err("create_thing should fail without an ID");
let _ = api_function_create_thing(
json!({ "id": "Id1" }),
&API_METHOD_CREATE_THING,
&mut RpcEnv,
)
.expect_err("create_thing should fail without a name");
let value = api_function_create_thing(
json!({
"id": "Id1",
"name": "The Name",
"extra": "Extra Info",
"one-field": "Part of Simple",
"info2": "More Info 2",
}),
&API_METHOD_CREATE_THING,
&mut RpcEnv,
)
.expect("create_thing should work");
assert_eq!(value, "Id1");
assert_eq!(
unsafe { &TEST_OBJECT },
&Some(Creatable {
id: "Id1".to_string(),
name: "The Name".to_string(),
info: None,
info2: Some("More Info 2".to_string()),
complex: Complex {
extra: "Extra Info".to_string(),
simple: Simple {
one_field: "Part of Simple".to_string(),
opt: None,
},
},
}),
);
let _ = api_function_update_thing(
json!({
"id": "Poop",
}),
&API_METHOD_UPDATE_THING,
&mut RpcEnv,
)
.expect_err("shouldn't be allowed to update the ID");
let _ = api_function_update_thing(
json!({
"info": "Updated Info",
"delete": ["info2"],
}),
&API_METHOD_UPDATE_THING,
&mut RpcEnv,
)
.expect("should be allowed to update the optional field");
assert_eq!(
unsafe { &TEST_OBJECT },
&Some(Creatable {
id: "Id1".to_string(),
name: "The Name".to_string(),
info: Some("Updated Info".to_string()),
info2: None,
complex: Complex {
extra: "Extra Info".to_string(),
simple: Simple {
one_field: "Part of Simple".to_string(),
opt: None,
},
},
}),
);
let _ = api_function_update_thing(
json!({
"extra": "Partial flatten update",
}),
&API_METHOD_UPDATE_THING,
&mut RpcEnv,
)
.expect("should be allowed to update the parts of a flattened field");
assert_eq!(
unsafe { &TEST_OBJECT },
&Some(Creatable {
id: "Id1".to_string(),
name: "The Name".to_string(),
info: Some("Updated Info".to_string()),
info2: None,
complex: Complex {
extra: "Partial flatten update".to_string(),
simple: Simple {
one_field: "Part of Simple".to_string(),
opt: None,
},
},
}),
);
let _ = api_function_update_thing(
json!({
"opt": "Deeply nested optional update.",
}),
&API_METHOD_UPDATE_THING,
&mut RpcEnv,
)
.expect("should be allowed to update the parts of a deeply nested struct");
assert_eq!(
unsafe { &TEST_OBJECT },
&Some(Creatable {
id: "Id1".to_string(),
name: "The Name".to_string(),
info: Some("Updated Info".to_string()),
info2: None,
complex: Complex {
extra: "Partial flatten update".to_string(),
simple: Simple {
one_field: "Part of Simple".to_string(),
opt: Some("Deeply nested optional update.".to_string()),
},
},
}),
);
let _ = api_function_update_thing(
json!({
"delete": ["opt"],
}),
&API_METHOD_UPDATE_THING,
&mut RpcEnv,
)
.expect("should be allowed to remove parts of a deeply nested struct");
assert_eq!(
unsafe { &TEST_OBJECT },
&Some(Creatable {
id: "Id1".to_string(),
name: "The Name".to_string(),
info: Some("Updated Info".to_string()),
info2: None,
complex: Complex {
extra: "Partial flatten update".to_string(),
simple: Simple {
one_field: "Part of Simple".to_string(),
opt: None,
},
},
}),
);
}
}
}

proxmox-apt/Cargo.toml (new file, 23 lines)

@@ -0,0 +1,23 @@
[package]
name = "proxmox-apt"
version = "0.9.4"
description = "Proxmox library for APT"
authors.workspace = true
edition.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
exclude = [ "debian" ]
[dependencies]
anyhow.workspace = true
hex.workspace = true
once_cell.workspace = true
openssl.workspace = true
serde = { workspace = true, features = ["derive"] }
serde_json.workspace = true
rfc822-like = "0.2.1"
proxmox-schema = { workspace = true, features = [ "api-macro" ] }


@@ -0,0 +1,121 @@
rust-proxmox-apt (0.9.4-1) bullseye; urgency=medium
* repositories: also detect repository with next suite as configured
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Jun 2023 10:47:41 +0200
rust-proxmox-apt (0.9.3-1) stable; urgency=medium
* packages file: add section field
* deb822: source index support
-- Proxmox Support Team <support@proxmox.com> Wed, 19 Oct 2022 16:17:11 +0200
rust-proxmox-apt (0.9.2-1) stable; urgency=medium
* release: add Commands file reference type
* release: add 'architecture' helper
* release: fix typo in 'Acquire-By-Hash'
-- Proxmox Support Team <support@proxmox.com> Fri, 16 Sep 2022 14:17:10 +0200
rust-proxmox-apt (0.9.1-1) stable; urgency=medium
* release-file: improve invalid file-reference handling
* add ceph quincy repositories
-- Proxmox Support Team <support@proxmox.com> Tue, 6 Sep 2022 10:33:17 +0200
rust-proxmox-apt (0.9.0-1) stable; urgency=medium
* AptRepositoryFile: make path optional
-- Proxmox Support Team <support@proxmox.com> Thu, 21 Jul 2022 13:25:20 +0200
rust-proxmox-apt (0.8.1-1) stable; urgency=medium
* upgrade to 2021 edition
* check suites: add special check for Debian security repository
* file: add pre-parsed content variant
* add module for parsing Packages and Release (deb822 like) files
-- Proxmox Support Team <support@proxmox.com> Thu, 21 Jul 2022 12:08:23 +0200
rust-proxmox-apt (0.8.0-1) stable; urgency=medium
* update to proxmox-schema crate
-- Proxmox Support Team <support@proxmox.com> Fri, 08 Oct 2021 11:55:47 +0200
rust-proxmox-apt (0.7.0-1) stable; urgency=medium
* update to proxmox 0.13.0
-- Proxmox Support Team <support@proxmox.com> Tue, 24 Aug 2021 15:38:52 +0200
rust-proxmox-apt (0.6.0-1) stable; urgency=medium
* standard repos: add suite parameter for stricter detection
* check repos: have caller specify the current suite
* add type DebianCodename
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Jul 2021 18:06:54 +0200
rust-proxmox-apt (0.5.1-1) stable; urgency=medium
* depend on proxmox 0.12.0
-- Proxmox Support Team <support@proxmox.com> Tue, 20 Jul 2021 13:18:02 +0200
rust-proxmox-apt (0.5.0-1) stable; urgency=medium
* standard repo detection: handle alternative URI for PVE repos
-- Proxmox Support Team <support@proxmox.com> Fri, 16 Jul 2021 16:19:06 +0200
rust-proxmox-apt (0.4.0-1) stable; urgency=medium
* support quote-word parsing for one-line format
* avoid backtick unicode symbol in string
-- Proxmox Support Team <support@proxmox.com> Thu, 01 Jul 2021 18:33:12 +0200
rust-proxmox-apt (0.3.1-1) stable; urgency=medium
* standard repos: allow conversion from handle and improve information
-- Proxmox Support Team <support@proxmox.com> Wed, 30 Jun 2021 20:42:52 +0200
rust-proxmox-apt (0.3.0-1) stable; urgency=medium
* add get_cached_origin method and an initial config module
* check: return 'origin' property instead of 'badge' for official host
* standard repos: drop product acronym from repo name
-- Proxmox Support Team <support@proxmox.com> Wed, 30 Jun 2021 13:29:13 +0200
rust-proxmox-apt (0.2.0-1) stable; urgency=medium
* Add functions to check repositories.
* Add handling of standard Proxmox repositories.
-- Proxmox Support Team <support@proxmox.com> Wed, 23 Jun 2021 14:57:52 +0200
rust-proxmox-apt (0.1.0-1) stable; urgency=medium
* Initial release.
-- Proxmox Support Team <support@proxmox.com> Thu, 18 Feb 2021 10:20:44 +0100


@@ -0,0 +1,52 @@
Source: rust-proxmox-apt
Section: rust
Priority: optional
Build-Depends: debhelper (>= 12),
dh-cargo (>= 25),
cargo:native <!nocheck>,
rustc:native <!nocheck>,
libstd-rust-dev <!nocheck>,
librust-anyhow-1+default-dev <!nocheck>,
librust-hex-0.4+default-dev <!nocheck>,
librust-once-cell-1+default-dev (>= 1.3.1-~~) <!nocheck>,
librust-openssl-0.10+default-dev <!nocheck>,
librust-proxmox-schema-1+api-macro-dev (>= 1.3.7-~~) <!nocheck>,
librust-proxmox-schema-1+default-dev (>= 1.3.7-~~) <!nocheck>,
librust-rfc822-like-0.2+default-dev (>= 0.2.1-~~) <!nocheck>,
librust-serde-1+default-dev <!nocheck>,
librust-serde-1+derive-dev <!nocheck>,
librust-serde-json-1+default-dev <!nocheck>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.6.1
Vcs-Git: git://git.proxmox.com/git/proxmox-apt.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox-apt.git
Homepage: https://www.proxmox.com
X-Cargo-Crate: proxmox-apt
Rules-Requires-Root: no
Package: librust-proxmox-apt-dev
Architecture: any
Multi-Arch: same
Depends:
${misc:Depends},
librust-anyhow-1+default-dev,
librust-hex-0.4+default-dev,
librust-once-cell-1+default-dev (>= 1.3.1-~~),
librust-openssl-0.10+default-dev,
librust-proxmox-schema-1+api-macro-dev (>= 1.3.7-~~),
librust-proxmox-schema-1+default-dev (>= 1.3.7-~~),
librust-rfc822-like-0.2+default-dev (>= 0.2.1-~~),
librust-serde-1+default-dev,
librust-serde-1+derive-dev,
librust-serde-json-1+default-dev
Provides:
librust-proxmox-apt+default-dev (= ${binary:Version}),
librust-proxmox-apt-0-dev (= ${binary:Version}),
librust-proxmox-apt-0+default-dev (= ${binary:Version}),
librust-proxmox-apt-0.9-dev (= ${binary:Version}),
librust-proxmox-apt-0.9+default-dev (= ${binary:Version}),
librust-proxmox-apt-0.9.3-dev (= ${binary:Version}),
librust-proxmox-apt-0.9.3+default-dev (= ${binary:Version})
Description: Proxmox library for APT - Rust source code
This package contains the source for the Rust proxmox-apt crate, packaged by
debcargo for use with cargo and dh-cargo.


@@ -0,0 +1,18 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Files:
*
Copyright: 2019 - 2023 Proxmox Server Solutions GmbH <support@proxmox.com>
License: AGPL-3.0-or-later
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU Affero General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option) any
later version.
.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
details.
.
You should have received a copy of the GNU Affero General Public License along
with this program. If not, see <https://www.gnu.org/licenses/>.


@@ -0,0 +1,7 @@
overlay = "."
crate_src_path = ".."
maintainer = "Proxmox Support Team <support@proxmox.com>"
[source]
vcs_git = "git://git.proxmox.com/git/proxmox-apt.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox-apt.git"

proxmox-apt/src/config.rs (new file, 35 lines)

@@ -0,0 +1,35 @@
use once_cell::sync::OnceCell;
static GLOBAL_CONFIG: OnceCell<APTConfig> = OnceCell::new();
/// APT configuration variables.
pub struct APTConfig {
/// Dir::State
pub dir_state: String,
/// Dir::State::Lists
pub dir_state_lists: String,
}
impl APTConfig {
/// Create a new configuration, overriding the defaults with any provided values.
pub fn new(dir_state: Option<&str>, dir_state_lists: Option<&str>) -> Self {
Self {
dir_state: dir_state.unwrap_or("/var/lib/apt/").to_string(),
dir_state_lists: dir_state_lists.unwrap_or("lists/").to_string(),
}
}
}
/// Get the configuration.
///
/// Initializes with default values if init() wasn't called before.
pub fn get() -> &'static APTConfig {
GLOBAL_CONFIG.get_or_init(|| APTConfig::new(None, None))
}
/// Initialize the configuration.
///
/// Only has an effect if no init() or get() has been called yet.
pub fn init(config: APTConfig) -> &'static APTConfig {
GLOBAL_CONFIG.get_or_init(|| config)
}
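A minimal usage sketch for this module (the custom state directory below is purely illustrative, and the snippet assumes proxmox-apt is available as a dependency):

use proxmox_apt::config::{self, APTConfig};

fn main() {
    // Point the library at a non-default APT state directory, e.g. for tests.
    // This only has an effect if neither init() nor get() ran before.
    config::init(APTConfig::new(Some("/tmp/apt-test/"), None));

    // Later callers simply read the (now initialized) global configuration.
    let cfg = config::get();
    println!("state dir: {} lists: {}", cfg.dir_state, cfg.dir_state_lists);
}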


@@ -0,0 +1,75 @@
mod release_file;
use anyhow::{bail, Error};
pub use release_file::{CompressionType, FileReference, FileReferenceType, ReleaseFile};
mod packages_file;
pub use packages_file::PackagesFile;
mod sources_file;
pub use sources_file::SourcesFile;
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
pub struct CheckSums {
pub md5: Option<[u8; 16]>,
pub sha1: Option<[u8; 20]>,
pub sha256: Option<[u8; 32]>,
pub sha512: Option<[u8; 64]>,
}
macro_rules! assert_hash_equality {
($l:expr, $r:expr) => {{
if $l != $r {
bail!("hash mismatch: {} != {}", hex::encode($l), hex::encode($r));
}
}};
}
impl CheckSums {
pub fn is_secure(&self) -> bool {
self.sha256.is_some() || self.sha512.is_some()
}
pub fn verify(&self, input: &[u8]) -> Result<(), Error> {
if !self.is_secure() {
bail!("No SHA256/SHA512 checksum specified.");
}
if let Some(expected) = self.sha512 {
let digest = openssl::sha::sha512(input);
assert_hash_equality!(digest, expected);
Ok(())
} else if let Some(expected) = self.sha256 {
let digest = openssl::sha::sha256(input);
assert_hash_equality!(digest, expected);
Ok(())
} else {
bail!("No trusted checksum found - verification not possible.");
}
}
/// Merge two instances of `CheckSums`.
pub fn merge(&mut self, rhs: &CheckSums) -> Result<(), Error> {
match (self.sha512, rhs.sha512) {
(_, None) => {}
(None, Some(sha512)) => self.sha512 = Some(sha512),
(Some(left), Some(right)) => assert_hash_equality!(left, right),
};
match (self.sha256, rhs.sha256) {
(_, None) => {}
(None, Some(sha256)) => self.sha256 = Some(sha256),
(Some(left), Some(right)) => assert_hash_equality!(left, right),
};
match (self.sha1, rhs.sha1) {
(_, None) => {}
(None, Some(sha1)) => self.sha1 = Some(sha1),
(Some(left), Some(right)) => assert_hash_equality!(left, right),
};
match (self.md5, rhs.md5) {
(_, None) => {}
(None, Some(md5)) => self.md5 = Some(md5),
(Some(left), Some(right)) => assert_hash_equality!(left, right),
};
Ok(())
}
}
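A rough usage sketch for CheckSums (assuming the openssl and anyhow crates, which the library itself already depends on):

use anyhow::Error;
use proxmox_apt::deb822::CheckSums;

fn check(data: &[u8]) -> Result<(), Error> {
    let mut sums = CheckSums {
        sha256: Some(openssl::sha::sha256(data)),
        ..Default::default()
    };
    // A SHA256 digest alone already counts as "secure".
    assert!(sums.is_secure());
    sums.verify(data)?;

    // Merging another instance adds its SHA512 digest next to the existing SHA256.
    let other = CheckSums {
        sha512: Some(openssl::sha::sha512(data)),
        ..Default::default()
    };
    sums.merge(&other)?;
    sums.verify(data)
}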


@@ -0,0 +1,171 @@
use std::collections::HashMap;
use anyhow::{bail, Error};
use rfc822_like::de::Deserializer;
use serde::Deserialize;
use serde_json::Value;
use super::CheckSums;
#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
pub struct PackagesFileRaw {
pub package: String,
pub source: Option<String>,
pub version: String,
pub section: String,
pub priority: String,
pub architecture: String,
pub essential: Option<String>,
pub depends: Option<String>,
pub recommends: Option<String>,
pub suggests: Option<String>,
pub breaks: Option<String>,
pub conflicts: Option<String>,
#[serde(rename = "Installed-Size")]
pub installed_size: Option<String>,
pub maintainer: String,
pub description: String,
pub filename: String,
pub size: String,
#[serde(rename = "Multi-Arch")]
pub multi_arch: Option<String>,
#[serde(rename = "MD5sum")]
pub md5_sum: Option<String>,
#[serde(rename = "SHA1")]
pub sha1: Option<String>,
#[serde(rename = "SHA256")]
pub sha256: Option<String>,
#[serde(rename = "SHA512")]
pub sha512: Option<String>,
#[serde(rename = "Description-md5")]
pub description_md5: Option<String>,
#[serde(flatten)]
pub extra_fields: HashMap<String, Value>,
}
#[derive(Debug, PartialEq, Eq)]
pub struct PackageEntry {
pub package: String,
pub source: Option<String>,
pub version: String,
pub architecture: String,
pub file: String,
pub size: usize,
pub installed_size: Option<usize>,
pub checksums: CheckSums,
pub section: String,
}
#[derive(Debug, Default, PartialEq, Eq)]
/// A parsed representation of a Packages file
pub struct PackagesFile {
pub files: Vec<PackageEntry>,
}
impl TryFrom<PackagesFileRaw> for PackageEntry {
type Error = Error;
fn try_from(value: PackagesFileRaw) -> Result<Self, Self::Error> {
let installed_size = match value.installed_size {
Some(val) => Some(val.parse::<usize>()?),
None => None,
};
let mut parsed = PackageEntry {
package: value.package,
source: value.source,
version: value.version,
architecture: value.architecture,
file: value.filename,
size: value.size.parse::<usize>()?,
installed_size,
checksums: CheckSums::default(),
section: value.section,
};
if let Some(md5) = value.md5_sum {
let mut bytes = [0u8; 16];
hex::decode_to_slice(md5, &mut bytes)?;
parsed.checksums.md5 = Some(bytes);
};
if let Some(sha1) = value.sha1 {
let mut bytes = [0u8; 20];
hex::decode_to_slice(sha1, &mut bytes)?;
parsed.checksums.sha1 = Some(bytes);
};
if let Some(sha256) = value.sha256 {
let mut bytes = [0u8; 32];
hex::decode_to_slice(sha256, &mut bytes)?;
parsed.checksums.sha256 = Some(bytes);
};
if let Some(sha512) = value.sha512 {
let mut bytes = [0u8; 64];
hex::decode_to_slice(sha512, &mut bytes)?;
parsed.checksums.sha512 = Some(bytes);
};
if !parsed.checksums.is_secure() {
bail!(
"no strong checksum found for package entry '{}'",
parsed.package
);
}
Ok(parsed)
}
}
impl TryFrom<String> for PackagesFile {
type Error = Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
value.as_bytes().try_into()
}
}
impl TryFrom<&[u8]> for PackagesFile {
type Error = Error;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
let deserialized = <Vec<PackagesFileRaw>>::deserialize(Deserializer::new(value))?;
deserialized.try_into()
}
}
impl TryFrom<Vec<PackagesFileRaw>> for PackagesFile {
type Error = Error;
fn try_from(value: Vec<PackagesFileRaw>) -> Result<Self, Self::Error> {
let mut files = Vec::with_capacity(value.len());
for entry in value {
let entry: PackageEntry = entry.try_into()?;
files.push(entry);
}
Ok(Self { files })
}
}
#[test]
pub fn test_deb_packages_file() {
let input = include_str!(concat!(
env!("CARGO_MANIFEST_DIR"),
"/tests/deb822/packages/deb.debian.org_debian_dists_bullseye_main_binary-amd64_Packages"
));
let deserialized =
<Vec<PackagesFileRaw>>::deserialize(Deserializer::new(input.as_bytes())).unwrap();
//println!("{:?}", deserialized);
let parsed: PackagesFile = deserialized.try_into().unwrap();
//println!("{:?}", parsed);
assert_eq!(parsed.files.len(), 58618);
}
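Besides the bundled test data, a sketch with a single, made-up stanza also works; the all-zero SHA256 is only a placeholder to satisfy the strong-checksum requirement:

use proxmox_apt::deb822::PackagesFile;

fn main() -> Result<(), anyhow::Error> {
    // Hypothetical one-package index; real files contain one stanza per package.
    let input = format!(
        "Package: example\n\
         Version: 1.0-1\n\
         Section: admin\n\
         Priority: optional\n\
         Architecture: amd64\n\
         Maintainer: Example Maintainer <packages@example.org>\n\
         Description: example package\n\
         Filename: pool/main/e/example/example_1.0-1_amd64.deb\n\
         Size: 1234\n\
         SHA256: {}\n",
        "0".repeat(64),
    );

    let packages: PackagesFile = input.try_into()?;
    assert_eq!(packages.files.len(), 1);
    let entry = &packages.files[0];
    println!("{} {} -> {} ({} bytes)", entry.package, entry.version, entry.file, entry.size);
    Ok(())
}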


@@ -0,0 +1,618 @@
use std::collections::HashMap;
use anyhow::{bail, format_err, Error};
use rfc822_like::de::Deserializer;
use serde::Deserialize;
use serde_json::Value;
use super::CheckSums;
#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
pub struct ReleaseFileRaw {
pub architectures: Option<String>,
pub changelogs: Option<String>,
pub codename: Option<String>,
pub components: Option<String>,
pub date: Option<String>,
pub description: Option<String>,
pub label: Option<String>,
pub origin: Option<String>,
pub suite: Option<String>,
pub version: Option<String>,
#[serde(rename = "MD5Sum")]
pub md5_sum: Option<String>,
#[serde(rename = "SHA1")]
pub sha1: Option<String>,
#[serde(rename = "SHA256")]
pub sha256: Option<String>,
#[serde(rename = "SHA512")]
pub sha512: Option<String>,
#[serde(flatten)]
pub extra_fields: HashMap<String, Value>,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum CompressionType {
Bzip2,
Gzip,
Lzma,
Xz,
}
pub type Architecture = String;
pub type Component = String;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
/// Type of file reference extracted from path.
///
/// `Packages` and `Sources` will contain further reference to binary or source package files.
/// These are handled in `PackagesFile` and `SourcesFile` respectively.
pub enum FileReferenceType {
/// A `Commands` index listing command to package mappings
Commands(Architecture, Option<CompressionType>),
/// A `Contents` index listing contents of binary packages
Contents(Architecture, Option<CompressionType>),
/// A `Contents` index listing contents of binary udeb packages
ContentsUdeb(Architecture, Option<CompressionType>),
/// A DEP11 `Components` metadata file or `icons` archive
Dep11(Option<CompressionType>),
/// Referenced files which are not really part of the APT repository but only signed for trust-anchor reasons
Ignored,
/// PDiff indices
PDiff,
/// A `Packages` index listing binary package metadata and references
Packages(Architecture, Option<CompressionType>),
/// A compat `Release` file with no relevant content
PseudoRelease(Option<Architecture>),
/// A `Sources` index listing source package metadata and references
Sources(Option<CompressionType>),
/// A `Translation` file
Translation(Option<CompressionType>),
/// Unknown file reference
Unknown,
}
impl FileReferenceType {
fn match_compression(value: &str) -> Result<Option<CompressionType>, Error> {
if value.is_empty() {
return Ok(None);
}
let value = if let Some(stripped) = value.strip_prefix('.') {
stripped
} else {
value
};
match value {
"bz2" => Ok(Some(CompressionType::Bzip2)),
"gz" => Ok(Some(CompressionType::Gzip)),
"lzma" => Ok(Some(CompressionType::Lzma)),
"xz" => Ok(Some(CompressionType::Xz)),
other => bail!("Unexpected file extension '{other}'."),
}
}
pub fn parse(component: &str, path: &str) -> Result<FileReferenceType, Error> {
// everything referenced in a release file should be component-specific
let rest = path
.strip_prefix(&format!("{component}/"))
.ok_or_else(|| format_err!("Doesn't start with component '{component}'"))?;
let parse_binary_dir =
|file_name: &str, arch: &str| parse_binary_dir(file_name, arch, path);
if let Some((dir, rest)) = rest.split_once('/') {
// reference into another subdir
match dir {
"source" => {
// Sources or compat-Release
if let Some((dir, _rest)) = rest.split_once('/') {
if dir == "Sources.diff" {
Ok(FileReferenceType::PDiff)
} else {
Ok(FileReferenceType::Unknown)
}
} else if rest == "Release" {
Ok(FileReferenceType::PseudoRelease(None))
} else if let Some(ext) = rest.strip_prefix("Sources") {
let comp = FileReferenceType::match_compression(ext)?;
Ok(FileReferenceType::Sources(comp))
} else {
Ok(FileReferenceType::Unknown)
}
}
"cnf" => {
if let Some(rest) = rest.strip_prefix("Commands-") {
if let Some((arch, ext)) = rest.rsplit_once('.') {
Ok(FileReferenceType::Commands(
arch.to_owned(),
FileReferenceType::match_compression(ext).ok().flatten(),
))
} else {
Ok(FileReferenceType::Commands(rest.to_owned(), None))
}
} else {
Ok(FileReferenceType::Unknown)
}
}
"dep11" => {
if let Some((_path, ext)) = rest.rsplit_once('.') {
Ok(FileReferenceType::Dep11(
FileReferenceType::match_compression(ext).ok().flatten(),
))
} else {
Ok(FileReferenceType::Dep11(None))
}
}
"debian-installer" => {
// another layer, then like regular repo but pointing at udebs
if let Some((dir, rest)) = rest.split_once('/') {
if let Some(arch) = dir.strip_prefix("binary-") {
// Packages or compat-Release
return parse_binary_dir(rest, arch);
}
}
// all the rest
Ok(FileReferenceType::Unknown)
}
"i18n" => {
if let Some((dir, _rest)) = rest.split_once('/') {
if dir.starts_with("Translation") && dir.ends_with(".diff") {
Ok(FileReferenceType::PDiff)
} else {
Ok(FileReferenceType::Unknown)
}
} else if let Some((_, ext)) = rest.split_once('.') {
Ok(FileReferenceType::Translation(
FileReferenceType::match_compression(ext)?,
))
} else {
Ok(FileReferenceType::Translation(None))
}
}
_ => {
if let Some(arch) = dir.strip_prefix("binary-") {
// Packages or compat-Release
parse_binary_dir(rest, arch)
} else if let Some(_arch) = dir.strip_prefix("installer-") {
// netboot installer checksum files
Ok(FileReferenceType::Ignored)
} else {
// all the rest
Ok(FileReferenceType::Unknown)
}
}
}
} else if let Some(rest) = rest.strip_prefix("Contents-") {
// reference to a top-level file - Contents-*
let (rest, udeb) = if let Some(rest) = rest.strip_prefix("udeb-") {
(rest, true)
} else {
(rest, false)
};
let (arch, comp) = match rest.split_once('.') {
Some((arch, comp_str)) => (
arch.to_owned(),
FileReferenceType::match_compression(comp_str)?,
),
None => (rest.to_owned(), None),
};
if udeb {
Ok(FileReferenceType::ContentsUdeb(arch, comp))
} else {
Ok(FileReferenceType::Contents(arch, comp))
}
} else {
Ok(FileReferenceType::Unknown)
}
}
pub fn compression(&self) -> Option<CompressionType> {
match *self {
FileReferenceType::Commands(_, comp)
| FileReferenceType::Contents(_, comp)
| FileReferenceType::ContentsUdeb(_, comp)
| FileReferenceType::Packages(_, comp)
| FileReferenceType::Sources(comp)
| FileReferenceType::Translation(comp)
| FileReferenceType::Dep11(comp) => comp,
FileReferenceType::Unknown
| FileReferenceType::PDiff
| FileReferenceType::PseudoRelease(_)
| FileReferenceType::Ignored => None,
}
}
pub fn architecture(&self) -> Option<&Architecture> {
match self {
FileReferenceType::Commands(arch, _)
| FileReferenceType::Contents(arch, _)
| FileReferenceType::ContentsUdeb(arch, _)
| FileReferenceType::Packages(arch, _) => Some(arch),
FileReferenceType::PseudoRelease(arch) => arch.as_ref(),
FileReferenceType::Unknown
| FileReferenceType::PDiff
| FileReferenceType::Sources(_)
| FileReferenceType::Dep11(_)
| FileReferenceType::Translation(_)
| FileReferenceType::Ignored => None,
}
}
pub fn is_package_index(&self) -> bool {
matches!(self, FileReferenceType::Packages(_, _) | FileReferenceType::Sources(_))
}
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct FileReference {
pub path: String,
pub size: usize,
pub checksums: CheckSums,
pub component: Component,
pub file_type: FileReferenceType,
}
impl FileReference {
pub fn basename(&self) -> Result<String, Error> {
match self.file_type.compression() {
Some(_) => {
let (base, _ext) = self
.path
.rsplit_once('.')
.ok_or_else(|| format_err!("compressed file without file extension"))?;
Ok(base.to_owned())
}
None => Ok(self.path.clone()),
}
}
}
#[derive(Debug, Default, PartialEq, Eq)]
/// A parsed representation of a Release file
pub struct ReleaseFile {
/// List of architectures, e.g., `amd64` or `all`.
pub architectures: Vec<String>,
// TODO No-Support-for-Architecture-all
/// URL for changelog queries via `apt changelog`.
pub changelogs: Option<String>,
/// Release codename - single word, e.g., `bullseye`.
pub codename: Option<String>,
/// List of repository areas, e.g., `main`.
pub components: Vec<String>,
/// UTC timestamp of release file generation
pub date: Option<u64>,
/// UTC timestamp of release file expiration
pub valid_until: Option<u64>,
/// Repository description -
// TODO exact format?
pub description: Option<String>,
/// Repository label - single line
pub label: Option<String>,
/// Repository origin - single line
pub origin: Option<String>,
/// Release suite - single word, e.g., `stable`.
pub suite: Option<String>,
/// Release version
pub version: Option<String>,
/// Whether by-hash retrieval of referenced files is possible
pub aquire_by_hash: bool,
/// Files referenced by this `Release` file, e.g., packages indices.
///
/// Grouped by basename, since only the compressed version needs to actually exist on the repository server.
pub files: HashMap<String, Vec<FileReference>>,
}
impl TryFrom<ReleaseFileRaw> for ReleaseFile {
type Error = Error;
fn try_from(value: ReleaseFileRaw) -> Result<Self, Self::Error> {
let mut parsed = ReleaseFile {
architectures: whitespace_split_to_vec(
&value
.architectures
.ok_or_else(|| format_err!("'Architectures' field missing."))?,
),
components: whitespace_split_to_vec(
&value
.components
.ok_or_else(|| format_err!("'Components' field missing."))?,
),
changelogs: value.changelogs,
codename: value.codename,
date: value.date.as_deref().map(parse_date),
valid_until: value
.extra_fields
.get("Valid-Until")
.map(|val| parse_date(&val.to_string())),
description: value.description,
label: value.label,
origin: value.origin,
suite: value.suite,
files: HashMap::new(),
aquire_by_hash: false,
version: value.version,
};
if let Some(val) = value.extra_fields.get("Acquire-By-Hash") {
parsed.aquire_by_hash = *val == "yes";
}
// Fixup bullseye-security release files which have invalid components
if parsed.label.as_deref() == Some("Debian-Security")
&& parsed.codename.as_deref() == Some("bullseye-security")
{
parsed.components = parsed
.components
.into_iter()
.map(|comp| {
if let Some(stripped) = comp.strip_prefix("updates/") {
stripped.to_owned()
} else {
comp
}
})
.collect();
}
let mut references_map: HashMap<String, HashMap<String, FileReference>> = HashMap::new();
if let Some(md5) = value.md5_sum {
for line in md5.lines() {
let (mut file_ref, checksum) =
parse_file_reference(line, 16, parsed.components.as_ref())?;
let checksum = checksum
.try_into()
.map_err(|_err| format_err!("unexpected checksum length"))?;
file_ref.checksums.md5 = Some(checksum);
merge_references(&mut references_map, file_ref)?;
}
}
if let Some(sha1) = value.sha1 {
for line in sha1.lines() {
let (mut file_ref, checksum) =
parse_file_reference(line, 20, parsed.components.as_ref())?;
let checksum = checksum
.try_into()
.map_err(|_err| format_err!("unexpected checksum length"))?;
file_ref.checksums.sha1 = Some(checksum);
merge_references(&mut references_map, file_ref)?;
}
}
if let Some(sha256) = value.sha256 {
for line in sha256.lines() {
let (mut file_ref, checksum) =
parse_file_reference(line, 32, parsed.components.as_ref())?;
let checksum = checksum
.try_into()
.map_err(|_err| format_err!("unexpected checksum length"))?;
file_ref.checksums.sha256 = Some(checksum);
merge_references(&mut references_map, file_ref)?;
}
}
if let Some(sha512) = value.sha512 {
for line in sha512.lines() {
let (mut file_ref, checksum) =
parse_file_reference(line, 64, parsed.components.as_ref())?;
let checksum = checksum
.try_into()
.map_err(|_err| format_err!("unexpected checksum length"))?;
file_ref.checksums.sha512 = Some(checksum);
merge_references(&mut references_map, file_ref)?;
}
}
parsed.files =
references_map
.into_iter()
.fold(parsed.files, |mut map, (base, inner_map)| {
map.insert(base, inner_map.into_values().collect());
map
});
if let Some(insecure) = parsed
.files
.values()
.flatten()
.find(|file| !file.checksums.is_secure())
{
bail!(
"found file reference without strong checksum: {}",
insecure.path
);
}
Ok(parsed)
}
}
impl TryFrom<String> for ReleaseFile {
type Error = Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
value.as_bytes().try_into()
}
}
impl TryFrom<&[u8]> for ReleaseFile {
type Error = Error;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
let deserialized = ReleaseFileRaw::deserialize(Deserializer::new(value))?;
deserialized.try_into()
}
}
fn whitespace_split_to_vec(list_str: &str) -> Vec<String> {
list_str
.split_ascii_whitespace()
.map(|arch| arch.to_owned())
.collect()
}
fn parse_file_reference(
line: &str,
csum_len: usize,
components: &[String],
) -> Result<(FileReference, Vec<u8>), Error> {
let mut split = line.split_ascii_whitespace();
let checksum = split
.next()
.ok_or_else(|| format_err!("No 'checksum' field in the file reference line."))?;
if checksum.len() > csum_len * 2 {
bail!(
"invalid checksum length: '{}', expected {} bytes",
checksum,
csum_len
);
}
let checksum = hex::decode(checksum)?;
let size = split
.next()
.ok_or_else(|| format_err!("No 'size' field in file reference line."))?
.parse::<usize>()?;
let file = split
.next()
.ok_or_else(|| format_err!("No 'path' field in file reference line."))?
.to_string();
let (component, file_type) = components
.iter()
.find_map(|component| {
if !file.starts_with(&format!("{component}/")) {
return None;
}
Some(
FileReferenceType::parse(component, &file)
.map(|file_type| (component.clone(), file_type)),
)
})
.unwrap_or_else(|| Ok(("UNKNOWN".to_string(), FileReferenceType::Unknown)))?;
Ok((
FileReference {
path: file,
size,
checksums: CheckSums::default(),
component,
file_type,
},
checksum,
))
}
fn parse_date(_date_str: &str) -> u64 {
// TODO implement
0
}
fn parse_binary_dir(file_name: &str, arch: &str, path: &str) -> Result<FileReferenceType, Error> {
if let Some((dir, _rest)) = file_name.split_once('/') {
if dir == "Packages.diff" {
// TODO re-evaluate?
Ok(FileReferenceType::PDiff)
} else {
Ok(FileReferenceType::Unknown)
}
} else if file_name == "Release" {
Ok(FileReferenceType::PseudoRelease(Some(arch.to_owned())))
} else {
let comp = match file_name.strip_prefix("Packages") {
None => {
bail!("found unexpected non-Packages reference to '{path}'")
}
Some(ext) => FileReferenceType::match_compression(ext)?,
};
//println!("compression: {comp:?}");
Ok(FileReferenceType::Packages(arch.to_owned(), comp))
}
}
fn merge_references(
base_map: &mut HashMap<String, HashMap<String, FileReference>>,
file_ref: FileReference,
) -> Result<(), Error> {
let base = file_ref.basename()?;
match base_map.get_mut(&base) {
None => {
let mut map = HashMap::new();
map.insert(file_ref.path.clone(), file_ref);
base_map.insert(base, map);
}
Some(entries) => {
match entries.get_mut(&file_ref.path) {
Some(entry) => {
if entry.size != file_ref.size {
bail!(
"Multiple entries for '{}' with size mismatch: {} / {}",
entry.path,
file_ref.size,
entry.size
);
}
entry.checksums.merge(&file_ref.checksums).map_err(|err| {
format_err!("Multiple checksums for '{}' - {err}", entry.path)
})?;
}
None => {
entries.insert(file_ref.path.clone(), file_ref);
}
};
}
};
Ok(())
}
#[test]
pub fn test_deb_release_file() {
let input = include_str!(concat!(
env!("CARGO_MANIFEST_DIR"),
"/tests/deb822/release/deb.debian.org_debian_dists_bullseye_Release"
));
let deserialized = ReleaseFileRaw::deserialize(Deserializer::new(input.as_bytes())).unwrap();
//println!("{:?}", deserialized);
let parsed: ReleaseFile = deserialized.try_into().unwrap();
//println!("{:?}", parsed);
assert_eq!(parsed.files.len(), 315);
}
#[test]
pub fn test_deb_release_file_insecure() {
let input = include_str!(concat!(
env!("CARGO_MANIFEST_DIR"),
"/tests/deb822/release/deb.debian.org_debian_dists_bullseye_Release_insecure"
));
let deserialized = ReleaseFileRaw::deserialize(Deserializer::new(input.as_bytes())).unwrap();
//println!("{:?}", deserialized);
let parsed: Result<ReleaseFile, Error> = deserialized.try_into();
assert!(parsed.is_err());
println!("{:?}", parsed);
}
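A sketch of how a parsed ReleaseFile might be consumed, for example to collect the compressed package and source indices it references (the helper name is illustrative only):

use proxmox_apt::deb822::{FileReference, ReleaseFile};

/// Collect the compressed `Packages`/`Sources` indices referenced by a Release file.
fn compressed_package_indices(release: &ReleaseFile) -> Vec<&FileReference> {
    release
        .files
        .values()
        .flatten()
        .filter(|reference| reference.file_type.is_package_index())
        .filter(|reference| reference.file_type.compression().is_some())
        .collect()
}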


@@ -0,0 +1,257 @@
use std::collections::HashMap;
use anyhow::{bail, Error, format_err};
use rfc822_like::de::Deserializer;
use serde::Deserialize;
use serde_json::Value;
use super::CheckSums;
//Uploaders
//
//Homepage
//
//Version Control System (VCS) fields
//
//Testsuite
//
//Dgit
//
//Standards-Version (mandatory)
//
//Build-Depends et al
//
//Package-List (recommended)
//
//Checksums-Sha1 and Checksums-Sha256 (mandatory)
//
//Files (mandatory)
#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
pub struct SourcesFileRaw {
pub format: String,
pub package: String,
pub binary: Option<Vec<String>>,
pub version: String,
pub section: Option<String>,
pub priority: Option<String>,
pub maintainer: String,
pub uploaders: Option<String>,
pub architecture: Option<String>,
pub directory: String,
pub files: String,
#[serde(rename = "Checksums-Sha256")]
pub sha256: Option<String>,
#[serde(rename = "Checksums-Sha512")]
pub sha512: Option<String>,
#[serde(flatten)]
pub extra_fields: HashMap<String, Value>,
}
#[derive(Debug, PartialEq, Eq)]
pub struct SourcePackageEntry {
pub format: String,
pub package: String,
pub binary: Option<Vec<String>>,
pub version: String,
pub architecture: Option<String>,
pub section: Option<String>,
pub priority: Option<String>,
pub maintainer: String,
pub uploaders: Option<String>,
pub directory: String,
pub files: HashMap<String, SourcePackageFileReference>,
}
#[derive(Debug, PartialEq, Eq)]
pub struct SourcePackageFileReference {
pub file: String,
pub size: usize,
pub checksums: CheckSums,
}
impl SourcePackageEntry {
pub fn size(&self) -> usize {
self.files.values().map(|f| f.size).sum()
}
}
#[derive(Debug, Default, PartialEq, Eq)]
/// A parsed representation of a Sources file
pub struct SourcesFile {
pub source_packages: Vec<SourcePackageEntry>,
}
impl TryFrom<SourcesFileRaw> for SourcePackageEntry {
type Error = Error;
fn try_from(value: SourcesFileRaw) -> Result<Self, Self::Error> {
let mut parsed = SourcePackageEntry {
package: value.package,
binary: value.binary,
version: value.version,
architecture: value.architecture,
files: HashMap::new(),
format: value.format,
section: value.section,
priority: value.priority,
maintainer: value.maintainer,
uploaders: value.uploaders,
directory: value.directory,
};
for file_reference in value.files.lines() {
let (file_name, size, md5) = parse_file_reference(file_reference, 16)?;
let entry = parsed.files.entry(file_name.clone()).or_insert_with(|| SourcePackageFileReference { file: file_name, size, checksums: CheckSums::default()});
entry.checksums.md5 = Some(md5.try_into().map_err(|_|format_err!("unexpected checksum length"))?);
if entry.size != size {
bail!("Size mismatch: {} != {}", entry.size, size);
}
}
if let Some(sha256) = value.sha256 {
for line in sha256.lines() {
let (file_name, size, sha256) = parse_file_reference(line, 32)?;
let entry = parsed.files.entry(file_name.clone()).or_insert_with(|| SourcePackageFileReference { file: file_name, size, checksums: CheckSums::default()});
entry.checksums.sha256 = Some(sha256.try_into().map_err(|_|format_err!("unexpected checksum length"))?);
if entry.size != size {
bail!("Size mismatch: {} != {}", entry.size, size);
}
}
};
if let Some(sha512) = value.sha512 {
for line in sha512.lines() {
let (file_name, size, sha512) = parse_file_reference(line, 64)?;
let entry = parsed.files.entry(file_name.clone()).or_insert_with(|| SourcePackageFileReference { file: file_name, size, checksums: CheckSums::default()});
entry.checksums.sha512 = Some(sha512.try_into().map_err(|_|format_err!("unexpected checksum length"))?);
if entry.size != size {
bail!("Size mismatch: {} != {}", entry.size, size);
}
}
};
for (file_name, reference) in &parsed.files {
if !reference.checksums.is_secure() {
bail!(
"no strong checksum found for source entry '{}'",
file_name
);
}
}
Ok(parsed)
}
}
impl TryFrom<String> for SourcesFile {
type Error = Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
value.as_bytes().try_into()
}
}
impl TryFrom<&[u8]> for SourcesFile {
type Error = Error;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
let deserialized = <Vec<SourcesFileRaw>>::deserialize(Deserializer::new(value))?;
deserialized.try_into()
}
}
impl TryFrom<Vec<SourcesFileRaw>> for SourcesFile {
type Error = Error;
fn try_from(value: Vec<SourcesFileRaw>) -> Result<Self, Self::Error> {
let mut source_packages = Vec::with_capacity(value.len());
for entry in value {
let entry: SourcePackageEntry = entry.try_into()?;
source_packages.push(entry);
}
Ok(Self { source_packages })
}
}
fn parse_file_reference(
line: &str,
csum_len: usize,
) -> Result<(String, usize, Vec<u8>), Error> {
let mut split = line.split_ascii_whitespace();
let checksum = split
.next()
.ok_or_else(|| format_err!("Missing 'checksum' field."))?;
if checksum.len() > csum_len * 2 {
bail!(
"invalid checksum length: '{}', expected {} bytes",
checksum,
csum_len
);
}
let checksum = hex::decode(checksum)?;
let size = split
.next()
.ok_or_else(|| format_err!("Missing 'size' field."))?
.parse::<usize>()?;
let file = split
.next()
.ok_or_else(|| format_err!("Missing 'file name' field."))?
.to_string();
Ok((file, size, checksum))
}
#[test]
pub fn test_deb_packages_file() {
// NOTE: test is over an excerpt from packages starting with 0-9, a, b & z using:
// http://snapshot.debian.org/archive/debian/20221017T212657Z/dists/bullseye/main/source/Sources.xz
let input = include_str!(concat!(
env!("CARGO_MANIFEST_DIR"),
"/tests/deb822/sources/deb.debian.org_debian_dists_bullseye_main_source_Sources"
));
let deserialized =
<Vec<SourcesFileRaw>>::deserialize(Deserializer::new(input.as_bytes())).unwrap();
assert_eq!(deserialized.len(), 1558);
let parsed: SourcesFile = deserialized.try_into().unwrap();
assert_eq!(parsed.source_packages.len(), 1558);
let found = parsed.source_packages.iter().find(|source| source.package == "base-files").expect("test file contains 'base-files' entry");
assert_eq!(found.package, "base-files");
assert_eq!(found.format, "3.0 (native)");
assert_eq!(found.architecture.as_deref(), Some("any"));
assert_eq!(found.directory, "pool/main/b/base-files");
assert_eq!(found.section.as_deref(), Some("admin"));
assert_eq!(found.version, "11.1+deb11u5");
let binary_packages = found.binary.as_ref().expect("base-files source package builds base-files binary package");
assert_eq!(binary_packages.len(), 1);
assert_eq!(binary_packages[0], "base-files");
let references = &found.files;
assert_eq!(references.len(), 2);
let dsc_file = "base-files_11.1+deb11u5.dsc";
let dsc = references.get(dsc_file).expect("base-files source package contains 'dsc' reference");
assert_eq!(dsc.file, dsc_file);
assert_eq!(dsc.size, 1110);
assert_eq!(dsc.checksums.md5.expect("dsc has md5 checksum"), hex::decode("741c34ac0151262a03de8d5a07bc4271").unwrap()[..]);
assert_eq!(dsc.checksums.sha256.expect("dsc has sha256 checksum"), hex::decode("c41a7f00d57759f27e6068240d1ea7ad80a9a752e4fb43850f7e86e967422bd3").unwrap()[..]);
let tar_file = "base-files_11.1+deb11u5.tar.xz";
let tar = references.get(tar_file).expect("base-files source package contains 'tar' reference");
assert_eq!(tar.file, tar_file);
assert_eq!(tar.size, 65612);
assert_eq!(tar.checksums.md5.expect("tar has md5 checksum"), hex::decode("995df33642118b566a4026410e1c6aac").unwrap()[..]);
assert_eq!(tar.checksums.sha256.expect("tar has sha256 checksum"), hex::decode("31c9e5745845a73f3d5c8a7868c379d77aaca42b81194679d7ab40cc28e3a0e9").unwrap()[..]);
}
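A small consumer sketch on top of the parsed structures, e.g. ranking source packages by the summed size of their file references (the helper name is illustrative):

use proxmox_apt::deb822::SourcesFile;

fn largest_sources(sources: &SourcesFile, count: usize) -> Vec<(String, usize)> {
    let mut sizes: Vec<(String, usize)> = sources
        .source_packages
        .iter()
        .map(|entry| (entry.package.clone(), entry.size()))
        .collect();
    // Sort by total size, largest first, and keep the top `count` entries.
    sizes.sort_by(|a, b| b.1.cmp(&a.1));
    sizes.truncate(count);
    sizes
}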

proxmox-apt/src/lib.rs (new file, 3 lines)

@@ -0,0 +1,3 @@
pub mod config;
pub mod deb822;
pub mod repositories;


@@ -0,0 +1,469 @@
use std::fmt::Display;
use std::path::{Path, PathBuf};
use anyhow::{format_err, Error};
use serde::{Deserialize, Serialize};
use crate::repositories::release::DebianCodename;
use crate::repositories::repository::{
APTRepository, APTRepositoryFileType, APTRepositoryPackageType,
};
use proxmox_schema::api;
mod list_parser;
use list_parser::APTListFileParser;
mod sources_parser;
use sources_parser::APTSourcesFileParser;
trait APTRepositoryParser {
/// Parse all repositories, including the disabled ones, and return them.
fn parse_repositories(&mut self) -> Result<Vec<APTRepository>, Error>;
}
#[api(
properties: {
"file-type": {
type: APTRepositoryFileType,
},
repositories: {
description: "List of APT repositories.",
type: Array,
items: {
type: APTRepository,
},
},
digest: {
description: "Digest for the content of the file.",
optional: true,
type: Array,
items: {
description: "Digest byte.",
type: Integer,
},
},
},
)]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Represents an abstract APT repository file.
pub struct APTRepositoryFile {
/// The path to the file. If None, `content` must be set directly.
#[serde(skip_serializing_if = "Option::is_none")]
pub path: Option<String>,
/// The type of the file.
pub file_type: APTRepositoryFileType,
/// List of repositories in the file.
pub repositories: Vec<APTRepository>,
/// The file content, if already parsed.
pub content: Option<String>,
/// Digest of the original contents.
pub digest: Option<[u8; 32]>,
}
#[api]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Error type for problems with APT repository files.
pub struct APTRepositoryFileError {
/// The path to the problematic file.
pub path: String,
/// The error message.
pub error: String,
}
impl Display for APTRepositoryFileError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "proxmox-apt error for '{}' - {}", self.path, self.error)
}
}
impl std::error::Error for APTRepositoryFileError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
None
}
}
#[api]
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Additional information for a repository.
pub struct APTRepositoryInfo {
/// Path to the defining file.
#[serde(skip_serializing_if = "String::is_empty")]
pub path: String,
/// Index of the associated repository within the file (starting from 0).
pub index: usize,
/// The property from which the info originates (e.g. "Suites")
#[serde(skip_serializing_if = "Option::is_none")]
pub property: Option<String>,
/// Info kind (e.g. "warning")
pub kind: String,
/// Info message
pub message: String,
}
impl APTRepositoryFile {
/// Creates a new `APTRepositoryFile` without parsing.
///
/// If the file is hidden, the path points to a directory, or the extension
/// is usually ignored by APT (e.g. `.orig`), `Ok(None)` is returned, while
/// invalid file names yield an error.
pub fn new<P: AsRef<Path>>(path: P) -> Result<Option<Self>, APTRepositoryFileError> {
let path: PathBuf = path.as_ref().to_path_buf();
let new_err = |path_string: String, err: &str| APTRepositoryFileError {
path: path_string,
error: err.to_string(),
};
let path_string = path
.clone()
.into_os_string()
.into_string()
.map_err(|os_string| {
new_err(
os_string.to_string_lossy().to_string(),
"path is not valid unicode",
)
})?;
let new_err = |err| new_err(path_string.clone(), err);
if path.is_dir() {
return Ok(None);
}
let file_name = match path.file_name() {
Some(file_name) => file_name
.to_os_string()
.into_string()
.map_err(|_| new_err("invalid path"))?,
None => return Err(new_err("invalid path")),
};
if file_name.starts_with('.') || file_name.ends_with('~') {
return Ok(None);
}
let extension = match path.extension() {
Some(extension) => extension
.to_os_string()
.into_string()
.map_err(|_| new_err("invalid path"))?,
None => return Err(new_err("invalid extension")),
};
// See APT's apt-pkg/init.cc
if extension.starts_with("dpkg-")
|| extension.starts_with("ucf-")
|| matches!(
extension.as_str(),
"disabled" | "bak" | "save" | "orig" | "distUpgrade"
)
{
return Ok(None);
}
let file_type = APTRepositoryFileType::try_from(&extension[..])
.map_err(|_| new_err("invalid extension"))?;
if !file_name
.chars()
.all(|x| x.is_ascii_alphanumeric() || x == '_' || x == '-' || x == '.')
{
return Err(new_err("invalid characters in file name"));
}
Ok(Some(Self {
path: Some(path_string),
file_type,
repositories: vec![],
digest: None,
content: None,
}))
}
pub fn with_content(content: String, content_type: APTRepositoryFileType) -> Self {
Self {
file_type: content_type,
content: Some(content),
path: None,
repositories: vec![],
digest: None,
}
}
/// Check if the file exists.
pub fn exists(&self) -> bool {
if let Some(path) = &self.path {
PathBuf::from(path).exists()
} else {
false
}
}
pub fn read_with_digest(&self) -> Result<(Vec<u8>, [u8; 32]), APTRepositoryFileError> {
if let Some(path) = &self.path {
let content = std::fs::read(path).map_err(|err| self.err(format_err!("{}", err)))?;
let digest = openssl::sha::sha256(&content);
Ok((content, digest))
} else if let Some(ref content) = self.content {
let content = content.as_bytes();
let digest = openssl::sha::sha256(content);
Ok((content.to_vec(), digest))
} else {
Err(self.err(format_err!(
"Neither 'path' nor 'content' set, cannot read APT repository info."
)))
}
}
/// Create an `APTRepositoryFileError`.
pub fn err(&self, error: Error) -> APTRepositoryFileError {
APTRepositoryFileError {
path: self.path.clone().unwrap_or_default(),
error: error.to_string(),
}
}
/// Parses the APT repositories configured in the file on disk, including
/// disabled ones.
///
/// Resets the current repositories and digest, even on failure.
pub fn parse(&mut self) -> Result<(), APTRepositoryFileError> {
self.repositories.clear();
self.digest = None;
let (content, digest) = self.read_with_digest()?;
let mut parser: Box<dyn APTRepositoryParser> = match self.file_type {
APTRepositoryFileType::List => Box::new(APTListFileParser::new(&content[..])),
APTRepositoryFileType::Sources => Box::new(APTSourcesFileParser::new(&content[..])),
};
let repos = parser.parse_repositories().map_err(|err| self.err(err))?;
for (n, repo) in repos.iter().enumerate() {
repo.basic_check()
.map_err(|err| self.err(format_err!("check for repository {} - {}", n + 1, err)))?;
}
self.repositories = repos;
self.digest = Some(digest);
Ok(())
}
/// Writes the repositories to the file on disk.
///
/// If a digest is provided, checks that the current content of the file still
/// produces the same one.
pub fn write(&self) -> Result<(), APTRepositoryFileError> {
let path = match &self.path {
Some(path) => path,
None => {
return Err(self.err(format_err!(
"Cannot write to APT repository file without path."
)));
}
};
if let Some(digest) = self.digest {
if !self.exists() {
return Err(self.err(format_err!("digest specified, but file does not exist")));
}
let (_, current_digest) = self.read_with_digest()?;
if digest != current_digest {
return Err(self.err(format_err!("digest mismatch")));
}
}
if self.repositories.is_empty() {
return std::fs::remove_file(&path)
.map_err(|err| self.err(format_err!("unable to remove file - {}", err)));
}
let mut content = vec![];
for (n, repo) in self.repositories.iter().enumerate() {
repo.basic_check()
.map_err(|err| self.err(format_err!("check for repository {} - {}", n + 1, err)))?;
repo.write(&mut content)
.map_err(|err| self.err(format_err!("writing repository {} - {}", n + 1, err)))?;
}
let path = PathBuf::from(&path);
let dir = match path.parent() {
Some(dir) => dir,
None => return Err(self.err(format_err!("invalid path"))),
};
std::fs::create_dir_all(dir)
.map_err(|err| self.err(format_err!("unable to create parent dir - {}", err)))?;
let pid = std::process::id();
let mut tmp_path = path.clone();
tmp_path.set_extension("tmp");
tmp_path.set_extension(format!("{}", pid));
if let Err(err) = std::fs::write(&tmp_path, content) {
let _ = std::fs::remove_file(&tmp_path);
return Err(self.err(format_err!("writing {:?} failed - {}", path, err)));
}
if let Err(err) = std::fs::rename(&tmp_path, &path) {
let _ = std::fs::remove_file(&tmp_path);
return Err(self.err(format_err!("rename failed for {:?} - {}", path, err)));
}
Ok(())
}
/// Checks if old or unstable suites are configured and that the Debian security repository
/// has the correct suite. Also checks that the `stable` keyword is not used.
pub fn check_suites(&self, current_codename: DebianCodename) -> Vec<APTRepositoryInfo> {
let mut infos = vec![];
let path = match &self.path {
Some(path) => path.clone(),
None => return vec![],
};
for (n, repo) in self.repositories.iter().enumerate() {
if !repo.types.contains(&APTRepositoryPackageType::Deb) {
continue;
}
let is_security_repo = repo.uris.iter().any(|uri| {
let uri = uri.trim_end_matches('/');
let uri = uri.strip_suffix("debian-security").unwrap_or(uri);
let uri = uri.trim_end_matches('/');
matches!(
uri,
"http://security.debian.org" | "https://security.debian.org",
)
});
let require_suffix = match is_security_repo {
true if current_codename >= DebianCodename::Bullseye => Some("-security"),
true => Some("/updates"),
false => None,
};
let mut add_info = |kind: &str, message| {
infos.push(APTRepositoryInfo {
path: path.clone(),
index: n,
property: Some("Suites".to_string()),
kind: kind.to_string(),
message,
})
};
let message_old = |suite| format!("old suite '{}' configured!", suite);
let message_new =
|suite| format!("suite '{}' should not be used in production!", suite);
let message_stable = "use the name of the stable distribution instead of 'stable'!";
for suite in repo.suites.iter() {
let (base_suite, suffix) = suite_variant(suite);
match base_suite {
"oldoldstable" | "oldstable" => {
add_info("warning", message_old(base_suite));
}
"testing" | "unstable" | "experimental" | "sid" => {
add_info("warning", message_new(base_suite));
}
"stable" => {
add_info("warning", message_stable.to_string());
}
_ => (),
};
let codename: DebianCodename = match base_suite.try_into() {
Ok(codename) => codename,
Err(_) => continue,
};
if codename < current_codename {
add_info("warning", message_old(base_suite));
}
if Some(codename) == current_codename.next() {
add_info("ignore-pre-upgrade-warning", message_new(base_suite));
} else if codename > current_codename {
add_info("warning", message_new(base_suite));
}
if let Some(require_suffix) = require_suffix {
if suffix != require_suffix {
add_info(
"warning",
format!("expected suite '{}{}'", current_codename, require_suffix),
);
}
}
}
}
infos
}
/// Checks for official URIs.
pub fn check_uris(&self) -> Vec<APTRepositoryInfo> {
let mut infos = vec![];
let path = match &self.path {
Some(path) => path,
None => return vec![],
};
for (n, repo) in self.repositories.iter().enumerate() {
let mut origin = match repo.get_cached_origin() {
Ok(option) => option,
Err(_) => None,
};
if origin.is_none() {
origin = repo.origin_from_uris();
}
if let Some(origin) = origin {
infos.push(APTRepositoryInfo {
path: path.clone(),
index: n,
kind: "origin".to_string(),
property: None,
message: origin,
});
}
}
infos
}
}
/// Splits the suite into its base part and variant.
/// Does not expect the base part to contain either `-` or `/`.
fn suite_variant(suite: &str) -> (&str, &str) {
match suite.find(&['-', '/'][..]) {
Some(n) => (&suite[0..n], &suite[n..]),
None => (suite, ""),
}
}
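A usage sketch for the file-level API, assuming the type is re-exported as proxmox_apt::repositories::APTRepositoryFile and using a purely hypothetical path:

use proxmox_apt::repositories::APTRepositoryFile;

fn main() -> Result<(), anyhow::Error> {
    // Hidden files, directories and ignored extensions yield Ok(None).
    if let Some(mut file) = APTRepositoryFile::new("/etc/apt/sources.list.d/example.list")? {
        file.parse()?;
        for info in file.check_uris() {
            // e.g. kind "origin" with the detected origin as the message
            println!("{}: repository #{}: {} = {}", info.path, info.index + 1, info.kind, info.message);
        }
    }
    Ok(())
}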


@@ -0,0 +1,250 @@
use std::io::BufRead;
use std::iter::Iterator;
use anyhow::{bail, format_err, Error};
use crate::repositories::{APTRepository, APTRepositoryFileType, APTRepositoryOption};
use super::APTRepositoryParser;
// TODO convert %-escape characters. Also adapt printing back accordingly,
// because at least '%' needs to be re-escaped when printing.
/// See APT's ParseQuoteWord in contrib/strutl.cc
///
/// Doesn't split on whitespace when between `[]` or `""` and strips `"` from the word.
///
/// Currently, %-escaped characters are not interpreted, but passed along as is.
struct SplitQuoteWord {
rest: String,
position: usize,
}
impl SplitQuoteWord {
pub fn new(string: String) -> Self {
Self {
rest: string,
position: 0,
}
}
}
impl Iterator for SplitQuoteWord {
type Item = Result<String, Error>;
fn next(&mut self) -> Option<Self::Item> {
let rest = &self.rest[self.position..];
let mut start = None;
let mut wait_for = None;
for (n, c) in rest.chars().enumerate() {
self.position += 1;
if let Some(wait_for_char) = wait_for {
if wait_for_char == c {
wait_for = None;
}
continue;
}
if char::is_ascii_whitespace(&c) {
if let Some(start) = start {
return Some(Ok(rest[start..n].replace('"', "")));
}
continue;
}
if start == None {
start = Some(n);
}
if c == '"' {
wait_for = Some('"');
}
if c == '[' {
wait_for = Some(']');
}
}
if let Some(wait_for) = wait_for {
return Some(Err(format_err!("missing terminating '{}'", wait_for)));
}
if let Some(start) = start {
return Some(Ok(rest[start..].replace('"', "")));
}
None
}
}
pub struct APTListFileParser<R: BufRead> {
input: R,
line_nr: usize,
comment: String,
}
impl<R: BufRead> APTListFileParser<R> {
pub fn new(reader: R) -> Self {
Self {
input: reader,
line_nr: 0,
comment: String::new(),
}
}
/// Helper to parse options from the existing token stream.
///
/// Also returns `Ok(())` if there are no options.
///
/// Errors when options are invalid or not closed by `']'`.
fn parse_options(
options: &mut Vec<APTRepositoryOption>,
tokens: &mut SplitQuoteWord,
) -> Result<(), Error> {
let mut finished = false;
loop {
let mut option = match tokens.next() {
Some(token) => token?,
None => bail!("options not closed by ']'"),
};
if let Some(stripped) = option.strip_suffix(']') {
option = stripped.to_string();
if option.is_empty() {
break;
}
finished = true; // but still need to handle the last one
};
if let Some(mid) = option.find('=') {
let (key, mut value_str) = option.split_at(mid);
value_str = &value_str[1..];
if key.is_empty() {
bail!("option has no key: '{}'", option);
}
if value_str.is_empty() {
bail!("option has no value: '{}'", option);
}
let values: Vec<String> = value_str
.split(',')
.map(|value| value.to_string())
.collect();
options.push(APTRepositoryOption {
key: key.to_string(),
values,
});
} else if !option.is_empty() {
bail!("got invalid option - '{}'", option);
}
if finished {
break;
}
}
Ok(())
}
/// Parse a repository or comment in one-line format.
///
/// Commented out repositories are also detected and returned with the
/// `enabled` property set to `false`.
///
/// If the line contains a repository, `self.comment` is added to the
/// `comment` property.
///
/// If the line contains a comment, it is added to `self.comment`.
fn parse_one_line(&mut self, mut line: &str) -> Result<Option<APTRepository>, Error> {
line = line.trim_matches(|c| char::is_ascii_whitespace(&c));
// check for commented out repository first
if let Some(commented_out) = line.strip_prefix('#') {
if let Ok(Some(mut repo)) = self.parse_one_line(commented_out) {
repo.set_enabled(false);
return Ok(Some(repo));
}
}
let mut repo = APTRepository::new(APTRepositoryFileType::List);
// now handle "real" comment
if let Some(comment_start) = line.find('#') {
let (line_start, comment) = line.split_at(comment_start);
self.comment = format!("{}{}\n", self.comment, &comment[1..]);
line = line_start;
}
// e.g. quoted "deb" is not accepted by APT, so no need for quote word parsing here
line = match line.split_once(|c| char::is_ascii_whitespace(&c)) {
Some((package_type, rest)) => {
repo.types.push(package_type.try_into()?);
rest
}
None => return Ok(None), // empty line
};
line = line.trim_start_matches(|c| char::is_ascii_whitespace(&c));
let has_options = match line.strip_prefix('[') {
Some(rest) => {
// avoid the start of the options being interpreted as the start of a quote word
line = rest;
true
}
None => false,
};
let mut tokens = SplitQuoteWord::new(line.to_string());
if has_options {
Self::parse_options(&mut repo.options, &mut tokens)?;
}
// the rest of the line is just '<uri> <suite> [<components>...]'
repo.uris
.push(tokens.next().ok_or_else(|| format_err!("missing URI"))??);
repo.suites.push(
tokens
.next()
.ok_or_else(|| format_err!("missing suite"))??,
);
for token in tokens {
repo.components.push(token?);
}
repo.comment = std::mem::take(&mut self.comment);
Ok(Some(repo))
}
}
impl<R: BufRead> APTRepositoryParser for APTListFileParser<R> {
fn parse_repositories(&mut self) -> Result<Vec<APTRepository>, Error> {
let mut repos = vec![];
let mut line = String::new();
loop {
self.line_nr += 1;
line.clear();
match self.input.read_line(&mut line) {
Err(err) => bail!("input error - {}", err),
Ok(0) => break,
Ok(_) => match self.parse_one_line(&line) {
Ok(Some(repo)) => repos.push(repo),
Ok(None) => continue,
Err(err) => bail!("malformed entry on line {} - {}", self.line_nr, err),
},
}
}
Ok(repos)
}
}
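A minimal usage sketch for the one-line parser above, assuming the `APTRepositoryParser` trait from the same module is in scope; the entry is a made-up in-memory example rather than a real file:

use std::io::BufReader;

let data = b"deb http://deb.debian.org/debian bullseye main contrib # basic Debian repo\n";
let mut parser = APTListFileParser::new(BufReader::new(&data[..]));
let repos = parser.parse_repositories().unwrap();
assert_eq!(repos.len(), 1);
assert_eq!(repos[0].suites, vec!["bullseye".to_string()]);
assert_eq!(repos[0].components, vec!["main".to_string(), "contrib".to_string()]);
assert_eq!(repos[0].comment, " basic Debian repo\n");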

View File

@ -0,0 +1,203 @@
use std::io::BufRead;
use std::iter::Iterator;
use anyhow::{bail, Error};
use crate::repositories::{
APTRepository, APTRepositoryFileType, APTRepositoryOption, APTRepositoryPackageType,
};
use super::APTRepositoryParser;
pub struct APTSourcesFileParser<R: BufRead> {
input: R,
stanza_nr: usize,
comment: String,
}
/// See `man sources.list` and `man deb822` for the format specification.
impl<R: BufRead> APTSourcesFileParser<R> {
pub fn new(reader: R) -> Self {
Self {
input: reader,
stanza_nr: 1,
comment: String::new(),
}
}
/// Based on APT's `StringToBool` in `strutl.cc`
fn string_to_bool(string: &str, default: bool) -> bool {
let string = string.trim_matches(|c| char::is_ascii_whitespace(&c));
let string = string.to_lowercase();
match &string[..] {
"1" | "yes" | "true" | "with" | "on" | "enable" => true,
"0" | "no" | "false" | "without" | "off" | "disable" => false,
_ => default,
}
}
/// Checks if `key` is valid according to deb822
fn valid_key(key: &str) -> bool {
if key.starts_with('-') {
return false;
}
key.chars().all(|c| matches!(c, '!'..='9' | ';'..='~'))
}
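// Behavior sketch (examples are made up, not from the original file):
// `string_to_bool("Yes", false)` and `string_to_bool(" on ", false)` both return
// `true`, while an unknown word like "maybe" falls back to the given default.
// `valid_key("Signed-By")` is `true`, whereas `valid_key("-foo")` and
// `valid_key("has space")` are `false`.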
/// Try parsing a repository in stanza format from `lines`.
///
/// Returns `Ok(None)` when no stanza can be found.
///
/// Comments are added to `self.comments`. If a stanza can be found,
/// `self.comment` is added to the repository's `comment` property.
///
/// Fully commented out stanzas are treated as comments.
fn parse_stanza(&mut self, lines: &str) -> Result<Option<APTRepository>, Error> {
let mut repo = APTRepository::new(APTRepositoryFileType::Sources);
// Values may be folded into multiple lines.
// Those lines have to start with a space or a tab.
let lines = lines.replace("\n ", " ");
let lines = lines.replace("\n\t", " ");
let mut got_something = false;
for line in lines.lines() {
let line = line.trim_matches(|c| char::is_ascii_whitespace(&c));
if line.is_empty() {
continue;
}
if let Some(commented_out) = line.strip_prefix('#') {
self.comment = format!("{}{}\n", self.comment, commented_out);
continue;
}
if let Some(mid) = line.find(':') {
let (key, value_str) = line.split_at(mid);
let value_str = &value_str[1..];
let key = key.trim_matches(|c| char::is_ascii_whitespace(&c));
if key.is_empty() {
bail!("option has no key: '{}'", line);
}
if value_str.is_empty() {
// ignored by APT
eprintln!("option has no value: '{}'", line);
continue;
}
if !Self::valid_key(key) {
// ignored by APT
eprintln!("option with invalid key '{}'", key);
continue;
}
let values: Vec<String> = value_str
.split_ascii_whitespace()
.map(|value| value.to_string())
.collect();
match &key.to_lowercase()[..] {
"types" => {
if !repo.types.is_empty() {
eprintln!("key 'Types' was defined twice");
}
let mut types = Vec::<APTRepositoryPackageType>::new();
for package_type in values {
types.push((&package_type[..]).try_into()?);
}
repo.types = types;
}
"uris" => {
if !repo.uris.is_empty() {
eprintln!("key 'URIs' was defined twice");
}
repo.uris = values;
}
"suites" => {
if !repo.suites.is_empty() {
eprintln!("key 'Suites' was defined twice");
}
repo.suites = values;
}
"components" => {
if !repo.components.is_empty() {
eprintln!("key 'Components' was defined twice");
}
repo.components = values;
}
"enabled" => {
repo.set_enabled(Self::string_to_bool(value_str, true));
}
_ => repo.options.push(APTRepositoryOption {
key: key.to_string(),
values,
}),
}
} else {
bail!("got invalid line - '{:?}'", line);
}
got_something = true;
}
if !got_something {
return Ok(None);
}
repo.comment = std::mem::take(&mut self.comment);
Ok(Some(repo))
}
/// Helper function for `parse_repositories`.
fn try_parse_stanza(
&mut self,
lines: &str,
repos: &mut Vec<APTRepository>,
) -> Result<(), Error> {
match self.parse_stanza(lines) {
Ok(Some(repo)) => {
repos.push(repo);
self.stanza_nr += 1;
}
Ok(None) => (),
Err(err) => bail!("malformed entry in stanza {} - {}", self.stanza_nr, err),
}
Ok(())
}
}
impl<R: BufRead> APTRepositoryParser for APTSourcesFileParser<R> {
fn parse_repositories(&mut self) -> Result<Vec<APTRepository>, Error> {
let mut repos = vec![];
let mut lines = String::new();
loop {
let old_length = lines.len();
match self.input.read_line(&mut lines) {
Err(err) => bail!("input error - {}", err),
Ok(0) => {
self.try_parse_stanza(&lines[..], &mut repos)?;
break;
}
Ok(_) => {
if (lines[old_length..])
.trim_matches(|c| char::is_ascii_whitespace(&c))
.is_empty()
{
// detected end of stanza
self.try_parse_stanza(&lines[..], &mut repos)?;
lines.clear();
}
}
}
}
Ok(repos)
}
}
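A minimal usage sketch for the deb822 parser above, again assuming the `APTRepositoryParser` trait is in scope and using a made-up in-memory stanza:

use std::io::BufReader;

let data = b"Types: deb\nURIs: http://deb.debian.org/debian\nSuites: bullseye\nComponents: main\nEnabled: no\n";
let mut parser = APTSourcesFileParser::new(BufReader::new(&data[..]));
let repos = parser.parse_repositories().unwrap();
assert_eq!(repos.len(), 1);
assert!(!repos[0].enabled);
assert_eq!(repos[0].uris, vec!["http://deb.debian.org/debian".to_string()]);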

View File

@ -0,0 +1,191 @@
use std::collections::BTreeMap;
use std::path::PathBuf;
use anyhow::{bail, Error};
mod repository;
pub use repository::{
APTRepository, APTRepositoryFileType, APTRepositoryOption, APTRepositoryPackageType,
};
mod file;
pub use file::{APTRepositoryFile, APTRepositoryFileError, APTRepositoryInfo};
mod release;
pub use release::{get_current_release_codename, DebianCodename};
mod standard;
pub use standard::{APTRepositoryHandle, APTStandardRepository};
const APT_SOURCES_LIST_FILENAME: &str = "/etc/apt/sources.list";
const APT_SOURCES_LIST_DIRECTORY: &str = "/etc/apt/sources.list.d/";
/// Calculates a common digest for successfully parsed repository files.
///
/// The digest is invariant with respect to file order.
///
/// Files without a digest are ignored.
fn common_digest(files: &[APTRepositoryFile]) -> [u8; 32] {
let mut digests = BTreeMap::new();
for file in files.iter() {
digests.insert(file.path.clone(), &file.digest);
}
let mut common_raw = Vec::<u8>::with_capacity(digests.len() * 32);
for digest in digests.values() {
match digest {
Some(digest) => common_raw.extend_from_slice(&digest[..]),
None => (),
}
}
openssl::sha::sha256(&common_raw[..])
}
/// Provides additional information about the repositories.
///
/// The kind of information can be:
/// `warnings` for bad suites.
/// `ignore-pre-upgrade-warning` when the next stable suite is configured.
/// `badge` for official URIs.
pub fn check_repositories(
files: &[APTRepositoryFile],
current_suite: DebianCodename,
) -> Vec<APTRepositoryInfo> {
let mut infos = vec![];
for file in files.iter() {
infos.append(&mut file.check_suites(current_suite));
infos.append(&mut file.check_uris());
}
infos
}
/// Get the repository associated to the handle and the path where it is usually configured.
pub fn get_standard_repository(
handle: APTRepositoryHandle,
product: &str,
suite: DebianCodename,
) -> (APTRepository, String) {
let repo = handle.to_repository(product, &suite.to_string());
let path = handle.path(product);
(repo, path)
}
/// Return handles for standard Proxmox repositories and their status, where
/// `None` means not configured, and `Some(bool)` indicates enabled or disabled.
pub fn standard_repositories(
files: &[APTRepositoryFile],
product: &str,
suite: DebianCodename,
) -> Vec<APTStandardRepository> {
let mut result = vec![
APTStandardRepository::from(APTRepositoryHandle::Enterprise),
APTStandardRepository::from(APTRepositoryHandle::NoSubscription),
APTStandardRepository::from(APTRepositoryHandle::Test),
];
if product == "pve" {
result.append(&mut vec![
APTStandardRepository::from(APTRepositoryHandle::CephQuincy),
APTStandardRepository::from(APTRepositoryHandle::CephQuincyTest),
APTStandardRepository::from(APTRepositoryHandle::CephPacific),
APTStandardRepository::from(APTRepositoryHandle::CephPacificTest),
APTStandardRepository::from(APTRepositoryHandle::CephOctopus),
APTStandardRepository::from(APTRepositoryHandle::CephOctopusTest),
]);
}
for file in files.iter() {
for repo in file.repositories.iter() {
for entry in result.iter_mut() {
if entry.status == Some(true) {
continue;
}
if repo.is_referenced_repository(entry.handle, product, &suite.to_string())
|| repo.is_referenced_repository(
entry.handle,
product,
&suite.next().unwrap().to_string(),
) {
entry.status = Some(repo.enabled);
}
}
}
}
result
}
/// Returns all APT repositories configured in `/etc/apt/sources.list` and
/// in `/etc/apt/sources.list.d` including disabled repositories.
///
/// Returns the successfully parsed files, a list of errors for files that could
/// not be read or parsed, and a common digest for the successfully parsed files.
///
/// The digest is guaranteed to be set for each successfully parsed file.
pub fn repositories() -> Result<
(
Vec<APTRepositoryFile>,
Vec<APTRepositoryFileError>,
[u8; 32],
),
Error,
> {
let to_result = |files: Vec<APTRepositoryFile>, errors: Vec<APTRepositoryFileError>| {
let common_digest = common_digest(&files);
(files, errors, common_digest)
};
let mut files = vec![];
let mut errors = vec![];
let sources_list_path = PathBuf::from(APT_SOURCES_LIST_FILENAME);
let sources_list_d_path = PathBuf::from(APT_SOURCES_LIST_DIRECTORY);
match APTRepositoryFile::new(sources_list_path) {
Ok(Some(mut file)) => match file.parse() {
Ok(()) => files.push(file),
Err(err) => errors.push(err),
},
_ => bail!("internal error with '{}'", APT_SOURCES_LIST_FILENAME),
}
if !sources_list_d_path.exists() {
return Ok(to_result(files, errors));
}
if !sources_list_d_path.is_dir() {
errors.push(APTRepositoryFileError {
path: APT_SOURCES_LIST_DIRECTORY.to_string(),
error: "not a directory!".to_string(),
});
return Ok(to_result(files, errors));
}
for entry in std::fs::read_dir(sources_list_d_path)? {
let path = entry?.path();
match APTRepositoryFile::new(path) {
Ok(Some(mut file)) => match file.parse() {
Ok(()) => {
if file.digest.is_none() {
bail!("internal error - digest not set");
}
files.push(file);
}
Err(err) => errors.push(err),
},
Ok(None) => (),
Err(err) => errors.push(err),
}
}
Ok(to_result(files, errors))
}
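A sketch of how these module-level helpers fit together (inside a function returning `Result<(), Error>`; the exact call sites in the products may differ):

let (files, errors, digest) = repositories()?;
for err in &errors {
    eprintln!("could not read or parse {}: {}", err.path, err.error);
}
let current = get_current_release_codename()?;
let infos = check_repositories(&files, current);
let standard = standard_repositories(&files, "pve", current);
println!("digest: {}", hex::encode(digest));
println!("{} infos, {} standard repositories", infos.len(), standard.len());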

View File

@ -0,0 +1,100 @@
use std::fmt::Display;
use std::io::{BufRead, BufReader};
use anyhow::{bail, format_err, Error};
/// The code names of Debian releases. Does not include `sid`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum DebianCodename {
Lenny = 5,
Squeeze,
Wheezy,
Jessie,
Stretch,
Buster,
Bullseye,
Bookworm,
Trixie,
}
impl DebianCodename {
pub fn next(&self) -> Option<Self> {
match (*self as u8 + 1).try_into() {
Ok(codename) => Some(codename),
Err(_) => None,
}
}
}
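// Sketch (not in the original file): the numeric discriminants make `next()` a
// simple increment, e.g.
//     assert!(DebianCodename::Bullseye.next() == Some(DebianCodename::Bookworm));
//     assert!(DebianCodename::Trixie.next().is_none());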
impl TryFrom<&str> for DebianCodename {
type Error = Error;
fn try_from(string: &str) -> Result<Self, Error> {
match string {
"lenny" => Ok(DebianCodename::Lenny),
"squeeze" => Ok(DebianCodename::Squeeze),
"wheezy" => Ok(DebianCodename::Wheezy),
"jessie" => Ok(DebianCodename::Jessie),
"stretch" => Ok(DebianCodename::Stretch),
"buster" => Ok(DebianCodename::Buster),
"bullseye" => Ok(DebianCodename::Bullseye),
"bookworm" => Ok(DebianCodename::Bookworm),
"trixie" => Ok(DebianCodename::Trixie),
_ => bail!("unknown Debian code name '{}'", string),
}
}
}
impl TryFrom<u8> for DebianCodename {
type Error = Error;
fn try_from(number: u8) -> Result<Self, Error> {
match number {
5 => Ok(DebianCodename::Lenny),
6 => Ok(DebianCodename::Squeeze),
7 => Ok(DebianCodename::Wheezy),
8 => Ok(DebianCodename::Jessie),
9 => Ok(DebianCodename::Stretch),
10 => Ok(DebianCodename::Buster),
11 => Ok(DebianCodename::Bullseye),
12 => Ok(DebianCodename::Bookworm),
13 => Ok(DebianCodename::Trixie),
_ => bail!("unknown Debian release number '{}'", number),
}
}
}
impl Display for DebianCodename {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
DebianCodename::Lenny => write!(f, "lenny"),
DebianCodename::Squeeze => write!(f, "squeeze"),
DebianCodename::Wheezy => write!(f, "wheezy"),
DebianCodename::Jessie => write!(f, "jessie"),
DebianCodename::Stretch => write!(f, "stretch"),
DebianCodename::Buster => write!(f, "buster"),
DebianCodename::Bullseye => write!(f, "bullseye"),
DebianCodename::Bookworm => write!(f, "bookworm"),
DebianCodename::Trixie => write!(f, "trixie"),
}
}
}
/// Read the `VERSION_CODENAME` from `/etc/os-release`.
pub fn get_current_release_codename() -> Result<DebianCodename, Error> {
let raw = std::fs::read("/etc/os-release")
.map_err(|err| format_err!("unable to read '/etc/os-release' - {}", err))?;
let reader = BufReader::new(&*raw);
for line in reader.lines() {
let line = line.map_err(|err| format_err!("unable to read '/etc/os-release' - {}", err))?;
if let Some(codename) = line.strip_prefix("VERSION_CODENAME=") {
let codename = codename.trim_matches(&['"', '\''][..]);
return codename.try_into();
}
}
bail!("unable to parse codename from '/etc/os-release'");
}

View File

@ -0,0 +1,555 @@
use std::fmt::Display;
use std::io::{BufRead, BufReader, Write};
use std::path::PathBuf;
use anyhow::{bail, format_err, Error};
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
use crate::repositories::standard::APTRepositoryHandle;
#[api]
#[derive(Debug, Copy, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum APTRepositoryFileType {
/// One-line-style format
List,
/// DEB822-style format
Sources,
}
impl TryFrom<&str> for APTRepositoryFileType {
type Error = Error;
fn try_from(string: &str) -> Result<Self, Error> {
match string {
"list" => Ok(APTRepositoryFileType::List),
"sources" => Ok(APTRepositoryFileType::Sources),
_ => bail!("invalid file type '{}'", string),
}
}
}
impl Display for APTRepositoryFileType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
APTRepositoryFileType::List => write!(f, "list"),
APTRepositoryFileType::Sources => write!(f, "sources"),
}
}
}
#[api]
#[derive(Debug, Copy, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
pub enum APTRepositoryPackageType {
/// Debian package
Deb,
/// Debian source package
DebSrc,
}
impl TryFrom<&str> for APTRepositoryPackageType {
type Error = Error;
fn try_from(string: &str) -> Result<Self, Error> {
match string {
"deb" => Ok(APTRepositoryPackageType::Deb),
"deb-src" => Ok(APTRepositoryPackageType::DebSrc),
_ => bail!("invalid package type '{}'", string),
}
}
}
impl Display for APTRepositoryPackageType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
APTRepositoryPackageType::Deb => write!(f, "deb"),
APTRepositoryPackageType::DebSrc => write!(f, "deb-src"),
}
}
}
#[api(
properties: {
Key: {
description: "Option key.",
type: String,
},
Values: {
description: "Option values.",
type: Array,
items: {
description: "Value.",
type: String,
},
},
},
)]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")] // for consistency
/// Additional options for an APT repository.
/// Used for both single- and multi-value options.
pub struct APTRepositoryOption {
/// Option key.
pub key: String,
/// Option value(s).
pub values: Vec<String>,
}
#[api(
properties: {
Types: {
description: "List of package types.",
type: Array,
items: {
type: APTRepositoryPackageType,
},
},
URIs: {
description: "List of repository URIs.",
type: Array,
items: {
description: "Repository URI.",
type: String,
},
},
Suites: {
description: "List of distributions.",
type: Array,
items: {
description: "Package distribution.",
type: String,
},
},
Components: {
description: "List of repository components.",
type: Array,
items: {
description: "Repository component.",
type: String,
},
},
Options: {
type: Array,
optional: true,
items: {
type: APTRepositoryOption,
},
},
Comment: {
description: "Associated comment.",
type: String,
optional: true,
},
FileType: {
type: APTRepositoryFileType,
},
Enabled: {
description: "Whether the repository is enabled or not.",
type: Boolean,
},
},
)]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes an APT repository.
pub struct APTRepository {
/// List of package types.
#[serde(skip_serializing_if = "Vec::is_empty")]
pub types: Vec<APTRepositoryPackageType>,
/// List of repository URIs.
#[serde(skip_serializing_if = "Vec::is_empty")]
#[serde(rename = "URIs")]
pub uris: Vec<String>,
/// List of package distributions.
#[serde(skip_serializing_if = "Vec::is_empty")]
pub suites: Vec<String>,
/// List of repository components.
#[serde(skip_serializing_if = "Vec::is_empty")]
pub components: Vec<String>,
/// Additional options.
#[serde(skip_serializing_if = "Vec::is_empty")]
pub options: Vec<APTRepositoryOption>,
/// Associated comment.
#[serde(skip_serializing_if = "String::is_empty")]
pub comment: String,
/// Format of the defining file.
pub file_type: APTRepositoryFileType,
/// Whether the repository is enabled or not.
pub enabled: bool,
}
impl APTRepository {
/// Creates an empty repository.
pub fn new(file_type: APTRepositoryFileType) -> Self {
Self {
types: vec![],
uris: vec![],
suites: vec![],
components: vec![],
options: vec![],
comment: String::new(),
file_type,
enabled: true,
}
}
/// Changes the `enabled` flag and makes sure the `Enabled` option for
/// `APTRepositoryFileType::Sources` repositories is updated too.
pub fn set_enabled(&mut self, enabled: bool) {
self.enabled = enabled;
if self.file_type == APTRepositoryFileType::Sources {
let enabled_string = match enabled {
true => "true".to_string(),
false => "false".to_string(),
};
for option in self.options.iter_mut() {
if option.key == "Enabled" {
option.values = vec![enabled_string];
return;
}
}
self.options.push(APTRepositoryOption {
key: "Enabled".to_string(),
values: vec![enabled_string],
});
}
}
/// Makes sure that all basic properties of a repository are present and
/// not obviously invalid.
pub fn basic_check(&self) -> Result<(), Error> {
if self.types.is_empty() {
bail!("missing package type(s)");
}
if self.uris.is_empty() {
bail!("missing URI(s)");
}
if self.suites.is_empty() {
bail!("missing suite(s)");
}
for uri in self.uris.iter() {
if !uri.contains(':') || uri.len() < 3 {
bail!("invalid URI: '{}'", uri);
}
}
for suite in self.suites.iter() {
if !suite.ends_with('/') && self.components.is_empty() {
bail!("missing component(s)");
} else if suite.ends_with('/') && !self.components.is_empty() {
bail!("absolute suite '{}' does not allow component(s)", suite);
}
}
if self.file_type == APTRepositoryFileType::List {
if self.types.len() > 1 {
bail!("more than one package type");
}
if self.uris.len() > 1 {
bail!("more than one URI");
}
if self.suites.len() > 1 {
bail!("more than one suite");
}
}
Ok(())
}
/// Checks if the repository is the one referenced by the handle.
pub fn is_referenced_repository(
&self,
handle: APTRepositoryHandle,
product: &str,
suite: &str,
) -> bool {
let (package_type, handle_uris, component) = handle.info(product);
let mut found_uri = false;
for uri in self.uris.iter() {
let uri = uri.trim_end_matches('/');
found_uri = found_uri || handle_uris.iter().any(|handle_uri| handle_uri == uri);
}
self.types.contains(&package_type)
&& found_uri
// using contains would require a &String
&& self.suites.iter().any(|self_suite| self_suite == suite)
&& self.components.contains(&component)
}
/// Guess the origin from the repository's URIs.
///
/// Intended to be used as a fallback for `get_cached_origin()`.
pub fn origin_from_uris(&self) -> Option<String> {
for uri in self.uris.iter() {
if let Some(host) = host_from_uri(uri) {
if host == "proxmox.com" || host.ends_with(".proxmox.com") {
return Some("Proxmox".to_string());
}
if host == "debian.org" || host.ends_with(".debian.org") {
return Some("Debian".to_string());
}
}
}
None
}
/// Get the `Origin:` value from a cached InRelease file.
pub fn get_cached_origin(&self) -> Result<Option<String>, Error> {
for uri in self.uris.iter() {
for suite in self.suites.iter() {
let file = in_release_filename(uri, suite);
if !file.exists() {
continue;
}
let raw = std::fs::read(&file)
.map_err(|err| format_err!("unable to read {:?} - {}", file, err))?;
let reader = BufReader::new(&*raw);
for line in reader.lines() {
let line =
line.map_err(|err| format_err!("unable to read {:?} - {}", file, err))?;
if let Some(value) = line.strip_prefix("Origin:") {
return Ok(Some(
value
.trim_matches(|c| char::is_ascii_whitespace(&c))
.to_string(),
));
}
}
}
}
Ok(None)
}
/// Writes a repository in the corresponding format followed by a blank.
///
/// Expects that `basic_check()` for the repository was successful.
pub fn write(&self, w: &mut dyn Write) -> Result<(), Error> {
match self.file_type {
APTRepositoryFileType::List => write_one_line(self, w),
APTRepositoryFileType::Sources => write_stanza(self, w),
}
}
}
/// Get the path to the cached InRelease file.
fn in_release_filename(uri: &str, suite: &str) -> PathBuf {
let mut path = PathBuf::from(&crate::config::get().dir_state);
path.push(&crate::config::get().dir_state_lists);
let filename = uri_to_filename(uri);
path.push(format!(
"{}_dists_{}_InRelease",
filename,
suite.replace('/', "_"), // e.g. for buster/updates
));
path
}
/// See APT's `URItoFileName` in `contrib/strutl.cc`
fn uri_to_filename(uri: &str) -> String {
let mut filename = uri;
if let Some(begin) = filename.find("://") {
filename = &filename[(begin + 3)..];
}
if uri.starts_with("http://") || uri.starts_with("https://") {
if let Some(begin) = filename.find('@') {
filename = &filename[(begin + 1)..];
}
}
// APT seems to only strip one final slash, so do the same
filename = filename.strip_suffix('/').unwrap_or(filename);
let encode_chars = "\\|{}[]<>\"^~_=!@#$%^&*";
let mut encoded = String::with_capacity(filename.len());
for b in filename.as_bytes().iter() {
if *b <= 0x20 || *b >= 0x7F || encode_chars.contains(*b as char) {
let mut hex = [0u8; 2];
// unwrap: we're hex-encoding a single byte into a 2-byte slice
hex::encode_to_slice(&[*b], &mut hex).unwrap();
let hex = unsafe { std::str::from_utf8_unchecked(&hex) };
encoded = format!("{}%{}", encoded, hex);
} else {
encoded.push(*b as char);
}
}
encoded.replace('/', "_")
}
/// Get the host part from a given URI.
fn host_from_uri(uri: &str) -> Option<&str> {
let host = uri.strip_prefix("http")?;
let host = host.strip_prefix('s').unwrap_or(host);
let mut host = host.strip_prefix("://")?;
if let Some(end) = host.find('/') {
host = &host[..end];
}
if let Some(begin) = host.find('@') {
host = &host[(begin + 1)..];
}
if let Some(end) = host.find(':') {
host = &host[..end];
}
Some(host)
}
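// Behavior sketch (the URI is a made-up example):
//     assert_eq!(
//         host_from_uri("https://user@enterprise.proxmox.com:443/debian/pve"),
//         Some("enterprise.proxmox.com")
//     );
//     assert_eq!(host_from_uri("file:///some/path"), None);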
/// Strips existing double quotes from the string first, and then adds double quotes at
/// the beginning and end if the `string` contains ASCII whitespace that is not
/// enclosed in `[]`.
fn quote_for_one_line(string: &str) -> String {
let mut add_quotes = false;
let mut wait_for_bracket = false;
// easier to just quote the whole string, so ignore pre-existing quotes
// currently, parsing removes them anyway, but being on the safe side is rather cheap
let string = string.replace('"', "");
for c in string.chars() {
if wait_for_bracket {
if c == ']' {
wait_for_bracket = false;
}
continue;
}
if char::is_ascii_whitespace(&c) {
add_quotes = true;
break;
}
if c == '[' {
wait_for_bracket = true;
}
}
match add_quotes {
true => format!("\"{}\"", string),
false => string,
}
}
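// Sketch (not from the original file): whitespace inside brackets does not
// trigger quoting, whitespace outside does.
//     assert_eq!(quote_for_one_line("[arch=amd64 armhf]"), "[arch=amd64 armhf]");
//     assert_eq!(quote_for_one_line("some dir/path"), "\"some dir/path\"");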
/// Writes a repository in one-line format followed by a blank line.
///
/// Expects that `repo.file_type == APTRepositoryFileType::List`.
///
/// Expects that `basic_check()` for the repository was successful.
fn write_one_line(repo: &APTRepository, w: &mut dyn Write) -> Result<(), Error> {
if repo.file_type != APTRepositoryFileType::List {
bail!("not a .list repository");
}
if !repo.comment.is_empty() {
for line in repo.comment.lines() {
writeln!(w, "#{}", line)?;
}
}
if !repo.enabled {
write!(w, "# ")?;
}
write!(w, "{} ", repo.types[0])?;
if !repo.options.is_empty() {
write!(w, "[ ")?;
for option in repo.options.iter() {
let option = quote_for_one_line(&format!("{}={}", option.key, option.values.join(",")));
write!(w, "{} ", option)?;
}
write!(w, "] ")?;
};
write!(w, "{} ", quote_for_one_line(&repo.uris[0]))?;
write!(w, "{} ", quote_for_one_line(&repo.suites[0]))?;
writeln!(
w,
"{}",
repo.components
.iter()
.map(|comp| quote_for_one_line(comp))
.collect::<Vec<String>>()
.join(" ")
)?;
writeln!(w)?;
Ok(())
}
/// Writes a single stanza followed by a blank line.
///
/// Expects that `repo.file_type == APTRepositoryFileType::Sources`.
fn write_stanza(repo: &APTRepository, w: &mut dyn Write) -> Result<(), Error> {
if repo.file_type != APTRepositoryFileType::Sources {
bail!("not a .sources repository");
}
if !repo.comment.is_empty() {
for line in repo.comment.lines() {
writeln!(w, "#{}", line)?;
}
}
write!(w, "Types:")?;
repo.types
.iter()
.try_for_each(|package_type| write!(w, " {}", package_type))?;
writeln!(w)?;
writeln!(w, "URIs: {}", repo.uris.join(" "))?;
writeln!(w, "Suites: {}", repo.suites.join(" "))?;
if !repo.components.is_empty() {
writeln!(w, "Components: {}", repo.components.join(" "))?;
}
for option in repo.options.iter() {
writeln!(w, "{}: {}", option.key, option.values.join(" "))?;
}
writeln!(w)?;
Ok(())
}
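// Output sketch for a repository with one `deb` type, URI
// `http://deb.debian.org/debian`, suite `bullseye` and component `main`
// (the values are made up): `write_one_line` emits
//     deb http://deb.debian.org/debian bullseye main
// followed by a blank line, while `write_stanza` emits
//     Types: deb
//     URIs: http://deb.debian.org/debian
//     Suites: bullseye
//     Components: main
// followed by a blank line.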
#[test]
fn test_uri_to_filename() {
let filename = uri_to_filename("https://some_host/some/path");
assert_eq!(filename, "some%5fhost_some_path".to_string());
}

View File

@ -0,0 +1,273 @@
use std::fmt::Display;
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use crate::repositories::repository::{
APTRepository, APTRepositoryFileType, APTRepositoryPackageType,
};
use proxmox_schema::api;
#[api(
properties: {
handle: {
description: "Handle referencing a standard repository.",
type: String,
},
},
)]
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
/// Reference to a standard repository and configuration status.
pub struct APTStandardRepository {
/// Handle referencing a standard repository.
pub handle: APTRepositoryHandle,
/// Configuration status of the associated repository.
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<bool>,
/// Display name of the repository.
pub name: String,
/// Description of the repository.
pub description: String,
}
#[api]
#[derive(Debug, Copy, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
/// Handles for Proxmox repositories.
pub enum APTRepositoryHandle {
/// The enterprise repository for production use.
Enterprise,
/// The repository that can be used without subscription.
NoSubscription,
/// The test repository.
Test,
/// Ceph Quincy repository.
CephQuincy,
/// Ceph Quincy test repository.
CephQuincyTest,
/// Ceph Pacific repository.
CephPacific,
/// Ceph Pacific test repository.
CephPacificTest,
/// Ceph Octopus repository.
CephOctopus,
/// Ceph Octopus test repository.
CephOctopusTest,
}
impl From<APTRepositoryHandle> for APTStandardRepository {
fn from(handle: APTRepositoryHandle) -> Self {
APTStandardRepository {
handle,
status: None,
name: handle.name(),
description: handle.description(),
}
}
}
impl TryFrom<&str> for APTRepositoryHandle {
type Error = Error;
fn try_from(string: &str) -> Result<Self, Error> {
match string {
"enterprise" => Ok(APTRepositoryHandle::Enterprise),
"no-subscription" => Ok(APTRepositoryHandle::NoSubscription),
"test" => Ok(APTRepositoryHandle::Test),
"ceph-quincy" => Ok(APTRepositoryHandle::CephQuincy),
"ceph-quincy-test" => Ok(APTRepositoryHandle::CephQuincyTest),
"ceph-pacific" => Ok(APTRepositoryHandle::CephPacific),
"ceph-pacific-test" => Ok(APTRepositoryHandle::CephPacificTest),
"ceph-octopus" => Ok(APTRepositoryHandle::CephOctopus),
"ceph-octopus-test" => Ok(APTRepositoryHandle::CephOctopusTest),
_ => bail!("unknown repository handle '{}'", string),
}
}
}
impl Display for APTRepositoryHandle {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
APTRepositoryHandle::Enterprise => write!(f, "enterprise"),
APTRepositoryHandle::NoSubscription => write!(f, "no-subscription"),
APTRepositoryHandle::Test => write!(f, "test"),
APTRepositoryHandle::CephQuincy => write!(f, "ceph-quincy"),
APTRepositoryHandle::CephQuincyTest => write!(f, "ceph-quincy-test"),
APTRepositoryHandle::CephPacific => write!(f, "ceph-pacific"),
APTRepositoryHandle::CephPacificTest => write!(f, "ceph-pacific-test"),
APTRepositoryHandle::CephOctopus => write!(f, "ceph-octopus"),
APTRepositoryHandle::CephOctopusTest => write!(f, "ceph-octopus-test"),
}
}
}
impl APTRepositoryHandle {
/// Get the description for the repository.
pub fn description(self) -> String {
match self {
APTRepositoryHandle::Enterprise => {
"This is the default, stable, and recommended repository, available for all \
Proxmox subscription users."
}
APTRepositoryHandle::NoSubscription => {
"This is the recommended repository for testing and non-production use. \
Its packages are not as heavily tested and validated as the production ready \
enterprise repository. You don't need a subscription key to access this repository."
}
APTRepositoryHandle::Test => {
"This repository contains the latest packages and is primarily used for test labs \
and by developers to test new features."
}
APTRepositoryHandle::CephQuincy => {
"This repository holds the main Proxmox Ceph Quincy packages."
}
APTRepositoryHandle::CephQuincyTest => {
"This repository contains the Ceph Quincy packages before they are moved to the \
main repository."
}
APTRepositoryHandle::CephPacific => {
"This repository holds the main Proxmox Ceph Pacific packages."
}
APTRepositoryHandle::CephPacificTest => {
"This repository contains the Ceph Pacific packages before they are moved to the \
main repository."
}
APTRepositoryHandle::CephOctopus => {
"This repository holds the main Proxmox Ceph Octopus packages."
}
APTRepositoryHandle::CephOctopusTest => {
"This repository contains the Ceph Octopus packages before they are moved to the \
main repository."
}
}
.to_string()
}
/// Get the display name of the repository.
pub fn name(self) -> String {
match self {
APTRepositoryHandle::Enterprise => "Enterprise",
APTRepositoryHandle::NoSubscription => "No-Subscription",
APTRepositoryHandle::Test => "Test",
APTRepositoryHandle::CephQuincy => "Ceph Quincy",
APTRepositoryHandle::CephQuincyTest => "Ceph Quincy Test",
APTRepositoryHandle::CephPacific => "Ceph Pacific",
APTRepositoryHandle::CephPacificTest => "Ceph Pacific Test",
APTRepositoryHandle::CephOctopus => "Ceph Octopus",
APTRepositoryHandle::CephOctopusTest => "Ceph Octopus Test",
}
.to_string()
}
/// Get the standard file path for the repository referenced by the handle.
pub fn path(self, product: &str) -> String {
match self {
APTRepositoryHandle::Enterprise => {
format!("/etc/apt/sources.list.d/{}-enterprise.list", product)
}
APTRepositoryHandle::NoSubscription => "/etc/apt/sources.list".to_string(),
APTRepositoryHandle::Test => "/etc/apt/sources.list".to_string(),
APTRepositoryHandle::CephQuincy => "/etc/apt/sources.list.d/ceph.list".to_string(),
APTRepositoryHandle::CephQuincyTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
APTRepositoryHandle::CephPacific => "/etc/apt/sources.list.d/ceph.list".to_string(),
APTRepositoryHandle::CephPacificTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
APTRepositoryHandle::CephOctopus => "/etc/apt/sources.list.d/ceph.list".to_string(),
APTRepositoryHandle::CephOctopusTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
}
}
/// Get package type, possible URIs and the component associated with the handle.
///
/// The first URI is the preferred one.
pub fn info(self, product: &str) -> (APTRepositoryPackageType, Vec<String>, String) {
match self {
APTRepositoryHandle::Enterprise => (
APTRepositoryPackageType::Deb,
match product {
"pve" => vec![
"https://enterprise.proxmox.com/debian/pve".to_string(),
"https://enterprise.proxmox.com/debian".to_string(),
],
_ => vec![format!("https://enterprise.proxmox.com/debian/{}", product)],
},
format!("{}-enterprise", product),
),
APTRepositoryHandle::NoSubscription => (
APTRepositoryPackageType::Deb,
match product {
"pve" => vec![
"http://download.proxmox.com/debian/pve".to_string(),
"http://download.proxmox.com/debian".to_string(),
],
_ => vec![format!("http://download.proxmox.com/debian/{}", product)],
},
format!("{}-no-subscription", product),
),
APTRepositoryHandle::Test => (
APTRepositoryPackageType::Deb,
match product {
"pve" => vec![
"http://download.proxmox.com/debian/pve".to_string(),
"http://download.proxmox.com/debian".to_string(),
],
_ => vec![format!("http://download.proxmox.com/debian/{}", product)],
},
format!("{}test", product),
),
APTRepositoryHandle::CephQuincy => (
APTRepositoryPackageType::Deb,
vec!["http://download.proxmox.com/debian/ceph-quincy".to_string()],
"main".to_string(),
),
APTRepositoryHandle::CephQuincyTest => (
APTRepositoryPackageType::Deb,
vec!["http://download.proxmox.com/debian/ceph-quincy".to_string()],
"test".to_string(),
),
APTRepositoryHandle::CephPacific => (
APTRepositoryPackageType::Deb,
vec!["http://download.proxmox.com/debian/ceph-pacific".to_string()],
"main".to_string(),
),
APTRepositoryHandle::CephPacificTest => (
APTRepositoryPackageType::Deb,
vec!["http://download.proxmox.com/debian/ceph-pacific".to_string()],
"test".to_string(),
),
APTRepositoryHandle::CephOctopus => (
APTRepositoryPackageType::Deb,
vec!["http://download.proxmox.com/debian/ceph-octopus".to_string()],
"main".to_string(),
),
APTRepositoryHandle::CephOctopusTest => (
APTRepositoryPackageType::Deb,
vec!["http://download.proxmox.com/debian/ceph-octopus".to_string()],
"test".to_string(),
),
}
}
/// Get the standard repository referenced by the handle.
///
/// A URI in the result is not '/'-terminated (under the assumption that no valid
/// product name is).
pub fn to_repository(self, product: &str, suite: &str) -> APTRepository {
let (package_type, uris, component) = self.info(product);
APTRepository {
types: vec![package_type],
uris: vec![uris.into_iter().next().unwrap()],
suites: vec![suite.to_string()],
components: vec![component],
options: vec![],
comment: String::new(),
file_type: APTRepositoryFileType::List,
enabled: true,
}
}
}
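A short sketch of how a handle expands into a concrete repository (the product and suite values are made up; see `info()` above for how the preferred URI is picked):

let repo = APTRepositoryHandle::Enterprise.to_repository("pbs", "bullseye");
assert_eq!(repo.uris, vec!["https://enterprise.proxmox.com/debian/pbs".to_string()]);
assert_eq!(repo.suites, vec!["bullseye".to_string()]);
assert_eq!(repo.components, vec!["pbs-enterprise".to_string()]);
assert!(repo.enabled);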

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@ -0,0 +1,38 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Architectures: amd64
Codename: bullseye
Components: pvetest
Date: Mon, 28 Jun 2021 18:05:11 +0000
Label: Proxmox Debian Repository
Origin: Proxmox
Suite: stable
MD5Sum:
9bea8f79b808979720721f86da15b8c5 163424 pvetest/binary-amd64/Packages
a3286322ee6bfd3d5fec80d7c665f9c0 48621 pvetest/binary-amd64/Packages.gz
SHA1:
80fcafa4bf4a0c3c61c45d3b2fabc44d87772d42 163424 pvetest/binary-amd64/Packages
8ddbdacf5c6e4543300650e9abbfd91557ebae17 48621 pvetest/binary-amd64/Packages.gz
SHA256:
2b37f06ef01b4735db37a87f426c8042d2dce0fe3e571ba2a70259ef0d25d37b 163424 pvetest/binary-amd64/Packages
86cd183c6684d620dbea9419ac4585d2e6d35e2ecd83ccf3b63cd638d7fec93d 48621 pvetest/binary-amd64/Packages.gz
SHA512:
d2095e4a621159d339682a6340ba3fd9ad32ec2c5c25f39594f8abea803f5c90dfc8c161e1ab3bfd0ec0830ae00165ffb219e18be2e33a8be17c68b86d187ddf 163424 pvetest/binary-amd64/Packages
5af312c9b85252a071d24b434585579980ce28ee27073f076b78f7085b102d089026ded32ea9a00589f72e328362b42ee2efaf7282459d532be52e1766b8b007 48621 pvetest/binary-amd64/Packages.gz
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEKBOaL4ML1oR4oaAf3UujkX4jv1kFAmDaD1sACgkQ3UujkX4j
v1nT4A/9EwXscYVy3TVw/t1h21EdP/ADvwmJVWJbmhENh7FPxvRNnr9Q7r/4ZcOr
//gYAWm7e6DaiZEye5R7aH7BKKr16a5LOnXRv0nDWNSj50U/1B3KReGHFgRDRmyJ
EZ+bg99PzyRzsD5UCjeMdbFvOBLiX8nK430jDg4ccRNdhxzYcu3L7Ds1GiS8C/kO
bjB0hEEStqK0V4Lj5ZTrDOa+qoj9IS1N9Y4+loi+iEHRfF3q6vCXnqz/ZH53ese5
SDbKVaBNFX4zsXfeowLV2tqWkuYBgzjOBLNtSEezlWLm83hVQ5cr7MB7zt1yPAYy
V5+J7gIthKpJy9HE0DJ5qf+aekc/jt+tMVkrpwmwEmMIpyzY3Rh4tvpoyqC5ccqt
roll1XlnUjjWzPGJUh2W0SGOnvKLb16p/yqDALIYLDgzjnP/B6iserrekWa15Cvz
/HyffedsRbJ/JBDxJ54rV5KoPbRUsXkAp/fMZqOZu2wOxBHaTLqnHtHVJC9+pnef
dXeJixX0Bb34vZPbYRRByUrv6DueqlS0M27YIdLoO1KIvRzQL4OWfni6Fu9DF3M6
Ux1JF10Q/IwksQKXAyTgiGsStqnAGz13m0PJ+/swwzHK+Ktj7OoikKVnHZUkuljr
easVE4GoqJEwuKC6nI0Dosluy96wkPo5KecsUUdcSeToeaad07U=
=/mk7
-----END PGP SIGNATURE-----

View File

@ -0,0 +1,394 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Origin: Debian
Label: Debian
Suite: testing-updates
Codename: bullseye-updates
Date: Tue, 29 Jun 2021 08:07:21 UTC
Valid-Until: Tue, 06 Jul 2021 08:07:21 UTC
Acquire-By-Hash: yes
No-Support-for-Architecture-all: Packages
Architectures: all amd64 arm64 armel armhf i386 mips mips64el mipsel ppc64el s390x
Components: main contrib non-free
Description: Debian x.y Testing distribution Updates - Not Released
SHA256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-all
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-all.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-source
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-source.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-all
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-all.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-all/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-all/Packages.xz
68897647a7a7574bc3cbf2752a8197b599a92bb51594114dc309f189a57d2d76 112 contrib/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-amd64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-amd64/Packages.xz
1b82761373d3072d18c5da66c06c445463e3a96413df6801d67bc8f593994b4c 114 contrib/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-arm64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-arm64/Packages.xz
90a6100dbdb528f71c742453100ed389bf1889f230f97850ad447f5f1842ee66 114 contrib/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-armel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-armel/Packages.xz
160791b0d348565711501f5a64731aa163f4ffc264adf0941783b126edacd0fe 114 contrib/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-armhf/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-armhf/Packages.xz
f9c32c87b588d6cebbcc15ca1019e6079cd70d0bb470c349948044ac1d708ca9 114 contrib/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-i386/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-i386/Packages.xz
065ff088c1eb921afe428d9b9995c10ded4e1460eee96b0d18946f40a581d051 113 contrib/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-mips/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-mips/Packages.xz
371aa544d22e9bffa4743e38f48fcc39279371c17ecfc11506ab5e969d4209e9 113 contrib/binary-mips/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-mips64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-mips64el/Packages.xz
952b96752ea34a4236ed7d8e96931db30f5f1152be9e979421cbec33e1281d93 117 contrib/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-mipsel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-mipsel/Packages.xz
1269cc174396d41f61bee7ef18394c0cd55062c331392184a228f80e10a92e50 115 contrib/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-ppc64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-ppc64el/Packages.xz
9ceedee057d170036270046025f9763a90be0a0f622af390aacc5dc917cbe2aa 116 contrib/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-s390x/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-s390x/Packages.xz
bb3f0c6a788afab2b7996008c2b5306a374dc20b34873ea4b0ce267aaaea533c 114 contrib/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-all/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-all/Packages.xz
68897647a7a7574bc3cbf2752a8197b599a92bb51594114dc309f189a57d2d76 112 contrib/debian-installer/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-amd64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-amd64/Packages.xz
1b82761373d3072d18c5da66c06c445463e3a96413df6801d67bc8f593994b4c 114 contrib/debian-installer/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-arm64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-arm64/Packages.xz
90a6100dbdb528f71c742453100ed389bf1889f230f97850ad447f5f1842ee66 114 contrib/debian-installer/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-armel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-armel/Packages.xz
160791b0d348565711501f5a64731aa163f4ffc264adf0941783b126edacd0fe 114 contrib/debian-installer/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-armhf/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-armhf/Packages.xz
f9c32c87b588d6cebbcc15ca1019e6079cd70d0bb470c349948044ac1d708ca9 114 contrib/debian-installer/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-i386/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-i386/Packages.xz
065ff088c1eb921afe428d9b9995c10ded4e1460eee96b0d18946f40a581d051 113 contrib/debian-installer/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-mips/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-mips/Packages.xz
371aa544d22e9bffa4743e38f48fcc39279371c17ecfc11506ab5e969d4209e9 113 contrib/debian-installer/binary-mips/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-mips64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-mips64el/Packages.xz
952b96752ea34a4236ed7d8e96931db30f5f1152be9e979421cbec33e1281d93 117 contrib/debian-installer/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-mipsel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-mipsel/Packages.xz
1269cc174396d41f61bee7ef18394c0cd55062c331392184a228f80e10a92e50 115 contrib/debian-installer/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-ppc64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-ppc64el/Packages.xz
9ceedee057d170036270046025f9763a90be0a0f622af390aacc5dc917cbe2aa 116 contrib/debian-installer/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-s390x/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-s390x/Packages.xz
bb3f0c6a788afab2b7996008c2b5306a374dc20b34873ea4b0ce267aaaea533c 114 contrib/debian-installer/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/i18n/Translation-en
d3dda84eb03b9738d118eb2be78e246106900493c0ae07819ad60815134a8058 14 contrib/i18n/Translation-en.bz2
66ff0b3e39b97b69ae41026692402554a34da7523806e9484dadaa201d874826 115 contrib/source/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/source/Sources
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/source/Sources.xz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-all
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-all.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-source
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-source.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-all
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-all.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-all/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-all/Packages.xz
4ec10e1952b6b67f18ba86e49a9fdadae2702fc8dd9caf81fce3fa67b26933c1 109 main/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-amd64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-amd64/Packages.xz
11235baea1cc51f728b67e6a72b440eb5c45476bccd33d4c93308bc3ac5aebb3 111 main/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-arm64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-arm64/Packages.xz
f9638553772bb2f185419f349d1a0835ecc23074a1c867f64ed81b43b79cec85 111 main/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-armel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-armel/Packages.xz
e6dc2d69357d199629d876d48880f3fe4675aa3a7174ac868a58d7fa3853a9d3 111 main/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-armhf/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-armhf/Packages.xz
93df18bef47710a17488b390419821583b4894ef1639a7d89657e422bf5fc915 111 main/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-i386/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-i386/Packages.xz
a3458e3b58f39105b96c2ec4c92a977c398f53aa29f7838da3c537de78546a6a 110 main/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-mips/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-mips/Packages.xz
027758abbd72b9bb64fd321e4d5c5f6288fa20e376c11f4ecf8bf4ade3acb47c 110 main/binary-mips/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-mips64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-mips64el/Packages.xz
6fbea5916d47ff86a4b93e9643f078b59dcfaf3a25c2f5f2ba4e5416ce4c8117 114 main/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-mipsel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-mipsel/Packages.xz
f160fb8cee5639caf0f75a5c4aab1fff1262f824a9621fb45ca15dfa698baf03 112 main/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-ppc64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-ppc64el/Packages.xz
014cead9abbc4cce0ba6e078dce34a8d1a56e6b16b7bee1a463fd9f67eed2c33 113 main/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-s390x/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-s390x/Packages.xz
78b704e8437b918152687d47061e2211a5119015e2a13a0d6a46d31dcb32491b 111 main/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-all/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-all/Packages.xz
4ec10e1952b6b67f18ba86e49a9fdadae2702fc8dd9caf81fce3fa67b26933c1 109 main/debian-installer/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-amd64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-amd64/Packages.xz
11235baea1cc51f728b67e6a72b440eb5c45476bccd33d4c93308bc3ac5aebb3 111 main/debian-installer/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-arm64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-arm64/Packages.xz
f9638553772bb2f185419f349d1a0835ecc23074a1c867f64ed81b43b79cec85 111 main/debian-installer/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-armel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-armel/Packages.xz
e6dc2d69357d199629d876d48880f3fe4675aa3a7174ac868a58d7fa3853a9d3 111 main/debian-installer/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-armhf/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-armhf/Packages.xz
93df18bef47710a17488b390419821583b4894ef1639a7d89657e422bf5fc915 111 main/debian-installer/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-i386/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-i386/Packages.xz
a3458e3b58f39105b96c2ec4c92a977c398f53aa29f7838da3c537de78546a6a 110 main/debian-installer/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-mips/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-mips/Packages.xz
027758abbd72b9bb64fd321e4d5c5f6288fa20e376c11f4ecf8bf4ade3acb47c 110 main/debian-installer/binary-mips/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-mips64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-mips64el/Packages.xz
6fbea5916d47ff86a4b93e9643f078b59dcfaf3a25c2f5f2ba4e5416ce4c8117 114 main/debian-installer/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-mipsel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-mipsel/Packages.xz
f160fb8cee5639caf0f75a5c4aab1fff1262f824a9621fb45ca15dfa698baf03 112 main/debian-installer/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-ppc64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-ppc64el/Packages.xz
014cead9abbc4cce0ba6e078dce34a8d1a56e6b16b7bee1a463fd9f67eed2c33 113 main/debian-installer/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-s390x/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-s390x/Packages.xz
78b704e8437b918152687d47061e2211a5119015e2a13a0d6a46d31dcb32491b 111 main/debian-installer/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/i18n/Translation-en
d3dda84eb03b9738d118eb2be78e246106900493c0ae07819ad60815134a8058 14 main/i18n/Translation-en.bz2
008e62f7c379b27d3f00fe8ea8eaa139809f11900ca89d6617ccf934a9024142 112 main/source/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/source/Sources
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/source/Sources.xz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-all
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-all.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-source
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-source.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-all
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-all.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-all/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-all/Packages.xz
28dec0023644dd6907ab629efa820fdd61b10c19eb43ddf0d8737c5057e0c7b4 113 non-free/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-amd64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-amd64/Packages.xz
1b520c87f6a798699fc39e1d75ebac7e78fdd4a201b18426109f0f338f18e366 115 non-free/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-arm64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-arm64/Packages.xz
9d15a08c7396b78ed595c22f5d1cce574d4cf0dcda3d5458722270dce0ddf276 115 non-free/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-armel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-armel/Packages.xz
c19938ccb9743e0318b2bb3e7871a2347d74afadfd23e23f63465ed31d01e2db 115 non-free/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-armhf/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-armhf/Packages.xz
88c4a76de56dd43f88d925c471093ace5e7aac17a6d6a8b01135fb3cf6a1d80f 115 non-free/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-i386/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-i386/Packages.xz
bfab31be4f95d8f8eec40f7e7dfe7caad494a1496945989a0b5936d24125a711 114 non-free/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-mips/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-mips/Packages.xz
37a9b6efa8e5b0f9ee74f3202db161c209fa066830268ea30b206d8b1f9052c1 114 non-free/binary-mips/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-mips64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-mips64el/Packages.xz
d50cbb62225425908c57af80d918ccc99859ba4803be62b44da8d8159eb86847 118 non-free/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-mipsel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-mipsel/Packages.xz
a30e01cfad8b7f5e5f7115787590a2a3342ddea51857ccb230180e555b9c68a4 116 non-free/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-ppc64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-ppc64el/Packages.xz
33da686a19b40e79ba83a7daf8d863010c36ff9f6d063ecc698a91d3cf626f53 117 non-free/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-s390x/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-s390x/Packages.xz
acc91e4fa6121d24aba8b1e8f39cffde4512bb549044cbf88f998cd4dc0423df 115 non-free/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-all/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-all/Packages.xz
28dec0023644dd6907ab629efa820fdd61b10c19eb43ddf0d8737c5057e0c7b4 113 non-free/debian-installer/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-amd64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-amd64/Packages.xz
1b520c87f6a798699fc39e1d75ebac7e78fdd4a201b18426109f0f338f18e366 115 non-free/debian-installer/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-arm64/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-arm64/Packages.xz
9d15a08c7396b78ed595c22f5d1cce574d4cf0dcda3d5458722270dce0ddf276 115 non-free/debian-installer/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-armel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-armel/Packages.xz
c19938ccb9743e0318b2bb3e7871a2347d74afadfd23e23f63465ed31d01e2db 115 non-free/debian-installer/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-armhf/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-armhf/Packages.xz
88c4a76de56dd43f88d925c471093ace5e7aac17a6d6a8b01135fb3cf6a1d80f 115 non-free/debian-installer/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-i386/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-i386/Packages.xz
bfab31be4f95d8f8eec40f7e7dfe7caad494a1496945989a0b5936d24125a711 114 non-free/debian-installer/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-mips/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-mips/Packages.xz
37a9b6efa8e5b0f9ee74f3202db161c209fa066830268ea30b206d8b1f9052c1 114 non-free/debian-installer/binary-mips/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-mips64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-mips64el/Packages.xz
d50cbb62225425908c57af80d918ccc99859ba4803be62b44da8d8159eb86847 118 non-free/debian-installer/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-mipsel/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-mipsel/Packages.xz
a30e01cfad8b7f5e5f7115787590a2a3342ddea51857ccb230180e555b9c68a4 116 non-free/debian-installer/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-ppc64el/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-ppc64el/Packages.xz
33da686a19b40e79ba83a7daf8d863010c36ff9f6d063ecc698a91d3cf626f53 117 non-free/debian-installer/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-s390x/Packages
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-s390x/Packages.xz
acc91e4fa6121d24aba8b1e8f39cffde4512bb549044cbf88f998cd4dc0423df 115 non-free/debian-installer/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/i18n/Translation-en
d3dda84eb03b9738d118eb2be78e246106900493c0ae07819ad60815134a8058 14 non-free/i18n/Translation-en.bz2
75a93381931d1caa336f5674cd3cb576876aa87c13d0c66dd6be20046df7a1b1 116 non-free/source/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/source/Sources
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/source/Sources.xz
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEFukLP99l7eOqfzI8BO5yN7fUU+wFAmDa1L8ACgkQBO5yN7fU
U+xlnw/+N8x7oU6dO9Tofgv3aFmHVOOyzbO0a6o1xkweoSxixcZVebSEv0o0tOpg
FIHSc47OvfQGI4HrSNAwO3iyNHVqY0gxEaXr9kOeIC/4v8Vi8FYOsHI3TQ21ZUIS
ZBVPAHi5gB0TztO5MArDXiJuo6EY/g810gPsQrp3op3Xn6EhSB9Bi2Z/Bi/rP2Pk
D/vq+JqztcJ7ufsS0v17Z24gYdR4MZlTi/CLrQeuqkqcWdx0lLa/YX9GTMXK8YTm
j8O0dfJWqtgdq/0vN0zez9r49rkxszp8q7lF1+U4QvGxstUdMtZzSVVM0qnE2fys
4lb2J+Qamv+ycqflWccINQvVbHI8TruthzGOi7hn5OKDVE84Jd2etAczd8eKvAIa
eckMCMI0LzqbuMYWglVIUCFbUDNrc+UnLT022k+l3mkV9zpQEmkDRvPHZRXxPCaF
unETH3IsJ+g+HK8OskgJoFcV4efHRW3za20CNhvGExf/Yl0/UmQl2OO1DeK9JhBQ
LYGyxKNXSsutdN1iDVGmNl6xwuwN5JRf65LHrO4VSJ+wF3fVpri7xqrg7+irMccd
nC7UNT064HIREumseM0PqkFx3Rw0Ysiv/znsCW85YOyc3p8xBDlzBlJVgiAMZsmx
gkTOZRzHbZ34a4VI5vpN2Cf7qAAN+kuAtwsBE0905UnpLDIyYraJAjMEAQEIAB0W
IQQBRtxtSgspFL3tNNtkis/WIvPROAUCYNrU2gAKCRBkis/WIvPROMtID/4+d48X
MbZYDh8KSjz66qt/juhsTKeGQlRKxnbwxUKRRnhXrwd4MmHsUDqJQk3br4B8TqAp
8V7H8ZT7hhcvMD8ghj4Sw/dG8BPLXntDLxsSRnb5oMZxvwfeeH1smnTu6WOQUgzr
J3p9CEBiR8VBm3hy10QNPepkqhggPdZvGM6N8G7OBSteFwVCRsA6MW8A+SN24t9g
qp7898+iebhRCxhCXNiP6pMXI67dZtZbCWKSPivFC7IoCedSlOPmmtXgJZo3fhEt
vr09VKSyr7+w8IBpRZlhElq2WFHPm5TnK13iGmxhKR0msApj4RciVH/nFLLaRUHC
05qXmA/4vDgrlzIme14sTHiaEVHeFk4DYWxae69MbbppxrYhv488jHoeIcUxJlTw
ydi+TmmMwtpZdugYtAHKBxvHtyxiB5d4DHRykkwvCdv0OWPja6GDPDTMO1ix+86E
ruidHOr6g4maV4EvjqqXkE2MFYAHxVaMkSz2jPHtEKlneusLSdnYgghVCByf4xbl
FTho8AKy88F4qT0oDNsedrlv9OZe7eif4hi9r6NabVUcI+4wAKjMV40mg5EpXR/F
otGmeLMo4LO9kdnHuMhFNdlNJCOdBy2czJV/f7G3KprPXj4TSgDV3jIimwnGf4MQ
6OIWLDlKkCemDssg/bCLX8Z8GacZAWvI+YySPQ==
=dXdg
-----END PGP SIGNATURE-----

File diff suppressed because it is too large


@@ -0,0 +1,426 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Origin: Debian
Label: Debian-Security
Suite: testing-security
Codename: bullseye-security
Date: Tue, 29 Jun 2021 11:21:24 UTC
Valid-Until: Tue, 06 Jul 2021 11:21:24 UTC
Acquire-By-Hash: yes
Architectures: amd64 arm64 armel armhf i386 mips64el mipsel ppc64el s390x
Components: updates/main updates/contrib updates/non-free
Description: Debian x.y Testing - Security Updates - Not Released
SHA256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-source
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-source.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/Contents-udeb-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/Contents-udeb-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-all/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-all/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-all/Packages.xz
dad6f4ef22f149e992d811397605c205e6200f0f997310befb7b30281fb20529 130 contrib/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-amd64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-amd64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-amd64/Packages.xz
95155862fd275db390e032008b507c6de9f5c88b6675067a2213b733ff65a71b 132 contrib/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-arm64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-arm64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-arm64/Packages.xz
dc44f7c5477b840abd146ba41d3c1295a078cabdd0b8a5c3e8de1f8981bb4cab 132 contrib/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-armel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-armel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-armel/Packages.xz
518a30fe51451f86b09975c1d86fa8eaae5f946f016acdbf4c9c84629deb6323 132 contrib/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-armhf/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-armhf/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-armhf/Packages.xz
a786baa4947efa2435a6fe9aa388ac9e0b1f44f058ea032c3f5717eb8abb69b2 132 contrib/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-i386/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-i386/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-i386/Packages.xz
0c21f39be33458944fdc075f9a56a29101e606f36169a4a8e915768f5213f8c1 131 contrib/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-mips64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-mips64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-mips64el/Packages.xz
312b1b4faf7b4353c795af321e782b07c1cb0e06b1308e17f8a28b5309604616 135 contrib/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-mipsel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-mipsel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-mipsel/Packages.xz
770d42f09c2fe61ad419b63a36995e5824cb50d95244aa92aadcbe5b699443b6 133 contrib/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-ppc64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-ppc64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-ppc64el/Packages.xz
41441fd2370acf8ecc41cc58113534747d97f6484c8f02f28310113f302428ab 134 contrib/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/binary-s390x/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/binary-s390x/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/binary-s390x/Packages.xz
991da76f3ef2b5ac24b81475e9de36464b139b435335cd98d073425bae2f7b59 132 contrib/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-all/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-all/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-all/Packages.xz
dad6f4ef22f149e992d811397605c205e6200f0f997310befb7b30281fb20529 130 contrib/debian-installer/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-amd64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-amd64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-amd64/Packages.xz
95155862fd275db390e032008b507c6de9f5c88b6675067a2213b733ff65a71b 132 contrib/debian-installer/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-arm64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-arm64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-arm64/Packages.xz
dc44f7c5477b840abd146ba41d3c1295a078cabdd0b8a5c3e8de1f8981bb4cab 132 contrib/debian-installer/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-armel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-armel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-armel/Packages.xz
518a30fe51451f86b09975c1d86fa8eaae5f946f016acdbf4c9c84629deb6323 132 contrib/debian-installer/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-armhf/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-armhf/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-armhf/Packages.xz
a786baa4947efa2435a6fe9aa388ac9e0b1f44f058ea032c3f5717eb8abb69b2 132 contrib/debian-installer/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-i386/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-i386/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-i386/Packages.xz
0c21f39be33458944fdc075f9a56a29101e606f36169a4a8e915768f5213f8c1 131 contrib/debian-installer/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-mips64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-mips64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-mips64el/Packages.xz
312b1b4faf7b4353c795af321e782b07c1cb0e06b1308e17f8a28b5309604616 135 contrib/debian-installer/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-mipsel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-mipsel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-mipsel/Packages.xz
770d42f09c2fe61ad419b63a36995e5824cb50d95244aa92aadcbe5b699443b6 133 contrib/debian-installer/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-ppc64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-ppc64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-ppc64el/Packages.xz
41441fd2370acf8ecc41cc58113534747d97f6484c8f02f28310113f302428ab 134 contrib/debian-installer/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/debian-installer/binary-s390x/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/debian-installer/binary-s390x/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/debian-installer/binary-s390x/Packages.xz
991da76f3ef2b5ac24b81475e9de36464b139b435335cd98d073425bae2f7b59 132 contrib/debian-installer/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/i18n/Translation-en
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/i18n/Translation-en.xz
47fd677ef64107683fd0b8bac29352c4a047b2d7310462753ca47eeb3096ee3b 133 contrib/source/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 contrib/source/Sources
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 contrib/source/Sources.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 contrib/source/Sources.xz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-source
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-source.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/Contents-udeb-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/Contents-udeb-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/binary-all/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/binary-all/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/binary-all/Packages.xz
0a4d9c1090736291f4a41265e239709235907edec6c725bedccfc41ce1e52a05 127 main/binary-all/Release
bbae2caedf9d88a82df30e5af6ec0eb493604d347fd98b7a9749568c304df22a 1928 main/binary-amd64/Packages
ac756c56a08c59f831db4d6c17d480b21f2704b3d51961fa2ff284a0a82a769a 763 main/binary-amd64/Packages.gz
16278569504f0663d817c45332e2034d8343107e1c445cde9994f6136b3fcb00 812 main/binary-amd64/Packages.xz
26279f9ed5f2d43aa257b16534a3e1f31e83c4d4148a6c6450b6f08152659a8c 129 main/binary-amd64/Release
89a8e73a35cc1ff8a08b9ca35e097823dadf2a9d31ae5375f5f7b7f2fe4e2b65 1928 main/binary-arm64/Packages
a8c3b768b8b335fdc885c55601c5132f4d3a917b2cfd687d2e89ea185866ee1d 762 main/binary-arm64/Packages.gz
3c2ce72e75afb59119f1e2f83516ab4d727ed74286fbdc799185eb0fb28e2e55 808 main/binary-arm64/Packages.xz
bd8c91fe3be0b96a19b100325c41dc9fd49623d2df10436c9db044a276fac392 129 main/binary-arm64/Release
19f9cc6b15e0c8e42af1b02c29419d3e863a3511400ec9d2a5d5c020df19abe4 1928 main/binary-armel/Packages
1b96bcd0d2e611afd97c3a7d46781c8ef2fdba6c1eb45aaf6e1d1787c637eb25 763 main/binary-armel/Packages.gz
960909e14220da768efa538f27cd2ae2bdbe3e17ea82ef03506b22f0fb8f4179 804 main/binary-armel/Packages.xz
2e8e8045b28d0ad3bdae7e86e6ba22c103a7775af0e7ae766e482b59b559371b 129 main/binary-armel/Release
d14cd548e7903c2a734afeea287bc8a6fc1e0c84b570b0e04af074e485e4cf02 1928 main/binary-armhf/Packages
99eeb65ade7992b28a12b558ccd3ef443aad85b36cf7cd21989993f016ad736e 764 main/binary-armhf/Packages.gz
ae2120dcd2dd3c6c616984dd206745f2b895e2e98968dea1c4a7e6041de9a644 808 main/binary-armhf/Packages.xz
3dcb7a44da260ad8a0d0ba7e25233f7c2b35722c565b6d435fd0d493b018d355 129 main/binary-armhf/Release
e51c53b7af69fed94ba0b0f947fead354e327f923b00263f919cc790a9426dd3 1922 main/binary-i386/Packages
4cfcd11b768856e7f1de9d3bd985880c8cac118d4e9128831032f1fe1e75c9b2 763 main/binary-i386/Packages.gz
6e863d5d686d7d35a77935bbcd92e3c31ddc2cbd97a50cdfa939d628f1080283 808 main/binary-i386/Packages.xz
e9fc3062aeec4399fab10a52f6e7e81a887ce6fec27efb40974621b13d973c9f 128 main/binary-i386/Release
3b08feaf135dc663036881aeed987dc3aba8652d740f276feba59eac8e293b3d 1946 main/binary-mips64el/Packages
dcce951cd1bef0b08fa7b464b37d805de6a9b59bce36472f68b77535e565f317 759 main/binary-mips64el/Packages.gz
9b909a22910a80c29dc6d80caa246af17413784049d600e415cd77543b06bfe9 812 main/binary-mips64el/Packages.xz
664edd8fcb5b7b5c163cbc0ad06cb93882b63c2ec67f41ac5b3b1c2c042e5a44 132 main/binary-mips64el/Release
d2823a214b46aa9cb76340cc4b0cc79e331f204ed5ff87a3161fa480743c219f 1934 main/binary-mipsel/Packages
f8c288471b2c6c2fc27100d4d380057f43fd5a8bd335e74be9e63e48244e951e 764 main/binary-mipsel/Packages.gz
118034c37ac931ec11e26a51e1c48014374181f64bf90b001ce661534ba0157d 808 main/binary-mipsel/Packages.xz
29fff1f8b8a24fb9558402acc8f62d81152d9acfaa5e93b84a743c1305c8480b 130 main/binary-mipsel/Release
b89f554d34172357ce4986fe329b9d76e409c8421cfcab9390e6e9c616b47ed6 1940 main/binary-ppc64el/Packages
800486b18b08eb73d535d6d36ef3e8782595765f4504ca8ce769b02e341df5d5 762 main/binary-ppc64el/Packages.gz
d9c8e0c73dd37d02a2159c0e120e620489b324bb5c18e9dae15364d26558fbb2 812 main/binary-ppc64el/Packages.xz
962bb7d7a66b74d348f41c770a8d67d45ac44ebaddebdaa221d9cd97650b47f3 131 main/binary-ppc64el/Release
25f871341e67f34847523c31d0bd302aa00eb394ef3b47c83153f3a3fb1bc2f1 1928 main/binary-s390x/Packages
4bb0ea6e9fcafc492bf50d5ea24783dbdae1924d5803c792f94373521e6dfc99 764 main/binary-s390x/Packages.gz
03d65434978fbf391da00927b0b587d1e5364cb254f03391efe02f29fffea462 808 main/binary-s390x/Packages.xz
9f779b6d069ad34c2d19613aef3d4292ed4bde866ceef7dd2c54caa1f61859a1 129 main/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-all/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-all/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-all/Packages.xz
0a4d9c1090736291f4a41265e239709235907edec6c725bedccfc41ce1e52a05 127 main/debian-installer/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-amd64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-amd64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-amd64/Packages.xz
26279f9ed5f2d43aa257b16534a3e1f31e83c4d4148a6c6450b6f08152659a8c 129 main/debian-installer/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-arm64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-arm64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-arm64/Packages.xz
bd8c91fe3be0b96a19b100325c41dc9fd49623d2df10436c9db044a276fac392 129 main/debian-installer/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-armel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-armel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-armel/Packages.xz
2e8e8045b28d0ad3bdae7e86e6ba22c103a7775af0e7ae766e482b59b559371b 129 main/debian-installer/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-armhf/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-armhf/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-armhf/Packages.xz
3dcb7a44da260ad8a0d0ba7e25233f7c2b35722c565b6d435fd0d493b018d355 129 main/debian-installer/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-i386/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-i386/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-i386/Packages.xz
e9fc3062aeec4399fab10a52f6e7e81a887ce6fec27efb40974621b13d973c9f 128 main/debian-installer/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-mips64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-mips64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-mips64el/Packages.xz
664edd8fcb5b7b5c163cbc0ad06cb93882b63c2ec67f41ac5b3b1c2c042e5a44 132 main/debian-installer/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-mipsel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-mipsel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-mipsel/Packages.xz
29fff1f8b8a24fb9558402acc8f62d81152d9acfaa5e93b84a743c1305c8480b 130 main/debian-installer/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-ppc64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-ppc64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-ppc64el/Packages.xz
962bb7d7a66b74d348f41c770a8d67d45ac44ebaddebdaa221d9cd97650b47f3 131 main/debian-installer/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 main/debian-installer/binary-s390x/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 main/debian-installer/binary-s390x/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 main/debian-installer/binary-s390x/Packages.xz
9f779b6d069ad34c2d19613aef3d4292ed4bde866ceef7dd2c54caa1f61859a1 129 main/debian-installer/binary-s390x/Release
b54ef4560b294e63b9b4552e1e0828060c8c5038db75d25a9f8bdd1414205feb 1190 main/i18n/Translation-en
05b17d5fe2084e0d21081a714c09751aa17c147d01e3148bd42d48643e54f3e9 596 main/i18n/Translation-en.xz
545c498daba5eefe133c68836f2086ebbc6027e496a09af436609e0e0f3130c0 130 main/source/Release
194afb538a6393cbc3515804a8577b6d9f939e4dfa28b2b56b5b24c68799e299 1210 main/source/Sources
2253816c6e56c03e9dd8b7d11780ea40c2959c81327264257d60938858e7c62d 727 main/source/Sources.gz
82bff31f0c30a1c0cb5c5691430a19a52ec855d69c101d7fb927e7fc38b5e9b3 780 main/source/Sources.xz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-source
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-source.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-amd64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-amd64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-arm64
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-arm64.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-armel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-armel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-armhf
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-armhf.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-i386
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-i386.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-mips
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-mips.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-mips64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-mips64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-mipsel
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-mipsel.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-ppc64el
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-ppc64el.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/Contents-udeb-s390x
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/Contents-udeb-s390x.gz
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-all/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-all/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-all/Packages.xz
09692a805a856602cecf048c99e3371a98d391bbde03f6f3b1b348fd14988f57 131 non-free/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-amd64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-amd64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-amd64/Packages.xz
62f830e8bbe3cdae15bd5eb6a49a7eab262e3b4f1ea05dd15334d150e44d2067 133 non-free/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-arm64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-arm64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-arm64/Packages.xz
d9bfd44bcf537cce90d40ea23f58b87c5b9a648bf72fdeaeb1d15cd797db1575 133 non-free/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-armel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-armel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-armel/Packages.xz
7bf07a333833bea75d4279a96627a4da49631dc3b81f486fcd2ead096fa96831 133 non-free/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-armhf/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-armhf/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-armhf/Packages.xz
4cddc7a3787e70376316186145565b1de2bbf50ade4f5183eeab85d2c7069b83 133 non-free/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-i386/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-i386/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-i386/Packages.xz
149101f480b7ab75f59882ea752607f27fac96be5b8696e7d467ac4fb6f459c3 132 non-free/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-mips64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-mips64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-mips64el/Packages.xz
4eb4c5bc3f0b35263adef552c3e843e678c102af0bee91b90d8958516f8592c6 136 non-free/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-mipsel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-mipsel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-mipsel/Packages.xz
a9c9896fa7a441c2554b9b369b8bb8f5fdf2c9098d60b5a49b5208314ec076a0 134 non-free/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-ppc64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-ppc64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-ppc64el/Packages.xz
a175762edf42deda74e5e1944cb613c84c769916b8186d71939b784de861edf6 135 non-free/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/binary-s390x/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/binary-s390x/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/binary-s390x/Packages.xz
ecc1d55a4e9884e2ec3cebc0fa7e774e220c388406390e9b6c87e0884d627121 133 non-free/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-all/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-all/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-all/Packages.xz
09692a805a856602cecf048c99e3371a98d391bbde03f6f3b1b348fd14988f57 131 non-free/debian-installer/binary-all/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-amd64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-amd64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-amd64/Packages.xz
62f830e8bbe3cdae15bd5eb6a49a7eab262e3b4f1ea05dd15334d150e44d2067 133 non-free/debian-installer/binary-amd64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-arm64/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-arm64/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-arm64/Packages.xz
d9bfd44bcf537cce90d40ea23f58b87c5b9a648bf72fdeaeb1d15cd797db1575 133 non-free/debian-installer/binary-arm64/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-armel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-armel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-armel/Packages.xz
7bf07a333833bea75d4279a96627a4da49631dc3b81f486fcd2ead096fa96831 133 non-free/debian-installer/binary-armel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-armhf/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-armhf/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-armhf/Packages.xz
4cddc7a3787e70376316186145565b1de2bbf50ade4f5183eeab85d2c7069b83 133 non-free/debian-installer/binary-armhf/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-i386/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-i386/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-i386/Packages.xz
149101f480b7ab75f59882ea752607f27fac96be5b8696e7d467ac4fb6f459c3 132 non-free/debian-installer/binary-i386/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-mips64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-mips64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-mips64el/Packages.xz
4eb4c5bc3f0b35263adef552c3e843e678c102af0bee91b90d8958516f8592c6 136 non-free/debian-installer/binary-mips64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-mipsel/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-mipsel/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-mipsel/Packages.xz
a9c9896fa7a441c2554b9b369b8bb8f5fdf2c9098d60b5a49b5208314ec076a0 134 non-free/debian-installer/binary-mipsel/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-ppc64el/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-ppc64el/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-ppc64el/Packages.xz
a175762edf42deda74e5e1944cb613c84c769916b8186d71939b784de861edf6 135 non-free/debian-installer/binary-ppc64el/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/debian-installer/binary-s390x/Packages
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/debian-installer/binary-s390x/Packages.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/debian-installer/binary-s390x/Packages.xz
ecc1d55a4e9884e2ec3cebc0fa7e774e220c388406390e9b6c87e0884d627121 133 non-free/debian-installer/binary-s390x/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/i18n/Translation-en
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/i18n/Translation-en.xz
16dd4854d2bfc851746dec540a6f40c65db70dd5f8dcbeb2450182ce0fa3e306 134 non-free/source/Release
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 non-free/source/Sources
f61f27bd17de546264aa58f40f3aafaac7021e0ef69c17f6b1b4cd7664a037ec 20 non-free/source/Sources.gz
0040f94d11d0039505328a90b2ff48968db873e9e7967307631bf40ef5679275 32 non-free/source/Sources.xz
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEN5SD2LYBYLFVs3Ldqo6BtDMff1AFAmDbAjQACgkQqo6BtDMf
f1AeSA//efWHS9SsdL69FqFvfFA4TBI5HRcWRnRDiaVrq4c1FpfUjV+FwgCzFjql
y5MqjOfhI2bnn9T8DHLiK1NY3q6H6RVTpjzD0YdVZ7i8Eobp+9YOkjgQ+fhVqDpS
Xuka5kw9rYbkZgONIHNpsQwgnPupfG8O5OTHPDCwO49hPfa8g0Ar9kwYbixguVRX
qBYXV/EOyx6TXM5u69nqnbIzgdOhTHipJYVi9AiEyiO+lwAbzvogaSn574EWDb5y
9pruXxqHYFea4uGShJtambZMeC528CQALbkZpHzeFCdlBtapSH9Lxmye3ciqC9kT
7Ns09+G/C/3kTWVBfA7rVztJ5Bw4PWPZafe/YceenuK0EipsOa94PpmKeDkWzRL6
1obFDHWpDXqEJIomkq1+BkflWq8mQiI97v36NpIBXKew/f+jWw+ydoIZMESdWM2H
LGXF4fM3tCLGOYVqz6sAfUv7jfemJuDQq6ZNDVvfVIwwdA/p+oeTdu5NaYcOEGRQ
nXxS4MLWox6kc9UUWpE72NHHm5Anq62L/F7JVKbl7pw9tw0C/pvgF/6ZlnloOYZv
g3JBpZd7bt3UCDMU8MoEKs/ZDVk3x6Q/k9H/+S6B3M98UAjRQI0p2cgXQqvWuYdL
E0mXb4vKcS5wE/qyeFq7hSB1MP6q4x5kcKZe8333Odj4K4BDPCGJAjMEAQEIAB0W
IQRSN87u8hLz1Rx0q+ARJpWg5WKzKgUCYNsCNAAKCRARJpWg5WKzKp93EACBl90s
J9hX0OjJbMlIlDR+bBYaD9k6EjONtZUKGMVGWWQkpuQKlHZqEEjcQaU1kIpmyiEv
C0R44im2ULdb4jmuyMKmt3KdGMMCdD1yIaaLEoY/7GE9ZANojIWLsDdhgCADW4o5
WPruSxtpBKxKmyANw4EkbHUTyNtZ109cPdyX+isqElD47JNx+yypf1JOUdgW75xF
P96EKOFmQJahhAz2cCAjgBjy5awjGiz7AFtS13sBhbd7rn6CwTTdZ7JMxkh+UP/5
Fl93hmIRe3b1c12woB960BS2ic2dLW7nCIUId8LSl13xGc3y2qY7jd+XKVCbsuaZ
6R67dMGuMi2wmvu2aKbzuopVgSRJHpjZWiR5InOTesRczRw4UPrlfUKmuElxeGTi
BZxC1QtdB8ioRWmU7ArHO90v6zkxfXOhLCs5wOdkMB6kDB95CzmEjU985w3H5Hiz
GRAxThFpkcAgn96IFeIFvXk8xRLzPTyr3Xdm5QGiX6rZOYcM7BeH7+wyuboXsz4z
KCaTE53e1ANJSQOlxzozrHuPQ9c05lD4tos+gwBOm15y6Jwdf8+uCM7dwFhHjp50
65+QMfdcqkTGcRY3fcJsS3Yfu9gluQ916YBuUU03PNUDeaC6XPtbTVMLMsbQshWu
za+2eAEWLuyd9bvQ2VKilIJu7bozK8E5Y9UPng==
=Bno8
-----END PGP SIGNATURE-----

View File

@ -0,0 +1,422 @@
use std::path::PathBuf;
use anyhow::{bail, format_err, Error};
use proxmox_apt::config::APTConfig;
use proxmox_apt::repositories::{
check_repositories, get_current_release_codename, standard_repositories, APTRepositoryFile,
APTRepositoryHandle, APTRepositoryInfo, APTStandardRepository, DebianCodename,
};
#[test]
fn test_parse_write() -> Result<(), Error> {
test_parse_write_dir("sources.list.d")?;
test_parse_write_dir("sources.list.d.expected")?; // check if it's idempotent
Ok(())
}
fn test_parse_write_dir(read_dir: &str) -> Result<(), Error> {
let test_dir = std::env::current_dir()?.join("tests");
let read_dir = test_dir.join(read_dir);
let write_dir = test_dir.join("sources.list.d.actual");
let expected_dir = test_dir.join("sources.list.d.expected");
if write_dir.is_dir() {
std::fs::remove_dir_all(&write_dir)
.map_err(|err| format_err!("unable to remove dir {:?} - {}", write_dir, err))?;
}
std::fs::create_dir_all(&write_dir)
.map_err(|err| format_err!("unable to create dir {:?} - {}", write_dir, err))?;
let mut files = vec![];
let mut errors = vec![];
for entry in std::fs::read_dir(read_dir)? {
let path = entry?.path();
match APTRepositoryFile::new(&path)? {
Some(mut file) => match file.parse() {
Ok(()) => files.push(file),
Err(err) => errors.push(err),
},
None => bail!("unexpected None for '{:?}'", path),
}
}
assert!(errors.is_empty());
for file in files.iter_mut() {
let path = match &file.path {
Some(path) => path,
None => continue,
};
let path = PathBuf::from(path);
let new_path = write_dir.join(path.file_name().unwrap());
file.path = Some(new_path.into_os_string().into_string().unwrap());
file.digest = None;
file.write()?;
}
let mut expected_count = 0;
for entry in std::fs::read_dir(expected_dir)? {
expected_count += 1;
let expected_path = entry?.path();
let actual_path = write_dir.join(expected_path.file_name().unwrap());
let expected_contents = std::fs::read(&expected_path)
.map_err(|err| format_err!("unable to read {:?} - {}", expected_path, err))?;
let actual_contents = std::fs::read(&actual_path)
.map_err(|err| format_err!("unable to read {:?} - {}", actual_path, err))?;
assert_eq!(
expected_contents, actual_contents,
"Use\n\ndiff {:?} {:?}\n\nif you're not fluent in byte decimals",
expected_path, actual_path
);
}
let actual_count = std::fs::read_dir(write_dir)?.count();
assert_eq!(expected_count, actual_count);
Ok(())
}
#[test]
fn test_digest() -> Result<(), Error> {
let test_dir = std::env::current_dir()?.join("tests");
let read_dir = test_dir.join("sources.list.d");
let write_dir = test_dir.join("sources.list.d.digest");
if write_dir.is_dir() {
std::fs::remove_dir_all(&write_dir)
.map_err(|err| format_err!("unable to remove dir {:?} - {}", write_dir, err))?;
}
std::fs::create_dir_all(&write_dir)
.map_err(|err| format_err!("unable to create dir {:?} - {}", write_dir, err))?;
let path = read_dir.join("standard.list");
let mut file = APTRepositoryFile::new(&path)?.unwrap();
file.parse()?;
let new_path = write_dir.join(path.file_name().unwrap());
file.path = Some(new_path.clone().into_os_string().into_string().unwrap());
let old_digest = file.digest.unwrap();
// file does not exist yet...
assert!(file.read_with_digest().is_err());
assert!(file.write().is_err());
// ...but it should work if there's no digest
file.digest = None;
file.write()?;
// overwrite with old contents...
std::fs::copy(path, new_path)?;
// modify the repo
let mut repo = file.repositories.first_mut().unwrap();
repo.enabled = !repo.enabled;
// ...then it should work
file.digest = Some(old_digest);
file.write()?;
// expect a different digest, because the repo was modified
let (_, new_digest) = file.read_with_digest()?;
assert_ne!(old_digest, new_digest);
assert!(file.write().is_err());
Ok(())
}
#[test]
fn test_empty_write() -> Result<(), Error> {
let test_dir = std::env::current_dir()?.join("tests");
let read_dir = test_dir.join("sources.list.d");
let write_dir = test_dir.join("sources.list.d.remove");
if write_dir.is_dir() {
std::fs::remove_dir_all(&write_dir)
.map_err(|err| format_err!("unable to remove dir {:?} - {}", write_dir, err))?;
}
std::fs::create_dir_all(&write_dir)
.map_err(|err| format_err!("unable to create dir {:?} - {}", write_dir, err))?;
let path = read_dir.join("standard.list");
let mut file = APTRepositoryFile::new(&path)?.unwrap();
file.parse()?;
let new_path = write_dir.join(path.file_name().unwrap());
file.path = Some(new_path.into_os_string().into_string().unwrap());
file.digest = None;
file.write()?;
assert!(file.exists());
file.repositories.clear();
file.write()?;
assert!(!file.exists());
Ok(())
}
#[test]
fn test_check_repositories() -> Result<(), Error> {
let test_dir = std::env::current_dir()?.join("tests");
let read_dir = test_dir.join("sources.list.d");
proxmox_apt::config::init(APTConfig::new(
Some(&test_dir.into_os_string().into_string().unwrap()),
None,
));
let absolute_suite_list = read_dir.join("absolute_suite.list");
let mut file = APTRepositoryFile::new(&absolute_suite_list)?.unwrap();
file.parse()?;
let infos = check_repositories(&[file], DebianCodename::Bullseye);
assert!(infos.is_empty());
let pve_list = read_dir.join("pve.list");
let mut file = APTRepositoryFile::new(&pve_list)?.unwrap();
file.parse()?;
let path_string = pve_list.into_os_string().into_string().unwrap();
let origins = [
"Debian", "Debian", "Proxmox", "Proxmox", "Proxmox", "Debian",
];
let mut expected_infos = vec![];
for (n, origin) in origins.into_iter().enumerate() {
expected_infos.push(APTRepositoryInfo {
path: path_string.clone(),
index: n,
property: None,
kind: "origin".to_string(),
message: origin.to_string(),
});
}
expected_infos.sort();
let mut infos = check_repositories(&[file], DebianCodename::Bullseye);
infos.sort();
assert_eq!(infos, expected_infos);
let bad_sources = read_dir.join("bad.sources");
let mut file = APTRepositoryFile::new(&bad_sources)?.unwrap();
file.parse()?;
let path_string = bad_sources.into_os_string().into_string().unwrap();
let mut expected_infos = vec![
APTRepositoryInfo {
path: path_string.clone(),
index: 0,
property: Some("Suites".to_string()),
kind: "warning".to_string(),
message: "suite 'sid' should not be used in production!".to_string(),
},
APTRepositoryInfo {
path: path_string.clone(),
index: 1,
property: Some("Suites".to_string()),
kind: "warning".to_string(),
message: "old suite 'lenny' configured!".to_string(),
},
APTRepositoryInfo {
path: path_string.clone(),
index: 2,
property: Some("Suites".to_string()),
kind: "warning".to_string(),
message: "old suite 'stretch' configured!".to_string(),
},
APTRepositoryInfo {
path: path_string.clone(),
index: 3,
property: Some("Suites".to_string()),
kind: "warning".to_string(),
message: "use the name of the stable distribution instead of 'stable'!".to_string(),
},
APTRepositoryInfo {
path: path_string.clone(),
index: 4,
property: Some("Suites".to_string()),
kind: "ignore-pre-upgrade-warning".to_string(),
message: "suite 'bookworm' should not be used in production!".to_string(),
},
APTRepositoryInfo {
path: path_string.clone(),
index: 5,
property: Some("Suites".to_string()),
kind: "warning".to_string(),
message: "suite 'testing' should not be used in production!".to_string(),
},
];
for n in 0..=5 {
expected_infos.push(APTRepositoryInfo {
path: path_string.clone(),
index: n,
property: None,
kind: "origin".to_string(),
message: "Debian".to_string(),
});
}
expected_infos.sort();
let mut infos = check_repositories(&[file], DebianCodename::Bullseye);
infos.sort();
assert_eq!(infos, expected_infos);
let bad_security = read_dir.join("bad-security.list");
let mut file = APTRepositoryFile::new(&bad_security)?.unwrap();
file.parse()?;
let path_string = bad_security.into_os_string().into_string().unwrap();
let mut expected_infos = vec![];
for n in 0..=1 {
expected_infos.push(APTRepositoryInfo {
path: path_string.clone(),
index: n,
property: Some("Suites".to_string()),
kind: "warning".to_string(),
message: "expected suite 'bullseye-security'".to_string(),
});
}
for n in 0..=1 {
expected_infos.push(APTRepositoryInfo {
path: path_string.clone(),
index: n,
property: None,
kind: "origin".to_string(),
message: "Debian".to_string(),
});
}
expected_infos.sort();
let mut infos = check_repositories(&[file], DebianCodename::Bullseye);
infos.sort();
assert_eq!(infos, expected_infos);
Ok(())
}
#[test]
fn test_get_cached_origin() -> Result<(), Error> {
let test_dir = std::env::current_dir()?.join("tests");
let read_dir = test_dir.join("sources.list.d");
proxmox_apt::config::init(APTConfig::new(
Some(&test_dir.into_os_string().into_string().unwrap()),
None,
));
let pve_list = read_dir.join("pve.list");
let mut file = APTRepositoryFile::new(&pve_list)?.unwrap();
file.parse()?;
let origins = [
Some("Debian".to_string()),
Some("Debian".to_string()),
Some("Proxmox".to_string()),
None, // no cache file exists
None, // no cache file exists
Some("Debian".to_string()),
];
assert_eq!(file.repositories.len(), origins.len());
for (n, repo) in file.repositories.iter().enumerate() {
assert_eq!(repo.get_cached_origin()?, origins[n]);
}
Ok(())
}
#[test]
fn test_standard_repositories() -> Result<(), Error> {
let test_dir = std::env::current_dir()?.join("tests");
let read_dir = test_dir.join("sources.list.d");
let mut expected = vec![
APTStandardRepository::from(APTRepositoryHandle::Enterprise),
APTStandardRepository::from(APTRepositoryHandle::NoSubscription),
APTStandardRepository::from(APTRepositoryHandle::Test),
APTStandardRepository::from(APTRepositoryHandle::CephQuincy),
APTStandardRepository::from(APTRepositoryHandle::CephQuincyTest),
APTStandardRepository::from(APTRepositoryHandle::CephPacific),
APTStandardRepository::from(APTRepositoryHandle::CephPacificTest),
APTStandardRepository::from(APTRepositoryHandle::CephOctopus),
APTStandardRepository::from(APTRepositoryHandle::CephOctopusTest),
];
let absolute_suite_list = read_dir.join("absolute_suite.list");
let mut file = APTRepositoryFile::new(&absolute_suite_list)?.unwrap();
file.parse()?;
let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bullseye);
assert_eq!(std_repos, expected);
let pve_list = read_dir.join("pve.list");
let mut file = APTRepositoryFile::new(&pve_list)?.unwrap();
file.parse()?;
let file_vec = vec![file];
let std_repos = standard_repositories(&file_vec, "pbs", DebianCodename::Bullseye);
assert_eq!(&std_repos, &expected[0..=2]);
expected[0].status = Some(false);
expected[1].status = Some(true);
let std_repos = standard_repositories(&file_vec, "pve", DebianCodename::Bullseye);
assert_eq!(std_repos, expected);
let pve_alt_list = read_dir.join("pve-alt.list");
let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
file.parse()?;
let file_vec = vec![file];
expected[0].status = Some(true);
expected[1].status = Some(true);
expected[2].status = Some(false);
let std_repos = standard_repositories(&file_vec, "pve", DebianCodename::Bullseye);
assert_eq!(std_repos, expected);
Ok(())
}
#[test]
fn test_get_current_release_codename() -> Result<(), Error> {
let codename = get_current_release_codename()?;
assert!(codename == DebianCodename::Bullseye);
Ok(())
}
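
The tests above exercise the full parse/modify/write cycle of the proxmox-apt repository API. As a quick orientation, a minimal sketch of that flow might look like the following (the function name and the idea of disabling the first repository are illustrative assumptions, not part of the crate):

use std::path::Path;

use anyhow::{bail, Error};
use proxmox_apt::repositories::APTRepositoryFile;

fn disable_first_repository(path: &Path) -> Result<(), Error> {
    let mut file = match APTRepositoryFile::new(path)? {
        Some(file) => file,
        None => bail!("unsupported file {:?}", path),
    };
    file.parse()?;

    if let Some(repo) = file.repositories.first_mut() {
        repo.enabled = false;
    }

    // clearing the digest skips the concurrent-modification check on write,
    // just like test_parse_write_dir() above does
    file.digest = None;
    file.write()?;

    Ok(())
}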

View File

@ -0,0 +1,5 @@
# From Debian Administrator's Handbook
deb http://packages.falcot.com/ updates/
deb http://user.name@packages.falcot.com:80/ internal/

View File

@ -0,0 +1,5 @@
# From Debian Administrator's Handbook
Types: deb
URIs: http://packages.falcot.com/
Suites: updates/ internal/

View File

@ -0,0 +1,4 @@
deb http://security.debian.org/debian-security/ bullseye/updates main contrib
deb https://security.debian.org bullseye/updates main contrib

View File

@ -0,0 +1,30 @@
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: sid
Components: main contrib
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: lenny-backports
Components: contrib
Types: deb
URIs: http://security.debian.org:80
Suites: stretch/updates
Components: main contrib
Types: deb
URIs: http://ftp.at.debian.org:80/debian
Suites: stable
Components: main
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: bookworm
Components: main
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: testing
Components: main

View File

@ -0,0 +1,16 @@
# comment in here
Types: deb deb-src
URIs: http://ftp.at.debian.org/debian
Suites: bullseye-updates
Components: main contrib
languages: it de fr
Enabled: false
languages-Add: ja
languages-Remove: de
# comment in here
Types: deb deb-src
URIs: http://ftp.at.debian.org/debian
Suites: bullseye
Components: main contrib

View File

@ -0,0 +1,10 @@
# deb [ trusted=yes ] cdrom:[Proxmox VE 5.1]/ stretch pve
# deb [ trusted=yes ] cdrom:[Proxmox VE 5.1]/proxmox/packages/ /
deb [ trusted=yes ] cdrom:[Proxmox VE 7.0 BETA]/ bullseye pve
deb cdrom:[Proxmox VE 7.0 BETA]/proxmox/packages/ /
deb [ trusted=yes ] cdrom:[Debian GNU/Linux 10.6.0 _Buster_ - Official amd64 NETINST 20200926-10:16]/ buster main

View File

@ -0,0 +1,4 @@
deb [ trusted=yes ] "file:///some/spacey/mount point/" bullseye pve
deb [ lang=it ] "file:///some/spacey/mount point/proxmox/packages/" /

View File

@ -0,0 +1,10 @@
# comment in here
Types: deb deb-src
URIs: http://ftp.at.debian.org/debian
Suites: bullseye bullseye-updates
Components: main contrib
Languages: it de fr
Enabled: false
Languages-Add: ja
Languages-Remove: de

View File

@ -0,0 +1,6 @@
# comment
deb [ lang=it,de arch=amd64 ] http://ftp.at.debian.org/debian bullseye main contrib
# non-free :(
deb [ lang=it,de arch=amd64 lang+=fr lang-=de ] http://ftp.at.debian.org/debian bullseye non-free

View File

@ -0,0 +1,2 @@
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise

View File

@ -0,0 +1,8 @@
deb https://enterprise.proxmox.com/debian bullseye pve-enterprise
deb http://download.proxmox.com/debian/ bullseye pve-no-subscription
# deb http://download.proxmox.com/debian bullseye pvetest
deb-src http://download.proxmox.com/debian bullseye pvetest

View File

@ -0,0 +1,15 @@
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise
deb-src https://enterprise.proxmox.com/debian/pve buster pve-enterprise
# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib

View File

@ -0,0 +1,7 @@
deb http://ftp.at.debian.org/debian bullseye main contrib
deb http://ftp.at.debian.org/debian bullseye-updates main contrib
# security updates
deb http://security.debian.org bullseye-security main contrib

View File

@ -0,0 +1,11 @@
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: bullseye bullseye-updates
Components: main contrib
# security updates
Types: deb
URIs: http://security.debian.org
Suites: bullseye-security
Components: main contrib

View File

@ -0,0 +1,4 @@
# From Debian Administrator's Handbook
deb http://packages.falcot.com/ updates/
deb http://user.name@packages.falcot.com:80/ internal/

View File

@ -0,0 +1,5 @@
# From Debian Administrator's Handbook
Types: deb
URIs: http://packages.falcot.com/
Suites: updates/ internal/

View File

@ -0,0 +1,4 @@
deb http://security.debian.org/debian-security/ bullseye/updates main contrib
deb https://security.debian.org bullseye/updates main contrib

View File

@ -0,0 +1,29 @@
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: sid
Components: main contrib
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: lenny-backports
Components: contrib
Types: deb
URIs: http://security.debian.org:80
Suites: stretch/updates
Components: main contrib
Suites: stable
URIs: http://ftp.at.debian.org:80/debian
Components: main
Types: deb
Suites: bookworm
URIs: http://ftp.at.debian.org/debian
Components: main
Types: deb
Suites: testing
URIs: http://ftp.at.debian.org/debian
Components: main
Types: deb

View File

@ -0,0 +1,17 @@
tYpeS: deb deb-src
uRis: http://ftp.at.debian.org/debian
suiTes: bullseye-updates
# comment in here
CompOnentS: main contrib
languages: it
de
fr
Enabled: off
languages-Add: ja
languages-Remove: de
types: deb deb-src
Uris: http://ftp.at.debian.org/debian
suites: bullseye
# comment in here
components: main contrib

View File

@ -0,0 +1,7 @@
#deb [trusted=yes] cdrom:[Proxmox VE 5.1]/ stretch pve
#deb [trusted=yes] cdrom:[Proxmox VE 5.1]/proxmox/packages/ /
deb [trusted=yes] cdrom:[Proxmox VE 7.0 BETA]/ bullseye pve
deb cdrom:[Proxmox VE 7.0 BETA]/proxmox/packages/ /
deb [ "trusted=yes" ] cdrom:[Debian GNU/Linux 10.6.0 _Buster_ - Official amd64 NETINST 20200926-10:16]/ buster main

View File

@ -0,0 +1,2 @@
deb [trusted=yes] "file:///some/spacey/mount point/" bullseye pve
deb [lang="it"] file:///some/spacey/"mount point"/proxmox/packages/ /

View File

@ -0,0 +1,11 @@
Types: deb deb-src
URIs: http://ftp.at.debian.org/debian
Suites: bullseye bullseye-updates
# comment in here
Components: main contrib
Languages: it
de
fr
Enabled: off
Languages-Add: ja
Languages-Remove: de

View File

@ -0,0 +1,3 @@
deb [ lang=it,de arch=amd64 ] http://ftp.at.debian.org/debian bullseye main contrib # comment
deb [ lang=it,de arch=amd64 lang+=fr lang-=de ] http://ftp.at.debian.org/debian bullseye non-free # non-free :(

View File

@ -0,0 +1 @@
deb https://enterprise.proxmox.com/debian/pbs bullseye pbs-enterprise

View File

@ -0,0 +1,6 @@
deb https://enterprise.proxmox.com/debian bullseye pve-enterprise
deb http://download.proxmox.com/debian/ bullseye pve-no-subscription
# deb http://download.proxmox.com/debian bullseye pvetest
deb-src http://download.proxmox.com/debian bullseye pvetest

View File

@ -0,0 +1,12 @@
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise
deb-src https://enterprise.proxmox.com/debian/pve buster pve-enterprise
# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib

View File

@ -0,0 +1,6 @@
deb http://ftp.at.debian.org/debian bullseye main contrib
deb http://ftp.at.debian.org/debian bullseye-updates main contrib
# security updates
deb http://security.debian.org bullseye-security main contrib

View File

@ -0,0 +1,10 @@
Types: deb
URIs: http://ftp.at.debian.org/debian
Suites: bullseye bullseye-updates
Components: main contrib
# security updates
Types: deb
URIs: http://security.debian.org
Suites: bullseye-security
Components: main contrib

proxmox-async/Cargo.toml
View File

@ -0,0 +1,23 @@
[package]
name = "proxmox-async"
version = "0.4.1"
authors.workspace = true
edition.workspace = true
license.workspace = true
repository.workspace = true
description = "Proxmox async/tokio helpers"
exclude.workspace = true
[dependencies]
anyhow.workspace = true
futures.workspace = true
lazy_static.workspace = true
pin-utils.workspace = true
tokio = { workspace = true, features = [ "net", "rt", "rt-multi-thread", "sync"] }
proxmox-io = { workspace = true, features = [ "tokio" ] }
proxmox-lang.workspace = true
[dev-dependencies]
tokio = { workspace = true, features = [ "macros" ] }

View File

@ -0,0 +1,75 @@
rust-proxmox-async (0.4.1) unstable; urgency=medium
* add SenderWriter
-- Proxmox Support Team <support@proxmox.com> Tue, 12 Apr 2022 14:21:53 +0200
rust-proxmox-async (0.4.0) unstable; urgency=medium
* use io error macros from proxmox-lang 1.1 instead of proxmox-sys
* drop compression code (moved to proxmox-compression)
-- Proxmox Support Team <support@proxmox.com> Mon, 21 Feb 2022 14:17:39 +0100
rust-proxmox-async (0.3.3) unstable; urgency=medium
* add net::udp::connect() helper
-- Proxmox Support Team <support@proxmox.com> Wed, 02 Feb 2022 12:57:41 +0100
rust-proxmox-async (0.3.2) unstable; urgency=medium
* replace RawWaker with the Wake trait from std, fixes a refcount leak
-- Proxmox Support Team <support@proxmox.com> Thu, 20 Jan 2022 10:08:25 +0100
rust-proxmox-async (0.3.1) unstable; urgency=medium
* fix #3618: proxmox-async: zip: add conditional EFS flag to zip files
-- Proxmox Support Team <support@proxmox.com> Wed, 12 Jan 2022 15:46:48 +0100
rust-proxmox-async (0.3.0) unstable; urgency=medium
* rebuild using proxmox-sys 0.2.0
-- Proxmox Support Team <support@proxmox.com> Tue, 23 Nov 2021 12:17:49 +0100
rust-proxmox-async (0.2.0) stable; urgency=medium
* improve dev docs
* move AsyncChannelWriter to src/io
* move TokioWriterAdapter to blocking
* remove duplicate src/stream/wrapped_reader_stream.rs
* split stream.rs into separate files
* split blocking.rs into separate files
* add copyright file
-- Proxmox Support Team <support@proxmox.com> Sat, 20 Nov 2021 16:54:58 +0100
rust-proxmox-async (0.1.0) stable; urgency=medium
* imported pbs-tools/src/zip.rs
* imported pbs-tools/src/compression.rs
* imported pbs-tools/src/tokio/tokio_writer_adapter.rs
* imported pbs-tools/src/stream.rs
* imported pbs-tools/src/broadcast_future.rs
* imported pbs-tools/src/blocking.rs
* imported pbs-runtime/src/lib.rs to runtime.rs
* initial release
-- Proxmox Support Team <support@proxmox.com> Fri, 19 Nov 2021 15:43:44 +0100

View File

@ -0,0 +1,59 @@
Source: rust-proxmox-async
Section: rust
Priority: optional
Build-Depends: debhelper (>= 12),
dh-cargo (>= 25),
cargo:native <!nocheck>,
rustc:native <!nocheck>,
libstd-rust-dev <!nocheck>,
librust-anyhow-1+default-dev <!nocheck>,
librust-futures-0.3+default-dev <!nocheck>,
librust-lazy-static-1+default-dev (>= 1.4-~~) <!nocheck>,
librust-pin-utils-0.1+default-dev <!nocheck>,
librust-proxmox-io-1+default-dev <!nocheck>,
librust-proxmox-io-1+tokio-dev <!nocheck>,
librust-proxmox-lang-1+default-dev (>= 1.1-~~) <!nocheck>,
librust-tokio-1+default-dev (>= 1.6-~~) <!nocheck>,
librust-tokio-1+net-dev (>= 1.6-~~) <!nocheck>,
librust-tokio-1+rt-dev (>= 1.6-~~) <!nocheck>,
librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~) <!nocheck>,
librust-tokio-1+sync-dev (>= 1.6-~~) <!nocheck>,
libssl-dev <!nocheck>,
uuid-dev <!nocheck>
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.6.1
Vcs-Git: git://git.proxmox.com/git/proxmox.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
X-Cargo-Crate: proxmox-async
Rules-Requires-Root: no
Package: librust-proxmox-async-dev
Architecture: any
Multi-Arch: same
Depends:
${misc:Depends},
librust-anyhow-1+default-dev,
librust-futures-0.3+default-dev,
librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-io-1+default-dev,
librust-proxmox-io-1+tokio-dev,
librust-proxmox-lang-1+default-dev (>= 1.1-~~),
librust-tokio-1+default-dev (>= 1.6-~~),
librust-tokio-1+net-dev (>= 1.6-~~),
librust-tokio-1+rt-dev (>= 1.6-~~),
librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
librust-tokio-1+sync-dev (>= 1.6-~~),
libssl-dev,
uuid-dev
Provides:
librust-proxmox-async+default-dev (= ${binary:Version}),
librust-proxmox-async-0-dev (= ${binary:Version}),
librust-proxmox-async-0+default-dev (= ${binary:Version}),
librust-proxmox-async-0.4-dev (= ${binary:Version}),
librust-proxmox-async-0.4+default-dev (= ${binary:Version}),
librust-proxmox-async-0.4.1-dev (= ${binary:Version}),
librust-proxmox-async-0.4.1+default-dev (= ${binary:Version})
Description: Proxmox async/tokio helpers - Rust source code
This package contains the source for the Rust proxmox-async crate, packaged by
debcargo for use with cargo and dh-cargo.

View File

@ -0,0 +1,18 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Files:
*
Copyright: 2019 - 2023 Proxmox Server Solutions GmbH <support@proxmox.com>
License: AGPL-3.0-or-later
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU Affero General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option) any
later version.
.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
details.
.
You should have received a copy of the GNU Affero General Public License along
with this program. If not, see <https://www.gnu.org/licenses/>.

View File

@ -0,0 +1,10 @@
overlay = "."
crate_src_path = ".."
maintainer = "Proxmox Support Team <support@proxmox.com>"
[source]
vcs_git = "git://git.proxmox.com/git/proxmox.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
[packages.lib]
depends = [ "libssl-dev", "uuid-dev" ]

View File

@ -0,0 +1,14 @@
//! Async wrappers for blocking I/O (adding `block_in_place` around
//! channels/readers)
mod std_channel_stream;
pub use std_channel_stream::StdChannelStream;
mod tokio_writer_adapter;
pub use tokio_writer_adapter::TokioWriterAdapter;
mod wrapped_reader_stream;
pub use wrapped_reader_stream::WrappedReaderStream;
mod sender_writer;
pub use sender_writer::SenderWriter;

View File

@ -0,0 +1,47 @@
use std::io;
use anyhow::Error;
use tokio::sync::mpsc::Sender;
/// Wrapper struct around [`tokio::sync::mpsc::Sender`] for `Result<Vec<u8>, Error>` that implements [`std::io::Write`]
pub struct SenderWriter {
sender: Sender<Result<Vec<u8>, Error>>,
}
impl SenderWriter {
pub fn from_sender(sender: tokio::sync::mpsc::Sender<Result<Vec<u8>, Error>>) -> Self {
Self { sender }
}
fn write_impl(&mut self, buf: &[u8]) -> io::Result<usize> {
if let Err(err) = self.sender.blocking_send(Ok(buf.to_vec())) {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
format!("could not send: {}", err),
));
}
Ok(buf.len())
}
fn flush_impl(&mut self) -> io::Result<()> {
Ok(())
}
}
impl io::Write for SenderWriter {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.write_impl(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.flush_impl()
}
}
impl Drop for SenderWriter {
fn drop(&mut self) {
// ignore errors
let _ = self.flush_impl();
}
}
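
Since SenderWriter only implements the blocking std::io::Write trait, it is meant to be driven from a blocking thread while an async task drains the channel. A minimal sketch, assuming a spawn_blocking producer and an illustrative channel capacity of 16:

use std::io::Write;

use anyhow::Error;
use proxmox_async::blocking::SenderWriter;

#[tokio::main(flavor = "multi_thread")]
async fn main() -> Result<(), Error> {
    let (tx, mut rx) = tokio::sync::mpsc::channel::<Result<Vec<u8>, Error>>(16);

    // blocking producer: anything expecting `impl Write` can use SenderWriter
    let producer = tokio::task::spawn_blocking(move || -> Result<(), Error> {
        let mut writer = SenderWriter::from_sender(tx);
        writer.write_all(b"hello from a blocking writer")?;
        Ok(())
    });

    // async consumer: receives the written chunks as Vec<u8>
    while let Some(chunk) = rx.recv().await {
        println!("got {} bytes", chunk?.len());
    }

    producer.await??;
    Ok(())
}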

View File

@ -0,0 +1,21 @@
use std::pin::Pin;
use std::sync::mpsc::Receiver;
use std::task::{Context, Poll};
use futures::stream::Stream;
use crate::runtime::block_in_place;
/// Wrapper struct to convert a sync channel [Receiver] into a [Stream]
pub struct StdChannelStream<T>(pub Receiver<T>);
impl<T> Stream for StdChannelStream<T> {
type Item = T;
fn poll_next(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Option<Self::Item>> {
match block_in_place(|| self.0.recv()) {
Ok(data) => Poll::Ready(Some(data)),
Err(_) => Poll::Ready(None), // channel closed
}
}
}
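
A minimal sketch of draining a standard library channel as an async stream; the producer thread and the three dummy items are illustrative assumptions:

use futures::StreamExt;
use proxmox_async::blocking::StdChannelStream;

#[tokio::main(flavor = "multi_thread")]
async fn main() {
    let (tx, rx) = std::sync::mpsc::channel();

    std::thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).unwrap();
        }
        // dropping `tx` closes the channel and ends the stream
    });

    let mut stream = StdChannelStream(rx);
    while let Some(item) = stream.next().await {
        println!("received {item}");
    }
}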

View File

@ -0,0 +1,26 @@
use std::io::Write;
use tokio::task::block_in_place;
/// Wrapper around [Write] which adds [block_in_place]
///
/// Wraps each write with a [block_in_place] so that any (blocking) writer
/// can be safely used in an async context inside a tokio runtime.
pub struct TokioWriterAdapter<W: Write>(W);
impl<W: Write> TokioWriterAdapter<W> {
pub fn new(writer: W) -> Self {
Self(writer)
}
}
impl<W: Write> Write for TokioWriterAdapter<W> {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
block_in_place(|| self.0.write(buf))
}
fn flush(&mut self) -> Result<(), std::io::Error> {
block_in_place(|| self.0.flush())
}
}
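
A minimal sketch of wrapping a blocking writer so it can be used from a tokio worker thread; using stdout here is an illustrative choice. Note that tokio::task::block_in_place requires the multi-threaded runtime flavor:

use std::io::Write;

use proxmox_async::blocking::TokioWriterAdapter;

#[tokio::main(flavor = "multi_thread")]
async fn main() -> std::io::Result<()> {
    // any blocking `Write` works, e.g. a file or a socket
    let mut writer = TokioWriterAdapter::new(std::io::stdout());
    writer.write_all(b"written via block_in_place\n")?;
    writer.flush()?;
    Ok(())
}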

View File

@ -0,0 +1,82 @@
use std::io::{self, Read};
use std::pin::Pin;
use std::task::{Context, Poll};
use futures::stream::Stream;
use proxmox_io::vec;
use crate::runtime::block_in_place;
/// Wrapper struct to convert a sync [Read] into a [Stream]
pub struct WrappedReaderStream<R: Read + Unpin> {
reader: R,
buffer: Vec<u8>,
}
impl<R: Read + Unpin> WrappedReaderStream<R> {
pub fn new(reader: R) -> Self {
Self {
reader,
buffer: vec::undefined(64 * 1024),
}
}
}
impl<R: Read + Unpin> Stream for WrappedReaderStream<R> {
type Item = Result<Vec<u8>, io::Error>;
fn poll_next(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Option<Self::Item>> {
let this = self.get_mut();
match block_in_place(|| this.reader.read(&mut this.buffer)) {
Ok(n) => {
if n == 0 {
// EOF
Poll::Ready(None)
} else {
Poll::Ready(Some(Ok(this.buffer[..n].to_vec())))
}
}
Err(err) => Poll::Ready(Some(Err(err))),
}
}
}
#[cfg(test)]
mod test {
use std::io;
use anyhow::Error;
use futures::stream::TryStreamExt;
#[test]
fn test_wrapped_stream_reader() -> Result<(), Error> {
crate::runtime::main(async { run_wrapped_stream_reader_test().await })
}
struct DummyReader(usize);
impl io::Read for DummyReader {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0 += 1;
if self.0 >= 10 {
return Ok(0);
}
unsafe {
std::ptr::write_bytes(buf.as_mut_ptr(), 0, buf.len());
}
Ok(buf.len())
}
}
async fn run_wrapped_stream_reader_test() -> Result<(), Error> {
let mut reader = super::WrappedReaderStream::new(DummyReader(0));
while let Some(_data) = reader.try_next().await? {
// just waiting
}
Ok(())
}
}

View File

@ -0,0 +1,188 @@
//! Broadcast results to registered listeners
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use anyhow::{format_err, Error};
use futures::future::{FutureExt, TryFutureExt};
use tokio::sync::oneshot;
/// Broadcast results to registered listeners using async oneshot channels
#[derive(Default)]
pub struct BroadcastData<T> {
result: Option<Result<T, String>>,
listeners: Vec<oneshot::Sender<Result<T, Error>>>,
}
impl<T: Clone> BroadcastData<T> {
pub fn new() -> Self {
Self {
result: None,
listeners: vec![],
}
}
pub fn notify_listeners(&mut self, result: Result<T, String>) {
self.result = Some(result.clone());
loop {
match self.listeners.pop() {
None => {
break;
}
Some(ch) => match &result {
Ok(result) => {
let _ = ch.send(Ok(result.clone()));
}
Err(err) => {
let _ = ch.send(Err(format_err!("{}", err)));
}
},
}
}
}
pub fn listen(&mut self) -> impl Future<Output = Result<T, Error>> {
use futures::future::{ok, Either};
match &self.result {
None => {}
Some(Ok(result)) => return Either::Left(ok(result.clone())),
Some(Err(err)) => return Either::Left(futures::future::err(format_err!("{}", err))),
}
let (tx, rx) = oneshot::channel::<Result<T, Error>>();
self.listeners.push(tx);
Either::Right(rx.map(|res| match res {
Ok(Ok(t)) => Ok(t),
Ok(Err(e)) => Err(e),
Err(e) => Err(Error::from(e)),
}))
}
}
type SourceFuture<T> = Pin<Box<dyn Future<Output = Result<T, Error>> + Send>>;
struct BroadCastFutureBinding<T> {
broadcast: BroadcastData<T>,
future: Option<SourceFuture<T>>,
}
/// Broadcast future results to registered listeners
pub struct BroadcastFuture<T> {
inner: Arc<Mutex<BroadCastFutureBinding<T>>>,
}
impl<T: Clone + Send + 'static> BroadcastFuture<T> {
/// Create instance for specified source future.
///
/// The result of the future is sent to all registered listeners.
pub fn new(source: Box<dyn Future<Output = Result<T, Error>> + Send>) -> Self {
let inner = BroadCastFutureBinding {
broadcast: BroadcastData::new(),
future: Some(Pin::from(source)),
};
Self {
inner: Arc::new(Mutex::new(inner)),
}
}
/// Creates a new instance with a oneshot channel as trigger
pub fn new_oneshot() -> (Self, oneshot::Sender<Result<T, Error>>) {
let (tx, rx) = oneshot::channel::<Result<T, Error>>();
let rx = rx.map_err(Error::from).and_then(futures::future::ready);
(Self::new(Box::new(rx)), tx)
}
fn notify_listeners(inner: Arc<Mutex<BroadCastFutureBinding<T>>>, result: Result<T, String>) {
let mut data = inner.lock().unwrap();
data.broadcast.notify_listeners(result);
}
fn spawn(
inner: Arc<Mutex<BroadCastFutureBinding<T>>>,
) -> impl Future<Output = Result<T, Error>> {
let mut data = inner.lock().unwrap();
if let Some(source) = data.future.take() {
let inner1 = inner.clone();
let task = source.map(move |value| match value {
Ok(value) => Self::notify_listeners(inner1, Ok(value)),
Err(err) => Self::notify_listeners(inner1, Err(err.to_string())),
});
tokio::spawn(task);
}
data.broadcast.listen()
}
/// Register a listener
pub fn listen(&self) -> impl Future<Output = Result<T, Error>> {
let inner2 = self.inner.clone();
async move { Self::spawn(inner2).await }
}
}
#[test]
fn test_broadcast_future() {
use std::sync::atomic::{AtomicUsize, Ordering};
static CHECKSUM: AtomicUsize = AtomicUsize::new(0);
let (sender, trigger) = BroadcastFuture::new_oneshot();
let receiver1 = sender
.listen()
.map_ok(|res| {
CHECKSUM.fetch_add(res, Ordering::SeqCst);
})
.map_err(|err| {
panic!("got error {}", err);
})
.map(|_| ());
let receiver2 = sender
.listen()
.map_ok(|res| {
CHECKSUM.fetch_add(res * 2, Ordering::SeqCst);
})
.map_err(|err| {
panic!("got error {}", err);
})
.map(|_| ());
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async move {
let r1 = tokio::spawn(receiver1);
let r2 = tokio::spawn(receiver2);
trigger.send(Ok(1)).unwrap();
let _ = r1.await;
let _ = r2.await;
});
let result = CHECKSUM.load(Ordering::SeqCst);
assert_eq!(result, 3);
// the result stays available until the BroadcastFuture is dropped
rt.block_on(
sender
.listen()
.map_ok(|res| {
CHECKSUM.fetch_add(res * 4, Ordering::SeqCst);
})
.map_err(|err| {
panic!("got error {}", err);
})
.map(|_| ()),
);
let result = CHECKSUM.load(Ordering::SeqCst);
assert_eq!(result, 7);
}

View File

@ -0,0 +1,108 @@
//! Wrappers between async readers and streams.
use std::future::Future;
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use anyhow::{Error, Result};
use futures::future::FutureExt;
use futures::ready;
use tokio::io::AsyncWrite;
use tokio::sync::mpsc::Sender;
use proxmox_io::ByteBuffer;
use proxmox_lang::{error::io_err_other, io_format_err};
/// Wrapper around [`tokio::sync::mpsc::Sender`], which implements [`AsyncWrite`]
pub struct AsyncChannelWriter {
sender: Option<Sender<Result<Vec<u8>, Error>>>,
buf: ByteBuffer,
state: WriterState,
}
type SendResult = io::Result<Sender<Result<Vec<u8>>>>;
enum WriterState {
Ready,
Sending(Pin<Box<dyn Future<Output = SendResult> + Send + 'static>>),
}
impl AsyncChannelWriter {
pub fn new(sender: Sender<Result<Vec<u8>, Error>>, buf_size: usize) -> Self {
Self {
sender: Some(sender),
buf: ByteBuffer::with_capacity(buf_size),
state: WriterState::Ready,
}
}
fn poll_write_impl(
&mut self,
cx: &mut Context,
buf: &[u8],
flush: bool,
) -> Poll<io::Result<usize>> {
loop {
match &mut self.state {
WriterState::Ready => {
if flush {
if self.buf.is_empty() {
return Poll::Ready(Ok(0));
}
} else {
let free_size = self.buf.free_size();
if free_size > buf.len() || self.buf.is_empty() {
let count = free_size.min(buf.len());
self.buf.get_free_mut_slice()[..count].copy_from_slice(&buf[..count]);
self.buf.add_size(count);
return Poll::Ready(Ok(count));
}
}
let sender = match self.sender.take() {
Some(sender) => sender,
None => return Poll::Ready(Err(io_err_other("no sender"))),
};
let data = self.buf.remove_data(self.buf.len()).to_vec();
let future = async move {
sender
.send(Ok(data))
.await
.map(move |_| sender)
.map_err(|err| io_format_err!("could not send: {}", err))
};
self.state = WriterState::Sending(future.boxed());
}
WriterState::Sending(ref mut future) => match ready!(future.as_mut().poll(cx)) {
Ok(sender) => {
self.sender = Some(sender);
self.state = WriterState::Ready;
}
Err(err) => return Poll::Ready(Err(err)),
},
}
}
}
}
impl AsyncWrite for AsyncChannelWriter {
fn poll_write(self: Pin<&mut Self>, cx: &mut Context, buf: &[u8]) -> Poll<io::Result<usize>> {
let this = self.get_mut();
this.poll_write_impl(cx, buf, false)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<io::Result<()>> {
let this = self.get_mut();
match ready!(this.poll_write_impl(cx, &[], true)) {
Ok(_) => Poll::Ready(Ok(())),
Err(err) => Poll::Ready(Err(err)),
}
}
fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context) -> Poll<io::Result<()>> {
self.poll_flush(cx)
}
}
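
A minimal sketch of streaming async writes through the channel; the channel capacity, internal buffer size and payload are illustrative assumptions:

use anyhow::Error;
use proxmox_async::io::AsyncChannelWriter;
use tokio::io::AsyncWriteExt;

#[tokio::main(flavor = "multi_thread")]
async fn main() -> Result<(), Error> {
    let (tx, mut rx) = tokio::sync::mpsc::channel::<Result<Vec<u8>, Error>>(4);

    let writer_task = tokio::spawn(async move {
        let mut writer = AsyncChannelWriter::new(tx, 64 * 1024);
        writer.write_all(b"some payload").await?;
        // flush() pushes the internal buffer into the channel
        writer.flush().await?;
        Ok::<_, Error>(())
    });

    // the receiving side gets the buffered data as Vec<u8> chunks
    while let Some(chunk) = rx.recv().await {
        println!("received {} bytes", chunk?.len());
    }

    writer_task.await??;
    Ok(())
}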

View File

@ -0,0 +1,4 @@
//! Helper which implements AsyncRead/AsyncWrite
mod async_channel_writer;
pub use async_channel_writer::AsyncChannelWriter;

proxmox-async/src/lib.rs
View File

@ -0,0 +1,6 @@
pub mod blocking;
pub mod broadcast_future;
pub mod io;
pub mod net;
pub mod runtime;
pub mod stream;

View File

@ -0,0 +1 @@
pub mod udp;

View File

@ -0,0 +1,36 @@
use std::io;
use std::net::{Ipv4Addr, Ipv6Addr, SocketAddr};
use tokio::net::{ToSocketAddrs, UdpSocket};
/// Helper to connect to UDP addresses without having to manually bind to the correct IP address
pub async fn connect<A: ToSocketAddrs>(addr: A) -> io::Result<UdpSocket> {
let mut last_err = None;
for address in tokio::net::lookup_host(&addr).await? {
let bind_address = match address {
SocketAddr::V4(_) => SocketAddr::new(Ipv4Addr::UNSPECIFIED.into(), 0),
SocketAddr::V6(_) => SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 0),
};
let socket = match UdpSocket::bind(bind_address).await {
Ok(sock) => sock,
Err(err) => {
last_err = Some(err);
continue;
}
};
match socket.connect(address).await {
Ok(()) => return Ok(socket),
Err(err) => {
last_err = Some(err);
continue;
}
}
}
Err(last_err.unwrap_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"could not resolve to any addresses",
)
}))
}
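
A minimal sketch of using the helper; the target address is an illustrative assumption:

use proxmox_async::net::udp;

#[tokio::main(flavor = "multi_thread")]
async fn main() -> std::io::Result<()> {
    // resolves the name, binds to the matching unspecified address and connects
    let socket = udp::connect("dns.example.com:53").await?;
    socket.send(b"ping").await?;
    println!("local address: {}", socket.local_addr()?);
    Ok(())
}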

View File

@ -0,0 +1,182 @@
//! Helpers for quirks of the current tokio runtime.
use std::cell::RefCell;
use std::future::Future;
use std::sync::{Arc, Mutex, Weak};
use std::task::{Context, Poll, Waker};
use std::thread::{self, Thread};
use lazy_static::lazy_static;
use pin_utils::pin_mut;
use tokio::runtime::{self, Runtime};
thread_local! {
static BLOCKING: RefCell<bool> = RefCell::new(false);
}
fn is_in_tokio() -> bool {
tokio::runtime::Handle::try_current().is_ok()
}
fn is_blocking() -> bool {
BLOCKING.with(|v| *v.borrow())
}
struct BlockingGuard(bool);
impl BlockingGuard {
fn set() -> Self {
Self(BLOCKING.with(|v| {
let old = *v.borrow();
*v.borrow_mut() = true;
old
}))
}
}
impl Drop for BlockingGuard {
fn drop(&mut self) {
BLOCKING.with(|v| {
*v.borrow_mut() = self.0;
});
}
}
lazy_static! {
// avoid openssl bug: https://github.com/openssl/openssl/issues/6214
// by dropping the runtime as early as possible
static ref RUNTIME: Mutex<Weak<Runtime>> = Mutex::new(Weak::new());
}
#[link(name = "crypto")]
extern "C" {
fn OPENSSL_thread_stop();
}
/// Get or create the current main tokio runtime.
///
/// This makes sure that tokio's worker threads are marked for us so that we know whether we
/// can/need to use `block_in_place` in our `block_on` helper.
pub fn get_runtime_with_builder<F: Fn() -> runtime::Builder>(get_builder: F) -> Arc<Runtime> {
let mut guard = RUNTIME.lock().unwrap();
if let Some(rt) = guard.upgrade() {
return rt;
}
let mut builder = get_builder();
builder.on_thread_stop(|| {
// avoid openssl bug: https://github.com/openssl/openssl/issues/6214
// call OPENSSL_thread_stop to avoid race with openssl cleanup handlers
unsafe {
OPENSSL_thread_stop();
}
});
let runtime = builder.build().expect("failed to spawn tokio runtime");
let rt = Arc::new(runtime);
*guard = Arc::downgrade(&rt);
rt
}
/// Get or create the current main tokio runtime.
///
/// This calls get_runtime_with_builder() using the tokio default threaded scheduler
pub fn get_runtime() -> Arc<Runtime> {
get_runtime_with_builder(|| {
let mut builder = runtime::Builder::new_multi_thread();
builder.enable_all();
builder
})
}
/// Block on a synchronous piece of code.
pub fn block_in_place<R>(fut: impl FnOnce() -> R) -> R {
// don't double-exit the context (tokio doesn't like that)
// also, if we're not actually in a tokio-worker we must not use block_in_place() either
if is_blocking() || !is_in_tokio() {
fut()
} else {
// we are in an actual tokio worker thread, block it:
tokio::task::block_in_place(move || {
let _guard = BlockingGuard::set();
fut()
})
}
}
/// Block on a future in this thread.
pub fn block_on<F: Future>(fut: F) -> F::Output {
// don't double-exit the context (tokio doesn't like that)
if is_blocking() {
block_on_local_future(fut)
} else if is_in_tokio() {
// inside a tokio worker we need to tell tokio that we're about to really block:
tokio::task::block_in_place(move || {
let _guard = BlockingGuard::set();
block_on_local_future(fut)
})
} else {
// not a worker thread, not associated with a runtime, make sure we have a runtime (spawn
// it on demand if necessary), then enter it
let _guard = BlockingGuard::set();
let _enter_guard = get_runtime().enter();
get_runtime().block_on(fut)
}
}
/*
fn block_on_impl<F>(mut fut: F) -> F::Output
where
F: Future + Send,
F::Output: Send + 'static,
{
let (tx, rx) = tokio::sync::oneshot::channel();
let fut_ptr = &mut fut as *mut F as usize; // hack to not require F to be 'static
tokio::spawn(async move {
let fut: F = unsafe { std::ptr::read(fut_ptr as *mut F) };
tx
.send(fut.await)
.map_err(drop)
.expect("failed to send block_on result to channel")
});
futures::executor::block_on(async move {
rx.await.expect("failed to receive block_on result from channel")
})
std::mem::forget(fut);
}
*/
/// This used to be our tokio main entry point. Now this just calls out to `block_on` for
/// compatibility, which will perform all the necessary tasks on-demand anyway.
pub fn main<F: Future>(fut: F) -> F::Output {
block_on(fut)
}
struct ThreadWaker(Thread);
impl std::task::Wake for ThreadWaker {
fn wake(self: Arc<Self>) {
self.0.unpark();
}
fn wake_by_ref(self: &Arc<Self>) {
self.0.unpark();
}
}
fn block_on_local_future<F: Future>(fut: F) -> F::Output {
pin_mut!(fut);
let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
let mut context = Context::from_waker(&waker);
loop {
match fut.as_mut().poll(&mut context) {
Poll::Ready(out) => return out,
Poll::Pending => thread::park(),
}
}
}
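
A minimal sketch of driving a future from plain synchronous code; block_on creates (or reuses) the shared runtime on demand, so no explicit tokio setup is needed. The yield_now call is just an illustrative stand-in for real async work:

use proxmox_async::runtime;

fn main() {
    let answer = runtime::block_on(async {
        tokio::task::yield_now().await;
        42
    });
    println!("computed {answer}");
}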

Some files were not shown because too many files have changed in this diff.