Compare commits

165 Commits

Author SHA1 Message Date
fe85fb7b32 cargo vendor 2024-11-07 15:06:21 +03:00
98cad74cd7 Provided APT perl module installation in .spec 2024-10-31 15:06:25 +03:00
3d83ef76a8 Cargo vendor 2024-10-31 13:33:41 +03:00
7e0b4422e8 Release alt2 2024-10-31 13:21:55 +03:00
16b9e5b03a Reverted apt module disabling 2024-10-31 13:16:15 +03:00
c2fb7aee65 BuildRequires for apt-pkg-native 2024-10-28 18:39:49 +03:00
c60ecec147 git ignored vendored .lock file 2024-10-28 18:26:55 +03:00
685ab6f86c cargo vendor 2024-10-28 18:19:13 +03:00
e0cff923ae alt-linux feature is used by default (maybe it shouldn't be) 2024-10-28 18:13:11 +03:00
a47c7bac89 Deps path overrides (temporarily refers to Gitea) and alt-linux feature in .toml 2024-10-28 17:53:55 +03:00
843f7a8f6a Revert: ALT: don't use proxmox-apt dependency
Revert da7d89e229
2024-10-28 17:26:11 +03:00
Alexander Burmatov
fb259594b4 0.3.4-alt1
- Update:
  + libproxmox-rs-perl 0.3.4
  + libpve-rs-perl 0.8.10
2024-09-09 18:51:44 +03:00
Alexander Burmatov
304bbbf0b7 cargo vendor 2024-09-09 18:35:32 +03:00
Alexander Burmatov
97190b9336 Local path overrides 2024-09-09 18:32:22 +03:00
Alexander Burmatov
48384cab82 Merge remote-tracking branch 'upstream/master'
Revert the clearing of common/src/apt/repositories.rs once proxmox-apt
is ready.
2024-09-09 18:18:40 +03:00
Alexander Burmatov
a63e5fa613 Revert "ALT: fix dependencies overrides"
This reverts commit a8f1616aab.
2024-09-09 16:39:24 +03:00
Alexander Burmatov
9113dad57e 0.3.3-alt3
- some notifications fixes (thx andy@)
- Update:
  + libpve-rs-perl 0.8.9
2024-09-06 16:27:24 +03:00
Alexander Burmatov
da7d89e229 ALT: don't use proxmox-apt dependency
Revert this commit once proxmox-apt is ready.
2024-09-06 02:01:48 +03:00
Alexander Burmatov
a8f1616aab ALT: fix dependencies overrides
Revert this in the next version.
2024-09-06 02:01:48 +03:00
Alexander Burmatov
0bb97dbe0f cargo vendor 2024-09-06 02:01:40 +03:00
Alexander Burmatov
e90948a803 ALT: fix deprecated warning 2024-09-05 19:19:29 +03:00
Alexander Burmatov
a976211471 Merge commit 'upstream' 2024-09-05 17:58:52 +03:00
Wolfgang Bumiller
ae27b307b8 pve: fix use vs mod grouping
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-29 10:31:31 +02:00
Wolfgang Bumiller
dfd8a2e321 bump common to 0.3.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-09 14:22:00 +02:00
Wolfgang Bumiller
a3e466af88 bump pmg-rs to 0.7.6
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-09 14:20:50 +02:00
Wolfgang Bumiller
cdc792005e pve: bump version to 0.8.10
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-09 13:46:22 +02:00
Lukas Wagner
ea4d87816b cache: add bindings for SharedCache
This is a simple cache implementation that can be accessed from
multiple processes. It also supports storing a range of historical
values.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
[wb: also update pmg-rs/Cargo.toml and both d/control files]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-09 13:21:01 +02:00
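A minimal sketch of the idea, assuming nothing about the actual proxmox-shared-cache API (all names below are invented for illustration): a file replaced via atomic rename can safely be shared between processes, and keeping the last N entries is one way to retain historical values.

use std::fs::{self, OpenOptions};
use std::io::Write;

// Illustrative only - not the proxmox-shared-cache API.
fn store_keep_last(path: &str, keep: usize, value: &str) -> std::io::Result<()> {
    // Read the existing history, if any.
    let mut entries: Vec<String> = fs::read_to_string(path)
        .map(|s| s.lines().map(str::to_string).collect())
        .unwrap_or_default();
    entries.push(value.to_string());
    // Keep only the most recent `keep` entries.
    let start = entries.len().saturating_sub(keep);
    // Write a temp file, then rename it over the target: readers see either
    // the old or the new state, never a partial write (atomic on POSIX).
    let tmp = format!("{path}.tmp");
    let mut f = OpenOptions::new().write(true).create(true).truncate(true).open(&tmp)?;
    writeln!(f, "{}", entries[start..].join("\n"))?;
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    store_keep_last("/tmp/shared-cache-demo", 3, "datapoint")
}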
Fabian Grünbichler
9a91594ee6 update to proxmox-log 0.2
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-08-06 14:13:31 +02:00
Wolfgang Bumiller
885830935c update to sys 0.6 and proxmox-log crate
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-17 12:44:46 +02:00
Wolfgang Bumiller
b3b8b375c2 apt: minor parameter cleanup
We cannot use `&[&str]`, since this would be a pointer to a `[&str]`
data structure, and that's not how perl stores strings.
But we *can* use `Vec<&str>`: here, the Vec will be allocated, but the
contents will borrow. We don't need to transform this afterwards.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-08 15:33:58 +02:00
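A tiny self-contained illustration of the point above (plain Rust, not the perlmod binding itself): the Vec is one allocation, but its elements only borrow existing strings.

fn collect_refs(values: &[String]) -> Vec<&str> {
    // One allocation for the Vec; each element borrows from `values`.
    values.iter().map(|s| s.as_str()).collect()
}

fn main() {
    let owned = vec!["noble".to_string(), "jammy".to_string()];
    let borrowed = collect_refs(&owned);
    assert_eq!(borrowed, ["noble", "jammy"]);
}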
Lukas Wagner
6789b14986 pve-rs: common: send apt update notification via proxmox-notify
For PMG, we only provide an empty stub for now that warns to syslog -
we need basic notification system integration there first.
PMG still uses a pure Perl implementation at the moment,
so this should not be an issue unless we change that.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-07-08 15:33:58 +02:00
Dietmar Maurer
89d9debadb perl-rs: add further apt api calls
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-07-08 15:33:57 +02:00
Dietmar Maurer
5c994bf942 perl-rs: use api functions from proxmox-apt
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-07-08 15:33:32 +02:00
Dietmar Maurer
9eda29d688 perl-rs: use proxmox-apt-api-types
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-07-08 15:33:29 +02:00
Wolfgang Bumiller
83427e9204 bump proxmox-tfa to 5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-04 10:48:48 +02:00
Wolfgang Bumiller
61b2f69a45 bump proxmox-time to 2.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 14:11:34 +02:00
Fabian Grünbichler
c873ac57d5 build: adapt Makefile to moved cargo config
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-20 12:20:42 +02:00
Lukas Wagner
7e3ea35595 pve-rs: pmg-rs: move deprecated .cargo/config to .cargo/config.toml
Fixes the following new warning that appeared after switching
to rust 1.77:

warning: `proxmox-perl-rs/pve-rs/.cargo/config` is deprecated in
favor of `config.toml`

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-06-20 12:19:01 +02:00
Fabian Grünbichler
a34b31054d build: force debug symbols in release build
they then get stripped into their own package anyway, but without this we don't
get debug symbols at all with rustc >= 1.77

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-20 10:09:19 +02:00
Wolfgang Bumiller
da068b1a47 pmg: add api-types feature of proxmox-acme
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-10 12:47:03 +02:00
Wolfgang Bumiller
cd0e7b8cd2 pve: bump version to 0.8.9
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-04 11:01:22 +02:00
Wolfgang Bumiller
0b6800b0bd pve: the notify changes now break libpve-notify-perl << 8.0.7
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-04 10:42:03 +02:00
Lukas Wagner
dc02255bdc notify: adapt to Option<Vec<T>> to Vec<T> changes in proxmox_notify
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
2024-06-04 10:40:53 +02:00
Lukas Wagner
7ac7fa5b00 notify: don't pass config structs by reference
proxmox_notify's api functions have been changed so that they take
ownership of config structs.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
2024-06-04 10:40:53 +02:00
Lukas Wagner
627a95bf89 notify: use file based notification templates
Instead of passing literal template strings to the notification
system, we now only pass an identifier. This identifier will be used
to load the template files from a product-specific directory.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
2024-06-04 10:40:53 +02:00
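A purely illustrative sketch of the lookup this enables; the real directory layout and file naming used by proxmox-notify are not shown in this log, so the path scheme below is an assumption.

use std::path::PathBuf;

// Hypothetical resolution of a template identifier to a file in a
// product-specific directory, instead of receiving the template text itself.
fn template_path(base_dir: &str, product: &str, template_name: &str) -> PathBuf {
    PathBuf::from(base_dir)
        .join(product)
        .join(format!("{template_name}-body.txt.hbs"))
}

fn main() {
    let p = template_path("/usr/share/proxmox-templates", "pve", "package-updates");
    println!("{}", p.display());
}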
Wolfgang Bumiller
d0633ac98e pve,pmg: bump proxmox-notify to 0.4
build fixes for pve follow

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-04 10:40:53 +02:00
Wolfgang Bumiller
45a1af8ad2 remove old toplevel Makefile
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-04 10:40:53 +02:00
Wolfgang Bumiller
6bed9c40bc remove old rustfmt.toml
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-04 10:40:53 +02:00
Wolfgang Bumiller
2860777e61 buildsys improvements for generated files
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-03 14:00:03 +02:00
Wolfgang Bumiller
4e6598ef85 common: cleanup Proxmox/RS in make clean
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-03 13:25:32 +02:00
Andrew A. Vasilyev
30f39e2340 0.3.3-alt2
- update cargo vendor
2024-04-03 13:46:51 +03:00
Andrew A. Vasilyev
fc51d3f03a ALT: remove dpkg 2024-04-03 13:46:31 +03:00
Andrew A. Vasilyev
ee57ce9bab ALT: remove missing PL_use_safe_putenv 2024-04-03 13:45:56 +03:00
Andrew A. Vasilyev
4cb67f813c cargo vendor 2024-04-03 13:05:29 +03:00
Andrew A. Vasilyev
cdc3fe47c5 0.3.3-alt1
- Update:
  + libproxmox-rs-perl 0.3.3
  + libpve-rs-perl 0.8.8
- new building scheme
2024-02-28 20:26:16 +03:00
Andrew A. Vasilyev
75f1ce8d85 cargo vendor 2024-02-27 23:16:50 +03:00
Andrew A. Vasilyev
bf5c737c8f Merge branch 'upstream' 2024-02-22 17:31:51 +03:00
Wolfgang Bumiller
27a7f2e252 pve: bump version to 0.8.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-01-10 14:20:00 +01:00
Wolfgang Bumiller
199be72401 pve,pmg: bump proxmox-notify dependency to 0.3.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-01-10 14:11:09 +01:00
Wolfgang Bumiller
d6df8340c5 pve, pmg: bump perlmod-bin to 0.3.0-3
fixes a syntax error in the generated pm file

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-01-02 14:20:12 +01:00
Wolfgang Bumiller
427fdb13c0 pve: update testcase PVE.pm with fixed library path
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-18 10:39:30 +01:00
Wolfgang Bumiller
a5330e34d2 pmg: upgrade perlmod-bin dependency to 0.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-18 10:22:18 +01:00
Wolfgang Bumiller
c57e1868e7 pve: upgrade perlmod-bin dependency to 0.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-18 10:22:09 +01:00
Wolfgang Bumiller
ec95bb1c53 pve: build fix
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-18 10:20:16 +01:00
Wolfgang Bumiller
fb5f1be6dc bump pmg-rs to 0.7.5
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-07 09:58:06 +01:00
Wolfgang Bumiller
237b276028 bump common to 0.3.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-07 09:58:06 +01:00
Wolfgang Bumiller
16c41f1a91 pve: load SslProbe in Proxmox/Lib/PVE.pm
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-07 09:56:05 +01:00
Wolfgang Bumiller
86706cc049 pmg: load SslProbe in Proxmox/Lib/PMG.pm
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-07 09:56:05 +01:00
Wolfgang Bumiller
62fc43fea9 common: move probe into a new SslProbe package
Because Proxmox::Lib::Common isn't actually `use`d by most packages.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-07 09:56:05 +01:00
Wolfgang Bumiller
6a31f73fa3 bump common to 0.3.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-06 11:23:09 +01:00
Wolfgang Bumiller
9525623c19 bump pmg-rs to 0.7.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-06 11:23:09 +01:00
Wolfgang Bumiller
089e555d51 fixate openssl-probe dependency, probe env vars in perl
This fixes an issue with `openssl-probe` calling `setenv` (issued via the
`native-tls` crate with the ACME client), which crashes perl.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-06 11:21:25 +01:00
Wolfgang Bumiller
b9185327f4 update d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-04 11:58:36 +01:00
Folke Gleumes
ce550d06e2 acme: add eab fields for pmg
Signed-off-by: Folke Gleumes <f.gleumes@proxmox.com>
2023-12-04 11:54:05 +01:00
Wolfgang Bumiller
5ac44c9fbb pmg: bump acme-rs to 0.5
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-12-04 11:54:05 +01:00
Thomas Lamprecht
4c54abcea8 pve: bump version to 0.8.7
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-11-17 13:42:07 +01:00
Thomas Lamprecht
61ab181b01 cargo: depend on notify 0.3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-11-17 13:38:28 +01:00
Lukas Wagner
036236c278 notify: support 'origin' parameter
This parameter shows the origin of a config entry (builtin,
user-created, modified-builtin)

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-11-17 13:30:57 +01:00
Lukas Wagner
36fbb76145 notify: add 'disable' parameter
This parameter disables a matcher/a target.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-11-17 13:30:57 +01:00
Lukas Wagner
b905cfd03d pve-rs: notify: remove notify_context for PVE
The context has been moved to `proxmox-notify`, since we now also need
it in `proxmox-mail-forward`.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-11-17 13:30:57 +01:00
Lukas Wagner
7f8cb0c5c3 notify: add bindings for smtp API calls
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-11-17 13:30:57 +01:00
Lukas Wagner
29602a4b01 notify: adapt to new matcher-based notification routing
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-11-17 13:30:57 +01:00
Andrew A. Vasilyev
260684a5e7 0.2.1-alt3
- libpve-rs-perl: add linking with libperl (Closes: #48330)
2023-11-07 22:24:28 +03:00
Andrew A. Vasilyev
cd089b4954 libpve-rs-perl: add linking with libperl 2023-11-07 22:24:22 +03:00
Fiona Ebner
bfc7f2c518 pve: test: resource scheduling: add another test where memory is secondary to CPU
but this time, without any start load on the node. This test fails
with librust-proxmox-resource-scheduling-dev=0.2.0-1

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-06 18:21:06 +01:00
Fiona Ebner
14a3de9826 pve: test: resource scheduling: add test where memory is secondary to CPU
because memory usage differences are small.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-06 18:21:06 +01:00
Wolfgang Bumiller
a04d26b0d2 expose use_safe_putenv via Proxmox::Lib::PMG
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-10-09 08:47:23 +02:00
Wolfgang Bumiller
c8d4db7836 bump perlmod to 0.13.4 for use_safe_putenv
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-10-06 09:09:15 +02:00
Wolfgang Bumiller
1c2ff27e75 pve: switch openid to use magic
Instead of blessed raw pointers, as these can easily lead to double-free
corruptions when they're copied in perl.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-10-04 09:09:45 +02:00
Wolfgang Bumiller
e3bc763de4 pmg: switch acme to use magic
Instead of blessed raw pointers, as these can easily lead to double-free
corruptions when they're copied in perl.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-10-04 09:07:23 +02:00
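The pattern both of these commits switch to appears verbatim in common/src/notify.rs further down this page; here is a trimmed sketch of it, with the type and Perl package names invented for illustration.

#[perlmod::package(name = "PMG::RS::Example")]
mod export {
    use anyhow::Error;
    use perlmod::Value;

    pub struct Example {
        counter: u64,
    }

    // Magic attaches the Rust box to the Perl object; unlike a blessed raw
    // pointer, a Perl-level copy of the object cannot cause a double free.
    perlmod::declare_magic!(Box<Example> : &Example as "PMG::RS::Example");

    #[export(raw_return)]
    fn new(#[raw] class: Value) -> Result<Value, Error> {
        Ok(perlmod::instantiate_magic!(&class, MAGIC => Box::new(Example { counter: 0 })))
    }

    #[export]
    fn count(#[try_from_ref] this: &Example) -> u64 {
        this.counter
    }
}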
Wolfgang Bumiller
e9c2ba606d pmg: buildsys: add notify related dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-10-04 09:07:23 +02:00
Fabian Grünbichler
4c6cc7e241 update to env_logger 0.10
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-10-03 10:00:27 +02:00
Lukas Wagner
50f372fe7e notify context: fix 'default_sendmail_from' context method
The name of the configuration option in datacenter.cfg is `email_from`
and not `mail_from`.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-10-02 12:31:11 +02:00
Thomas Lamprecht
8d031134e1 bump version to 0.8.6
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-09-05 16:32:45 +02:00
Wolfgang Bumiller
e52b4ea877 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-28 11:51:53 +02:00
Wolfgang Bumiller
8ff4471ee6 bump notify dependency
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-28 11:51:28 +02:00
Lukas Wagner
76b63ed6a8 notify: use new HttpError type
Use `proxmox-http-error::HttpError` instead of
`proxmox-notify::api::ApiError`.

Also factoring out the digest decoding into a small helper.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-28 11:50:08 +02:00
Wolfgang Bumiller
2be21ff9fa bump common to 0.3.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-24 14:14:06 +02:00
Wolfgang Bumiller
47b7ebbc96 common: bump pve-rs dep to 0.8.5 for Proxmox::RS::Notify
Note: this is more of a soft requirement, since as long as the Notify
module isn't loaded we don't need the latest version.
This is important to keep in mind since we do not currently have a
`pmg-rs` notify `Context` implementation and thus cannot depend on a
newer `pmg-rs`. However, as long as pmg code doesn't try to *use* the
Notify module, this won't be a problem.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-24 14:14:06 +02:00
Wolfgang Bumiller
af7ff77ac7 bump pve-rs to 0.8.5
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-24 14:14:06 +02:00
Wolfgang Bumiller
d5ff7165a2 remove leftover PVE::RS::Notify module
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-24 14:01:23 +02:00
Lukas Wagner
703cfbd212 notify: rename PVE::RS::Notify to Proxmox::RS::Notify
Also splitting PVE-specific context into its own file.

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 13:58:26 +02:00
Lukas Wagner
69d2eb953d notify: add wrapper for get_referenced_entities
The function returns all other entities referenced by a filter/target.
This is useful for permission checks, where the user must have the
appropriate permissions for all entities.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:45 +02:00
Lukas Wagner
de59ffe4ec notify: add context for getting http_proxy from datacenter.cfg
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:43 +02:00
Lukas Wagner
178196e1ae notify: implement context for getting default author/mailfrom
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:42 +02:00
Lukas Wagner
a5ee03ed0f notify: sendmail: support the mailto-user parameter
This parameter allows sending mails to the email addresses configured
for users in the product's user database.

`proxmox-notify` now has a `Context` that must be set via
`proxmox_notify::context::set_context` before the crate is used.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:41 +02:00
Lukas Wagner
79f339d136 notify: add api for notification filters
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:39 +02:00
Lukas Wagner
6b5dbc3238 notify: add api for gotify endpoints
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:38 +02:00
Lukas Wagner
a73ba69716 notify: add api for sendmail endpoints
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:36 +02:00
Lukas Wagner
4b64b63ff7 notify: add api for notification groups
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:35 +02:00
Lukas Wagner
350cdd6b59 notify: add api for sending notifications/testing endpoints
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:33 +02:00
Lukas Wagner
b9c4756445 add PVE::RS::Notify module
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-07-24 11:17:31 +02:00
Wolfgang Bumiller
cd8984a954 buildsys: both: check crate vs debian version
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 13:50:17 +02:00
Wolfgang Bumiller
225b640f1f bump pmg-rs to 0.7.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 13:35:43 +02:00
Wolfgang Bumiller
8759447585 bump pve-rs to 0.8.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 13:30:27 +02:00
Wolfgang Bumiller
e2c950bf4c pmg: bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 11:03:08 +02:00
Wolfgang Bumiller
0be7076578 pve: bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 11:02:40 +02:00
Wolfgang Bumiller
470849f974 pmg: reset tfa failure count on unlock
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 11:01:45 +02:00
Wolfgang Bumiller
5c6a27da1d pve: reset tfa failure count on unlock
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 11:01:36 +02:00
Wolfgang Bumiller
06f325fd9d bump proxmox-tfa dependency to 4.0.4
This allows resetting the tfa failure counters on unlock.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-07-05 10:59:52 +02:00
Wolfgang Bumiller
3df4aecac0 bump pmg-rs to 0.7.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-27 16:01:55 +02:00
Wolfgang Bumiller
fdcdd326c3 pmg: enable tfa lockout
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-27 15:59:39 +02:00
Wolfgang Bumiller
aed1657598 pmg: add tfa_lock_status_query and api_unlock_tfa
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-27 15:59:14 +02:00
Wolfgang Bumiller
39a7399c2c pmg: bump proxmox-tfa to 4.0.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-27 15:56:02 +02:00
Wolfgang Bumiller
7bd8036ff0 bump pve-rs to 0.8.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-05 12:55:12 +02:00
Wolfgang Bumiller
e1f6379b02 bump proxmox-tfa dep to 4.0.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-05 12:53:22 +02:00
Wolfgang Bumiller
0d530835cb pve: add tfa_lock_status query sub
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-06-05 09:25:05 +02:00
Lukas Wagner
d0cab6371a log: set default log level to 'info', add product specific logging env var
Logging behaviour can be overridden by the {PMG,PVE}_LOG environment
variable.

This commit also disables styled output and timestamps in log messages,
since we usually log to the journal anyway. The log output is configured
to match other log messages in task logs.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2023-06-05 09:25:05 +02:00
Thomas Lamprecht
15e7531f3c pve: bump version to 0.8.2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-06-04 18:34:02 +02:00
Wolfgang Bumiller
3037864e4d bump pve-rs to 0.8.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-31 14:18:13 +02:00
Wolfgang Bumiller
590af894ef pve: enable tfa lockout, add api_unlock_tfa method
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-31 14:15:42 +02:00
Wolfgang Bumiller
10472bc265 pve-rs: bump proxmox-tfa dep to 4.0.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-31 14:15:42 +02:00
Wolfgang Bumiller
a4610c6a0f bump proxmox-apt,http,openid,subscription,sys crate dependencies
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-24 16:05:33 +02:00
Thomas Lamprecht
f7bb45a38b pve: update & wrap-and-sort d/control
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 13:07:28 +02:00
Thomas Lamprecht
181b19e2ef buildsys: add clean target for common package
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 12:46:27 +02:00
Thomas Lamprecht
e3d4bb03c9 common: wrap-and-sort & refresh
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 12:45:44 +02:00
Thomas Lamprecht
a53d4737d3 common: d/changelog: fixup distribution to bookworm
got (correctly) uploaded to bookworm, not bullseye

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 12:34:43 +02:00
Thomas Lamprecht
6b92c01349 pmg: bump version to 0.7.1
as the cargo version wasn't bumped, d/changelog still listed bullseye as
the distribution for the original 0.7.0 upload, and d/control was a bit
dusty, so to avoid any confusion it was just re-bumped with no actual
code change.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 12:34:43 +02:00
Thomas Lamprecht
0d049201e9 pmg: refresh d/control and note that debcargo.toml isn't canonical source
Also run `wrap-and-sort -tkn`

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 12:28:58 +02:00
Thomas Lamprecht
f7a9ddfdfd buildsys: cleanup and expand clean target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 12:00:17 +02:00
Thomas Lamprecht
c0bc3436ee pmg: d/changelog: fixup distribution to bookworm
this release got uploaded to bookworm only.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-05-18 11:43:49 +02:00
Wolfgang Bumiller
6beb0ffa6b buildsys: add missing deb targets
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-17 15:57:02 +02:00
Wolfgang Bumiller
4917bd4ead bump proxmox-rs-perl to 0.3.0, pmg-rs to 0.7.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-17 15:53:38 +02:00
Wolfgang Bumiller
3255c3b59c buildsys: pmg-rs: dsc and sbuild updates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-17 15:45:59 +02:00
Wolfgang Bumiller
1b499b7611 bump pve-rs to 0.8.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-17 11:02:12 +02:00
Wolfgang Bumiller
f863004159 buildsys: make pve-rs sbuild compatible
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-17 11:02:12 +02:00
Wolfgang Bumiller
34a0068618 undo rust workspace change in preparation for .dsc builds
The library ending up a level above the actual code just
makes .dsc/sbuild building very inconvenient, and pve-rs and
pmg-rs often grow independently from one another.

All we need is the common code available.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-05-17 11:02:00 +02:00
701a8c7b74 0.2.1-alt2
- Update libpve-rs-perl 0.7.3
2023-03-20 21:51:51 +03:00
1e21635421 update rust modules 2023-03-17 21:17:43 +03:00
109d180cf0 Merge branch 'upstream' 2023-03-17 20:42:11 +03:00
2bee9ba32f 0.2.1-alt1
- Update:
  + libproxmox-rs-perl 0.2.1
  + libpve-rs-perl 0.7.2
2022-10-03 17:15:57 +03:00
3d461874e4 update rust modules with cargo vendor 2022-10-03 17:07:30 +03:00
40eaa21b04 Merge remote-tracking branch 'upstream/master' 2022-10-03 16:56:25 +03:00
Andrew A. Vasilyev
b7fcd13b94 0.1.0-alt1.2
- add %set_perl_req_method relaxed
2022-05-06 18:43:46 +03:00
062f772881 0.1.0-alt1.1
- Update libpve-rs-perl.
2022-05-06 01:58:55 +03:00
a0d50f983b update rust modules by cargo vendor 2022-05-06 01:08:31 +03:00
3b28889273 fix cargo requires 2022-05-06 01:07:23 +03:00
ee80163d7a Merge remote-tracking branch 'upstream/master' 2022-05-06 00:38:06 +03:00
ca809d3ff7 0.1.0-alt1
- initial build.
2022-03-18 12:23:07 +03:00
65c34958ec update Makefile 2022-03-06 14:26:11 +03:00
a483f7475e update rust modules by cargo vendor 2022-03-06 14:23:20 +03:00
e51b9127e0 disable apt support 2022-03-06 14:22:01 +03:00
6e5fab9a6a fix cargo requires 2022-03-06 14:20:19 +03:00
038a4e2ea4 update cargo config for vendoring 2022-03-06 14:17:32 +03:00
5a54d85161 gear-remotes-save 2022-03-06 14:17:21 +03:00
15404 changed files with 7478909 additions and 390 deletions

.gear/0001-ALT-set-correct-context-not-None.patch Normal file

@@ -0,0 +1,29 @@
From 3e04ed627631c0891a825c7011353f7ca863abf8 Mon Sep 17 00:00:00 2001
From: Alexander Burmatov <thatman@altlinux.org>
Date: Fri, 6 Sep 2024 16:01:29 +0300
Subject: [PATCH] ALT: set correct context (not None)
---
pve-rs/vendor/proxmox-notify/src/context/mod.rs | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/pve-rs/vendor/proxmox-notify/src/context/mod.rs b/pve-rs/vendor/proxmox-notify/src/context/mod.rs
index c0a5a13b..aeaed9c9 100644
--- a/pve-rs/vendor/proxmox-notify/src/context/mod.rs
+++ b/pve-rs/vendor/proxmox-notify/src/context/mod.rs
@@ -32,8 +32,10 @@ pub trait Context: Send + Sync + Debug {
) -> Result<Option<String>, Error>;
}
-#[cfg(not(test))]
-static CONTEXT: Mutex<Option<&'static dyn Context>> = Mutex::new(None);
+#[cfg(feature = "pbs-context")]
+static CONTEXT: Mutex<Option<&'static dyn Context>> = Mutex::new(Some(&pbs::PBSContext));
+#[cfg(feature = "pve-context")]
+static CONTEXT: Mutex<Option<&'static dyn Context>> = Mutex::new(Some(&pve::PVEContext));
#[cfg(test)]
static CONTEXT: Mutex<Option<&'static dyn Context>> = Mutex::new(Some(&test::TestContext));
--
2.42.2

.gear/genpackage.pl Executable file

@@ -0,0 +1,242 @@
#!/usr/bin/env perl
# Create a perl package given a product and package name.
use strict;
use warnings;
use File::Path qw(make_path);
my @packages;
my $opts = {
'lib-tag' => [
'TAG',
'An identifier used to avoid loading multiple libraries with the same shared code',
],
'lib-package' => [
'Package',
'Main package to generate for loading the library',
],
'lib-prefix' => [
'Prefix',
'Package prefix used for documentation in the library package.',
],
'lib' => [
'LIBNAME',
"The .so name without the 'lib' prefix.",
],
'debug-libpath' => [
'PATH',
"Path to a debug library, usually ./target/debug.",
],
'include-file' => [
'PATH',
"Path to additional perl code to include in the package after the 'use' statements",
],
};
sub help {
my ($fd) = @_;
print {$fd} "usage: $0 OPTIONS <packages...>\n";
print {$fd} "mandatory OPTIONS are:\n";
for my $o (sort keys %$opts) {
my ($arg, $desc) = $opts->{$o}->@*;
my $p = "--$o=$arg";
printf {$fd} " %20s %s\n", $p, $desc;
}
}
if (!@ARGV) {
help(\*STDERR);
exit(1);
}
my $params = {
'include-file' => [],
};
ARGPARSE: while (@ARGV) {
my $arg = shift @ARGV;
last if $arg eq '--';
if ($arg eq '-h' || $arg eq '--help') {
help(\*STDOUT);
exit(0);
}
for my $o (keys %$opts) {
if ($arg =~ /^(?:--\Q$o\E=)(.+)$/) {
$arg = $1;
} elsif ($arg =~ /^--\Q$o\E$/) {
$arg = shift @ARGV;
} else {
next;
};
die "--$o requires an argument\n" if !defined($arg);
if (ref($params->{$o}) eq 'ARRAY') {
push $params->{$o}->@*, $arg;
} else {
die "multiple --$o options provided\n" if defined($params->{$o});
$params->{$o} = $arg;
}
next ARGPARSE;
}
if ($arg =~ /^-/) {
help(\*STDERR);
exit(1);
}
unshift @ARGV, $arg;
last;
}
my $lib_package = $params->{'lib-package'}
or die "missing --lib-package parameter\n";
my $lib_prefix = $params->{'lib-prefix'}
or die "missing --lib-prefix parameter\n";
my $lib = $params->{'lib'}
or die "missing --lib parameter\n";
my $lib_tag = $params->{'lib-tag'};
my $debug_libpath = $params->{'debug-libpath'} // '';
my $extra_code = '';
for my $file ($params->{'include-file'}->@*) {
open(my $fh, '<', $file) or die "failed to open file '$file' - $!\n";
my $more = do { local $/ = undef; <$fh> };
die "error reading '$file': $!\n" if !defined($more);
$extra_code .= $more;
}
sub pkg2file {
return ($_[0] =~ s@::@/@gr) . ".pm";
}
sub parentdir {
if ($_[0] =~ m@^(.*)/[^/]+@) {
return $1
} else {
die "bad path: '$_[0]', try adding a directory\n";
}
}
my $template = <<'EOF';
package {{LIBRARY_PACKAGE}};
=head1 NAME
{{LIBRARY_PACKAGE}} - base module for {{LIBRARY_PREFIX}} rust bindings
=head1 SYNOPSIS
package {{LIBRARY_PREFIX}}::RS::SomeBindings;
use base '{{LIBRARY_PACKAGE}}';
BEGIN { __PACKAGE__->bootstrap(); }
1;
=head1 DESCRIPTION
This is the base module of all {{LIBRARY_PREFIX}} bindings.
Its job is to ensure the 'lib{{LIBRARY}}.so' library is loaded and provide a 'bootstrap'
method to load the actual code.
=cut
use strict;
use warnings;
use DynaLoader;
{{EXTRA_CODE}}
sub library { '{{LIBRARY}}' }
sub autodirs { map { "$_/auto" } @INC; }
sub envdirs { grep { length($_) } split(/:+/, $ENV{LD_LIBRARY_PATH} // '') }
sub find_lib {
my ($mod_name) = @_;
my @dirs = map { "-L$_" } (envdirs(), autodirs());
return DynaLoader::dl_findfile(@dirs, $mod_name);
}
# Keep on a single line, potentially modified by testsuite!
sub libfile { find_lib(library()) }
sub load : prototype($) {
my ($pkg) = @_;
my $mod_name = $pkg->library();
my $mod_file = libfile();
die "failed to locate shared library for $mod_name (lib${mod_name}.so)\n" if !$mod_file;
my $lib = DynaLoader::dl_load_file($mod_file)
or die "failed to load library '$mod_file'\n";
my $data = ($::{'{{LIBRARY_TAG}}-rs-library'} //= {});
$data->{$mod_name} = $lib;
$data->{-current} //= $lib;
$data->{-package} //= $pkg;
}
sub bootstrap {
my ($pkg) = @_;
my $mod_name = $pkg->library();
my $bootstrap_name = 'boot_' . ($pkg =~ s/::/__/gr);
my $lib = $::{'{{LIBRARY_TAG}}-rs-library'}
or die "rust library not available for '{{LIBRARY_PREFIX}}'\n";
$lib = $lib->{$mod_name};
my $sym = DynaLoader::dl_find_symbol($lib, $bootstrap_name);
die "failed to locate '$bootstrap_name'\n" if !defined $sym;
my $boot = DynaLoader::dl_install_xsub($bootstrap_name, $sym, "src/FIXME.rs");
$boot->();
}
BEGIN {
__PACKAGE__->load();
__PACKAGE__->bootstrap();
init() if __PACKAGE__->can("init");
}
1;
EOF
$template =~ s/\{\{EXTRA_CODE\}\}/$extra_code/g;
$template =~ s/\{\{LIBRARY_PACKAGE\}\}/$lib_package/g;
$template =~ s/\{\{LIBRARY_PREFIX\}\}/$lib_prefix/g;
$template =~ s/\{\{LIBRARY_TAG\}\}/$lib_tag/g;
$template =~ s/\{\{LIBRARY\}\}/$lib/g;
$template =~ s/\{\{DEBUG_LIBPATH\}\}/$debug_libpath/g;
if ($lib ne '-') {
my $path = pkg2file($lib_package);
print "Generating $path\n";
make_path(parentdir($path), { mode => 0755 });
open(my $fh, '>', $path) or die "failed to open '$path' for writing: $!\n";
print {$fh} $template;
close($fh);
}
for my $package (@ARGV) {
my $path = ($package =~ s@::@/@gr) . ".pm";
print "Generating $path\n";
$path =~ m@^(.*)/[^/]+@;
make_path($1, { mode => 0755 });
open(my $fh, '>', $path) or die "failed to open '$path' for writing: $!\n";
print {$fh} "package $package;\n";
print {$fh} "use base '$lib_package';\n";
print {$fh} "BEGIN { __PACKAGE__->bootstrap(); }\n";
print {$fh} "1;\n";
close($fh);
}

.gear/proxmox-perl-rs.spec Normal file

@@ -0,0 +1,168 @@
%global _unpackaged_files_terminate_build 1
%def_without check
Name: proxmox-perl-rs
Version: 0.3.4
Release: alt2
Summary: PVE and PMG common parts which have been ported to Rust
License: AGPL-3.0+
Group: Development/Other
URL: https://www.proxmox.com
Vcs: git://git.proxmox.com/git/proxmox-perl-rs.git
Source: %name-%version.tar
Patch: %name-%version.patch
Patch1: 0001-ALT-set-correct-context-not-None.patch
Source1: genpackage.pl
ExclusiveArch: x86_64 aarch64
BuildRequires(pre): rpm-macros-rust
BuildRequires: rpm-build-rust clang-devel perl-devel
BuildRequires: libssl-devel libacl-devel libuuid-devel
BuildRequires: cargo-vendor-checksum
BuildRequires: build-essential libapt-devel
BuildRequires: /proc
%set_perl_req_method relaxed
%description
Contains the perl side of modules provided by the libraries of both
libpve-rs-perl and libpmg-rs-perl, loading whichever is available.
%package -n libproxmox-rs-perl
Summary: PVE/PMG common parts which have been ported to Rust
Version: 0.3.4
Group: Development/Other
Provides: proxmox-perl-rs = %EVR
Provides: proxmox-rs-perl = %EVR
%description -n libproxmox-rs-perl
%summary
%package -n libpve-rs-perl
Summary: PVE parts which have been ported to Rust
Version: 0.8.10
Group: Development/Other
Provides: pve-perl-rs = %EVR
Provides: pve-rs-perl = %EVR
%description -n libpve-rs-perl
%summary
%package -n libpmg-rs-perl
Summary: Components of Proxmox Mail Gateway which have been ported to Rust
Version: 0.7.6
Group: Development/Other
Provides: pmg-perl-rs = %EVR
Provides: pmg-rs-perl = %EVR
%description -n libpmg-rs-perl
%summary
%prep
%setup
%patch -p1
%patch1 -p1
pushd pve-rs
sed -i 's/PL_use_safe_putenv = on ? TRUE : FALSE;//' vendor/perlmod/src/glue.c
cargo-vendor-checksum --vendor vendor -f perlmod/src/glue.c
%ifarch aarch64
sed -i 's/mut i8/mut u8/' vendor/proxmox-sys/src/fs/dir.rs
# Checksum update for patched files
cargo-vendor-checksum --vendor vendor -f proxmox-sys/src/fs/dir.rs
%endif
cargo-vendor-checksum --vendor vendor -f proxmox-notify/src/context/mod.rs
popd
%build
export BUILD_MODE=release
export PERLMOD_WRITE_PACKAGES=1
export BUILD_TARGET=pve
export RUSTFLAGS="-L/usr/lib64/perl5/CORE -lperl"
cp %SOURCE1 common/pkg/
cp %SOURCE1 pmg-rs/
cp %SOURCE1 pve-rs/
sed -i 's|/usr/lib/perlmod/genpackage.pl|./genpackage.pl|' common/pkg/Makefile pmg-rs/Makefile pve-rs/Makefile
# Build only in pve-rs:
pushd pve-rs
%make
popd
pushd common/pkg
%make
popd
%install
pushd pve-rs
install -pD -m0644 target/release/libpve_rs.so %buildroot%perl_vendor_autolib/libpve_rs.so
mkdir -p %buildroot%perl_vendor_privlib/{PVE/RS/ResourceScheduling,PVE/RS/APT,Proxmox/Lib,Proxmox/RS,Proxmox/RS/APT}
install -m0644 PVE/RS/*.pm %buildroot%perl_vendor_privlib/PVE/RS/
install -m0644 PVE/RS/ResourceScheduling/*.pm %buildroot%perl_vendor_privlib/PVE/RS/ResourceScheduling/
install -m0644 PVE/RS/APT/*.pm %buildroot%perl_vendor_privlib/PVE/RS/APT/
install -m0644 common/pkg/PVE/RS/*.pm %buildroot%perl_vendor_privlib/PVE/RS/
install -m0644 Proxmox/RS/*.pm %buildroot%perl_vendor_privlib/Proxmox/RS/
install -m0644 Proxmox/RS/APT/*.pm %buildroot%perl_vendor_privlib/Proxmox/RS/APT
install -m0644 common/pkg/Proxmox/Lib/Common.pm Proxmox/Lib/PVE.pm %buildroot%perl_vendor_privlib/Proxmox/Lib/
%check
pushd pve-rs
LD_LIBRARY_PATH='$LD_LIBRARY_PATH:../target/release' make check
%files -n libpve-rs-perl
%perl_vendor_autolib/libpve_rs.so
%dir %perl_vendor_privlib/PVE/RS
%dir %perl_vendor_privlib/PVE/RS/ResourceScheduling
%dir %perl_vendor_privlib/PVE/RS/APT
%perl_vendor_privlib/PVE/RS/*.pm
%perl_vendor_privlib/PVE/RS/ResourceScheduling/*.pm
%perl_vendor_privlib/PVE/RS/APT/*.pm
%files -n libproxmox-rs-perl
%dir %perl_vendor_privlib/Proxmox/RS
%dir %perl_vendor_privlib/Proxmox/RS/APT
%perl_vendor_privlib/Proxmox/RS/*
%perl_vendor_privlib/Proxmox/RS/APT/*
%dir %perl_vendor_privlib/Proxmox/Lib
%perl_vendor_privlib/Proxmox/Lib/*
%changelog
* Mon Sep 09 2024 Alexander Burmatov <thatman@altlinux.org> 0.3.4-alt1
- Update:
+ libproxmox-rs-perl 0.3.4
+ libpve-rs-perl 0.8.10
* Fri Sep 06 2024 Alexander Burmatov <thatman@altlinux.org> 0.3.3-alt3
- some notifications fixes (thx andy@)
- Update:
+ libpve-rs-perl 0.8.9
* Wed Apr 03 2024 Andrew A. Vasilyev <andy@altlinux.org> 0.3.3-alt2
- update cargo vendor
* Thu Feb 22 2024 Andrew A. Vasilyev <andy@altlinux.org> 0.3.3-alt1
- Update:
+ libproxmox-rs-perl 0.3.3
+ libpve-rs-perl 0.8.8
- new building scheme
* Tue Nov 07 2023 Andrew A. Vasilyev <andy@altlinux.org> 0.2.1-alt3
- libpve-rs-perl: add linking with libperl (Closes: #48330)
* Fri Mar 17 2023 Alexey Shabalin <shaba@altlinux.org> 0.2.1-alt2
- Update libpve-rs-perl 0.7.3
* Mon Oct 03 2022 Alexey Shabalin <shaba@altlinux.org> 0.2.1-alt1
- Update:
+ libproxmox-rs-perl 0.2.1
+ libpve-rs-perl 0.7.2
* Fri May 06 2022 Andrew A. Vasilyev <andy@altlinux.org> 0.1.0-alt1.2
- add %%set_perl_req_method relaxed
* Fri May 06 2022 Alexey Shabalin <shaba@altlinux.org> 0.1.0-alt1.1
- Update libpve-rs-perl.
* Sun Mar 06 2022 Alexey Shabalin <shaba@altlinux.org> 0.1.0-alt1
- initial build.

.gear/rules Normal file

@@ -0,0 +1,5 @@
spec: .gear/proxmox-perl-rs.spec
tar: upstream:.
diff: upstream:. . name=@name@-@version@.patch
copy?: .gear/*.pl
copy?: .gear/*.patch

.gear/tags/list Normal file

@@ -0,0 +1 @@
ae27b307b8593c652d18afd41658c43a1e68070a upstream

.gear/upstream/remotes Normal file

@@ -0,0 +1,3 @@
[remote "upstream"]
url = git://git.proxmox.com/git/proxmox-perl-rs.git
fetch = +refs/heads/*:refs/remotes/upstream/*

.gitignore vendored

@@ -1,7 +1,6 @@
 /target
 /*/target
 /build
-Cargo.lock
 /test.pl
 /PVE
 /PMG


@@ -1,7 +1,8 @@
 CARGO ?= cargo
 ifeq ($(BUILD_MODE), release)
-CARGO_BUILD_ARGS += --release
+CARGO_BUILD_ARGS += --release --offline
 DEBUG_LIBPATH :=
 else
 endif


@@ -1,4 +1,4 @@
-include /usr/share/dpkg/pkg-info.mk
+#include /usr/share/dpkg/pkg-info.mk
 PACKAGE=libproxmox-rs-perl
@@ -20,18 +20,24 @@ PERLMOD_GENPACKAGE := /usr/lib/perlmod/genpackage.pl \
 	--lib-package=Proxmox::Lib::Common \
 	--lib-prefix=Proxmox
-# Point to any generated pm file (Proxmox/ dir is already present in this package)
-Proxmox/RS/CalendarEvent.pm:
-	$(PERLMOD_GENPACKAGE) \
+PERLMOD_PACKAGES := \
 	  Proxmox::RS::APT::Repositories \
 	  Proxmox::RS::CalendarEvent \
 	  Proxmox::RS::Notify \
+	  Proxmox::RS::SharedCache \
 	  Proxmox::RS::Subscription
-all: Proxmox/RS/CalendarEvent.pm
+PERLMOD_PACKAGE_FILES := $(addsuffix .pm,$(subst ::,/,$(PERLMOD_PACKAGES)))
+Proxmox/RS: $(PERLMOD_PACKAGE_FILES)
+$(PERLMOD_PACKAGE_FILES) &:
+	$(PERLMOD_GENPACKAGE) $(PERLMOD_PACKAGES)
+all: Proxmox/RS
+	true
 .PHONY: install
-install: Proxmox/RS/CalendarEvent.pm
+install: Proxmox/RS
 	install -d -m755 $(DESTDIR)$(PERL_INSTALLVENDORLIB)
 	find PVE \! -type d -print -exec install -Dm644 '{}' $(DESTDIR)$(PERL_INSTALLVENDORLIB)'/{}' ';'
 	find Proxmox \! -type d -print -exec install -Dm644 '{}' $(DESTDIR)$(PERL_INSTALLVENDORLIB)'/{}' ';'
@@ -67,3 +73,4 @@ upload: $(DEB)
 clean:
 	rm -f *.deb *.dsc *.tar.* *.build *.buildinfo *.changes
 	rm -rf $(PACKAGE)-[0-9]*/
+	rm -rf Proxmox/RS


@@ -0,0 +1,100 @@
package Proxmox::Lib::SslProbe;
use strict;
use warnings;
=head1 Environment Variable Safety
Perl's handling of environment variables was completely messed up until v5.38.
Using `setenv`, such as used in the `openssl-probe` crate, would cause it to
crash later on; therefore we provide a perl version of env var probing instead,
and override the crate with one that doesn't replace the variables if they are
already set correctly.
=cut
BEGIN {
# Copied from openssl-probe
my @cert_dirs = (
"/var/ssl",
"/usr/share/ssl",
"/usr/local/ssl",
"/usr/local/openssl",
"/usr/local/etc/openssl",
"/usr/local/share",
"/usr/lib/ssl",
"/usr/ssl",
"/etc/openssl",
"/etc/pki/ca-trust/extracted/pem",
"/etc/pki/tls",
"/etc/ssl",
"/etc/certs",
"/opt/etc/ssl",
"/data/data/com.termux/files/usr/etc/tls",
"/boot/system/data/ssl",
);
# Copied from openssl-probe
my @cert_file_names = (
"cert.pem",
"certs.pem",
"ca-bundle.pem",
"cacert.pem",
"ca-certificates.crt",
"certs/ca-certificates.crt",
"certs/ca-root-nss.crt",
"certs/ca-bundle.crt",
"CARootCertificates.pem",
"tls-ca-bundle.pem",
);
my $probed_ssl_vars = 0;
# The algorithm here is taken from the `openssl-probe` crate and should
# produce the exact same result in order to ensure the rust code does not
# call `setenv()`.
my sub probe_ssl_vars : prototype() {
return if $probed_ssl_vars;
$probed_ssl_vars = 1;
my $result_file = $ENV{SSL_CERT_FILE};
my $result_file_changed = 0;
my $result_dir = $ENV{SSL_CERT_DIR};
my $result_dir_changed = 0;
for my $certs_dir (@cert_dirs) {
if (!defined($result_file)) {
for my $file (@cert_file_names) {
my $path = "$certs_dir/$file";
if (-e $path) {
$result_file = $path;
$result_file_changed = 1;
last;
}
}
}
if (!defined($result_dir)) {
for my $file (@cert_file_names) {
my $path = "$certs_dir/certs";
if (-d $path) {
$result_dir = $path;
$result_dir_changed = 1;
last;
}
}
}
last if defined($result_file) && defined($result_dir);
}
if ($result_file_changed && defined($result_file)) {
$ENV{SSL_CERT_FILE} = $result_file;
}
if ($result_dir_changed && defined($result_dir)) {
$ENV{SSL_CERT_DIR} = $result_dir;
}
}
probe_ssl_vars();
}
1;


@@ -1,3 +1,33 @@
+libproxmox-rs-perl (0.3.4) bookworm; urgency=medium
+
+  * add bindings for proxmox-shared-cache crate
+
+ -- Proxmox Support Team <support@proxmox.com>  Fri, 09 Aug 2024 14:21:41 +0200
+
+libproxmox-rs-perl (0.3.3) bookworm; urgency=medium
+
+  * move ssl var probing to Proxmox::Lib::SslProbe
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 07 Dec 2023 09:57:33 +0100
+
+libproxmox-rs-perl (0.3.2) bookworm; urgency=medium
+
+  * add Proxmox::Lib::Common::probe_ssl_vars() helper
+
+ -- Proxmox Support Team <support@proxmox.com>  Tue, 05 Dec 2023 10:46:39 +0100
+
+libproxmox-rs-perl (0.3.1) bookworm; urgency=medium
+
+  * add Proxmox::RS::Notify module
+
+ -- Proxmox Support Team <support@proxmox.com>  Mon, 24 Jul 2023 14:02:17 +0200
+
+libproxmox-rs-perl (0.3.0) bookworm; urgency=medium
+
+  * rebuild for Debian 12 Bookworm based release series
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 17 May 2023 15:48:41 +0200
+
 libproxmox-rs-perl (0.2.1) bullseye; urgency=medium
 
   * update to proxmox-subscription 0.3 / proxmox-http 0.7

@@ -1 +0,0 @@
-12


@@ -1,11 +1,9 @@
 Source: libproxmox-rs-perl
 Section: perl
 Priority: optional
-Build-Depends:
- debhelper (>= 12),
- perlmod-bin,
+Build-Depends: debhelper-compat (= 13), perlmod-bin,
 Maintainer: Proxmox Support Team <support@proxmox.com>
-Standards-Version: 4.5.1
+Standards-Version: 4.6.2
 Vcs-Git: git://git.proxmox.com/git/proxmox-perl-rs.git
 Vcs-Browser: https://git.proxmox.com/?p=proxmox-perl-rs.git
 Homepage: https://www.proxmox.com
@@ -15,15 +13,12 @@ Package: libproxmox-rs-perl
 Architecture: any
 # always bump both versioned Depends and Breaks, otherwise systems with both
 # libpmg-rs-perl and libpve-rs-perl might load an outdated lib and break
-Depends:
- ${misc:Depends},
- ${perl:Depends},
- ${shlibs:Depends},
- libpve-rs-perl (>= 0.7.2) | libpmg-rs-perl (>= 0.6.2),
-Breaks:
- libpve-rs-perl (<< 0.7.2),
- libpmg-rs-perl (<< 0.6.2),
-Replaces: libpve-rs-perl (<< 0.6.0)
-Description: PVE/PMG common parts which have been ported to Rust - Perl packages
- Contains the perl side of modules provided by the libraries of both libpve-rs-perl and
- libpmg-rs-perl, loading whichever is available.
+Depends: libpve-rs-perl (>= 0.8.10) | libpmg-rs-perl (>= 0.7.6),
+ ${misc:Depends},
+ ${perl:Depends},
+ ${shlibs:Depends},
+Breaks: libpmg-rs-perl (<< 0.6.2), libpve-rs-perl (<< 0.7.2),
+Replaces: libpve-rs-perl (<< 0.6.0),
+Description: PVE/PMG common perl parts for Rust perlmod bindings
+ Contains the perl side of modules provided by the libraries of both
+ libpve-rs-perl and libpmg-rs-perl, loading whichever is available.


@@ -1,62 +1,18 @@
 #[perlmod::package(name = "Proxmox::RS::APT::Repositories")]
 pub mod export {
-    use std::convert::TryInto;
-    use anyhow::{bail, Error};
-    use serde::{Deserialize, Serialize};
+    use anyhow::Error;
-    use proxmox_apt::repositories::{
-        APTRepositoryFile, APTRepositoryFileError, APTRepositoryHandle, APTRepositoryInfo,
-        APTStandardRepository,
+    use proxmox_apt_api_types::{
+        APTChangeRepositoryOptions, APTGetChangelogOptions, APTRepositoriesResult,
+        APTRepositoryHandle, APTUpdateInfo, APTUpdateOptions,
     };
-    #[derive(Deserialize, Serialize)]
-    #[serde(rename_all = "kebab-case")]
-    /// Result for the repositories() function
-    pub struct RepositoriesResult {
-        /// Successfully parsed files.
-        pub files: Vec<APTRepositoryFile>,
-        /// Errors for files that could not be parsed or read.
-        pub errors: Vec<APTRepositoryFileError>,
-        /// Common digest for successfully parsed files.
-        pub digest: String,
-        /// Additional information/warnings about repositories.
-        pub infos: Vec<APTRepositoryInfo>,
-        /// Standard repositories and their configuration status.
-        pub standard_repos: Vec<APTStandardRepository>,
-    }
-    #[derive(Deserialize, Serialize)]
-    #[serde(rename_all = "kebab-case")]
-    /// For changing an existing repository.
-    pub struct ChangeProperties {
-        /// Whether the repository should be enabled or not.
-        pub enabled: Option<bool>,
-    }
+    use proxmox_config_digest::ConfigDigest;
     /// Get information about configured repositories and standard repositories for `product`.
     #[export]
-    pub fn repositories(product: &str) -> Result<RepositoriesResult, Error> {
-        let (files, errors, digest) = proxmox_apt::repositories::repositories()?;
-        let digest = hex::encode(&digest);
-        let suite = proxmox_apt::repositories::get_current_release_codename()?;
-        let infos = proxmox_apt::repositories::check_repositories(&files, suite);
-        let standard_repos =
-            proxmox_apt::repositories::standard_repositories(&files, product, suite);
-        Ok(RepositoriesResult {
-            files,
-            errors,
-            digest,
-            infos,
-            standard_repos,
-        })
+    pub fn repositories(product: &str) -> Result<APTRepositoriesResult, Error> {
+        proxmox_apt::list_repositories(product)
     }
 
     /// Add the repository identified by the `handle` and `product`.
@@ -64,65 +20,12 @@ pub mod export {
     ///
     /// The `digest` parameter asserts that the configuration has not been modified.
     #[export]
-    pub fn add_repository(handle: &str, product: &str, digest: Option<&str>) -> Result<(), Error> {
-        let (mut files, errors, current_digest) = proxmox_apt::repositories::repositories()?;
-        let handle: APTRepositoryHandle = handle.try_into()?;
-        let suite = proxmox_apt::repositories::get_current_release_codename()?;
-        if let Some(digest) = digest {
-            let expected_digest = hex::decode(digest)?;
-            if expected_digest != current_digest {
-                bail!("detected modified configuration - file changed by other user? Try again.");
-            }
-        }
-        // check if it's already configured first
-        for file in files.iter_mut() {
-            for repo in file.repositories.iter_mut() {
-                if repo.is_referenced_repository(handle, product, &suite.to_string()) {
-                    if repo.enabled {
-                        return Ok(());
-                    }
-                    repo.set_enabled(true);
-                    file.write()?;
-                    return Ok(());
-                }
-            }
-        }
-        let (repo, path) =
-            proxmox_apt::repositories::get_standard_repository(handle, product, suite);
-        if let Some(error) = errors.iter().find(|error| error.path == path) {
-            bail!(
-                "unable to parse existing file {} - {}",
-                error.path,
-                error.error,
-            );
-        }
-        if let Some(file) = files
-            .iter_mut()
-            .find(|file| file.path.as_ref() == Some(&path))
-        {
-            file.repositories.push(repo);
-            file.write()?;
-        } else {
-            let mut file = match APTRepositoryFile::new(&path)? {
-                Some(file) => file,
-                None => bail!("invalid path - {}", path),
-            };
-            file.repositories.push(repo);
-            file.write()?;
-        }
-        Ok(())
+    pub fn add_repository(
+        handle: APTRepositoryHandle,
+        product: &str,
+        digest: Option<ConfigDigest>,
+    ) -> Result<(), Error> {
+        proxmox_apt::add_repository_handle(product, handle, digest)
     }
 
     /// Change the properties of the specified repository.
@@ -132,39 +35,55 @@ pub mod export {
     pub fn change_repository(
         path: &str,
         index: usize,
-        options: ChangeProperties,
-        digest: Option<&str>,
+        options: APTChangeRepositoryOptions,
+        digest: Option<ConfigDigest>,
     ) -> Result<(), Error> {
-        let (mut files, errors, current_digest) = proxmox_apt::repositories::repositories()?;
+        proxmox_apt::change_repository(path, index, &options, digest)
+    }
-        if let Some(digest) = digest {
-            let expected_digest = hex::decode(digest)?;
-            if expected_digest != current_digest {
-                bail!("detected modified configuration - file changed by other user? Try again.");
-            }
-        }
+    /// Retrieve the changelog of the specified package.
+    #[export]
+    pub fn get_changelog(options: APTGetChangelogOptions) -> Result<String, Error> {
+        proxmox_apt::get_changelog(&options)
+    }
-        if let Some(error) = errors.iter().find(|error| error.path == path) {
-            bail!("unable to parse file {} - {}", error.path, error.error);
-        }
+    /// List available APT updates
+    ///
+    /// Automatically updates an expired package cache.
+    #[export]
+    pub fn list_available_apt_update(apt_state_file: &str) -> Result<Vec<APTUpdateInfo>, Error> {
+        proxmox_apt::list_available_apt_update(apt_state_file)
+    }
-        if let Some(file) = files
-            .iter_mut()
-            .find(|file| file.path.as_ref() == Some(&path.to_string()))
-        {
-            if let Some(repo) = file.repositories.get_mut(index) {
-                if let Some(enabled) = options.enabled {
-                    repo.set_enabled(enabled);
-                }
+    /// Update the APT database
+    ///
+    /// You should update the APT proxy configuration before running this.
+    #[export]
+    pub fn update_database(apt_state_file: &str, options: APTUpdateOptions) -> Result<(), Error> {
+        proxmox_apt::update_database(
+            apt_state_file,
+            &options,
+            |updates: &[&APTUpdateInfo]| -> Result<(), Error> {
+                // fixme: howto send notifications?
+                crate::send_updates_available(updates)?;
+                Ok(())
+            },
+        )
+    }
-                file.write()?;
-            } else {
-                bail!("invalid index - {}", index);
-            }
-        } else {
-            bail!("invalid path - {}", path);
-        }
-        Ok(())
+    /// Get package information for a list of important product packages.
+    #[export]
+    pub fn get_package_versions(
+        product_virtual_package: &str,
+        api_server_package: &str,
+        running_api_server_version: &str,
+        package_list: Vec<&str>,
+    ) -> Result<Vec<APTUpdateInfo>, Error> {
+        proxmox_apt::get_package_versions(
+            product_virtual_package,
+            api_server_package,
+            running_api_server_version,
+            &package_list,
+        )
     }
 }


@@ -1,6 +1,12 @@
+use anyhow::Error;
 /// Initialize logging. Should only be called once
-pub fn init() {
-    if let Err(e) = env_logger::try_init() {
-        eprintln!("could not set up env_logger: {e}");
+pub fn init(env_var_name: &str, default_log_level: &str) {
+    if let Err(e) = default_log_level
+        .parse()
+        .map_err(Error::from)
+        .and_then(|default_log_level| proxmox_log::init_logger(env_var_name, default_log_level))
+    {
+        eprintln!("could not set up env_logger: {e:?}");
     }
 }
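Per commit d0cab6371a above, the default level is 'info' and the variable is product-specific (PVE_LOG or PMG_LOG). A self-contained sketch of that env-var-with-fallback behavior, not the proxmox_log implementation:

use std::env;

// Sketch only: pick the log level from the product-specific environment
// variable, falling back to the caller-supplied default.
fn effective_log_level(env_var_name: &str, default_log_level: &str) -> String {
    env::var(env_var_name).unwrap_or_else(|_| default_log_level.to_string())
}

fn main() {
    // PVE callers would pass "PVE_LOG", PMG callers "PMG_LOG".
    println!("{}", effective_log_level("PVE_LOG", "info"));
}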


@@ -1,4 +1,6 @@
 pub mod apt;
 mod calendar_event;
 pub mod logger;
+pub mod notify;
+pub mod shared_cache;
 mod subscription;

common/src/notify.rs Normal file

@@ -0,0 +1,506 @@
#[perlmod::package(name = "Proxmox::RS::Notify")]
mod export {
use std::collections::HashMap;
use std::sync::Mutex;
use anyhow::{bail, Error};
use serde_json::Value as JSONValue;
use perlmod::Value;
use proxmox_http_error::HttpError;
use proxmox_notify::endpoints::gotify::{
DeleteableGotifyProperty, GotifyConfig, GotifyConfigUpdater, GotifyPrivateConfig,
GotifyPrivateConfigUpdater,
};
use proxmox_notify::endpoints::sendmail::{
DeleteableSendmailProperty, SendmailConfig, SendmailConfigUpdater,
};
use proxmox_notify::endpoints::smtp::{
DeleteableSmtpProperty, SmtpConfig, SmtpConfigUpdater, SmtpMode, SmtpPrivateConfig,
SmtpPrivateConfigUpdater,
};
use proxmox_notify::matcher::{
CalendarMatcher, DeleteableMatcherProperty, FieldMatcher, MatchModeOperator, MatcherConfig,
MatcherConfigUpdater, SeverityMatcher,
};
use proxmox_notify::{api, Config, Notification, Severity};
pub struct NotificationConfig {
config: Mutex<Config>,
}
perlmod::declare_magic!(Box<NotificationConfig> : &NotificationConfig as "Proxmox::RS::Notify");
/// Support `dclone` so this can be put into the `ccache` of `PVE::Cluster`.
#[export(name = "STORABLE_freeze", raw_return)]
fn storable_freeze(
#[try_from_ref] this: &NotificationConfig,
cloning: bool,
) -> Result<Value, Error> {
if !cloning {
bail!("freezing Notification config not supported!");
}
let mut cloned = Box::new(NotificationConfig {
config: Mutex::new(this.config.lock().unwrap().clone()),
});
let value = Value::new_pointer::<NotificationConfig>(&mut *cloned);
let _perl = Box::leak(cloned);
Ok(value)
}
/// Instead of `thaw` we implement `attach` for `dclone`.
#[export(name = "STORABLE_attach", raw_return)]
fn storable_attach(
#[raw] class: Value,
cloning: bool,
#[raw] serialized: Value,
) -> Result<Value, Error> {
if !cloning {
bail!("STORABLE_attach called with cloning=false");
}
let data = unsafe { Box::from_raw(serialized.pv_raw::<NotificationConfig>()?) };
Ok(perlmod::instantiate_magic!(&class, MAGIC => data))
}
#[export(raw_return)]
fn parse_config(
#[raw] class: Value,
raw_config: &[u8],
raw_private_config: &[u8],
) -> Result<Value, Error> {
let raw_config = std::str::from_utf8(raw_config)?;
let raw_private_config = std::str::from_utf8(raw_private_config)?;
Ok(perlmod::instantiate_magic!(&class, MAGIC => Box::new(
NotificationConfig {
config: Mutex::new(Config::new(raw_config, raw_private_config)?)
}
)))
}
#[export]
fn write_config(#[try_from_ref] this: &NotificationConfig) -> Result<(String, String), Error> {
Ok(this.config.lock().unwrap().write()?)
}
#[export]
fn digest(#[try_from_ref] this: &NotificationConfig) -> String {
let config = this.config.lock().unwrap();
hex::encode(config.digest())
}
#[export(serialize_error)]
fn send(
#[try_from_ref] this: &NotificationConfig,
severity: Severity,
template_name: String,
template_data: Option<JSONValue>,
fields: Option<HashMap<String, String>>,
) -> Result<(), HttpError> {
let config = this.config.lock().unwrap();
let notification = Notification::from_template(
severity,
template_name,
template_data.unwrap_or_default(),
fields.unwrap_or_default(),
);
api::common::send(&config, &notification)
}
#[export(serialize_error)]
fn test_target(
#[try_from_ref] this: &NotificationConfig,
target: &str,
) -> Result<(), HttpError> {
let config = this.config.lock().unwrap();
api::common::test_target(&config, target)
}
#[export(serialize_error)]
fn get_sendmail_endpoints(
#[try_from_ref] this: &NotificationConfig,
) -> Result<Vec<SendmailConfig>, HttpError> {
let config = this.config.lock().unwrap();
api::sendmail::get_endpoints(&config)
}
#[export(serialize_error)]
fn get_sendmail_endpoint(
#[try_from_ref] this: &NotificationConfig,
id: &str,
) -> Result<SendmailConfig, HttpError> {
let config = this.config.lock().unwrap();
api::sendmail::get_endpoint(&config, id)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn add_sendmail_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: String,
mailto: Option<Vec<String>>,
mailto_user: Option<Vec<String>>,
from_address: Option<String>,
author: Option<String>,
comment: Option<String>,
disable: Option<bool>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::sendmail::add_endpoint(
&mut config,
SendmailConfig {
name,
mailto: mailto.unwrap_or_default(),
mailto_user: mailto_user.unwrap_or_default(),
from_address,
author,
comment,
disable,
filter: None,
origin: None,
},
)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn update_sendmail_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: &str,
mailto: Option<Vec<String>>,
mailto_user: Option<Vec<String>>,
from_address: Option<String>,
author: Option<String>,
comment: Option<String>,
disable: Option<bool>,
delete: Option<Vec<DeleteableSendmailProperty>>,
digest: Option<&str>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
let digest = decode_digest(digest)?;
api::sendmail::update_endpoint(
&mut config,
name,
SendmailConfigUpdater {
mailto,
mailto_user,
from_address,
author,
comment,
disable,
},
delete.as_deref(),
digest.as_deref(),
)
}
#[export(serialize_error)]
fn delete_sendmail_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: &str,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::sendmail::delete_endpoint(&mut config, name)
}
#[export(serialize_error)]
fn get_gotify_endpoints(
#[try_from_ref] this: &NotificationConfig,
) -> Result<Vec<GotifyConfig>, HttpError> {
let config = this.config.lock().unwrap();
api::gotify::get_endpoints(&config)
}
#[export(serialize_error)]
fn get_gotify_endpoint(
#[try_from_ref] this: &NotificationConfig,
id: &str,
) -> Result<GotifyConfig, HttpError> {
let config = this.config.lock().unwrap();
api::gotify::get_endpoint(&config, id)
}
#[export(serialize_error)]
fn add_gotify_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: String,
server: String,
token: String,
comment: Option<String>,
disable: Option<bool>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::gotify::add_endpoint(
&mut config,
GotifyConfig {
name: name.clone(),
server,
comment,
disable,
filter: None,
origin: None,
},
GotifyPrivateConfig { name, token },
)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn update_gotify_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: &str,
server: Option<String>,
token: Option<String>,
comment: Option<String>,
disable: Option<bool>,
delete: Option<Vec<DeleteableGotifyProperty>>,
digest: Option<&str>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
let digest = decode_digest(digest)?;
api::gotify::update_endpoint(
&mut config,
name,
GotifyConfigUpdater {
server,
comment,
disable,
},
GotifyPrivateConfigUpdater { token },
delete.as_deref(),
digest.as_deref(),
)
}
#[export(serialize_error)]
fn delete_gotify_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: &str,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::gotify::delete_gotify_endpoint(&mut config, name)
}
#[export(serialize_error)]
fn get_smtp_endpoints(
#[try_from_ref] this: &NotificationConfig,
) -> Result<Vec<SmtpConfig>, HttpError> {
let config = this.config.lock().unwrap();
api::smtp::get_endpoints(&config)
}
#[export(serialize_error)]
fn get_smtp_endpoint(
#[try_from_ref] this: &NotificationConfig,
id: &str,
) -> Result<SmtpConfig, HttpError> {
let config = this.config.lock().unwrap();
api::smtp::get_endpoint(&config, id)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn add_smtp_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: String,
server: String,
port: Option<u16>,
mode: Option<SmtpMode>,
username: Option<String>,
password: Option<String>,
mailto: Option<Vec<String>>,
mailto_user: Option<Vec<String>>,
from_address: String,
author: Option<String>,
comment: Option<String>,
disable: Option<bool>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::smtp::add_endpoint(
&mut config,
SmtpConfig {
name: name.clone(),
server,
port,
mode,
username,
mailto: mailto.unwrap_or_default(),
mailto_user: mailto_user.unwrap_or_default(),
from_address,
author,
comment,
disable,
origin: None,
},
SmtpPrivateConfig { name, password },
)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn update_smtp_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: &str,
server: Option<String>,
port: Option<u16>,
mode: Option<SmtpMode>,
username: Option<String>,
password: Option<String>,
mailto: Option<Vec<String>>,
mailto_user: Option<Vec<String>>,
from_address: Option<String>,
author: Option<String>,
comment: Option<String>,
disable: Option<bool>,
delete: Option<Vec<DeleteableSmtpProperty>>,
digest: Option<&str>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
let digest = decode_digest(digest)?;
api::smtp::update_endpoint(
&mut config,
name,
SmtpConfigUpdater {
server,
port,
mode,
username,
mailto,
mailto_user,
from_address,
author,
comment,
disable,
},
SmtpPrivateConfigUpdater { password },
delete.as_deref(),
digest.as_deref(),
)
}
#[export(serialize_error)]
fn delete_smtp_endpoint(
#[try_from_ref] this: &NotificationConfig,
name: &str,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::smtp::delete_endpoint(&mut config, name)
}
#[export(serialize_error)]
fn get_matchers(
#[try_from_ref] this: &NotificationConfig,
) -> Result<Vec<MatcherConfig>, HttpError> {
let config = this.config.lock().unwrap();
api::matcher::get_matchers(&config)
}
#[export(serialize_error)]
fn get_matcher(
#[try_from_ref] this: &NotificationConfig,
id: &str,
) -> Result<MatcherConfig, HttpError> {
let config = this.config.lock().unwrap();
api::matcher::get_matcher(&config, id)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn add_matcher(
#[try_from_ref] this: &NotificationConfig,
name: String,
target: Option<Vec<String>>,
match_severity: Option<Vec<SeverityMatcher>>,
match_field: Option<Vec<FieldMatcher>>,
match_calendar: Option<Vec<CalendarMatcher>>,
mode: Option<MatchModeOperator>,
invert_match: Option<bool>,
comment: Option<String>,
disable: Option<bool>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::matcher::add_matcher(
&mut config,
MatcherConfig {
name,
match_severity: match_severity.unwrap_or_default(),
match_field: match_field.unwrap_or_default(),
match_calendar: match_calendar.unwrap_or_default(),
target: target.unwrap_or_default(),
mode,
invert_match,
comment,
disable,
origin: None,
},
)
}
#[export(serialize_error)]
#[allow(clippy::too_many_arguments)]
fn update_matcher(
#[try_from_ref] this: &NotificationConfig,
name: &str,
target: Option<Vec<String>>,
match_severity: Option<Vec<SeverityMatcher>>,
match_field: Option<Vec<FieldMatcher>>,
match_calendar: Option<Vec<CalendarMatcher>>,
mode: Option<MatchModeOperator>,
invert_match: Option<bool>,
comment: Option<String>,
disable: Option<bool>,
delete: Option<Vec<DeleteableMatcherProperty>>,
digest: Option<&str>,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
let digest = decode_digest(digest)?;
api::matcher::update_matcher(
&mut config,
name,
MatcherConfigUpdater {
match_severity,
match_field,
match_calendar,
target,
mode,
invert_match,
comment,
disable,
},
delete.as_deref(),
digest.as_deref(),
)
}
#[export(serialize_error)]
fn delete_matcher(
#[try_from_ref] this: &NotificationConfig,
name: &str,
) -> Result<(), HttpError> {
let mut config = this.config.lock().unwrap();
api::matcher::delete_matcher(&mut config, name)
}
#[export]
fn get_referenced_entities(
#[try_from_ref] this: &NotificationConfig,
name: &str,
) -> Result<Vec<String>, HttpError> {
let config = this.config.lock().unwrap();
api::common::get_referenced_entities(&config, name)
}
fn decode_digest(digest: Option<&str>) -> Result<Option<Vec<u8>>, HttpError> {
digest
.map(hex::decode)
.transpose()
.map_err(|e| api::http_err!(BAD_REQUEST, "invalid digest: {e}"))
}
}
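As a rough orientation, a hedged sketch of driving these exports from Perl; the wrapper package name (PVE::RS::Notify) is assumed here, since the perlmod package attribute sits above this excerpt, and all endpoint names, addresses and template names are placeholders:
# parse the (possibly empty) public and private config blobs
my $config = PVE::RS::Notify->parse_config($raw_config, $raw_private_config);
# add a sendmail target; the trailing undefs are the optional parameters
$config->add_sendmail_endpoint(
    'mail-to-root', ['root@example.com'], undef, undef, undef, undef, undef);
# send a templated notification through all matching targets
$config->send('info', 'package-updates', { hostname => 'pve1' }, undef);
# serialize back; returns the public and private config as a pair
my $written = $config->write_config();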


@@ -0,0 +1,68 @@
#[perlmod::package(name = "Proxmox::RS::SharedCache")]
mod export {
use std::time::Duration;
use anyhow::Error;
use nix::sys::stat::Mode;
use serde::Deserialize;
use serde_json::Value as JSONValue;
use perlmod::Value;
use proxmox_shared_cache::SharedCache;
use proxmox_sys::fs::CreateOptions;
pub struct CacheWrapper(SharedCache);
perlmod::declare_magic!(Box<CacheWrapper> : &CacheWrapper as "Proxmox::RS::SharedCache");
#[derive(Deserialize)]
struct Params {
path: String,
owner: u32,
group: u32,
entry_mode: u32,
keep_old: u32,
}
#[export(raw_return)]
fn new(#[raw] class: Value, params: Params) -> Result<Value, Error> {
let options = CreateOptions::new()
.owner(params.owner.into())
.group(params.group.into())
.perm(Mode::from_bits_truncate(params.entry_mode));
Ok(perlmod::instantiate_magic!(&class, MAGIC => Box::new(
CacheWrapper (
SharedCache::new(params.path, options, params.keep_old)?
)
)))
}
#[export]
fn set(
#[try_from_ref] this: &CacheWrapper,
value: JSONValue,
lock_timeout: u64,
) -> Result<(), Error> {
this.0.set(&value, Duration::from_secs(lock_timeout))
}
#[export]
fn get(#[try_from_ref] this: &CacheWrapper) -> Result<Option<JSONValue>, Error> {
this.0.get()
}
#[export]
fn get_last(
#[try_from_ref] this: &CacheWrapper,
number_of_old_entries: u32,
) -> Result<Vec<JSONValue>, Error> {
this.0.get_last(number_of_old_entries)
}
#[export]
fn delete(#[try_from_ref] this: &CacheWrapper, lock_timeout: u64) -> Result<(), Error> {
this.0.delete(Duration::from_secs(lock_timeout))
}
}
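A hedged usage sketch from the Perl side; the path is a made-up example, and the octal mode is passed through to Mode::from_bits_truncate above:
my $cache = Proxmox::RS::SharedCache->new({
    path       => '/run/example/cache.json',
    owner      => 0,
    group      => 0,
    entry_mode => 0644,
    keep_old   => 10,
});
$cache->set({ value => 42 }, 5);    # value, lock timeout in seconds
my $current = $cache->get();        # newest value, or undef if unset
my $last3   = $cache->get_last(3);  # array ref with up to 3 stored values
$cache->delete(5);                  # remove the cached value, 5s lock timeout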


@@ -3,3 +3,6 @@
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
[profile.release]
debug = true


@@ -1,6 +1,6 @@
[package]
name = "pmg-rs"
version = "0.6.3"
version = "0.7.6"
description = "PMG parts which have been ported to rust"
homepage = "https://www.proxmox.com"
authors = ["Proxmox Support Team <support@proxmox.com>"]
@@ -8,34 +8,37 @@ edition = "2021"
license = "AGPL-3"
repository = "https://git.proxmox.com/?p=proxmox.git"
exclude = [
"build",
"debian",
"PMG",
]
exclude = ["build", "debian", "PMG"]
[lib]
crate-type = [ "cdylib" ]
crate-type = ["cdylib"]
[dependencies]
anyhow = "1.0"
env_logger = "0.9"
hex = "0.4"
http = "0.2.7"
libc = "0.2"
log = "0.4.17"
nix = "0.26"
openssl = "0.10.40"
serde = "1.0"
serde_bytes = "0.11"
serde_json = "1.0"
tracing = "0.1.37"
url = "2"
perlmod = { version = "0.13", features = [ "exporter" ] }
perlmod = { version = "0.13.4", features = ["exporter"] }
proxmox-acme-rs = { version = "0.4", features = ["client"] }
proxmox-apt = "0.9.4"
proxmox-http = { version = "0.8", features = ["client-sync", "client-trait"] }
proxmox-subscription = "0.3"
proxmox-sys = "0.4.2"
proxmox-tfa = { version = "4", features = ["api"] }
proxmox-time = "1.1.3"
proxmox-acme = { version = "0.5", features = ["client", "api-types"] }
proxmox-apt = { version = "0.11", features = ["cache"] }
proxmox-apt-api-types = "1.0"
proxmox-config-digest = "0.1"
proxmox-http = { version = "0.9", features = ["client-sync", "client-trait"] }
proxmox-http-error = "0.1.0"
proxmox-log = "0.2"
proxmox-notify = "0.4"
proxmox-shared-cache = "0.1.0"
proxmox-subscription = "0.4"
proxmox-sys = "0.6"
proxmox-tfa = { version = "5", features = ["api"] }
proxmox-time = "2"

pmg-rs/Fixup.pm (new file)

@@ -0,0 +1,4 @@
# BEGIN Fixup.pm
# This is prepended to the current PMG.pm to force-include the temporary `openssl-probe` fixup.
use Proxmox::Lib::SslProbe;
# END Fixup.pm


@@ -1,4 +1,4 @@
include /usr/share/dpkg/pkg-info.mk
#include /usr/share/dpkg/pkg-info.mk
PACKAGE=libpmg-rs-perl
@@ -22,7 +22,8 @@ PERLMOD_GENPACKAGE := /usr/lib/perlmod/genpackage.pl \
--lib=pmg_rs \
--lib-tag=proxmox \
--lib-package=Proxmox::Lib::PMG \
--lib-prefix=PMG
--lib-prefix=PMG \
--include-file=Fixup.pm
PERLMOD_PACKAGES := \
PMG::RS::APT::Repositories \
@@ -31,8 +32,10 @@ PERLMOD_PACKAGES := \
PMG::RS::OpenId \
PMG::RS::TFA
PERLMOD_PACKAGE_FILES := $(addsuffix .pm,$(subst ::,/,$(PERLMOD_PACKAGES)))
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
CARGO_BUILD_ARGS += --release --offline
TARGET_DIR=release
else
TARGET_DIR=debug
@@ -41,12 +44,13 @@ endif
all: PMG
cargo build $(CARGO_BUILD_ARGS)
PMG: Proxmox/Lib/PMG.pm
Proxmox/Lib/PMG.pm:
Proxmox: Proxmox/Lib/PMG.pm
PMG: $(PERLMOD_PACKAGE_FILES)
Proxmox/Lib/PMG.pm $(PERLMOD_PACKAGE_FILES) &: Fixup.pm
$(PERLMOD_GENPACKAGE) $(PERLMOD_PACKAGES)
.PHONY: install
install: target/release/libpmg_rs.so Proxmox/Lib/PMG.pm PMG
install: target/release/libpmg_rs.so Proxmox/Lib/PMG.pm $(PERLMOD_PACKAGE_FILES)
install -d -m755 $(DESTDIR)$(PERL_INSTALLVENDORARCH)/auto
install -m644 target/release/libpmg_rs.so $(DESTDIR)$(PERL_INSTALLVENDORARCH)/auto/libpmg_rs.so
install -d -m755 $(DESTDIR)$(PERL_INSTALLVENDORLIB)
@@ -56,6 +60,7 @@ install: target/release/libpmg_rs.so Proxmox/Lib/PMG.pm PMG
distclean: clean
clean:
rm -rf PMG Proxmox
cargo clean
rm -f *.deb *.dsc *.tar.* *.build *.buildinfo *.changes Cargo.lock
rm -rf $(PACKAGE)-[0-9]*/
@@ -71,11 +76,11 @@ upload: $(DEBS)
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - $(DEBS) | ssh -X repoman@repo.proxmox.com upload --product pmg --dist $(DEB_DISTRIBUTION)
$(BUILDDIR): src debian common/src Cargo.toml Makefile .cargo/config
$(BUILDDIR): src debian common/src Cargo.toml Makefile .cargo/config.toml
rm -rf $(BUILDDIR) $(BUILDDIR).tmp
mkdir $(BUILDDIR).tmp
mkdir $(BUILDDIR).tmp/common
cp -a -t $(BUILDDIR).tmp src debian Cargo.toml Makefile .cargo
cp -a -t $(BUILDDIR).tmp src debian Cargo.toml Makefile .cargo Fixup.pm
cp -a -t $(BUILDDIR).tmp/common common/src
mv $(BUILDDIR).tmp $(BUILDDIR)


@@ -1,9 +1,58 @@
libpmg-rs-perl (0.6.3) bullseye; urgency=medium
libpmg-rs-perl (0.7.6) bookworm; urgency=medium
* build with proxmox-apt 0.9.4 to also detect repository with next suite as
configured
* upgrade to current rust crates for perlmod and proxmox-sys/tfa/apt/notify
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Jun 2023 11:36:12 +0200
* add bindings for proxmox-shared-cache crate
* use apt api method implementation from proxmox-apt crate
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Aug 2024 14:19:56 +0200
libpmg-rs-perl (0.7.5) bookworm; urgency=medium
* add EAB binding support to ACME
* make Proxmox::Lib::PMG pull in Proxmox::Lib::SslProbe to work around
an issue where the openssl-probe crate calls setenv() and messes up perl's
view of the environment
-- Proxmox Support Team <support@proxmox.com> Thu, 07 Dec 2023 09:57:43 +0100
libpmg-rs-perl (0.7.4) bookworm; urgency=medium
* update to env logger 0.10
* use declare_magic for ACME
* add Proxmox::Lib::PMG::use_safe_putenv
-- Proxmox Support Team <support@proxmox.com> Wed, 06 Dec 2023 11:22:56 +0100
libpmg-rs-perl (0.7.3) bookworm; urgency=medium
* reset failure counts when unlocking second factors
-- Proxmox Support Team <support@proxmox.com> Wed, 05 Jul 2023 13:35:23 +0200
libpmg-rs-perl (0.7.2) bookworm; urgency=medium
* set default log level to 'info'
* introduce PMG_LOG environment variable to override log level
* add tfa_lock_status query sub
* add api_unlock_tfa sub
* bump proxmox-tfa to 4.0.2
-- Proxmox Support Team <support@proxmox.com> Tue, 27 Jun 2023 16:01:23 +0200
libpmg-rs-perl (0.7.1) bookworm; urgency=medium
* rebuild for Debian 12 Bookworm based release series
-- Proxmox Support Team <support@proxmox.com> Thu, 18 May 2023 12:01:08 +0200
libpmg-rs-perl (0.6.2) bullseye; urgency=medium


@@ -1 +0,0 @@
12


@@ -1,45 +1,58 @@
Source: libpmg-rs-perl
Section: perl
Priority: optional
Build-Depends: cargo:native <!nocheck>,
debhelper-compat (= 13),
librust-openssl-probe-dev (= 0.1.5-1~bpo12+pve1),
dh-cargo (>= 25),
librust-anyhow-1+default-dev,
librust-hex-0.4+default-dev,
librust-http-0.2+default-dev (>= 0.2.7-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev (>= 0.4.17-~~),
librust-nix-0.26+default-dev,
librust-openssl-0.10+default-dev (>= 0.10.40-~~),
librust-perlmod-0.13+default-dev (>= 0.13.4-~~),
librust-perlmod-0.13+exporter-dev (>= 0.13.4-~~),
librust-proxmox-acme-0.5+api-types-dev,
librust-proxmox-acme-0.5+client-dev,
librust-proxmox-acme-0.5+default-dev,
librust-proxmox-apt-0.11+cache-dev,
librust-proxmox-apt-0.11+default-dev,
librust-proxmox-apt-api-types-1+default-dev,
librust-proxmox-config-digest-0.1+default-dev,
librust-proxmox-http-0.9+client-sync-dev,
librust-proxmox-http-0.9+client-trait-dev,
librust-proxmox-http-0.9+default-dev,
librust-proxmox-http-error-0.1+default-dev,
librust-proxmox-log-0.2+default-dev,
librust-proxmox-notify-0.4+default-dev,
librust-proxmox-shared-cache-0.1+default-dev,
librust-proxmox-subscription-0.4+default-dev,
librust-proxmox-sys-0.6+default-dev,
librust-proxmox-tfa-5+api-dev,
librust-proxmox-tfa-5+default-dev,
librust-proxmox-time-2+default-dev,
librust-serde-1+default-dev,
librust-serde-bytes-0.11+default-dev,
librust-serde-json-1+default-dev,
librust-tracing-0.1+default-dev (>= 0.1.37-~~),
librust-url-2+default-dev,
libstd-rust-dev <!nocheck>,
perlmod-bin (>= 0.2.0-3),
rustc:native <!nocheck>,
Maintainer: Proxmox Support Team <support@proxmox.com>
Build-Depends:
debhelper (>= 12),
dh-cargo (>= 24),
perlmod-bin,
cargo:native <!nocheck>,
rustc:native <!nocheck>,
libstd-rust-dev <!nocheck>,
librust-anyhow-1+default-dev,
librust-env-logger-0.9+default-dev,
librust-hex-0.4+default-dev,
librust-http-0.2+default-dev (>= 0.2.7-~~),
librust-libc-0.2+default-dev,
librust-nix-0.26+default-dev,
librust-openssl-0.10+default-dev (>= 0.10.40-~~),
librust-perlmod-0.13+default-dev,
librust-perlmod-0.13+exporter-dev,
librust-proxmox-acme-rs-0.4+client-dev,
librust-proxmox-acme-rs-0.4+default-dev,
librust-proxmox-apt-0.9+default-dev (>= 0.9.4-~~),
librust-proxmox-http-0.8+client-sync-dev,
librust-proxmox-http-0.8+client-trait-dev,
librust-proxmox-http-0.8+default-dev,
librust-proxmox-subscription-0.3+default-dev,
librust-proxmox-sys-0.4+default-dev (>= 0.4.2-~~),
librust-proxmox-tfa-4+api-dev,
librust-proxmox-tfa-4+default-dev,
librust-proxmox-time-1+default-dev (>= 1.1.3-~~),
librust-serde-1+default-dev,
librust-serde-bytes-0.11+default-dev,
librust-serde-json-1+default-dev,
librust-url-2+default-dev,
Standards-Version: 4.3.0
Standards-Version: 4.6.1
Vcs-Git: git://git.proxmox.com/git/proxmox-perl-rs.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox-perl-rs.git
Homepage: https://www.proxmox.com
Package: libpmg-rs-perl
Architecture: any
Depends: ${perl:Depends},
Depends: ${misc:Depends},
${perl:Depends},
${shlibs:Depends},
libproxmox-rs-perl (>= 0.3.3),
Description: Components of Proxmox Mail Gateway which have been ported to Rust.
Contains parts of Proxmox Mail Gateway which have been ported to, or newly
implemented in the Rust programming language.


@@ -1,10 +1,31 @@
# WARNING: this is *NOT* used as the canonical source for d/control, but rather occasionally used via
# an invocation like:
# make clean
# rm debian/control
# debcargo package --config debian/debcargo.toml --changelog-ready --no-overlay-write-back --directory libpmg-rs-perl-0.7.1 pmg-rs 0.7.1
# mv libpmg-rs-perl-0.7.1/debian/control debian/control
# to semi-manually refresh the control file
#
# NOTE: debcargo thinks this is a source package, but it isn't! Drop provides, the dependencies of
# the binary package on rust source packages, Multi-Arch same, and other things that do not make
# sense for a combined perl + arch-dependent library package.
overlay = "."
crate_src_path = ".."
maintainer = "Proxmox Support Team <support@proxmox.com>"
[source]
section = "perl"
vcs_git = "git://git.proxmox.com/git/proxmox.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
vcs_git = "git://git.proxmox.com/git/proxmox-perl-rs.git"
vcs_browser = "https://git.proxmox.com/?p=proxmox-perl-rs.git"
build_depends = [
"perlmod-bin",
]
[packages.libpmg-rs-perl]
[packages.bin]
name = "libpmg-rs-perl"
summary = "Components of Proxmox Mail Gateway which have been ported to Rust."
description = """
Contains parts of Proxmox Mail Gateway which have been ported to, or newly
implemented in the Rust programming language.
"""


@@ -1,7 +1,25 @@
#!/usr/bin/make -f
include /usr/share/dpkg/pkg-info.mk
include /usr/share/rustc/architecture.mk
#export DH_VERBOSE=1
export BUILD_MODE=release
CARGO=/usr/share/cargo/bin/cargo
export CFLAGS CXXFLAGS CPPFLAGS LDFLAGS
export DEB_HOST_RUST_TYPE DEB_HOST_GNU_TYPE
export CARGO_HOME = $(CURDIR)/debian/cargo_home
export DEB_CARGO_CRATE=pmg-rs_$(DEB_VERSION_UPSTREAM)
export DEB_CARGO_PACKAGE=pmg-rs
%:
dh $@
override_dh_auto_configure:
@perl -ne 'if (/^version\s*=\s*"(\d+(?:\.\d+)+)"/) { my $$v_cargo = $$1; my $$v_deb = "$(DEB_VERSION_UPSTREAM)"; \
die "ERROR: d/changelog <-> Cargo.toml version mismatch: $$v_cargo != $$v_deb\n" if $$v_cargo ne $$v_deb; exit(0); }' Cargo.toml
$(CARGO) prepare-debian $(CURDIR)/debian/cargo_registry --link-from-system
dh_auto_configure


@@ -9,8 +9,8 @@ use std::os::unix::fs::OpenOptionsExt;
use anyhow::{format_err, Error};
use serde::{Deserialize, Serialize};
use proxmox_acme_rs::account::AccountData as AcmeAccountData;
use proxmox_acme_rs::{Account, Client};
use proxmox_acme::types::AccountData as AcmeAccountData;
use proxmox_acme::{Account, Client};
/// Our on-disk format inherited from PVE's proxmox-acme code.
#[derive(Deserialize, Serialize)]
@@ -79,6 +79,7 @@ impl Inner {
tos_agreed: bool,
contact: Vec<String>,
rsa_bits: Option<u32>,
eab_creds: Option<(String, String)>,
) -> Result<(), Error> {
self.tos = if tos_agreed {
self.client.terms_of_service_url()?.map(str::to_owned)
@@ -86,7 +87,9 @@ impl Inner {
None
};
let _account = self.client.new_account(contact, tos_agreed, rsa_bits)?;
let _account = self
.client
.new_account(contact, tos_agreed, rsa_bits, eab_creds)?;
let file = OpenOptions::new()
.write(true)
.create(true)
@@ -182,67 +185,45 @@ impl Inner {
#[perlmod::package(name = "PMG::RS::Acme")]
pub mod export {
use std::collections::HashMap;
use std::convert::TryFrom;
use std::sync::Mutex;
use anyhow::Error;
use serde_bytes::{ByteBuf, Bytes};
use perlmod::Value;
use proxmox_acme_rs::directory::Meta;
use proxmox_acme_rs::order::OrderData;
use proxmox_acme_rs::{Authorization, Challenge, Order};
use proxmox_acme::directory::Meta;
use proxmox_acme::order::OrderData;
use proxmox_acme::{Authorization, Challenge, Order};
use super::{AccountData, Inner};
const CLASSNAME: &str = "PMG::RS::Acme";
perlmod::declare_magic!(Box<Acme> : &Acme as "PMG::RS::Acme");
/// An Acme client instance.
pub struct Acme {
inner: Mutex<Inner>,
}
impl<'a> TryFrom<&'a Value> for &'a Acme {
type Error = Error;
fn try_from(value: &'a Value) -> Result<&'a Acme, Error> {
Ok(unsafe { value.from_blessed_box(CLASSNAME)? })
}
}
fn bless(class: Value, mut ptr: Box<Acme>) -> Result<Value, Error> {
let value = Value::new_pointer::<Acme>(&mut *ptr);
let value = Value::new_ref(&value);
let this = value.bless_sv(&class)?;
let _perl = Box::leak(ptr);
Ok(this)
}
/// Create a new ACME client instance given an account path and an API directory URL.
#[export(raw_return)]
pub fn new(#[raw] class: Value, api_directory: String) -> Result<Value, Error> {
bless(
class,
Box::new(Acme {
Ok(perlmod::instantiate_magic!(
&class,
MAGIC => Box::new(Acme {
inner: Mutex::new(Inner::new(api_directory)?),
}),
)
})
))
}
/// Load an existing account.
#[export(raw_return)]
pub fn load(#[raw] class: Value, account_path: String) -> Result<Value, Error> {
bless(
class,
Box::new(Acme {
Ok(perlmod::instantiate_magic!(
&class,
MAGIC => Box::new(Acme {
inner: Mutex::new(Inner::load(account_path)?),
}),
)
}
#[export(name = "DESTROY")]
fn destroy(#[raw] this: Value) {
perlmod::destructor!(this, Acme: CLASSNAME);
})
))
}
/// Create a new account.
@@ -260,11 +241,16 @@ pub mod export {
tos_agreed: bool,
contact: Vec<String>,
rsa_bits: Option<u32>,
eab_kid: Option<String>,
eab_hmac_key: Option<String>,
) -> Result<(), Error> {
this.inner
.lock()
.unwrap()
.new_account(account_path, tos_agreed, contact, rsa_bits)
this.inner.lock().unwrap().new_account(
account_path,
tos_agreed,
contact,
rsa_bits,
eab_kid.zip(eab_hmac_key),
)
}
/// Get the directory's meta information.
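From Perl, the two new EAB parameters are simply appended to the existing new_account argument list; a hedged sketch with placeholder credentials (both must be given together, since the Rust side combines them with eab_kid.zip(eab_hmac_key)):
my $acme = PMG::RS::Acme->new('https://acme.example.com/directory');
$acme->new_account(
    $account_path,                  # where to store the account
    1,                              # tos_agreed
    ['mailto:admin@example.com'],   # contact
    undef,                          # rsa_bits; undef selects the default
    $eab_kid,                       # placeholder EAB key identifier
    $eab_hmac_key,                  # placeholder EAB HMAC key
);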


@@ -1,12 +1,16 @@
#[perlmod::package(name = "PMG::RS::APT::Repositories")]
mod export {
use anyhow::Error;
use proxmox_apt_api_types::{
APTChangeRepositoryOptions, APTRepositoriesResult, APTRepositoryHandle,
};
use proxmox_config_digest::ConfigDigest;
use crate::common::apt::repositories::export as common;
/// Get information about configured and standard repositories.
#[export]
pub fn repositories() -> Result<common::RepositoriesResult, Error> {
pub fn repositories() -> Result<APTRepositoriesResult, Error> {
common::repositories("pmg")
}
@@ -15,7 +19,10 @@ mod export {
///
/// The `digest` parameter asserts that the configuration has not been modified.
#[export]
pub fn add_repository(handle: &str, digest: Option<&str>) -> Result<(), Error> {
pub fn add_repository(
handle: APTRepositoryHandle,
digest: Option<ConfigDigest>,
) -> Result<(), Error> {
common::add_repository(handle, "pmg", digest)
}
@@ -26,8 +33,8 @@ mod export {
pub fn change_repository(
path: &str,
index: usize,
options: common::ChangeProperties,
digest: Option<&str>,
options: APTChangeRepositoryOptions,
digest: Option<ConfigDigest>,
) -> Result<(), Error> {
common::change_repository(path, index, options, digest)
}
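For illustration, a hedged sketch of the Perl calling convention; the handle and file path are placeholders, and the digest field is assumed to be part of the returned repository information:
my $info = PMG::RS::APT::Repositories::repositories();
# pass the digest back so concurrent modifications are detected
PMG::RS::APT::Repositories::add_repository('no-subscription', $info->{digest});
PMG::RS::APT::Repositories::change_repository(
    '/etc/apt/sources.list.d/example.list',   # placeholder file path
    0,                                        # index of the repository in that file
    { enabled => 1 },
    $info->{digest},
);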


@@ -5,7 +5,7 @@ pub mod export {
use anyhow::Error;
use serde_bytes::ByteBuf;
use proxmox_acme_rs::util::Csr;
use proxmox_acme::util::Csr;
/// Generates a CSR and its accompanying private key.
///


@@ -1,3 +1,7 @@
use anyhow::Error;
use proxmox_apt_api_types::APTUpdateInfo;
#[path = "../common/src/mod.rs"]
pub mod common;
@@ -12,6 +16,20 @@ mod export {
#[export]
pub fn init() {
common::logger::init();
common::logger::init("PMG_LOG", "info");
}
/// CLI tools should call this very early. This is a workaround that makes environment
/// variable manipulation leak instead of crash. Required when calling into rust code
/// that performs `setenv` calls, particularly code using the openssl crate.
#[export]
pub fn use_safe_putenv() {
perlmod::ffi::use_safe_putenv(true);
}
}
pub fn send_updates_available(_updates: &[&APTUpdateInfo]) -> Result<(), Error> {
log::warn!("update notifications are not implemented for PMG yet");
Ok(())
}
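A hedged sketch of the intended call order in a Perl CLI tool, assuming the module is exposed as Proxmox::Lib::PMG (as the changelog entry elsewhere in this diff suggests):
use Proxmox::Lib::PMG;
# switch perl to safe putenv before any Rust code (e.g. via the
# openssl crate) gets a chance to call setenv()
Proxmox::Lib::PMG::use_safe_putenv();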


@@ -24,6 +24,7 @@ pub(self) use proxmox_tfa::api::{
#[perlmod::package(name = "PMG::RS::TFA")]
mod export {
use std::collections::HashMap;
use std::convert::TryInto;
use std::sync::Mutex;
@@ -436,6 +437,66 @@ mod export {
Err(methods::EntryNotFound) => bail!("no such entry"),
}
}
#[export]
fn api_unlock_tfa(#[raw] raw_this: Value, userid: &str) -> Result<bool, Error> {
let this: &Tfa = (&raw_this).try_into()?;
Ok(methods::unlock_and_reset_tfa(
&mut this.inner.lock().unwrap(),
&UserAccess::new(&raw_this)?,
userid,
)?)
}
#[derive(serde::Serialize)]
#[serde(rename_all = "kebab-case")]
struct TfaLockStatus {
/// Once a user runs into a TOTP limit they get locked out of TOTP until they successfully use
/// a recovery key.
#[serde(skip_serializing_if = "bool_is_false", default)]
totp_locked: bool,
/// If a user hits too many 2nd factor failures, they get completely blocked for a while.
#[serde(skip_serializing_if = "Option::is_none", default)]
#[serde(deserialize_with = "filter_expired_timestamp")]
tfa_locked_until: Option<i64>,
}
impl From<&proxmox_tfa::api::TfaUserData> for TfaLockStatus {
fn from(data: &proxmox_tfa::api::TfaUserData) -> Self {
Self {
totp_locked: data.totp_locked,
tfa_locked_until: data.tfa_locked_until,
}
}
}
fn bool_is_false(b: &bool) -> bool {
!*b
}
#[export]
fn tfa_lock_status(
#[try_from_ref] this: &Tfa,
userid: Option<&str>,
) -> Result<Option<perlmod::Value>, Error> {
let this = this.inner.lock().unwrap();
if let Some(userid) = userid {
if let Some(user) = this.users.get(userid) {
Ok(Some(perlmod::to_value(&TfaLockStatus::from(user))?))
} else {
Ok(None)
}
} else {
Ok(Some(perlmod::to_value(
&HashMap::<String, TfaLockStatus>::from_iter(
this.users
.iter()
.map(|(uid, data)| (uid.clone(), TfaLockStatus::from(data))),
),
)?))
}
}
}
/// Attach the path to errors from [`nix::mkdir()`].
@@ -589,9 +650,8 @@ impl proxmox_tfa::api::OpenUserChallengeData for UserAccess {
}
}
// TODO: enable once we have UI/API admin stuff to unlock locked accounts
fn enable_lockout(&self) -> bool {
false
true
}
}
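For reference, a hedged sketch of the two query shapes tfa_lock_status supports from Perl; the userid is a placeholder, and the kebab-case keys follow the serde rename above:
# single user: hash ref with totp-locked / tfa-locked-until, or undef
my $status = $tfa->tfa_lock_status('alice@pam');
# all users: hash ref mapping each userid to its lock status
my $all = $tfa->tfa_lock_status(undef);
for my $uid (sort keys %$all) {
    my $until = $all->{$uid}->{'tfa-locked-until'};
    print "$uid locked until $until\n" if defined($until);
}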

pve-rs/.cargo/config.toml (new file)

@@ -0,0 +1,42 @@
[source.crates-io]
replace-with = "vendored-sources"
[source."git+git://git.proxmox.com/git/perlmod.git"]
git = "git://git.proxmox.com/git/perlmod.git"
replace-with = "vendored-sources"
[source."git+git://git.proxmox.com/git/proxmox-resource-scheduling.git"]
git = "git://git.proxmox.com/git/proxmox-resource-scheduling.git"
replace-with = "vendored-sources"
[source."git+https://gitea.basealt.ru/Proxmox/apt-pkg-native.git"]
git = "https://gitea.basealt.ru/Proxmox/apt-pkg-native.git"
replace-with = "vendored-sources"
[source."git+https://gitea.basealt.ru/Proxmox/proxmox.git"]
git = "https://gitea.basealt.ru/Proxmox/proxmox.git"
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
# Local path overrides
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
proxmox-tfa = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-time = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-uuid = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
perlmod = { git= "git://git.proxmox.com/git/perlmod.git" }
proxmox-apt-api-types = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-apt = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-config-digest = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-log = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-openid = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-schema = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-sys = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-http = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-http-error = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-notify = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-shared-cache = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-subscription = { git = "https://gitea.basealt.ru/Proxmox/proxmox.git" }
proxmox-resource-scheduling = { git = "git://git.proxmox.com/git/proxmox-resource-scheduling.git" }

pve-rs/Cargo.lock (generated, new file): diff suppressed because it is too large.


@@ -1,6 +1,6 @@
[package]
name = "pve-rs"
version = "0.7.7"
version = "0.8.10"
description = "PVE parts which have been ported to Rust"
homepage = "https://www.proxmox.com"
authors = ["Proxmox Support Team <support@proxmox.com>"]
@@ -8,35 +8,44 @@ edition = "2021"
license = "AGPL-3"
repository = "https://git.proxmox.com/?p=proxmox.git"
exclude = [
"debian",
]
exclude = ["debian"]
[lib]
crate-type = [ "cdylib" ]
crate-type = ["cdylib"]
[dependencies]
anyhow = "1.0"
base32 = "0.4"
base64 = "0.13"
env_logger = "0.9"
hex = "0.4"
http = "0.2.7"
libc = "0.2"
log = "0.4.17"
nix = "0.26"
openssl = "0.10.40"
serde = "1.0"
serde_bytes = "0.11"
serde_json = "1.0"
tracing = "0.1.37"
url = "2"
perlmod = { version = "0.13", features = [ "exporter" ] }
perlmod = { version = "0.13", features = ["exporter"] }
proxmox-apt = "0.9.4"
proxmox-http = { version = "0.8", features = ["client-sync", "client-trait"] }
proxmox-openid = "0.9.8"
proxmox-resource-scheduling = "0.2.1"
proxmox-subscription = "0.3"
proxmox-sys = "0.4.2"
proxmox-tfa = { version = "4", features = ["api"] }
proxmox-time = "1.1.3"
proxmox-apt = { version = "0.11", features = ["cache"] }
proxmox-apt-api-types = "1.0"
proxmox-config-digest = "0.1"
proxmox-http = { version = "0.9", features = ["client-sync", "client-trait"] }
proxmox-http-error = "0.1.0"
proxmox-log = "0.2"
proxmox-notify = { version = "0.4", features = ["pve-context"] }
proxmox-openid = "0.10"
proxmox-resource-scheduling = "0.3.0"
proxmox-shared-cache = "0.1.0"
proxmox-subscription = "0.4"
proxmox-sys = "0.6"
proxmox-tfa = { version = "5", features = ["api"] }
proxmox-time = "2"
[features]
alt-linux = ["proxmox-apt/alt-linux", "proxmox-apt-api-types/alt-linux"]
default = ["alt-linux"]

pve-rs/Fixup.pm (new file)

@@ -0,0 +1,4 @@
# BEGIN Fixup.pm
# This is prepended to the current PVE.pm to force-include the temporary `openssl-probe` fixup.
use Proxmox::Lib::SslProbe;
# END Fixup.pm


@@ -1,4 +1,4 @@
include /usr/share/dpkg/pkg-info.mk
#include /usr/share/dpkg/pkg-info.mk
PACKAGE=libpve-rs-perl
export PERLMOD_PRODUCT=PVE
@@ -23,7 +23,8 @@ PERLMOD_GENPACKAGE := /usr/lib/perlmod/genpackage.pl \
--lib=pve_rs \
--lib-tag=proxmox \
--lib-package=Proxmox::Lib::PVE \
--lib-prefix=PVE
--lib-prefix=PVE \
--include-file=Fixup.pm
PERLMOD_PACKAGES := \
PVE::RS::APT::Repositories \
@@ -31,8 +32,10 @@ PERLMOD_PACKAGES := \
PVE::RS::ResourceScheduling::Static \
PVE::RS::TFA
PERLMOD_PACKAGE_FILES := $(addsuffix .pm,$(subst ::,/,$(PERLMOD_PACKAGES)))
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
CARGO_BUILD_ARGS += --release --offline
TARGET_DIR=release
else
TARGET_DIR=debug
@@ -42,18 +45,19 @@ all: PVE
cargo build $(CARGO_BUILD_ARGS)
mkdir -p test/Proxmox/Lib
sed -r -e \
's@^sub libdirs.*$$@sub libdirs { return ("./target/$(TARGET_DIR)", "./../target/$(TARGET_DIR)"); }@' \
's@^sub libfile.*$$@sub libfile { "$(shell pwd)/target/$(TARGET_DIR)/libpve_rs.so"; }@' \
Proxmox/Lib/PVE.pm >test/Proxmox/Lib/PVE.pm
PVE: Proxmox/Lib/PVE.pm
Proxmox/Lib/PVE.pm:
Proxmox: Proxmox/Lib/PVE.pm
PVE: $(PERLMOD_PACKAGE_FILES)
Proxmox/Lib/PVE.pm $(PERLMOD_PACKAGE_FILES) &: Fixup.pm
$(PERLMOD_GENPACKAGE) $(PERLMOD_PACKAGES)
check: all
$(MAKE) -C test test
.PHONY: install
install: target/release/libpve_rs.so Proxmox/Lib/PVE.pm PVE
install: target/release/libpve_rs.so Proxmox/Lib/PVE.pm $(PERLMOD_PACKAGE_FILES)
install -d -m755 $(DESTDIR)$(PERL_INSTALLVENDORARCH)/auto
install -m644 target/release/libpve_rs.so $(DESTDIR)$(PERL_INSTALLVENDORARCH)/auto/libpve_rs.so
install -d -m755 $(DESTDIR)$(PERL_INSTALLVENDORLIB)
@@ -62,6 +66,7 @@ install: target/release/libpve_rs.so Proxmox/Lib/PVE.pm PVE
find $(PM_DIR) \! -type d -print -exec install -Dm644 '{}' $(DESTDIR)$(PERL_INSTALLVENDORLIB)'/{}' ';'
clean:
rm -rf PVE Proxmox
cargo clean
rm -f *.deb *.dsc *.tar.* *.build *.buildinfo *.changes Cargo.lock
rm -rf $(PACKAGE)-[0-9]*/
@@ -77,11 +82,11 @@ upload: $(DEBS)
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - $(DEBS) | ssh -X repoman@repo.proxmox.com upload --product pve --dist $(DEB_DISTRIBUTION)
$(BUILDDIR): src debian test common/src Cargo.toml Makefile .cargo/config
$(BUILDDIR): src debian test common/src Cargo.toml Makefile .cargo/config.toml
rm -rf $(BUILDDIR) $(BUILDDIR).tmp
mkdir $(BUILDDIR).tmp
mkdir $(BUILDDIR).tmp/common
cp -a -t $(BUILDDIR).tmp src debian test Cargo.toml Makefile .cargo
cp -a -t $(BUILDDIR).tmp src debian test Cargo.toml Makefile .cargo Fixup.pm
cp -a -t $(BUILDDIR).tmp/common common/src
mv $(BUILDDIR).tmp $(BUILDDIR)


@@ -1,9 +1,99 @@
libpve-rs-perl (0.7.7) bullseye; urgency=medium
libpve-rs-perl (0.8.10) bookworm; urgency=medium
* build with proxmox-apt 0.9.4 to also detect repository with next suite as
configured
* use apt api implementation from the proxmox-apt crate
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Jun 2023 11:30:59 +0200
* send apt update notification via proxmox-notify
* add bindings for proxmox-shared-cache crate
* update to current proxmox-time/tfa/sys/log crates
-- Proxmox Support Team <support@proxmox.com> Fri, 09 Aug 2024 13:42:38 +0200
libpve-rs-perl (0.8.9) bookworm; urgency=medium
* update to notify 0.4: use file based notification templates
-- Proxmox Support Team <support@proxmox.com> Tue, 04 Jun 2024 11:01:03 +0200
libpve-rs-perl (0.8.8) bookworm; urgency=medium
* notify: include 'hostname' and 'type' metadata fields for forwarded mails
* notify: smtp: forward original message instead of nesting
* notify: smtp: add 'Auto-Submitted' header to email body
* notify: api: allow resetting built-in targets if used by a matcher
-- Proxmox Support Team <support@proxmox.com> Wed, 10 Jan 2024 14:19:47 +0100
libpve-rs-perl (0.8.7) bookworm; urgency=medium
* notify: adapt to new matcher-based notification routing
* notify: add bindings for smtp API calls
* pve-rs: notify: remove notify_context for PVE
* notify: add 'disable' parameter
* notify: support 'origin' parameter
-- Proxmox Support Team <support@proxmox.com> Fri, 17 Nov 2023 13:41:17 +0100
libpve-rs-perl (0.8.6) bookworm; urgency=medium
* re-build with newer proxmox-apt dependency to make Ceph Reef repo available
-- Proxmox Support Team <support@proxmox.com> Tue, 05 Sep 2023 15:37:44 +0200
libpve-rs-perl (0.8.5) bookworm; urgency=medium
* add PVE::RS::Notify module
-- Proxmox Support Team <support@proxmox.com> Mon, 24 Jul 2023 11:18:56 +0200
libpve-rs-perl (0.8.4) bookworm; urgency=medium
* reset failure counts when unlocking second factors
-- Proxmox Support Team <support@proxmox.com> Wed, 05 Jul 2023 13:30:17 +0200
libpve-rs-perl (0.8.3) bookworm; urgency=medium
* set default log level to 'info'
* introduce PVE_LOG environment variable to override log level
* add tfa_lock_status query sub
* bump proxmox-tfa to 4.0.2
-- Proxmox Support Team <support@proxmox.com> Mon, 05 Jun 2023 12:55:03 +0200
libpve-rs-perl (0.8.2) bookworm; urgency=medium
* update proxmox-apt which updated repositories info for bookworm
-- Proxmox Support Team <support@proxmox.com> Sun, 04 Jun 2023 18:33:42 +0200
libpve-rs-perl (0.8.1) bookworm; urgency=medium
* bump proxmox-apt,http,openid,subscription,sys crates to their bookworm
versions
* bump proxmox-tfa to 4.0.1 to include the unlock API
* enable TFA lockout and provide the `api_unlock_tfa` call
-- Proxmox Support Team <support@proxmox.com> Wed, 31 May 2023 14:17:31 +0200
libpve-rs-perl (0.8.0) bookworm; urgency=medium
* rebuild for Debian 12 Bookworm based release series
-- Proxmox Support Team <support@proxmox.com> Tue, 16 May 2023 14:26:52 +0200
libpve-rs-perl (0.7.6) bullseye; urgency=medium


@@ -1,41 +1,49 @@
Source: libpve-rs-perl
Section: perl
Priority: optional
Build-Depends:
debhelper-compat (= 13),
dh-cargo (>= 24),
perlmod-bin,
cargo:native <!nocheck>,
rustc:native <!nocheck>,
libstd-rust-dev <!nocheck>,
librust-anyhow-1+default-dev,
librust-base32-0.4+default-dev,
librust-base64-0.13+default-dev,
librust-env-logger-0.9+default-dev,
librust-hex-0.4+default-dev,
librust-http-0.2+default-dev (>= 0.2.7-~~),
librust-libc-0.2+default-dev,
librust-nix-0.26+default-dev,
librust-openssl-0.10+default-dev (>= 0.10.40-~~),
librust-perlmod-0.13+default-dev,
librust-perlmod-0.13+exporter-dev,
librust-proxmox-apt-0.9+default-dev (>= 0.9.4-~~),
librust-proxmox-http-0.8+client-sync-dev,
librust-proxmox-http-0.8+client-trait-dev,
librust-proxmox-http-0.8+default-dev,
librust-proxmox-openid-0.9+default-dev (>= 0.9.8-~~),
librust-proxmox-resource-scheduling-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-subscription-0.3+default-dev,
librust-proxmox-sys-0.4+default-dev (>= 0.4.2-~~),
librust-proxmox-tfa-4+api-dev,
librust-proxmox-tfa-4+default-dev,
librust-proxmox-time-1+default-dev (>= 1.1.3-~~),
librust-serde-1+default-dev,
librust-serde-bytes-0.11+default-dev,
librust-serde-json-1+default-dev,
librust-url-2+default-dev,
Build-Depends: cargo:native <!nocheck>,
debhelper-compat (= 13),
dh-cargo (>= 25),
librust-anyhow-1+default-dev,
librust-base32-0.4+default-dev,
librust-base64-0.13+default-dev,
librust-hex-0.4+default-dev,
librust-http-0.2+default-dev (>= 0.2.7-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev (>= 0.4.17-~~),
librust-nix-0.26+default-dev,
librust-openssl-0.10+default-dev (>= 0.10.40-~~),
librust-perlmod-0.13+default-dev,
librust-perlmod-0.13+exporter-dev,
librust-proxmox-apt-0.11+cache-dev,
librust-proxmox-apt-0.11+default-dev,
librust-proxmox-apt-api-types-1+default-dev,
librust-proxmox-config-digest-0.1+default-dev,
librust-proxmox-http-0.9+client-sync-dev,
librust-proxmox-http-0.9+client-trait-dev,
librust-proxmox-http-0.9+default-dev,
librust-proxmox-http-error-0.1+default-dev,
librust-proxmox-log-0.2+default-dev,
librust-proxmox-notify-0.4+default-dev,
librust-proxmox-notify-0.4+pve-context-dev,
librust-proxmox-openid-0.10+default-dev,
librust-proxmox-resource-scheduling-0.3+default-dev,
librust-proxmox-shared-cache-0.1+default-dev,
librust-proxmox-subscription-0.4+default-dev,
librust-proxmox-sys-0.6+default-dev,
librust-proxmox-tfa-5+api-dev,
librust-proxmox-tfa-5+default-dev,
librust-proxmox-time-2+default-dev,
librust-serde-1+default-dev,
librust-serde-bytes-0.11+default-dev,
librust-serde-json-1+default-dev,
librust-tracing-0.1+default-dev (>= 0.1.37-~~),
librust-url-2+default-dev,
libstd-rust-dev <!nocheck>,
perlmod-bin (>= 0.2.0-3),
rustc:native <!nocheck>,
Maintainer: Proxmox Support Team <support@proxmox.com>
Standards-Version: 4.5.1
Standards-Version: 4.6.1
Vcs-Git: git://git.proxmox.com/git/proxmox-perl-rs.git
Vcs-Browser: https://git.proxmox.com/?p=proxmox-perl-rs.git
Homepage: https://www.proxmox.com
@@ -43,14 +51,14 @@ Rules-Requires-Root: no
Package: libpve-rs-perl
Architecture: any
Depends:
${misc:Depends},
${perl:Depends},
${shlibs:Depends},
Breaks:
libpve-access-control (<< 7.1-3),
libpve-common-perl (<< 7.1-4),
pve-manager (<< 7.1-11),
Depends: ${misc:Depends},
${perl:Depends},
${shlibs:Depends},
libproxmox-rs-perl (>= 0.3.3),
Breaks: libpve-access-control (<< 7.1-3),
libpve-common-perl (<< 7.1-4),
libpve-notify-perl (<< 8.0.7),
pve-manager (<< 7.1-11),
Description: PVE parts which have been ported to Rust - Rust source code
This package contains the source for the Rust pve-rs crate, packaged by
debcargo for use with cargo and dh-cargo.


@@ -1,7 +1,25 @@
#!/usr/bin/make -f
include /usr/share/dpkg/pkg-info.mk
include /usr/share/rustc/architecture.mk
#export DH_VERBOSE=1
export BUILD_MODE=release
CARGO=/usr/share/cargo/bin/cargo
export CFLAGS CXXFLAGS CPPFLAGS LDFLAGS
export DEB_HOST_RUST_TYPE DEB_HOST_GNU_TYPE
export CARGO_HOME = $(CURDIR)/debian/cargo_home
export DEB_CARGO_CRATE=pve-rs_$(DEB_VERSION_UPSTREAM)
export DEB_CARGO_PACKAGE=pve-rs
%:
dh $@
override_dh_auto_configure:
@perl -ne 'if (/^version\s*=\s*"(\d+(?:\.\d+)+)"/) { my $$v_cargo = $$1; my $$v_deb = "$(DEB_VERSION_UPSTREAM)"; \
die "ERROR: d/changelog <-> Cargo.toml version mismatch: $$v_cargo != $$v_deb\n" if $$v_cargo ne $$v_deb; exit(0); }' Cargo.toml
$(CARGO) prepare-debian $(CURDIR)/debian/cargo_registry --link-from-system
dh_auto_configure


@@ -2,12 +2,17 @@
mod export {
use anyhow::Error;
use proxmox_apt_api_types::{
APTChangeRepositoryOptions, APTRepositoriesResult, APTRepositoryHandle,
};
use proxmox_config_digest::ConfigDigest;
use crate::common::apt::repositories::export as common;
/// Get information about configured and standard repositories.
#[export]
pub fn repositories() -> Result<common::RepositoriesResult, Error> {
common::repositories("pve")
pub fn repositories() -> Result<APTRepositoriesResult, Error> {
proxmox_apt::list_repositories("pve")
}
/// Add the repository identified by the `handle`.
@@ -15,7 +20,10 @@ mod export {
///
/// The `digest` parameter asserts that the configuration has not been modified.
#[export]
pub fn add_repository(handle: &str, digest: Option<&str>) -> Result<(), Error> {
pub fn add_repository(
handle: APTRepositoryHandle,
digest: Option<ConfigDigest>,
) -> Result<(), Error> {
common::add_repository(handle, "pve", digest)
}
@@ -26,8 +34,8 @@ mod export {
pub fn change_repository(
path: &str,
index: usize,
options: common::ChangeProperties,
digest: Option<&str>,
options: APTChangeRepositoryOptions,
digest: Option<ConfigDigest>,
) -> Result<(), Error> {
common::change_repository(path, index, options, digest)
}


@@ -1,5 +1,13 @@
//! Rust library for the Proxmox VE code base.
use std::collections::HashMap;
use anyhow::Error;
use serde_json::json;
use proxmox_apt_api_types::APTUpdateInfo;
use proxmox_notify::{Config, Notification, Severity};
#[path = "../common/src/mod.rs"]
pub mod common;
@@ -10,10 +18,69 @@ pub mod tfa;
#[perlmod::package(name = "Proxmox::Lib::PVE", lib = "pve_rs")]
mod export {
use proxmox_notify::context::pve::PVE_CONTEXT;
use crate::common;
#[export]
pub fn init() {
common::logger::init();
common::logger::init("PVE_LOG", "info");
proxmox_notify::context::set_context(&PVE_CONTEXT);
}
}
fn send_notification(notification: &Notification) -> Result<(), Error> {
let config = proxmox_sys::fs::file_read_optional_string("/etc/pve/notifications.cfg")?
.unwrap_or_default();
let private_config =
proxmox_sys::fs::file_read_optional_string("/etc/pve/priv/notifications.cfg")?
.unwrap_or_default();
let config = Config::new(&config, &private_config)?;
proxmox_notify::api::common::send(&config, notification)?;
Ok(())
}
pub fn send_updates_available(updates: &[&APTUpdateInfo]) -> Result<(), Error> {
let hostname = proxmox_sys::nodename().to_string();
let metadata = HashMap::from([
("hostname".into(), hostname.clone()),
("type".into(), "package-updates".into()),
]);
// The template uses the `table` handlebars helper, so
// we need to form the appropriate data structure first.
let update_table = json!({
"schema": {
"columns": [
{
"label": "Package",
"id": "Package",
},
{
"label": "Old Version",
"id": "OldVersion",
},
{
"label": "New Version",
"id": "Version",
}
],
},
"data": updates,
});
let template_data = json!({
"hostname": hostname,
"updates": update_table,
});
let notification =
Notification::from_template(Severity::Info, "package-updates", template_data, metadata);
send_notification(&notification)?;
Ok(())
}


@@ -1,6 +1,5 @@
#[perlmod::package(name = "PVE::RS::OpenId", lib = "pve_rs")]
mod export {
use std::convert::TryFrom;
use std::sync::Mutex;
use anyhow::Error;
@@ -9,34 +8,13 @@ mod export {
use proxmox_openid::{OpenIdAuthenticator, OpenIdConfig, PrivateAuthState};
const CLASSNAME: &str = "PVE::RS::OpenId";
perlmod::declare_magic!(Box<OpenId> : &OpenId as "PVE::RS::OpenId");
/// An OpenIdAuthenticator client instance.
pub struct OpenId {
inner: Mutex<OpenIdAuthenticator>,
}
impl<'a> TryFrom<&'a Value> for &'a OpenId {
type Error = Error;
fn try_from(value: &'a Value) -> Result<&'a OpenId, Error> {
Ok(unsafe { value.from_blessed_box(CLASSNAME)? })
}
}
fn bless(class: Value, mut ptr: Box<OpenId>) -> Result<Value, Error> {
let value = Value::new_pointer::<OpenId>(&mut *ptr);
let value = Value::new_ref(&value);
let this = value.bless_sv(&class)?;
let _perl = Box::leak(ptr);
Ok(this)
}
#[export(name = "DESTROY")]
fn destroy(#[raw] this: Value) {
perlmod::destructor!(this, OpenId: CLASSNAME);
}
/// Create a new OpenId client instance
#[export(raw_return)]
pub fn discover(
@@ -45,12 +23,12 @@ mod export {
redirect_url: &str,
) -> Result<Value, Error> {
let open_id = OpenIdAuthenticator::discover(&config, redirect_url)?;
bless(
class,
Box::new(OpenId {
Ok(perlmod::instantiate_magic!(
&class,
MAGIC => Box::new(OpenId {
inner: Mutex::new(open_id),
}),
)
})
))
}
#[export]


@@ -27,6 +27,7 @@ pub(self) use proxmox_tfa::api::{
#[perlmod::package(name = "PVE::RS::TFA")]
mod export {
use std::collections::HashMap;
use std::convert::TryInto;
use std::sync::Mutex;
@@ -484,6 +485,66 @@ mod export {
Err(methods::EntryNotFound) => bail!("no such entry"),
}
}
#[export]
fn api_unlock_tfa(#[raw] raw_this: Value, userid: &str) -> Result<bool, Error> {
let this: &Tfa = (&raw_this).try_into()?;
Ok(methods::unlock_and_reset_tfa(
&mut this.inner.lock().unwrap(),
&UserAccess::new(&raw_this)?,
userid,
)?)
}
#[derive(serde::Serialize)]
#[serde(rename_all = "kebab-case")]
struct TfaLockStatus {
/// Once a user runs into a TOTP limit they get locked out of TOTP until they successfully use
/// a recovery key.
#[serde(skip_serializing_if = "bool_is_false", default)]
totp_locked: bool,
/// If a user hits too many 2nd factor failures, they get completely blocked for a while.
#[serde(skip_serializing_if = "Option::is_none", default)]
#[serde(deserialize_with = "filter_expired_timestamp")]
tfa_locked_until: Option<i64>,
}
impl From<&proxmox_tfa::api::TfaUserData> for TfaLockStatus {
fn from(data: &proxmox_tfa::api::TfaUserData) -> Self {
Self {
totp_locked: data.totp_locked,
tfa_locked_until: data.tfa_locked_until,
}
}
}
fn bool_is_false(b: &bool) -> bool {
!*b
}
#[export]
fn tfa_lock_status(
#[try_from_ref] this: &Tfa,
userid: Option<&str>,
) -> Result<Option<perlmod::Value>, Error> {
let this = this.inner.lock().unwrap();
if let Some(userid) = userid {
if let Some(user) = this.users.get(userid) {
Ok(Some(perlmod::to_value(&TfaLockStatus::from(user))?))
} else {
Ok(None)
}
} else {
Ok(Some(perlmod::to_value(
&HashMap::<String, TfaLockStatus>::from_iter(
this.users
.iter()
.map(|(uid, data)| (uid.clone(), TfaLockStatus::from(data))),
),
)?))
}
}
}
/// Version 1 format of `/etc/pve/priv/tfa.cfg`
@@ -993,9 +1054,8 @@ impl proxmox_tfa::api::OpenUserChallengeData for UserAccess {
}
}
/// TODO: Enable this once we can consider most clusters to support the new format.
fn enable_lockout(&self) -> bool {
false
true
}
}


@@ -86,8 +86,59 @@ sub test_overcommitted {
is($nodes[3], "A", 'fourth should be A');
}
sub test_balance_small_memory_difference {
my ($with_start_load) = @_;
my $static = PVE::RS::ResourceScheduling::Static->new();
# Memory is different to avoid flaky results with what would otherwise be ties.
$static->add_node("A", 8, 10_000_000_000);
$static->add_node("B", 4, 9_000_000_000);
$static->add_node("C", 4, 8_000_000_000);
if ($with_start_load) {
$static->add_service_usage_to_node("A", { maxcpu => 4, maxmem => 1_000_000_000 });
$static->add_service_usage_to_node("B", { maxcpu => 2, maxmem => 1_000_000_000 });
$static->add_service_usage_to_node("C", { maxcpu => 2, maxmem => 1_000_000_000 });
}
my $service = {
maxcpu => 3,
maxmem => 16_000_000,
};
for (my $i = 0; $i < 20; $i++) {
my $score_list = $static->score_nodes_to_start_service($service);
# imitate HA manager
my $scores = { map { $_->[0] => -$_->[1] } $score_list->@* };
my @nodes = sort {
$scores->{$a} <=> $scores->{$b} || $a cmp $b
} keys $scores->%*;
if ($i % 4 <= 1) {
is($nodes[0], "A", 'first should be A');
is($nodes[1], "B", 'second should be B');
is($nodes[2], "C", 'third should be C');
} elsif ($i % 4 == 2) {
is($nodes[0], "B", 'first should be B');
is($nodes[1], "C", 'second should be C');
is($nodes[2], "A", 'third should be A');
} elsif ($i % 4 == 3) {
is($nodes[0], "C", 'first should be C');
is($nodes[1], "A", 'second should be A');
is($nodes[2], "B", 'third should be B');
} else {
die "internal error, got $i % 4 == " . ($i % 4) . "\n";
}
$static->add_service_usage_to_node($nodes[0], $service);
}
}
test_basic();
test_balance();
test_overcommitted();
test_balance_small_memory_difference(1);
test_balance_small_memory_difference(0);
done_testing();

pve-rs/vendor/addr2line/.cargo-checksum.json (vendored, new file)

@@ -0,0 +1 @@
{"files":{"CHANGELOG.md":"0d04c7dddcffba3d83d4a05466c9b79cf13360c33f081fc9800596716bf954ce","Cargo.lock":"783a3adbf5047e30e0aa52f28dc6a0134534de715e5a36fd402c17e2eba43b1a","Cargo.toml":"940373297f67456852d5236a1eb7a3856bc672c69529cbbe6052ce5f85221170","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"e99d88d232bf57d70f0fb87f6b496d44b6653f99f8a63d250a54c61ea4bcde40","README.md":"c635ed91d7b0c87ff2f0f311cd1a31336d2cbc4d011965d3b58afaca073538d9","src/bin/addr2line.rs":"3da8c7657604578961f2bf89052b94e6c59c55abe27a2707913f98875d666124","src/frame.rs":"de3b23388c36a0874db5569d1f49ce6cc52ef2006b9ae9b9a3eba7654b201e2b","src/function.rs":"7fc741622c44c24fdf54c8a44cbdee058d49da33974bc86cd45b70c143071e68","src/lazy.rs":"ec230b69a0d194fe253227a41903231ca70a88f896af7a6f8a5a7d9ac63bf618","src/lib.rs":"8bf9fe3f3ced8ff84d60fdd456a8ff6e73170825cd696b0291b4644c01e620d2","src/line.rs":"049e9b1526ae3433a6001e8377245131e9cbd056d17e67a9b34898598a4f1c28","src/loader.rs":"9ad08da02599b9742a9821742fd84dbe0294838b6faa5f5753eacddc6101ffc1","src/lookup.rs":"0d28a2fd00f0696f8fb50cdc88cb7d55a910df8bf3052b7c74ae50a387346e67","src/unit.rs":"f4399c401759e14db5d596cfddfe2c8a0591a81c18d9adaedba7d243cc3bd192"},"package":"dfbe277e56a376000877090da837660b4427aad530e3028d44e0bffe4f89a1c1"}

pve-rs/vendor/addr2line/CHANGELOG.md (vendored, new file)

@@ -0,0 +1,444 @@
# `addr2line` Change Log
--------------------------------------------------------------------------------
## 0.24.2 (2024/10/04)
### Changed
* Enabled caching of DWARF abbreviations.
[#318](https://github.com/gimli-rs/addr2line/pull/318)
* Changed the `addr2line` binary to prefer symbol names over DWARF names.
[#332](https://github.com/gimli-rs/addr2line/pull/332)
* Updated `gimli` dependency.
### Added
* Added `Context::from_arc_dwarf`.
[#327](https://github.com/gimli-rs/addr2line/pull/327)
* Added benchmark comparison.
[#315](https://github.com/gimli-rs/addr2line/pull/315)
[#321](https://github.com/gimli-rs/addr2line/pull/321)
[#322](https://github.com/gimli-rs/addr2line/pull/322)
[#325](https://github.com/gimli-rs/addr2line/pull/325)
* Added more tests.
[#328](https://github.com/gimli-rs/addr2line/pull/328)
[#330](https://github.com/gimli-rs/addr2line/pull/330)
[#331](https://github.com/gimli-rs/addr2line/pull/331)
[#333](https://github.com/gimli-rs/addr2line/pull/333)
--------------------------------------------------------------------------------
## 0.24.1 (2024/07/26)
### Changed
* Fixed parsing of partial units, which are found in supplementary object files.
[#313](https://github.com/gimli-rs/addr2line/pull/313)
--------------------------------------------------------------------------------
## 0.24.0 (2024/07/16)
### Breaking changes
* Updated `gimli` dependency.
### Changed
* Changed the order of ranges returned by `Context::find_location_range`, and
fixed handling in rare situations.
[#303](https://github.com/gimli-rs/addr2line/pull/303)
[#304](https://github.com/gimli-rs/addr2line/pull/304)
[#306](https://github.com/gimli-rs/addr2line/pull/306)
* Improved the performance of `Context::find_location`.
[#305](https://github.com/gimli-rs/addr2line/pull/305)
### Added
* Added `LoaderReader`.
[#307](https://github.com/gimli-rs/addr2line/pull/307)
* Added `--all` option to `addr2line`.
[#307](https://github.com/gimli-rs/addr2line/pull/307)
--------------------------------------------------------------------------------
## 0.23.0 (2024/05/26)
### Breaking changes
* Updated `gimli` dependency.
* Deleted `Context::new`, `Context::new_with_sup`, and `builtin_split_dwarf_loader`.
Use `Context::from_dwarf` or `Loader::new` instead.
This removes `object` from the public API.
[#296](https://github.com/gimli-rs/addr2line/pull/296)
### Changed
* Fixed handling of column 0 in the line table.
[#290](https://github.com/gimli-rs/addr2line/pull/290)
* Moved `addr2line` from `examples` to `bin`. Requires the `bin` feature.
[#291](https://github.com/gimli-rs/addr2line/pull/291)
* Split up `lib.rs` into smaller modules.
[#292](https://github.com/gimli-rs/addr2line/pull/292)
### Added
* Added `Loader`. Requires the `loader` feature.
[#296](https://github.com/gimli-rs/addr2line/pull/296)
[#297](https://github.com/gimli-rs/addr2line/pull/297)
* Added unpacked Mach-O support to `Loader`.
[#298](https://github.com/gimli-rs/addr2line/pull/298)
--------------------------------------------------------------------------------
## 0.22.0 (2024/04/11)
### Breaking changes
* Updated `gimli` and `object` dependencies.
--------------------------------------------------------------------------------
## 0.21.0 (2023/08/12)
### Breaking changes
* Updated `gimli`, `object`, and `fallible-iterator` dependencies.
### Changed
* The minimum supported rust version is 1.65.0.
* Store boxed slices instead of `Vec` objects in `Context`.
[#278](https://github.com/gimli-rs/addr2line/pull/278)
--------------------------------------------------------------------------------
## 0.20.0 (2023/04/15)
### Breaking changes
* The minimum supported rust version is 1.58.0.
* Changed `Context::find_frames` to return `LookupResult`.
Use `LookupResult::skip_all_loads` to obtain the result without loading split DWARF.
[#260](https://github.com/gimli-rs/addr2line/pull/260)
* Replaced `Context::find_dwarf_unit` with `Context::find_dwarf_and_unit`.
[#260](https://github.com/gimli-rs/addr2line/pull/260)
* Updated `object` dependency.
### Changed
* Fix handling of file index 0 for DWARF 5.
[#264](https://github.com/gimli-rs/addr2line/pull/264)
### Added
* Added types and methods to support loading split DWARF:
`LookupResult`, `SplitDwarfLoad`, `SplitDwarfLoader`, `Context::preload_units`.
[#260](https://github.com/gimli-rs/addr2line/pull/260)
[#262](https://github.com/gimli-rs/addr2line/pull/262)
[#263](https://github.com/gimli-rs/addr2line/pull/263)
--------------------------------------------------------------------------------
## 0.19.0 (2022/11/24)
### Breaking changes
* Updated `gimli` and `object` dependencies.
--------------------------------------------------------------------------------
## 0.18.0 (2022/07/16)
### Breaking changes
* Updated `object` dependency.
### Changed
* Fixed handling of relative path for `DW_AT_comp_dir`.
[#239](https://github.com/gimli-rs/addr2line/pull/239)
* Fixed handling of `DW_FORM_addrx` for DWARF 5 support.
[#243](https://github.com/gimli-rs/addr2line/pull/243)
* Fixed handling of units that are missing range information.
[#249](https://github.com/gimli-rs/addr2line/pull/249)
--------------------------------------------------------------------------------
## 0.17.0 (2021/10/24)
### Breaking changes
* Updated `gimli` and `object` dependencies.
### Changed
* Use `skip_attributes` to improve performance.
[#236](https://github.com/gimli-rs/addr2line/pull/236)
--------------------------------------------------------------------------------
## 0.16.0 (2021/07/26)
### Breaking changes
* Updated `gimli` and `object` dependencies.
--------------------------------------------------------------------------------
## 0.15.2 (2021/06/04)
### Fixed
* Allow `Context` to be `Send`.
[#219](https://github.com/gimli-rs/addr2line/pull/219)
--------------------------------------------------------------------------------
## 0.15.1 (2021/05/02)
### Fixed
* Don't ignore aranges with address 0.
[#217](https://github.com/gimli-rs/addr2line/pull/217)
--------------------------------------------------------------------------------
## 0.15.0 (2021/05/02)
### Breaking changes
* Updated `gimli` and `object` dependencies.
[#215](https://github.com/gimli-rs/addr2line/pull/215)
* Added `debug_aranges` parameter to `Context::from_sections`.
[#200](https://github.com/gimli-rs/addr2line/pull/200)
### Added
* Added `.debug_aranges` support.
[#200](https://github.com/gimli-rs/addr2line/pull/200)
* Added supplementary object file support.
[#208](https://github.com/gimli-rs/addr2line/pull/208)
### Fixed
* Fixed handling of Windows paths in locations.
[#209](https://github.com/gimli-rs/addr2line/pull/209)
* examples/addr2line: Flush stdout after each response.
[#210](https://github.com/gimli-rs/addr2line/pull/210)
* examples/addr2line: Avoid copying every section.
[#213](https://github.com/gimli-rs/addr2line/pull/213)
--------------------------------------------------------------------------------
## 0.14.1 (2020/12/31)
### Fixed
* Fix location lookup for skeleton units.
[#201](https://github.com/gimli-rs/addr2line/pull/201)
### Added
* Added `Context::find_location_range`.
[#196](https://github.com/gimli-rs/addr2line/pull/196)
[#199](https://github.com/gimli-rs/addr2line/pull/199)
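A minimal sketch of `find_location_range`, shown here through the `Loader` wrapper from the newer releases above (the path is a placeholder; `Context` offers the same method):
```rust
use addr2line::Loader;

// Sketch only: print every address range that has line information,
// mirroring the `--all` mode of the bundled CLI.
fn dump_ranges(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let loader = Loader::new(path)?;
    for (addr, len, loc) in loader.find_location_range(0, u64::MAX)? {
        println!(
            "{:#x}+{:#x} -> {}:{}",
            addr,
            len,
            loc.file.unwrap_or("??"),
            loc.line.unwrap_or(0)
        );
    }
    Ok(())
}
```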
--------------------------------------------------------------------------------
## 0.14.0 (2020/10/27)
### Breaking changes
* Updated `gimli` and `object` dependencies.
### Fixed
* Handle units that only have line information.
[#188](https://github.com/gimli-rs/addr2line/pull/188)
* Handle DWARF units with version <= 4 and no `DW_AT_name`.
[#191](https://github.com/gimli-rs/addr2line/pull/191)
* Fix handling of `DW_FORM_ref_addr`.
[#193](https://github.com/gimli-rs/addr2line/pull/193)
--------------------------------------------------------------------------------
## 0.13.0 (2020/07/07)
### Breaking changes
* Updated `gimli` and `object` dependencies.
* Added `rustc-dep-of-std` feature.
[#166](https://github.com/gimli-rs/addr2line/pull/166)
### Changed
* Improve performance by parsing function contents lazily.
[#178](https://github.com/gimli-rs/addr2line/pull/178)
* Don't skip `.debug_info` and `.debug_line` entries with a zero address.
[#182](https://github.com/gimli-rs/addr2line/pull/182)
--------------------------------------------------------------------------------
## 0.12.2 (2020/06/21)
### Fixed
* Avoid linear search for `DW_FORM_ref_addr`.
[#175](https://github.com/gimli-rs/addr2line/pull/175)
--------------------------------------------------------------------------------
## 0.12.1 (2020/05/19)
### Fixed
* Handle units with overlapping address ranges.
[#163](https://github.com/gimli-rs/addr2line/pull/163)
* Don't assert for functions with overlapping address ranges.
[#168](https://github.com/gimli-rs/addr2line/pull/168)
--------------------------------------------------------------------------------
## 0.12.0 (2020/05/12)
### Breaking changes
* Updated `gimli` and `object` dependencies.
* Added more optional features: `smallvec` and `fallible-iterator`.
[#160](https://github.com/gimli-rs/addr2line/pull/160)
### Added
* Added `Context::dwarf` and `Context::find_dwarf_unit`.
[#159](https://github.com/gimli-rs/addr2line/pull/159)
### Changed
* Removed `lazycell` dependency.
[#160](https://github.com/gimli-rs/addr2line/pull/160)
--------------------------------------------------------------------------------
## 0.11.0 (2020/01/11)
### Breaking changes
* Updated `gimli` and `object` dependencies.
* [#130](https://github.com/gimli-rs/addr2line/pull/130)
Changed `Location::file` from `Option<String>` to `Option<&str>`.
This required adding lifetime parameters to `Location` and other structs that
contain it.
* [#152](https://github.com/gimli-rs/addr2line/pull/152)
Changed `Location::line` and `Location::column` from `Option<u64>` to `Option<u32>`.
* [#156](https://github.com/gimli-rs/addr2line/pull/156)
Deleted `alloc` feature, and fixed `no-std` builds with stable Rust.
Removed default `Reader` parameter for `Context`, and added `ObjectContext` instead.
### Added
* [#134](https://github.com/gimli-rs/addr2line/pull/134)
Added `Context::from_dwarf`.
### Changed
* [#133](https://github.com/gimli-rs/addr2line/pull/133)
Fixed handling of units that can't be parsed.
* [#155](https://github.com/gimli-rs/addr2line/pull/155)
Fixed `addr2line` output to match binutils.
* [#130](https://github.com/gimli-rs/addr2line/pull/130)
Improved `.debug_line` parsing performance.
* [#148](https://github.com/gimli-rs/addr2line/pull/148)
[#150](https://github.com/gimli-rs/addr2line/pull/150)
[#151](https://github.com/gimli-rs/addr2line/pull/151)
[#152](https://github.com/gimli-rs/addr2line/pull/152)
Improved `.debug_info` parsing performance.
* [#137](https://github.com/gimli-rs/addr2line/pull/137)
[#138](https://github.com/gimli-rs/addr2line/pull/138)
[#139](https://github.com/gimli-rs/addr2line/pull/139)
[#140](https://github.com/gimli-rs/addr2line/pull/140)
[#146](https://github.com/gimli-rs/addr2line/pull/146)
Improved benchmarks.
--------------------------------------------------------------------------------
## 0.10.0 (2019/07/07)
### Breaking changes
* [#127](https://github.com/gimli-rs/addr2line/pull/127)
Update `gimli`.
--------------------------------------------------------------------------------
## 0.9.0 (2019/05/02)
### Breaking changes
* [#121](https://github.com/gimli-rs/addr2line/pull/121)
Update `gimli`, `object`, and `fallible-iterator` dependencies.
### Added
* [#121](https://github.com/gimli-rs/addr2line/pull/121)
Reexport `gimli`, `object`, and `fallible-iterator`.
--------------------------------------------------------------------------------
## 0.8.0 (2019/02/06)
### Breaking changes
* [#107](https://github.com/gimli-rs/addr2line/pull/107)
Update `object` dependency to 0.11. This is part of the public API.
### Added
* [#101](https://github.com/gimli-rs/addr2line/pull/101)
Add `object` feature (enabled by default). Disable this feature to remove
the `object` dependency and `Context::new` API.
* [#102](https://github.com/gimli-rs/addr2line/pull/102)
Add `std` (enabled by default) and `alloc` features.
### Changed
* [#108](https://github.com/gimli-rs/addr2line/issues/108)
`demangle` no longer outputs the hash for rust symbols.
* [#109](https://github.com/gimli-rs/addr2line/issues/109)
Set default `R` for `Context<R>`.

pve-rs/vendor/addr2line/Cargo.lock generated vendored Normal file

@@ -0,0 +1,613 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "addr2line"
version = "0.24.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f5fb1d8e4442bd405fdfd1dacb42792696b0cf9cb15882e5d097b742a676d375"
dependencies = [
"gimli",
]
[[package]]
name = "addr2line"
version = "0.24.2"
dependencies = [
"backtrace",
"clap",
"compiler_builtins",
"cpp_demangle",
"fallible-iterator",
"findshlibs",
"gimli",
"libtest-mimic",
"memmap2",
"object",
"rustc-demangle",
"rustc-std-workspace-alloc",
"rustc-std-workspace-core",
"smallvec",
"typed-arena",
]
[[package]]
name = "adler2"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627"
[[package]]
name = "anstream"
version = "0.6.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "64e15c1ab1f89faffbf04a634d5e1962e9074f2741eef6d97f3c4e322426d526"
dependencies = [
"anstyle",
"anstyle-parse",
"anstyle-query",
"anstyle-wincon",
"colorchoice",
"is_terminal_polyfill",
"utf8parse",
]
[[package]]
name = "anstyle"
version = "1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1bec1de6f59aedf83baf9ff929c98f2ad654b97c9510f4e70cf6f661d49fd5b1"
[[package]]
name = "anstyle-parse"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb47de1e80c2b463c735db5b217a0ddc39d612e7ac9e2e96a5aed1f57616c1cb"
dependencies = [
"utf8parse",
]
[[package]]
name = "anstyle-query"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d36fc52c7f6c869915e99412912f22093507da8d9e942ceaf66fe4b7c14422a"
dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "anstyle-wincon"
version = "3.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5bf74e1b6e971609db8ca7a9ce79fd5768ab6ae46441c572e46cf596f59e57f8"
dependencies = [
"anstyle",
"windows-sys 0.52.0",
]
[[package]]
name = "backtrace"
version = "0.3.74"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d82cb332cdfaed17ae235a638438ac4d4839913cc2af585c3c6746e8f8bee1a"
dependencies = [
"addr2line 0.24.1",
"cfg-if",
"libc",
"miniz_oxide",
"object",
"rustc-demangle",
"windows-targets",
]
[[package]]
name = "bitflags"
version = "2.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b048fb63fd8b5923fc5aa7b340d8e156aec7ec02f0c78fa8a6ddc2613f6f71de"
[[package]]
name = "cc"
version = "1.1.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812acba72f0a070b003d3697490d2b55b837230ae7c6c6497f05cc2ddbb8d938"
dependencies = [
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "clap"
version = "4.5.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7be5744db7978a28d9df86a214130d106a89ce49644cbc4e3f0c22c3fba30615"
dependencies = [
"clap_builder",
"clap_derive",
]
[[package]]
name = "clap_builder"
version = "4.5.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a5fbc17d3ef8278f55b282b2a2e75ae6f6c7d4bb70ed3d0382375104bfafdb4b"
dependencies = [
"anstream",
"anstyle",
"clap_lex",
"strsim",
"terminal_size",
]
[[package]]
name = "clap_derive"
version = "4.5.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ac6a0c7b1a9e9a5186361f67dfa1b88213572f427fb9ab038efb2bd8c582dab"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "clap_lex"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1462739cb27611015575c0c11df5df7601141071f07518d56fcc1be504cbec97"
[[package]]
name = "colorchoice"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3fd119d74b830634cea2a0f58bbd0d54540518a14397557951e79340abc28c0"
[[package]]
name = "compiler_builtins"
version = "0.1.131"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d18d2ba094b78965890b2912f45dc8cb6bb3aff315ef54755ec33223b6454502"
[[package]]
name = "cpp_demangle"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96e58d342ad113c2b878f16d5d034c03be492ae460cdbc02b7f0f2284d310c7d"
dependencies = [
"cfg-if",
]
[[package]]
name = "crc32fast"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3"
dependencies = [
"cfg-if",
]
[[package]]
name = "errno"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "534c5cf6194dfab3db3242765c03bbe257cf92f22b38f6bc0c58d59108a820ba"
dependencies = [
"libc",
"windows-sys 0.52.0",
]
[[package]]
name = "escape8259"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5692dd7b5a1978a5aeb0ce83b7655c58ca8efdcb79d21036ea249da95afec2c6"
[[package]]
name = "fallible-iterator"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2acce4a10f12dc2fb14a218589d4f1f62ef011b2d0cc4b3cb1bba8e94da14649"
[[package]]
name = "findshlibs"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40b9e59cd0f7e0806cca4be089683ecb6434e602038df21fe6bf6711b2f07f64"
dependencies = [
"cc",
"lazy_static",
"libc",
"winapi",
]
[[package]]
name = "flate2"
version = "1.0.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1b589b4dc103969ad3cf85c950899926ec64300a1a46d76c03a6072957036f0"
dependencies = [
"crc32fast",
"miniz_oxide",
]
[[package]]
name = "gimli"
version = "0.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
dependencies = [
"compiler_builtins",
"fallible-iterator",
"rustc-std-workspace-alloc",
"rustc-std-workspace-core",
"stable_deref_trait",
]
[[package]]
name = "heck"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "hermit-abi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024"
[[package]]
name = "is_terminal_polyfill"
version = "1.70.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf"
[[package]]
name = "lazy_static"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
[[package]]
name = "libc"
version = "0.2.159"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "561d97a539a36e26a9a5fad1ea11a3039a67714694aaa379433e580854bc3dc5"
[[package]]
name = "libtest-mimic"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc0bda45ed5b3a2904262c1bb91e526127aa70e7ef3758aba2ef93cf896b9b58"
dependencies = [
"clap",
"escape8259",
"termcolor",
"threadpool",
]
[[package]]
name = "linux-raw-sys"
version = "0.4.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78b3ae25bc7c8c38cec158d1f2757ee79e9b3740fbc7ccf0e59e4b08d793fa89"
[[package]]
name = "memchr"
version = "2.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
[[package]]
name = "memmap2"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd3f7eed9d3848f8b98834af67102b720745c4ec028fcd0aa0239277e7de374f"
dependencies = [
"libc",
]
[[package]]
name = "miniz_oxide"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2d80299ef12ff69b16a84bb182e3b9df68b5a91574d3d4fa6e41b65deec4df1"
dependencies = [
"adler2",
]
[[package]]
name = "num_cpus"
version = "1.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4161fcb6d602d4d2081af7c3a45852d875a03dd337a6bfdd6e06407b61342a43"
dependencies = [
"hermit-abi",
"libc",
]
[[package]]
name = "object"
version = "0.36.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aedf0a2d09c573ed1d8d85b30c119153926a2b36dce0ab28322c09a117a4683e"
dependencies = [
"flate2",
"memchr",
"ruzstd",
]
[[package]]
name = "proc-macro2"
version = "1.0.86"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e719e8df665df0d1c8fbfd238015744736151d4445ec0836b8e628aae103b77"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af"
dependencies = [
"proc-macro2",
]
[[package]]
name = "rustc-demangle"
version = "0.1.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "719b953e2095829ee67db738b3bfa9fa368c94900df327b3f07fe6e794d2fe1f"
[[package]]
name = "rustc-std-workspace-alloc"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff66d57013a5686e1917ed6a025d54dd591fcda71a41fe07edf4d16726aefa86"
[[package]]
name = "rustc-std-workspace-core"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1956f5517128a2b6f23ab2dadf1a976f4f5b27962e7724c2bf3d45e539ec098c"
[[package]]
name = "rustix"
version = "0.38.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8acb788b847c24f28525660c4d7758620a7210875711f79e7f663cc152726811"
dependencies = [
"bitflags",
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.52.0",
]
[[package]]
name = "ruzstd"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "99c3938e133aac070997ddc684d4b393777d293ba170f2988c8fd5ea2ad4ce21"
dependencies = [
"twox-hash",
]
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "smallvec"
version = "1.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67"
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
[[package]]
name = "static_assertions"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "strsim"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
[[package]]
name = "syn"
version = "2.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89132cd0bf050864e1d38dc3bbc07a0eb8e7530af26344d3d2bbbef83499f590"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "termcolor"
version = "1.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06794f8f6c5c898b3275aebefa6b8a1cb24cd2c6c79397ab15774837a0bc5755"
dependencies = [
"winapi-util",
]
[[package]]
name = "terminal_size"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4f599bd7ca042cfdf8f4512b277c02ba102247820f9d9d4a9f521f496751a6ef"
dependencies = [
"rustix",
"windows-sys 0.59.0",
]
[[package]]
name = "threadpool"
version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d050e60b33d41c19108b32cea32164033a9013fe3b46cbd4457559bfbf77afaa"
dependencies = [
"num_cpus",
]
[[package]]
name = "twox-hash"
version = "1.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97fee6b57c6a41524a810daee9286c02d7752c4253064d0b05472833a438f675"
dependencies = [
"cfg-if",
"static_assertions",
]
[[package]]
name = "typed-arena"
version = "2.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6af6ae20167a9ece4bcb41af5b80f8a1f1df981f6391189ce00fd257af04126a"
[[package]]
name = "unicode-ident"
version = "1.0.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e91b56cd4cadaeb79bbf1a5645f6b4f8dc5bde8834ad5894a8db35fda9efa1fe"
[[package]]
name = "utf8parse"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-util"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows-sys"
version = "0.52.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-sys"
version = "0.59.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_gnullvm",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"

pve-rs/vendor/addr2line/Cargo.toml vendored Normal file

@@ -0,0 +1,161 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
rust-version = "1.65"
name = "addr2line"
version = "0.24.2"
build = false
include = [
"/CHANGELOG.md",
"/Cargo.lock",
"/Cargo.toml",
"/LICENSE-APACHE",
"/LICENSE-MIT",
"/README.md",
"/src",
]
autobins = false
autoexamples = false
autotests = false
autobenches = false
description = "A cross-platform symbolication library written in Rust, using `gimli`"
documentation = "https://docs.rs/addr2line"
readme = "README.md"
keywords = [
"DWARF",
"debug",
"elf",
"symbolicate",
"atos",
]
categories = ["development-tools::debugging"]
license = "Apache-2.0 OR MIT"
repository = "https://github.com/gimli-rs/addr2line"
[profile.bench]
codegen-units = 1
debug = 2
[profile.release]
debug = 2
[lib]
name = "addr2line"
path = "src/lib.rs"
[[bin]]
name = "addr2line"
path = "src/bin/addr2line.rs"
required-features = ["bin"]
[dependencies.alloc]
version = "1.0.0"
optional = true
package = "rustc-std-workspace-alloc"
[dependencies.clap]
version = "4.3.21"
features = ["wrap_help"]
optional = true
[dependencies.compiler_builtins]
version = "0.1.2"
optional = true
[dependencies.core]
version = "1.0.0"
optional = true
package = "rustc-std-workspace-core"
[dependencies.cpp_demangle]
version = "0.4"
features = ["alloc"]
optional = true
default-features = false
[dependencies.fallible-iterator]
version = "0.3.0"
optional = true
default-features = false
[dependencies.gimli]
version = "0.31.1"
features = ["read"]
default-features = false
[dependencies.memmap2]
version = "0.9.4"
optional = true
[dependencies.object]
version = "0.36.0"
features = [
"read",
"compression",
]
optional = true
default-features = false
[dependencies.rustc-demangle]
version = "0.1"
optional = true
[dependencies.smallvec]
version = "1"
optional = true
default-features = false
[dependencies.typed-arena]
version = "2"
optional = true
[dev-dependencies.backtrace]
version = "0.3.13"
[dev-dependencies.findshlibs]
version = "0.10"
[dev-dependencies.libtest-mimic]
version = "0.7.2"
[features]
all = ["bin"]
bin = [
"loader",
"rustc-demangle",
"cpp_demangle",
"fallible-iterator",
"smallvec",
"dep:clap",
]
cargo-all = []
default = [
"rustc-demangle",
"cpp_demangle",
"loader",
"fallible-iterator",
"smallvec",
]
loader = [
"std",
"dep:object",
"dep:memmap2",
"dep:typed-arena",
]
rustc-dep-of-std = [
"core",
"alloc",
"compiler_builtins",
"gimli/rustc-dep-of-std",
]
std = ["gimli/std"]

pve-rs/vendor/addr2line/LICENSE-APACHE vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

pve-rs/vendor/addr2line/LICENSE-MIT vendored Normal file

@@ -0,0 +1,25 @@
Copyright (c) 2016-2018 The gimli Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

pve-rs/vendor/addr2line/README.md vendored Normal file

@@ -0,0 +1,50 @@
# addr2line
[![](https://img.shields.io/crates/v/addr2line.svg)](https://crates.io/crates/addr2line)
[![](https://img.shields.io/docsrs/addr2line.svg)](https://docs.rs/addr2line)
[![Coverage Status](https://coveralls.io/repos/github/gimli-rs/addr2line/badge.svg?branch=master)](https://coveralls.io/github/gimli-rs/addr2line?branch=master)
`addr2line` provides a cross-platform library for retrieving per-address debug information
from files with DWARF debug information. Given an address, it can return the file name,
line number, and function name associated with that address, as well as the inline call
stack leading to that address.
The crate has a CLI wrapper around the library which provides some of
the functionality of the `addr2line` command line tool distributed with
[GNU binutils](https://sourceware.org/binutils/docs/binutils/addr2line.html).
# Quickstart
- Add the [`addr2line` crate](https://crates.io/crates/addr2line) to your `Cargo.toml`.
- Call [`addr2line::Loader::new`](https://docs.rs/addr2line/*/addr2line/struct.Loader.html#method.new) with the file path.
- Use [`addr2line::Loader::find_location`](https://docs.rs/addr2line/*/addr2line/struct.Loader.html#method.find_location)
or [`addr2line::Loader::find_frames`](https://docs.rs/addr2line/*/addr2line/struct.Loader.html#method.find_frames)
to look up debug information for an address.
If you want to provide your own file loading and memory management, use
[`addr2line::Context`](https://docs.rs/addr2line/*/addr2line/struct.Context.html)
instead of `addr2line::Loader`.
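Putting the quickstart steps together, a lookup might look like this (a minimal sketch; the binary path and address are placeholders):
```rust
use addr2line::Loader;

fn main() {
    // Placeholder path and address, for illustration only.
    let loader = Loader::new("./target/debug/myprog").expect("failed to load file");
    match loader.find_location(0x1000).expect("lookup failed") {
        Some(loc) => println!("{}:{}", loc.file.unwrap_or("??"), loc.line.unwrap_or(0)),
        None => println!("no debug info for this address"),
    }
}
```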
# Performance
`addr2line` optimizes for speed over memory by caching parsed information.
The DWARF information is parsed lazily where possible.
The library aims to perform similarly to equivalent existing tools such
as `addr2line` from binutils, `eu-addr2line` from elfutils, and
`llvm-addr2line` from the llvm project. Current benchmarks show a performance
improvement in all cases:
![addr2line runtime](benchmark-time.svg)
## License
Licensed under either of
* Apache License, Version 2.0 ([`LICENSE-APACHE`](./LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([`LICENSE-MIT`](./LICENSE-MIT) or https://opensource.org/licenses/MIT)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.

pve-rs/vendor/addr2line/src/bin/addr2line.rs vendored Normal file

@@ -0,0 +1,285 @@
use fallible_iterator::FallibleIterator;
use std::borrow::Cow;
use std::io::{BufRead, Lines, StdinLock, Write};
use std::path::{Path, PathBuf};
use clap::{Arg, ArgAction, Command};
use addr2line::{Loader, LoaderReader, Location};
fn parse_uint_from_hex_string(string: &str) -> Option<u64> {
if string.len() > 2 && string.starts_with("0x") {
u64::from_str_radix(&string[2..], 16).ok()
} else {
u64::from_str_radix(string, 16).ok()
}
}
enum Addrs<'a> {
Args(clap::parser::ValuesRef<'a, String>),
Stdin(Lines<StdinLock<'a>>),
All {
iter: addr2line::LocationRangeIter<'a, LoaderReader<'a>>,
max: u64,
},
}
impl<'a> Iterator for Addrs<'a> {
type Item = Option<u64>;
fn next(&mut self) -> Option<Option<u64>> {
let text = match self {
Addrs::Args(vals) => vals.next().map(Cow::from),
Addrs::Stdin(lines) => lines.next().map(Result::unwrap).map(Cow::from),
Addrs::All { iter, max } => {
for (addr, _len, _loc) in iter {
if addr >= *max {
*max = addr + 1;
return Some(Some(addr));
}
}
return None;
}
};
text.as_ref()
.map(Cow::as_ref)
.map(parse_uint_from_hex_string)
}
}
fn print_loc(loc: Option<&Location<'_>>, basenames: bool, llvm: bool) {
if let Some(loc) = loc {
if let Some(ref file) = loc.file.as_ref() {
let path = if basenames {
Path::new(Path::new(file).file_name().unwrap())
} else {
Path::new(file)
};
print!("{}:", path.display());
} else {
print!("??:");
}
if llvm {
print!("{}:{}", loc.line.unwrap_or(0), loc.column.unwrap_or(0));
} else if let Some(line) = loc.line {
print!("{}", line);
} else {
print!("?");
}
println!();
} else if llvm {
println!("??:0:0");
} else {
println!("??:0");
}
}
fn print_function(name: Option<&str>, language: Option<gimli::DwLang>, demangle: bool) {
if let Some(name) = name {
if demangle {
print!("{}", addr2line::demangle_auto(Cow::from(name), language));
} else {
print!("{}", name);
}
} else {
print!("??");
}
}
struct Options<'a> {
do_functions: bool,
do_inlines: bool,
pretty: bool,
print_addrs: bool,
basenames: bool,
demangle: bool,
llvm: bool,
exe: &'a PathBuf,
sup: Option<&'a PathBuf>,
}
fn main() {
let matches = Command::new("addr2line")
.version(env!("CARGO_PKG_VERSION"))
.about("A fast addr2line Rust port")
.max_term_width(100)
.args(&[
Arg::new("exe")
.short('e')
.long("exe")
.value_name("filename")
.value_parser(clap::value_parser!(PathBuf))
.help(
"Specify the name of the executable for which addresses should be translated.",
)
.required(true),
Arg::new("sup")
.long("sup")
.value_name("filename")
.value_parser(clap::value_parser!(PathBuf))
.help("Path to supplementary object file."),
Arg::new("all")
.long("all")
.action(ArgAction::SetTrue)
.conflicts_with("addrs")
.help("Display all addresses that have line number information."),
Arg::new("functions")
.short('f')
.long("functions")
.action(ArgAction::SetTrue)
.help("Display function names as well as file and line number information."),
Arg::new("pretty").short('p').long("pretty-print")
.action(ArgAction::SetTrue)
.help(
"Make the output more human friendly: each location are printed on one line.",
),
Arg::new("inlines").short('i').long("inlines")
.action(ArgAction::SetTrue)
.help(
"If the address belongs to a function that was inlined, the source information for \
all enclosing scopes back to the first non-inlined function will also be printed.",
),
Arg::new("addresses").short('a').long("addresses")
.action(ArgAction::SetTrue)
.help(
"Display the address before the function name, file and line number information.",
),
Arg::new("basenames")
.short('s')
.long("basenames")
.action(ArgAction::SetTrue)
.help("Display only the base of each file name."),
Arg::new("demangle").short('C').long("demangle")
.action(ArgAction::SetTrue)
.help(
"Demangle function names. \
Specifying a specific demangling style (like GNU addr2line) is not supported. \
(TODO)"
),
Arg::new("llvm")
.long("llvm")
.action(ArgAction::SetTrue)
.help("Display output in the same format as llvm-symbolizer."),
Arg::new("addrs")
.action(ArgAction::Append)
.help("Addresses to use instead of reading from stdin."),
])
.get_matches();
let opts = Options {
do_functions: matches.get_flag("functions"),
do_inlines: matches.get_flag("inlines"),
pretty: matches.get_flag("pretty"),
print_addrs: matches.get_flag("addresses"),
basenames: matches.get_flag("basenames"),
demangle: matches.get_flag("demangle"),
llvm: matches.get_flag("llvm"),
exe: matches.get_one::<PathBuf>("exe").unwrap(),
sup: matches.get_one::<PathBuf>("sup"),
};
let ctx = Loader::new_with_sup(opts.exe, opts.sup).unwrap();
let stdin = std::io::stdin();
let addrs = if matches.get_flag("all") {
Addrs::All {
iter: ctx.find_location_range(0, !0).unwrap(),
max: 0,
}
} else {
matches
.get_many::<String>("addrs")
.map(Addrs::Args)
.unwrap_or_else(|| Addrs::Stdin(stdin.lock().lines()))
};
for probe in addrs {
if opts.print_addrs {
let addr = probe.unwrap_or(0);
if opts.llvm {
print!("0x{:x}", addr);
} else {
print!("0x{:016x}", addr);
}
if opts.pretty {
print!(": ");
} else {
println!();
}
}
if opts.do_functions || opts.do_inlines {
let mut printed_anything = false;
if let Some(probe) = probe {
let mut frames = ctx.find_frames(probe).unwrap().peekable();
let mut first = true;
while let Some(frame) = frames.next().unwrap() {
if opts.pretty && !first {
print!(" (inlined by) ");
}
first = false;
if opts.do_functions {
// Only use the symbol table if this isn't an inlined function.
let symbol = if matches!(frames.peek(), Ok(None)) {
ctx.find_symbol(probe)
} else {
None
};
if symbol.is_some() {
// Prefer the symbol table over the DWARF name because:
// - the symbol can include a clone suffix
// - llvm may omit the linkage name in the DWARF with -g1
print_function(symbol, None, opts.demangle);
} else if let Some(func) = frame.function {
print_function(
func.raw_name().ok().as_deref(),
func.language,
opts.demangle,
);
} else {
print_function(None, None, opts.demangle);
}
if opts.pretty {
print!(" at ");
} else {
println!();
}
}
print_loc(frame.location.as_ref(), opts.basenames, opts.llvm);
printed_anything = true;
if !opts.do_inlines {
break;
}
}
}
if !printed_anything {
if opts.do_functions {
let name = probe.and_then(|probe| ctx.find_symbol(probe));
print_function(name, None, opts.demangle);
if opts.pretty {
print!(" at ");
} else {
println!();
}
}
print_loc(None, opts.basenames, opts.llvm);
}
} else {
let loc = probe.and_then(|probe| ctx.find_location(probe).unwrap());
print_loc(loc.as_ref(), opts.basenames, opts.llvm);
}
if opts.llvm {
println!();
}
std::io::stdout().flush().unwrap();
}
}

pve-rs/vendor/addr2line/src/frame.rs vendored Normal file

@@ -0,0 +1,221 @@
use alloc::borrow::Cow;
use alloc::string::String;
use core::iter;
use crate::{maybe_small, Error, Function, InlinedFunction, ResUnit};
/// A source location.
pub struct Location<'a> {
/// The file name.
pub file: Option<&'a str>,
/// The line number.
pub line: Option<u32>,
/// The column number.
///
/// A value of `Some(0)` indicates the left edge.
pub column: Option<u32>,
}
/// A function frame.
pub struct Frame<'ctx, R: gimli::Reader> {
/// The DWARF unit offset corresponding to the DIE of the function.
pub dw_die_offset: Option<gimli::UnitOffset<R::Offset>>,
/// The name of the function.
pub function: Option<FunctionName<R>>,
/// The source location corresponding to this frame.
pub location: Option<Location<'ctx>>,
}
/// An iterator over function frames.
pub struct FrameIter<'ctx, R>(FrameIterState<'ctx, R>)
where
R: gimli::Reader;
enum FrameIterState<'ctx, R>
where
R: gimli::Reader,
{
Empty,
Location(Option<Location<'ctx>>),
Frames(FrameIterFrames<'ctx, R>),
}
struct FrameIterFrames<'ctx, R>
where
R: gimli::Reader,
{
unit: &'ctx ResUnit<R>,
sections: &'ctx gimli::Dwarf<R>,
function: &'ctx Function<R>,
inlined_functions: iter::Rev<maybe_small::IntoIter<&'ctx InlinedFunction<R>>>,
next: Option<Location<'ctx>>,
}
impl<'ctx, R> FrameIter<'ctx, R>
where
R: gimli::Reader + 'ctx,
{
pub(crate) fn new_empty() -> Self {
FrameIter(FrameIterState::Empty)
}
pub(crate) fn new_location(location: Location<'ctx>) -> Self {
FrameIter(FrameIterState::Location(Some(location)))
}
pub(crate) fn new_frames(
unit: &'ctx ResUnit<R>,
sections: &'ctx gimli::Dwarf<R>,
function: &'ctx Function<R>,
inlined_functions: maybe_small::Vec<&'ctx InlinedFunction<R>>,
location: Option<Location<'ctx>>,
) -> Self {
FrameIter(FrameIterState::Frames(FrameIterFrames {
unit,
sections,
function,
inlined_functions: inlined_functions.into_iter().rev(),
next: location,
}))
}
/// Advances the iterator and returns the next frame.
#[allow(clippy::should_implement_trait)]
pub fn next(&mut self) -> Result<Option<Frame<'ctx, R>>, Error> {
let frames = match &mut self.0 {
FrameIterState::Empty => return Ok(None),
FrameIterState::Location(location) => {
// We can't move out of a mutable reference, so use `take` instead.
let location = location.take();
self.0 = FrameIterState::Empty;
return Ok(Some(Frame {
dw_die_offset: None,
function: None,
location,
}));
}
FrameIterState::Frames(frames) => frames,
};
let loc = frames.next.take();
let func = match frames.inlined_functions.next() {
Some(func) => func,
None => {
let frame = Frame {
dw_die_offset: Some(frames.function.dw_die_offset),
function: frames.function.name.clone().map(|name| FunctionName {
name,
language: frames.unit.lang,
}),
location: loc,
};
self.0 = FrameIterState::Empty;
return Ok(Some(frame));
}
};
let mut next = Location {
file: None,
line: if func.call_line != 0 {
Some(func.call_line)
} else {
None
},
column: if func.call_column != 0 {
Some(func.call_column)
} else {
None
},
};
if let Some(call_file) = func.call_file {
if let Some(lines) = frames.unit.parse_lines(frames.sections)? {
next.file = lines.file(call_file);
}
}
frames.next = Some(next);
Ok(Some(Frame {
dw_die_offset: Some(func.dw_die_offset),
function: func.name.clone().map(|name| FunctionName {
name,
language: frames.unit.lang,
}),
location: loc,
}))
}
}
#[cfg(feature = "fallible-iterator")]
impl<'ctx, R> fallible_iterator::FallibleIterator for FrameIter<'ctx, R>
where
R: gimli::Reader + 'ctx,
{
type Item = Frame<'ctx, R>;
type Error = Error;
#[inline]
fn next(&mut self) -> Result<Option<Frame<'ctx, R>>, Error> {
self.next()
}
}
/// A function name.
pub struct FunctionName<R: gimli::Reader> {
/// The name of the function.
pub name: R,
/// The language of the compilation unit containing this function.
pub language: Option<gimli::DwLang>,
}
impl<R: gimli::Reader> FunctionName<R> {
/// The raw name of this function before demangling.
pub fn raw_name(&self) -> Result<Cow<'_, str>, Error> {
self.name.to_string_lossy()
}
/// The name of this function after demangling (if applicable).
pub fn demangle(&self) -> Result<Cow<'_, str>, Error> {
self.raw_name().map(|x| demangle_auto(x, self.language))
}
}
/// Demangle a symbol name using the demangling scheme for the given language.
///
/// Returns `None` if demangling failed or is not required.
#[allow(unused_variables)]
pub fn demangle(name: &str, language: gimli::DwLang) -> Option<String> {
match language {
#[cfg(feature = "rustc-demangle")]
gimli::DW_LANG_Rust => rustc_demangle::try_demangle(name)
.ok()
.as_ref()
.map(|x| format!("{:#}", x)),
#[cfg(feature = "cpp_demangle")]
gimli::DW_LANG_C_plus_plus
| gimli::DW_LANG_C_plus_plus_03
| gimli::DW_LANG_C_plus_plus_11
| gimli::DW_LANG_C_plus_plus_14 => cpp_demangle::Symbol::new(name)
.ok()
.and_then(|x| x.demangle(&Default::default()).ok()),
_ => None,
}
}
/// Apply 'best effort' demangling of a symbol name.
///
/// If `language` is given, then only the demangling scheme for that language
/// is used.
///
/// If `language` is `None`, then heuristics are used to determine how to
/// demangle the name. Currently, these heuristics are very basic.
///
/// If demangling fails or is not required, then `name` is returned unchanged.
pub fn demangle_auto(name: Cow<'_, str>, language: Option<gimli::DwLang>) -> Cow<'_, str> {
match language {
Some(language) => demangle(name.as_ref(), language),
None => demangle(name.as_ref(), gimli::DW_LANG_Rust)
.or_else(|| demangle(name.as_ref(), gimli::DW_LANG_C_plus_plus)),
}
.map(Cow::from)
.unwrap_or(name)
}
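A small usage sketch for the demangling helpers above (`demangle_auto` is re-exported at the crate root, as the bundled CLI shows; the mangled symbol is an arbitrary example):
```rust
use std::borrow::Cow;

// Sketch only: with no language given, demangle_auto tries Rust and then
// C++ demangling, returning the input unchanged if both fail.
fn demo() {
    let raw = "_ZN4core6option15Option$LT$T$GT$6unwrap17h0000000000000000E";
    println!("{}", addr2line::demangle_auto(Cow::from(raw), None));
}
```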

pve-rs/vendor/addr2line/src/function.rs vendored Normal file

@@ -0,0 +1,564 @@
use alloc::boxed::Box;
use alloc::vec::Vec;
use core::cmp::Ordering;
use crate::lazy::LazyResult;
use crate::maybe_small;
use crate::{Context, DebugFile, Error, RangeAttributes};
pub(crate) struct LazyFunctions<R: gimli::Reader>(LazyResult<Functions<R>>);
impl<R: gimli::Reader> LazyFunctions<R> {
pub(crate) fn new() -> Self {
LazyFunctions(LazyResult::new())
}
pub(crate) fn borrow(&self, unit: gimli::UnitRef<R>) -> Result<&Functions<R>, Error> {
self.0
.borrow_with(|| Functions::parse(unit))
.as_ref()
.map_err(Error::clone)
}
}
pub(crate) struct Functions<R: gimli::Reader> {
/// List of all `DW_TAG_subprogram` details in the unit.
pub(crate) functions: Box<[LazyFunction<R>]>,
/// List of `DW_TAG_subprogram` address ranges in the unit.
pub(crate) addresses: Box<[FunctionAddress]>,
}
/// A single address range for a function.
///
/// It is possible for a function to have multiple address ranges; this
/// is handled by having multiple `FunctionAddress` entries with the same
/// `function` field.
pub(crate) struct FunctionAddress {
range: gimli::Range,
/// An index into `Functions::functions`.
pub(crate) function: usize,
}
pub(crate) struct LazyFunction<R: gimli::Reader> {
dw_die_offset: gimli::UnitOffset<R::Offset>,
lazy: LazyResult<Function<R>>,
}
impl<R: gimli::Reader> LazyFunction<R> {
fn new(dw_die_offset: gimli::UnitOffset<R::Offset>) -> Self {
LazyFunction {
dw_die_offset,
lazy: LazyResult::new(),
}
}
pub(crate) fn borrow(
&self,
file: DebugFile,
unit: gimli::UnitRef<R>,
ctx: &Context<R>,
) -> Result<&Function<R>, Error> {
self.lazy
.borrow_with(|| Function::parse(self.dw_die_offset, file, unit, ctx))
.as_ref()
.map_err(Error::clone)
}
}
pub(crate) struct Function<R: gimli::Reader> {
pub(crate) dw_die_offset: gimli::UnitOffset<R::Offset>,
pub(crate) name: Option<R>,
/// List of all `DW_TAG_inlined_subroutine` details in this function.
inlined_functions: Box<[InlinedFunction<R>]>,
/// List of `DW_TAG_inlined_subroutine` address ranges in this function.
inlined_addresses: Box<[InlinedFunctionAddress]>,
}
pub(crate) struct InlinedFunctionAddress {
range: gimli::Range,
call_depth: usize,
/// An index into `Function::inlined_functions`.
function: usize,
}
pub(crate) struct InlinedFunction<R: gimli::Reader> {
pub(crate) dw_die_offset: gimli::UnitOffset<R::Offset>,
pub(crate) name: Option<R>,
pub(crate) call_file: Option<u64>,
pub(crate) call_line: u32,
pub(crate) call_column: u32,
}
impl<R: gimli::Reader> Functions<R> {
fn parse(unit: gimli::UnitRef<R>) -> Result<Functions<R>, Error> {
let mut functions = Vec::new();
let mut addresses = Vec::new();
let mut entries = unit.entries_raw(None)?;
while !entries.is_empty() {
let dw_die_offset = entries.next_offset();
if let Some(abbrev) = entries.read_abbreviation()? {
if abbrev.tag() == gimli::DW_TAG_subprogram {
let mut ranges = RangeAttributes::default();
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => {
match attr.name() {
gimli::DW_AT_low_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => {
ranges.low_pc = Some(val)
}
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.low_pc = Some(unit.address(index)?);
}
_ => {}
},
gimli::DW_AT_high_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => {
ranges.high_pc = Some(val)
}
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.high_pc = Some(unit.address(index)?);
}
gimli::AttributeValue::Udata(val) => {
ranges.size = Some(val)
}
_ => {}
},
gimli::DW_AT_ranges => {
ranges.ranges_offset =
unit.attr_ranges_offset(attr.value())?;
}
_ => {}
};
}
Err(e) => return Err(e),
}
}
let function_index = functions.len();
let has_address = ranges.for_each_range(unit, |range| {
addresses.push(FunctionAddress {
range,
function: function_index,
});
})?;
if has_address {
functions.push(LazyFunction::new(dw_die_offset));
}
} else {
entries.skip_attributes(abbrev.attributes())?;
}
}
}
// The binary search requires the addresses to be sorted.
//
// It also requires them to be non-overlapping. In practice, overlapping
// function ranges are unlikely, so we don't try to handle that yet.
//
// It's possible for multiple functions to have the same address range if the
// compiler can detect and remove functions with identical code. In that case
// we'll nondeterministically return one of them.
addresses.sort_by_key(|x| x.range.begin);
Ok(Functions {
functions: functions.into_boxed_slice(),
addresses: addresses.into_boxed_slice(),
})
}
pub(crate) fn find_address(&self, probe: u64) -> Option<usize> {
self.addresses
.binary_search_by(|address| {
if probe < address.range.begin {
Ordering::Greater
} else if probe >= address.range.end {
Ordering::Less
} else {
Ordering::Equal
}
})
.ok()
}
pub(crate) fn parse_inlined_functions(
&self,
file: DebugFile,
unit: gimli::UnitRef<R>,
ctx: &Context<R>,
) -> Result<(), Error> {
for function in &*self.functions {
function.borrow(file, unit, ctx)?;
}
Ok(())
}
}
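The sorted, non-overlapping invariant noted in `parse` is what lets `find_address` be a plain binary search. A standalone restatement of the comparator with hypothetical half-open ranges (illustrative only, not part of the vendored crate):

```rust
use core::cmp::Ordering;

// Hypothetical (begin, end) ranges, sorted by begin and non-overlapping,
// mirroring `Functions::addresses`.
fn find(ranges: &[(u64, u64)], probe: u64) -> Option<usize> {
    ranges
        .binary_search_by(|&(begin, end)| {
            if probe < begin {
                Ordering::Greater // element lies above the probe
            } else if probe >= end {
                Ordering::Less // element lies entirely below the probe
            } else {
                Ordering::Equal // begin <= probe < end
            }
        })
        .ok()
}

fn main() {
    assert_eq!(find(&[(0, 4), (4, 8), (16, 32)], 6), Some(1));
    assert_eq!(find(&[(0, 4), (4, 8), (16, 32)], 9), None); // gap
}
```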
impl<R: gimli::Reader> Function<R> {
fn parse(
dw_die_offset: gimli::UnitOffset<R::Offset>,
file: DebugFile,
unit: gimli::UnitRef<R>,
ctx: &Context<R>,
) -> Result<Self, Error> {
let mut entries = unit.entries_raw(Some(dw_die_offset))?;
let depth = entries.next_depth();
let abbrev = entries.read_abbreviation()?.unwrap();
debug_assert_eq!(abbrev.tag(), gimli::DW_TAG_subprogram);
let mut name = None;
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => {
match attr.name() {
gimli::DW_AT_linkage_name | gimli::DW_AT_MIPS_linkage_name => {
if let Ok(val) = unit.attr_string(attr.value()) {
name = Some(val);
}
}
gimli::DW_AT_name => {
if name.is_none() {
name = unit.attr_string(attr.value()).ok();
}
}
gimli::DW_AT_abstract_origin | gimli::DW_AT_specification => {
if name.is_none() {
name = name_attr(attr.value(), file, unit, ctx, 16)?;
}
}
_ => {}
};
}
Err(e) => return Err(e),
}
}
let mut state = InlinedState {
entries,
functions: Vec::new(),
addresses: Vec::new(),
file,
unit,
ctx,
};
Function::parse_children(&mut state, depth, 0)?;
// Sort ranges in "breadth-first traversal order", i.e. first by call_depth
// and then by range.begin. This allows finding the range containing an
// address at a certain depth using binary search.
// Note: Using DFS order, i.e. ordering by range.begin first and then by
// call_depth, would not work! Consider the two examples
// "[0..10 at depth 0], [0..2 at depth 1], [6..8 at depth 1]" and
// "[0..5 at depth 0], [0..2 at depth 1], [5..10 at depth 0], [6..8 at depth 1]".
// In both examples, if you want to look up address 7 at depth 0 and you
// encounter [0..2 at depth 1], you cannot tell whether the target range
// sorts before or after it.
state.addresses.sort_by(|r1, r2| {
if r1.call_depth < r2.call_depth {
Ordering::Less
} else if r1.call_depth > r2.call_depth {
Ordering::Greater
} else if r1.range.begin < r2.range.begin {
Ordering::Less
} else if r1.range.begin > r2.range.begin {
Ordering::Greater
} else {
Ordering::Equal
}
});
Ok(Function {
dw_die_offset,
name,
inlined_functions: state.functions.into_boxed_slice(),
inlined_addresses: state.addresses.into_boxed_slice(),
})
}
fn parse_children(
state: &mut InlinedState<R>,
depth: isize,
inlined_depth: usize,
) -> Result<(), Error> {
loop {
let dw_die_offset = state.entries.next_offset();
let next_depth = state.entries.next_depth();
if next_depth <= depth {
return Ok(());
}
if let Some(abbrev) = state.entries.read_abbreviation()? {
match abbrev.tag() {
gimli::DW_TAG_subprogram => {
Function::skip(&mut state.entries, abbrev, next_depth)?;
}
gimli::DW_TAG_inlined_subroutine => {
InlinedFunction::parse(
state,
dw_die_offset,
abbrev,
next_depth,
inlined_depth,
)?;
}
_ => {
state.entries.skip_attributes(abbrev.attributes())?;
}
}
}
}
}
fn skip(
entries: &mut gimli::EntriesRaw<'_, '_, R>,
abbrev: &gimli::Abbreviation,
depth: isize,
) -> Result<(), Error> {
// TODO: use DW_AT_sibling
entries.skip_attributes(abbrev.attributes())?;
while entries.next_depth() > depth {
if let Some(abbrev) = entries.read_abbreviation()? {
entries.skip_attributes(abbrev.attributes())?;
}
}
Ok(())
}
/// Build the list of inlined functions that contain `probe`.
pub(crate) fn find_inlined_functions(
&self,
probe: u64,
) -> maybe_small::Vec<&InlinedFunction<R>> {
// `inlined_functions` is ordered from outside to inside.
let mut inlined_functions = maybe_small::Vec::new();
let mut inlined_addresses = &self.inlined_addresses[..];
loop {
let current_depth = inlined_functions.len();
// Look up (probe, current_depth) in `inlined_addresses`.
// `inlined_addresses` is sorted in "breadth-first traversal order", i.e.
// by `call_depth` first, and then by `range.begin`. See the comment at
// the sort call for more information about why.
let search = inlined_addresses.binary_search_by(|range| {
if range.call_depth > current_depth {
Ordering::Greater
} else if range.call_depth < current_depth {
Ordering::Less
} else if range.range.begin > probe {
Ordering::Greater
} else if range.range.end <= probe {
Ordering::Less
} else {
Ordering::Equal
}
});
if let Ok(index) = search {
let function_index = inlined_addresses[index].function;
inlined_functions.push(&self.inlined_functions[function_index]);
inlined_addresses = &inlined_addresses[index + 1..];
} else {
break;
}
}
inlined_functions
}
}
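To see the breadth-first ordering at work, the same depth-by-depth lookup can be run on plain data; `Range` and the values below are illustrative, not crate API:

```rust
use core::cmp::Ordering;

struct Range {
    depth: usize,
    begin: u64,
    end: u64,
}

fn main() {
    // Sorted by (depth, begin), as produced by `Function::parse` above.
    let ranges = [
        Range { depth: 0, begin: 0, end: 10 },
        Range { depth: 1, begin: 0, end: 2 },
        Range { depth: 1, begin: 6, end: 8 },
    ];
    let probe = 7;
    let mut rest = &ranges[..];
    let mut depth = 0;
    let mut hits = Vec::new();
    while let Ok(i) = rest.binary_search_by(|r| {
        if r.depth > depth {
            Ordering::Greater
        } else if r.depth < depth {
            Ordering::Less
        } else if r.begin > probe {
            Ordering::Greater
        } else if r.end <= probe {
            Ordering::Less
        } else {
            Ordering::Equal
        }
    }) {
        hits.push((rest[i].depth, rest[i].begin));
        // Deeper nested ranges sort after the match, so the next search
        // can be restricted to the tail of the slice.
        rest = &rest[i + 1..];
        depth += 1;
    }
    assert_eq!(hits, vec![(0, 0), (1, 6)]); // outermost frame first
}
```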
impl<R: gimli::Reader> InlinedFunction<R> {
fn parse(
state: &mut InlinedState<R>,
dw_die_offset: gimli::UnitOffset<R::Offset>,
abbrev: &gimli::Abbreviation,
depth: isize,
inlined_depth: usize,
) -> Result<(), Error> {
let unit = state.unit;
let mut ranges = RangeAttributes::default();
let mut name = None;
let mut call_file = None;
let mut call_line = 0;
let mut call_column = 0;
for spec in abbrev.attributes() {
match state.entries.read_attribute(*spec) {
Ok(ref attr) => match attr.name() {
gimli::DW_AT_low_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => ranges.low_pc = Some(val),
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.low_pc = Some(unit.address(index)?);
}
_ => {}
},
gimli::DW_AT_high_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => ranges.high_pc = Some(val),
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.high_pc = Some(unit.address(index)?);
}
gimli::AttributeValue::Udata(val) => ranges.size = Some(val),
_ => {}
},
gimli::DW_AT_ranges => {
ranges.ranges_offset = unit.attr_ranges_offset(attr.value())?;
}
gimli::DW_AT_linkage_name | gimli::DW_AT_MIPS_linkage_name => {
if let Ok(val) = unit.attr_string(attr.value()) {
name = Some(val);
}
}
gimli::DW_AT_name => {
if name.is_none() {
name = unit.attr_string(attr.value()).ok();
}
}
gimli::DW_AT_abstract_origin | gimli::DW_AT_specification => {
if name.is_none() {
name = name_attr(attr.value(), state.file, unit, state.ctx, 16)?;
}
}
gimli::DW_AT_call_file => {
// There is a spec issue [1] with how DW_AT_call_file is specified in DWARF 5.
// Before, a file index of 0 would indicate no source file; however, in
// DWARF 5 it can be a valid index into the file table.
//
// Implementations such as LLVM generate a file index of 0 when DWARF 5 is
// used.
//
// Thus, if we see a version of 5 or later, treat a file index of 0 as a
// valid index.
// [1]: http://wiki.dwarfstd.org/index.php?title=DWARF5_Line_Table_File_Numbers
if let gimli::AttributeValue::FileIndex(fi) = attr.value() {
if fi > 0 || unit.header.version() >= 5 {
call_file = Some(fi);
}
}
}
gimli::DW_AT_call_line => {
call_line = attr.udata_value().unwrap_or(0) as u32;
}
gimli::DW_AT_call_column => {
call_column = attr.udata_value().unwrap_or(0) as u32;
}
_ => {}
},
Err(e) => return Err(e),
}
}
let function_index = state.functions.len();
state.functions.push(InlinedFunction {
dw_die_offset,
name,
call_file,
call_line,
call_column,
});
ranges.for_each_range(unit, |range| {
state.addresses.push(InlinedFunctionAddress {
range,
call_depth: inlined_depth,
function: function_index,
});
})?;
Function::parse_children(state, depth, inlined_depth + 1)
}
}
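The DWARF 5 `DW_AT_call_file` guard above reduces to a one-line predicate. A minimal restatement with hypothetical inputs:

```rust
// Keep a call-file index of 0 only when the unit is DWARF 5 or later,
// where 0 is a valid entry in the file table.
fn keep_call_file(file_index: u64, dwarf_version: u16) -> bool {
    file_index > 0 || dwarf_version >= 5
}

fn main() {
    assert!(!keep_call_file(0, 4)); // DWARF <= 4: 0 means "no source file"
    assert!(keep_call_file(0, 5));  // DWARF 5: 0 indexes the file table
    assert!(keep_call_file(3, 4));  // nonzero indices are always kept
}
```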
struct InlinedState<'a, R: gimli::Reader> {
// Mutable fields.
entries: gimli::EntriesRaw<'a, 'a, R>,
functions: Vec<InlinedFunction<R>>,
addresses: Vec<InlinedFunctionAddress>,
// Constant fields.
file: DebugFile,
unit: gimli::UnitRef<'a, R>,
ctx: &'a Context<R>,
}
fn name_attr<R>(
attr: gimli::AttributeValue<R>,
mut file: DebugFile,
unit: gimli::UnitRef<R>,
ctx: &Context<R>,
recursion_limit: usize,
) -> Result<Option<R>, Error>
where
R: gimli::Reader,
{
if recursion_limit == 0 {
return Ok(None);
}
match attr {
gimli::AttributeValue::UnitRef(offset) => {
name_entry(file, unit, offset, ctx, recursion_limit)
}
gimli::AttributeValue::DebugInfoRef(dr) => {
let sections = unit.dwarf;
let (unit, offset) = ctx.find_unit(dr, file)?;
let unit = gimli::UnitRef::new(sections, unit);
name_entry(file, unit, offset, ctx, recursion_limit)
}
gimli::AttributeValue::DebugInfoRefSup(dr) => {
if let Some(sup_sections) = unit.dwarf.sup.as_ref() {
file = DebugFile::Supplementary;
let (unit, offset) = ctx.find_unit(dr, file)?;
let unit = gimli::UnitRef::new(sup_sections, unit);
name_entry(file, unit, offset, ctx, recursion_limit)
} else {
Ok(None)
}
}
_ => Ok(None),
}
}
fn name_entry<R>(
file: DebugFile,
unit: gimli::UnitRef<R>,
offset: gimli::UnitOffset<R::Offset>,
ctx: &Context<R>,
recursion_limit: usize,
) -> Result<Option<R>, Error>
where
R: gimli::Reader,
{
let mut entries = unit.entries_raw(Some(offset))?;
let abbrev = if let Some(abbrev) = entries.read_abbreviation()? {
abbrev
} else {
return Err(gimli::Error::NoEntryAtGivenOffset);
};
let mut name = None;
let mut next = None;
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => match attr.name() {
gimli::DW_AT_linkage_name | gimli::DW_AT_MIPS_linkage_name => {
if let Ok(val) = unit.attr_string(attr.value()) {
return Ok(Some(val));
}
}
gimli::DW_AT_name => {
if let Ok(val) = unit.attr_string(attr.value()) {
name = Some(val);
}
}
gimli::DW_AT_abstract_origin | gimli::DW_AT_specification => {
next = Some(attr.value());
}
_ => {}
},
Err(e) => return Err(e),
}
}
if name.is_some() {
return Ok(name);
}
if let Some(next) = next {
return name_attr(next, file, unit, ctx, recursion_limit - 1);
}
Ok(None)
}

pve-rs/vendor/addr2line/src/lazy.rs vendored Normal file

@@ -0,0 +1,34 @@
use core::cell::UnsafeCell;
pub(crate) type LazyResult<T> = LazyCell<Result<T, crate::Error>>;
pub(crate) struct LazyCell<T> {
contents: UnsafeCell<Option<T>>,
}
impl<T> LazyCell<T> {
pub(crate) fn new() -> LazyCell<T> {
LazyCell {
contents: UnsafeCell::new(None),
}
}
pub(crate) fn borrow(&self) -> Option<&T> {
unsafe { &*self.contents.get() }.as_ref()
}
pub(crate) fn borrow_with(&self, closure: impl FnOnce() -> T) -> &T {
// First check if we're already initialized...
let ptr = self.contents.get();
if let Some(val) = unsafe { &*ptr } {
return val;
}
// Note that while we're executing `closure` our `borrow_with` may
// be called recursively. This means we need to check again after
// the closure has executed. For that we use the `get_or_insert`
// method which will only perform mutation if we aren't already
// `Some`.
let val = closure();
unsafe { (*ptr).get_or_insert(val) }
}
}
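A hypothetical test for the cell's contract (not part of the vendored file), exercising the initialize-once behavior the comments above describe:

```rust
#[cfg(test)]
mod tests {
    use super::LazyCell;
    use core::cell::Cell;

    #[test]
    fn initializes_once() {
        let calls = Cell::new(0);
        let cell = LazyCell::new();
        assert!(cell.borrow().is_none()); // not yet initialized
        let a = *cell.borrow_with(|| {
            calls.set(calls.get() + 1);
            42
        });
        let b = *cell.borrow_with(|| {
            calls.set(calls.get() + 1);
            7
        });
        // The second closure never runs: the first value is cached.
        assert_eq!((a, b, calls.get()), (42, 42, 1));
    }
}
```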

pve-rs/vendor/addr2line/src/lib.rs vendored Normal file

@@ -0,0 +1,414 @@
//! `addr2line` provides a cross-platform library for retrieving per-address debug information
//! from files with DWARF debug information. Given an address, it can return the file name,
//! line number, and function name associated with that address, as well as the inline call
//! stack leading to that address.
//!
//! At the lowest level, the library uses a [`Context`] to cache parsed information so that
//! multiple lookups are efficient. To create a `Context`, you first need to open and parse the
//! file using an object file parser such as [`object`](https://github.com/gimli-rs/object),
//! create a [`gimli::Dwarf`], and finally call [`Context::from_dwarf`].
//!
//! Location information is obtained with [`Context::find_location`] or
//! [`Context::find_location_range`]. Function information is obtained with
//! [`Context::find_frames`], which returns a frame for each inline function. Each frame
//! contains both name and location.
//!
//! The library also provides a [`Loader`] which internally memory maps the files,
//! uses the `object` crate to do the parsing, and creates a `Context`.
//! The `Context` is not exposed, but the `Loader` provides the same functionality
//! via [`Loader::find_location`], [`Loader::find_location_range`], and
//! [`Loader::find_frames`]. The `Loader` also provides [`Loader::find_symbol`]
//! to use the symbol table instead of DWARF debugging information.
//! The `Loader` will load Mach-O dSYM files and split DWARF files as needed.
//!
//! The crate has a CLI wrapper around the library which provides some of
//! the functionality of the `addr2line` command line tool distributed with
//! [GNU binutils](https://sourceware.org/binutils/docs/binutils/addr2line.html).
#![deny(missing_docs)]
#![no_std]
#[cfg(feature = "cargo-all")]
compile_error!("'--all-features' is not supported; use '--features all' instead");
#[cfg(feature = "std")]
extern crate std;
#[allow(unused_imports)]
#[macro_use]
extern crate alloc;
#[cfg(feature = "fallible-iterator")]
pub extern crate fallible_iterator;
pub extern crate gimli;
use alloc::sync::Arc;
use core::ops::ControlFlow;
use crate::function::{Function, Functions, InlinedFunction, LazyFunctions};
use crate::line::{LazyLines, LineLocationRangeIter, Lines};
use crate::lookup::{LoopingLookup, SimpleLookup};
use crate::unit::{ResUnit, ResUnits, SupUnits};
#[cfg(feature = "smallvec")]
mod maybe_small {
pub type Vec<T> = smallvec::SmallVec<[T; 16]>;
pub type IntoIter<T> = smallvec::IntoIter<[T; 16]>;
}
#[cfg(not(feature = "smallvec"))]
mod maybe_small {
pub type Vec<T> = alloc::vec::Vec<T>;
pub type IntoIter<T> = alloc::vec::IntoIter<T>;
}
mod frame;
pub use frame::{demangle, demangle_auto, Frame, FrameIter, FunctionName, Location};
mod function;
mod lazy;
mod line;
#[cfg(feature = "loader")]
mod loader;
#[cfg(feature = "loader")]
pub use loader::{Loader, LoaderReader};
mod lookup;
pub use lookup::{LookupContinuation, LookupResult, SplitDwarfLoad};
mod unit;
pub use unit::LocationRangeIter;
type Error = gimli::Error;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DebugFile {
Primary,
Supplementary,
Dwo,
}
/// The state necessary to perform address to line translation.
///
/// Constructing a `Context` is somewhat costly, so users should aim to reuse `Context`s
/// when performing lookups for many addresses in the same executable.
pub struct Context<R: gimli::Reader> {
sections: Arc<gimli::Dwarf<R>>,
units: ResUnits<R>,
sup_units: SupUnits<R>,
}
impl<R: gimli::Reader> Context<R> {
/// Construct a new `Context` from DWARF sections.
///
/// This method does not support using a supplementary object file.
#[allow(clippy::too_many_arguments)]
pub fn from_sections(
debug_abbrev: gimli::DebugAbbrev<R>,
debug_addr: gimli::DebugAddr<R>,
debug_aranges: gimli::DebugAranges<R>,
debug_info: gimli::DebugInfo<R>,
debug_line: gimli::DebugLine<R>,
debug_line_str: gimli::DebugLineStr<R>,
debug_ranges: gimli::DebugRanges<R>,
debug_rnglists: gimli::DebugRngLists<R>,
debug_str: gimli::DebugStr<R>,
debug_str_offsets: gimli::DebugStrOffsets<R>,
default_section: R,
) -> Result<Self, Error> {
Self::from_dwarf(gimli::Dwarf {
debug_abbrev,
debug_addr,
debug_aranges,
debug_info,
debug_line,
debug_line_str,
debug_str,
debug_str_offsets,
debug_types: default_section.clone().into(),
locations: gimli::LocationLists::new(
default_section.clone().into(),
default_section.into(),
),
ranges: gimli::RangeLists::new(debug_ranges, debug_rnglists),
file_type: gimli::DwarfFileType::Main,
sup: None,
abbreviations_cache: gimli::AbbreviationsCache::new(),
})
}
/// Construct a new `Context` from an existing [`gimli::Dwarf`] object.
#[inline]
pub fn from_dwarf(sections: gimli::Dwarf<R>) -> Result<Context<R>, Error> {
Self::from_arc_dwarf(Arc::new(sections))
}
/// Construct a new `Context` from an existing [`gimli::Dwarf`] object.
#[inline]
pub fn from_arc_dwarf(sections: Arc<gimli::Dwarf<R>>) -> Result<Context<R>, Error> {
let units = ResUnits::parse(&sections)?;
let sup_units = if let Some(sup) = sections.sup.as_ref() {
SupUnits::parse(sup)?
} else {
SupUnits::default()
};
Ok(Context {
sections,
units,
sup_units,
})
}
}
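A sketch of the construction flow the module docs describe: parse the file with `object`, load the sections into a `gimli::Dwarf`, then call `from_dwarf`. The path handling and error plumbing here are placeholders:

```rust
use object::{Object, ObjectSection};

fn context_for(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let data = std::fs::read(path)?;
    let object = object::File::parse(&*data)?;
    let endian = if object.is_little_endian() {
        gimli::RunTimeEndian::Little
    } else {
        gimli::RunTimeEndian::Big
    };
    // Load each DWARF section by name, defaulting to empty if absent.
    let dwarf = gimli::Dwarf::load(|id| -> Result<_, gimli::Error> {
        let section = object
            .section_by_name(id.name())
            .and_then(|s| s.data().ok())
            .unwrap_or(&[]);
        Ok(gimli::EndianSlice::new(section, endian))
    })?;
    let ctx = addr2line::Context::from_dwarf(dwarf)?;
    let _ = ctx.find_location(0x1234)?; // placeholder address
    Ok(())
}
```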
impl<R: gimli::Reader> Context<R> {
/// Find the DWARF unit corresponding to the given virtual memory address.
pub fn find_dwarf_and_unit(
&self,
probe: u64,
) -> LookupResult<impl LookupContinuation<Output = Option<gimli::UnitRef<R>>, Buf = R>> {
let mut units_iter = self.units.find(probe);
if let Some(unit) = units_iter.next() {
return LoopingLookup::new_lookup(
unit.find_function_or_location(probe, self),
move |r| {
ControlFlow::Break(match r {
Ok((Some(_), _)) | Ok((_, Some(_))) => {
let (_file, unit) = unit
.dwarf_and_unit(self)
// We've already been through both error cases here to get to this point.
.unwrap()
.unwrap();
Some(unit)
}
_ => match units_iter.next() {
Some(next_unit) => {
return ControlFlow::Continue(
next_unit.find_function_or_location(probe, self),
);
}
None => None,
},
})
},
);
}
LoopingLookup::new_complete(None)
}
/// Find the source file and line corresponding to the given virtual memory address.
pub fn find_location(&self, probe: u64) -> Result<Option<Location<'_>>, Error> {
for unit in self.units.find(probe) {
if let Some(location) = unit.find_location(probe, &self.sections)? {
return Ok(Some(location));
}
}
Ok(None)
}
/// Return source file and lines for a range of addresses. For each location it also
/// returns the address and size of the range of the underlying instructions.
pub fn find_location_range(
&self,
probe_low: u64,
probe_high: u64,
) -> Result<LocationRangeIter<'_, R>, Error> {
self.units
.find_location_range(probe_low, probe_high, &self.sections)
}
/// Return an iterator for the function frames corresponding to the given virtual
/// memory address.
///
/// If the probe address is not for an inline function then only one frame is
/// returned.
///
/// If the probe address is for an inline function then the first frame corresponds
/// to the innermost inline function. Subsequent frames contain the caller and call
/// location, until a non-inline caller is reached.
pub fn find_frames(
&self,
probe: u64,
) -> LookupResult<impl LookupContinuation<Output = Result<FrameIter<'_, R>, Error>, Buf = R>>
{
let mut units_iter = self.units.find(probe);
if let Some(unit) = units_iter.next() {
LoopingLookup::new_lookup(unit.find_function_or_location(probe, self), move |r| {
ControlFlow::Break(match r {
Err(e) => Err(e),
Ok((Some(function), location)) => {
let inlined_functions = function.find_inlined_functions(probe);
Ok(FrameIter::new_frames(
unit,
&self.sections,
function,
inlined_functions,
location,
))
}
Ok((None, Some(location))) => Ok(FrameIter::new_location(location)),
Ok((None, None)) => match units_iter.next() {
Some(next_unit) => {
return ControlFlow::Continue(
next_unit.find_function_or_location(probe, self),
);
}
None => Ok(FrameIter::new_empty()),
},
})
})
} else {
LoopingLookup::new_complete(Ok(FrameIter::new_empty()))
}
}
/// Preload units for `probe`.
///
/// The iterator returns pairs of `SplitDwarfLoad`s containing the
/// information needed to locate and load split DWARF for `probe` and
/// a matching callback to invoke once that data is available.
///
/// If this method is called, and all of the returned closures are invoked,
/// addr2line guarantees that any future API call for the address `probe`
/// will not require the loading of any split DWARF.
///
/// ```no_run
/// # use addr2line::*;
/// # use std::sync::Arc;
/// # let ctx: Context<gimli::EndianSlice<gimli::RunTimeEndian>> = todo!();
/// # let do_split_dwarf_load = |load: SplitDwarfLoad<gimli::EndianSlice<gimli::RunTimeEndian>>| -> Option<Arc<gimli::Dwarf<gimli::EndianSlice<gimli::RunTimeEndian>>>> { None };
/// const ADDRESS: u64 = 0xdeadbeef;
/// ctx.preload_units(ADDRESS).for_each(|(load, callback)| {
/// let dwo = do_split_dwarf_load(load);
/// callback(dwo);
/// });
///
/// let frames_iter = match ctx.find_frames(ADDRESS) {
/// LookupResult::Output(result) => result,
/// LookupResult::Load { .. } => unreachable!("addr2line promised we wouldn't get here"),
/// };
///
/// // ...
/// ```
pub fn preload_units(
&'_ self,
probe: u64,
) -> impl Iterator<
Item = (
SplitDwarfLoad<R>,
impl FnOnce(Option<Arc<gimli::Dwarf<R>>>) -> Result<(), gimli::Error> + '_,
),
> {
self.units
.find(probe)
.filter_map(move |unit| match unit.dwarf_and_unit(self) {
LookupResult::Output(_) => None,
LookupResult::Load { load, continuation } => Some((load, |result| {
continuation.resume(result).unwrap().map(|_| ())
})),
})
}
/// Initialize all line data structures. This is used for benchmarks.
#[doc(hidden)]
pub fn parse_lines(&self) -> Result<(), Error> {
for unit in self.units.iter() {
unit.parse_lines(&self.sections)?;
}
Ok(())
}
/// Initialize all function data structures. This is used for benchmarks.
#[doc(hidden)]
pub fn parse_functions(&self) -> Result<(), Error> {
for unit in self.units.iter() {
unit.parse_functions(self).skip_all_loads()?;
}
Ok(())
}
/// Initialize all inlined function data structures. This is used for benchmarks.
#[doc(hidden)]
pub fn parse_inlined_functions(&self) -> Result<(), Error> {
for unit in self.units.iter() {
unit.parse_inlined_functions(self).skip_all_loads()?;
}
Ok(())
}
}
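A hedged usage sketch for `find_frames` when the caller does not handle split DWARF; it assumes a `ctx` built as above and uses a placeholder address:

```rust
fn print_frames<R: gimli::Reader>(
    ctx: &addr2line::Context<R>,
) -> Result<(), gimli::Error> {
    // skip_all_loads() resolves with whatever data is already loaded.
    let mut frames = ctx.find_frames(0x1234).skip_all_loads()?;
    while let Some(frame) = frames.next()? {
        if let Some(func) = frame.function {
            // demangle() picks a scheme based on the unit's language.
            let _name = func.demangle()?;
        }
        if let Some(loc) = frame.location {
            let _ = (loc.file, loc.line, loc.column);
        }
    }
    Ok(())
}
```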
impl<R: gimli::Reader> Context<R> {
// Find the unit containing the given offset, and convert the offset into a unit offset.
fn find_unit(
&self,
offset: gimli::DebugInfoOffset<R::Offset>,
file: DebugFile,
) -> Result<(&gimli::Unit<R>, gimli::UnitOffset<R::Offset>), Error> {
let unit = match file {
DebugFile::Primary => self.units.find_offset(offset)?,
DebugFile::Supplementary => self.sup_units.find_offset(offset)?,
DebugFile::Dwo => return Err(gimli::Error::NoEntryAtGivenOffset),
};
let unit_offset = offset
.to_unit_offset(&unit.header)
.ok_or(gimli::Error::NoEntryAtGivenOffset)?;
Ok((unit, unit_offset))
}
}
struct RangeAttributes<R: gimli::Reader> {
low_pc: Option<u64>,
high_pc: Option<u64>,
size: Option<u64>,
ranges_offset: Option<gimli::RangeListsOffset<<R as gimli::Reader>::Offset>>,
}
impl<R: gimli::Reader> Default for RangeAttributes<R> {
fn default() -> Self {
RangeAttributes {
low_pc: None,
high_pc: None,
size: None,
ranges_offset: None,
}
}
}
impl<R: gimli::Reader> RangeAttributes<R> {
fn for_each_range<F: FnMut(gimli::Range)>(
&self,
unit: gimli::UnitRef<R>,
mut f: F,
) -> Result<bool, Error> {
let mut added_any = false;
let mut add_range = |range: gimli::Range| {
if range.begin < range.end {
f(range);
added_any = true
}
};
if let Some(ranges_offset) = self.ranges_offset {
let mut range_list = unit.ranges(ranges_offset)?;
while let Some(range) = range_list.next()? {
add_range(range);
}
} else if let (Some(begin), Some(end)) = (self.low_pc, self.high_pc) {
add_range(gimli::Range { begin, end });
} else if let (Some(begin), Some(size)) = (self.low_pc, self.size) {
// If `begin` is a -1 tombstone, this will overflow and the check in
// `add_range` will ignore it.
let end = begin.wrapping_add(size);
add_range(gimli::Range { begin, end });
}
Ok(added_any)
}
}
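The encodings `for_each_range` accepts, restated with hypothetical values; the last case shows why `wrapping_add` plus the `begin < end` check in `add_range` is enough to drop -1 tombstones:

```rust
fn main() {
    // DW_AT_low_pc + absolute DW_AT_high_pc  =>  [0x1000, 0x1010)
    // DW_AT_low_pc + DW_AT_high_pc as Udata  =>  end = begin + size
    let (begin, size) = (0x1000u64, 0x10u64);
    assert_eq!(begin.wrapping_add(size), 0x1010);

    // A -1 tombstone for low_pc overflows, producing begin >= end,
    // so add_range's `begin < end` check silently drops the range.
    let tombstone = u64::MAX;
    let end = tombstone.wrapping_add(size);
    assert!(tombstone >= end);
}
```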
#[cfg(test)]
mod tests {
#[test]
fn context_is_send() {
fn assert_is_send<T: Send>() {}
assert_is_send::<crate::Context<gimli::read::EndianSlice<'_, gimli::LittleEndian>>>();
}
}

pve-rs/vendor/addr2line/src/line.rs vendored Normal file

@@ -0,0 +1,314 @@
use alloc::boxed::Box;
use alloc::string::{String, ToString};
use alloc::vec::Vec;
use core::cmp::Ordering;
use core::mem;
use core::num::NonZeroU64;
use crate::lazy::LazyResult;
use crate::{Error, Location};
pub(crate) struct LazyLines(LazyResult<Lines>);
impl LazyLines {
pub(crate) fn new() -> Self {
LazyLines(LazyResult::new())
}
pub(crate) fn borrow<R: gimli::Reader>(
&self,
dw_unit: gimli::UnitRef<R>,
ilnp: &gimli::IncompleteLineProgram<R, R::Offset>,
) -> Result<&Lines, Error> {
self.0
.borrow_with(|| Lines::parse(dw_unit, ilnp.clone()))
.as_ref()
.map_err(Error::clone)
}
}
struct LineSequence {
start: u64,
end: u64,
rows: Box<[LineRow]>,
}
struct LineRow {
address: u64,
file_index: u64,
line: u32,
column: u32,
}
pub(crate) struct Lines {
files: Box<[String]>,
sequences: Box<[LineSequence]>,
}
impl Lines {
fn parse<R: gimli::Reader>(
dw_unit: gimli::UnitRef<R>,
ilnp: gimli::IncompleteLineProgram<R, R::Offset>,
) -> Result<Self, Error> {
let mut sequences = Vec::new();
let mut sequence_rows = Vec::<LineRow>::new();
let mut rows = ilnp.rows();
while let Some((_, row)) = rows.next_row()? {
if row.end_sequence() {
if let Some(start) = sequence_rows.first().map(|x| x.address) {
let end = row.address();
let mut rows = Vec::new();
mem::swap(&mut rows, &mut sequence_rows);
sequences.push(LineSequence {
start,
end,
rows: rows.into_boxed_slice(),
});
}
continue;
}
let address = row.address();
let file_index = row.file_index();
// Convert line and column to u32 to save a little memory.
// We'll handle the special case of line 0 later,
// and return left edge as column 0 in the public API.
let line = row.line().map(NonZeroU64::get).unwrap_or(0) as u32;
let column = match row.column() {
gimli::ColumnType::LeftEdge => 0,
gimli::ColumnType::Column(x) => x.get() as u32,
};
if let Some(last_row) = sequence_rows.last_mut() {
if last_row.address == address {
last_row.file_index = file_index;
last_row.line = line;
last_row.column = column;
continue;
}
}
sequence_rows.push(LineRow {
address,
file_index,
line,
column,
});
}
sequences.sort_by_key(|x| x.start);
let mut files = Vec::new();
let header = rows.header();
match header.file(0) {
Some(file) => files.push(render_file(dw_unit, file, header)?),
None => files.push(String::from("")), // DWARF version <= 4 may not have a 0th index
}
let mut index = 1;
while let Some(file) = header.file(index) {
files.push(render_file(dw_unit, file, header)?);
index += 1;
}
Ok(Self {
files: files.into_boxed_slice(),
sequences: sequences.into_boxed_slice(),
})
}
pub(crate) fn file(&self, index: u64) -> Option<&str> {
self.files.get(index as usize).map(String::as_str)
}
pub(crate) fn ranges(&self) -> impl Iterator<Item = gimli::Range> + '_ {
self.sequences.iter().map(|sequence| gimli::Range {
begin: sequence.start,
end: sequence.end,
})
}
fn row_location(&self, row: &LineRow) -> Location<'_> {
let file = self.files.get(row.file_index as usize).map(String::as_str);
Location {
file,
line: if row.line != 0 { Some(row.line) } else { None },
// If row.line is specified then row.column always has meaning.
column: if row.line != 0 {
Some(row.column)
} else {
None
},
}
}
pub(crate) fn find_location(&self, probe: u64) -> Result<Option<Location<'_>>, Error> {
let seq_idx = self.sequences.binary_search_by(|sequence| {
if probe < sequence.start {
Ordering::Greater
} else if probe >= sequence.end {
Ordering::Less
} else {
Ordering::Equal
}
});
let seq_idx = match seq_idx {
Ok(x) => x,
Err(_) => return Ok(None),
};
let sequence = &self.sequences[seq_idx];
let idx = sequence
.rows
.binary_search_by(|row| row.address.cmp(&probe));
let idx = match idx {
Ok(x) => x,
Err(0) => return Ok(None),
Err(x) => x - 1,
};
Ok(Some(self.row_location(&sequence.rows[idx])))
}
pub(crate) fn find_location_range(
&self,
probe_low: u64,
probe_high: u64,
) -> Result<LineLocationRangeIter<'_>, Error> {
// Find index for probe_low.
let seq_idx = self.sequences.binary_search_by(|sequence| {
if probe_low < sequence.start {
Ordering::Greater
} else if probe_low >= sequence.end {
Ordering::Less
} else {
Ordering::Equal
}
});
let seq_idx = match seq_idx {
Ok(x) => x,
Err(x) => x, // probe below sequence, but range could overlap
};
let row_idx = if let Some(seq) = self.sequences.get(seq_idx) {
let idx = seq.rows.binary_search_by(|row| row.address.cmp(&probe_low));
match idx {
Ok(x) => x,
Err(0) => 0, // probe below sequence, but range could overlap
Err(x) => x - 1,
}
} else {
0
};
Ok(LineLocationRangeIter {
lines: self,
seq_idx,
row_idx,
probe_high,
})
}
}
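The two-level lookup in `find_location`, reduced to plain data with hypothetical addresses: first locate the sequence containing the probe, then take the last row at or below it:

```rust
fn main() {
    let (seq_start, seq_end) = (0x1000u64, 0x1020u64); // one sequence
    let rows = [0x1000u64, 0x1008, 0x1010, 0x1018];    // row addresses
    let probe = 0x100c;
    assert!(probe >= seq_start && probe < seq_end);
    let idx = match rows.binary_search(&probe) {
        Ok(i) => i,      // exact hit on a row address
        Err(0) => panic!("probe below the sequence start"),
        Err(i) => i - 1, // the row whose address range covers the probe
    };
    assert_eq!(rows[idx], 0x1008); // 0x100c falls in [0x1008, 0x1010)
}
```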
pub(crate) struct LineLocationRangeIter<'ctx> {
lines: &'ctx Lines,
seq_idx: usize,
row_idx: usize,
probe_high: u64,
}
impl<'ctx> Iterator for LineLocationRangeIter<'ctx> {
type Item = (u64, u64, Location<'ctx>);
fn next(&mut self) -> Option<(u64, u64, Location<'ctx>)> {
while let Some(seq) = self.lines.sequences.get(self.seq_idx) {
if seq.start >= self.probe_high {
break;
}
match seq.rows.get(self.row_idx) {
Some(row) => {
if row.address >= self.probe_high {
break;
}
let nextaddr = seq
.rows
.get(self.row_idx + 1)
.map(|row| row.address)
.unwrap_or(seq.end);
let item = (
row.address,
nextaddr - row.address,
self.lines.row_location(row),
);
self.row_idx += 1;
return Some(item);
}
None => {
self.seq_idx += 1;
self.row_idx = 0;
}
}
}
None
}
}
fn render_file<R: gimli::Reader>(
dw_unit: gimli::UnitRef<R>,
file: &gimli::FileEntry<R, R::Offset>,
header: &gimli::LineProgramHeader<R, R::Offset>,
) -> Result<String, gimli::Error> {
let mut path = if let Some(ref comp_dir) = dw_unit.comp_dir {
comp_dir.to_string_lossy()?.into_owned()
} else {
String::new()
};
// The directory index 0 is defined to correspond to the compilation unit directory.
if file.directory_index() != 0 {
if let Some(directory) = file.directory(header) {
path_push(
&mut path,
dw_unit.attr_string(directory)?.to_string_lossy()?.as_ref(),
);
}
}
path_push(
&mut path,
dw_unit
.attr_string(file.path_name())?
.to_string_lossy()?
.as_ref(),
);
Ok(path)
}
fn path_push(path: &mut String, p: &str) {
if has_unix_root(p) || has_windows_root(p) {
*path = p.to_string();
} else {
let dir_separator = if has_windows_root(path.as_str()) {
'\\'
} else {
'/'
};
if !path.is_empty() && !path.ends_with(dir_separator) {
path.push(dir_separator);
}
*path += p;
}
}
/// Check if the path in the given string has a unix style root
fn has_unix_root(p: &str) -> bool {
p.starts_with('/')
}
/// Check if the path in the given string has a windows style root
fn has_windows_root(p: &str) -> bool {
p.starts_with('\\') || p.get(1..3) == Some(":\\")
}

pve-rs/vendor/addr2line/src/loader.rs vendored Normal file

@@ -0,0 +1,451 @@
use alloc::borrow::Cow;
use alloc::boxed::Box;
use alloc::sync::Arc;
use alloc::vec::Vec;
use std::ffi::OsStr;
use std::fs::File;
use std::path::{Path, PathBuf};
use memmap2::Mmap;
use object::{Object, ObjectMapFile, ObjectSection, SymbolMap, SymbolMapName};
use typed_arena::Arena;
use crate::lazy::LazyCell;
use crate::{
Context, FrameIter, Location, LocationRangeIter, LookupContinuation, LookupResult,
SplitDwarfLoad,
};
/// The type used by [`Loader`] for reading DWARF data.
///
/// This is used in the return types of the methods of [`Loader`].
// TODO: use impl Trait when stable
pub type LoaderReader<'a> = gimli::EndianSlice<'a, gimli::RunTimeEndian>;
type Error = Box<dyn std::error::Error>;
type Result<T> = std::result::Result<T, Error>;
/// A loader for the DWARF data required for a `Context`.
///
/// For performance reasons, a [`Context`] normally borrows the input data.
/// However, that means the input data must outlive the `Context`, which
/// is inconvenient for long-lived `Context`s.
/// This loader uses an arena to store the input data, together with the
/// `Context` itself. This ensures that the input data lives as long as
/// the `Context`.
///
/// The loader performs some additional tasks:
/// - Loads the symbol table from the executable file (see [`Self::find_symbol`]).
/// - Loads Mach-O dSYM files that are located next to the executable file.
/// - Locates and loads split DWARF files (DWO and DWP).
pub struct Loader {
internal: LoaderInternal<'static>,
arena_data: Arena<Vec<u8>>,
arena_mmap: Arena<Mmap>,
}
impl Loader {
/// Load the DWARF data for an executable file and create a `Context`.
#[inline]
pub fn new(path: impl AsRef<Path>) -> Result<Self> {
Self::new_with_sup(path, None::<&Path>)
}
/// Load the DWARF data for an executable file and create a `Context`.
///
/// Optionally also use a supplementary object file.
pub fn new_with_sup(
path: impl AsRef<Path>,
sup_path: Option<impl AsRef<Path>>,
) -> Result<Self> {
let arena_data = Arena::new();
let arena_mmap = Arena::new();
let internal = LoaderInternal::new(
path.as_ref(),
sup_path.as_ref().map(AsRef::as_ref),
&arena_data,
&arena_mmap,
)?;
Ok(Loader {
// Convert to static lifetime to allow self-reference by `internal`.
// `internal` is only accessed through `borrow_internal`, which ensures
// that the static lifetime does not leak.
internal: unsafe {
core::mem::transmute::<LoaderInternal<'_>, LoaderInternal<'static>>(internal)
},
arena_data,
arena_mmap,
})
}
fn borrow_internal<'a, F, T>(&'a self, f: F) -> T
where
F: FnOnce(&'a LoaderInternal<'a>, &'a Arena<Vec<u8>>, &'a Arena<Mmap>) -> T,
{
// Do not leak the static lifetime.
let internal = unsafe {
core::mem::transmute::<&LoaderInternal<'static>, &'a LoaderInternal<'a>>(&self.internal)
};
f(internal, &self.arena_data, &self.arena_mmap)
}
/// Get the base address used for relative virtual addresses.
///
/// Currently this is only non-zero for PE.
pub fn relative_address_base(&self) -> u64 {
self.borrow_internal(|i, _data, _mmap| i.relative_address_base)
}
/// Find the source file and line corresponding to the given virtual memory address.
///
/// This calls [`Context::find_location`] with the given address.
pub fn find_location(&self, probe: u64) -> Result<Option<Location<'_>>> {
self.borrow_internal(|i, data, mmap| i.find_location(probe, data, mmap))
}
/// Return source file and lines for a range of addresses.
///
/// This calls [`Context::find_location_range`] with the given range.
pub fn find_location_range(
&self,
probe_low: u64,
probe_high: u64,
) -> Result<LocationRangeIter<'_, LoaderReader>> {
self.borrow_internal(|i, data, mmap| {
i.find_location_range(probe_low, probe_high, data, mmap)
})
}
/// Return an iterator for the function frames corresponding to the given virtual
/// memory address.
///
/// This calls [`Context::find_frames`] with the given address.
pub fn find_frames(&self, probe: u64) -> Result<FrameIter<'_, LoaderReader<'_>>> {
self.borrow_internal(|i, data, mmap| i.find_frames(probe, data, mmap))
}
/// Find the symbol table entry corresponding to the given virtual memory address.
pub fn find_symbol(&self, probe: u64) -> Option<&str> {
self.borrow_internal(|i, _data, _mmap| i.find_symbol(probe))
}
}
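A usage sketch for `Loader`; the binary path and probe address are placeholders:

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let loader = addr2line::Loader::new("/usr/bin/some-binary")?;
    let probe = 0x1234u64; // placeholder virtual address
    if let Some(loc) = loader.find_location(probe)? {
        println!("{}:{}", loc.file.unwrap_or("?"), loc.line.unwrap_or(0));
    }
    // Fall back to the symbol table when there is no DWARF for `probe`.
    if let Some(sym) = loader.find_symbol(probe) {
        println!("symbol: {sym}");
    }
    Ok(())
}
```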
struct LoaderInternal<'a> {
ctx: Context<LoaderReader<'a>>,
relative_address_base: u64,
symbols: SymbolMap<SymbolMapName<'a>>,
dwarf_package: Option<gimli::DwarfPackage<LoaderReader<'a>>>,
// Map from address to Mach-O object file path.
object_map: object::ObjectMap<'a>,
// A context for each Mach-O object file.
objects: Vec<LazyCell<Option<ObjectContext<'a>>>>,
}
impl<'a> LoaderInternal<'a> {
fn new(
path: &Path,
sup_path: Option<&Path>,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Result<Self> {
let file = File::open(path)?;
let map = arena_mmap.alloc(unsafe { Mmap::map(&file)? });
let mut object = object::File::parse(&**map)?;
let relative_address_base = object.relative_address_base();
let symbols = object.symbol_map();
let object_map = object.object_map();
let mut objects = Vec::new();
objects.resize_with(object_map.objects().len(), LazyCell::new);
// Load supplementary object file.
// TODO: use debuglink and debugaltlink
let sup_map;
let sup_object = if let Some(sup_path) = sup_path {
let sup_file = File::open(sup_path)?;
sup_map = arena_mmap.alloc(unsafe { Mmap::map(&sup_file)? });
Some(object::File::parse(&**sup_map)?)
} else {
None
};
// Load Mach-O dSYM file, ignoring errors.
if let Some(map) = (|| {
let uuid = object.mach_uuid().ok()??;
path.parent()?.read_dir().ok()?.find_map(|candidate| {
let candidate = candidate.ok()?;
let path = candidate.path();
if path.extension().and_then(OsStr::to_str) != Some("dSYM") {
return None;
}
let path = path.join("Contents/Resources/DWARF");
path.read_dir().ok()?.find_map(|candidate| {
let candidate = candidate.ok()?;
let path = candidate.path();
let file = File::open(path).ok()?;
let map = unsafe { Mmap::map(&file) }.ok()?;
let object = object::File::parse(&*map).ok()?;
if object.mach_uuid() == Ok(Some(uuid)) {
Some(map)
} else {
None
}
})
})
})() {
let map = arena_mmap.alloc(map);
object = object::File::parse(&**map)?;
}
// Load the DWARF sections.
let endian = if object.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
let mut dwarf =
gimli::Dwarf::load(|id| load_section(Some(id.name()), &object, endian, arena_data))?;
if let Some(sup_object) = &sup_object {
dwarf.load_sup(|id| load_section(Some(id.name()), sup_object, endian, arena_data))?;
}
dwarf.populate_abbreviations_cache(gimli::AbbreviationsCacheStrategy::Duplicates);
let ctx = Context::from_dwarf(dwarf)?;
// Load the DWP file, ignoring errors.
let dwarf_package = (|| {
let mut dwp_path = path.to_path_buf();
let dwp_extension = path
.extension()
.map(|previous_extension| {
let mut previous_extension = previous_extension.to_os_string();
previous_extension.push(".dwp");
previous_extension
})
.unwrap_or_else(|| "dwp".into());
dwp_path.set_extension(dwp_extension);
let dwp_file = File::open(&dwp_path).ok()?;
let map = arena_mmap.alloc(unsafe { Mmap::map(&dwp_file) }.ok()?);
let dwp_object = object::File::parse(&**map).ok()?;
let endian = if dwp_object.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
let empty = gimli::EndianSlice::new(&[][..], endian);
gimli::DwarfPackage::load(
|id| load_section(id.dwo_name(), &dwp_object, endian, arena_data),
empty,
)
.ok()
})();
Ok(LoaderInternal {
ctx,
relative_address_base,
symbols,
dwarf_package,
object_map,
objects,
})
}
fn ctx(
&self,
probe: u64,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> (&Context<LoaderReader<'a>>, u64) {
self.object_ctx(probe, arena_data, arena_mmap)
.unwrap_or((&self.ctx, probe))
}
fn object_ctx(
&self,
probe: u64,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Option<(&Context<LoaderReader<'a>>, u64)> {
let symbol = self.object_map.get(probe)?;
let object_context = self.objects[symbol.object_index()]
.borrow_with(|| {
ObjectContext::new(symbol.object(&self.object_map), arena_data, arena_mmap)
})
.as_ref()?;
object_context.ctx(symbol.name(), probe - symbol.address())
}
fn find_symbol(&self, probe: u64) -> Option<&str> {
self.symbols.get(probe).map(|x| x.name())
}
fn find_location(
&'a self,
probe: u64,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Result<Option<Location<'a>>> {
let (ctx, probe) = self.ctx(probe, arena_data, arena_mmap);
Ok(ctx.find_location(probe)?)
}
fn find_location_range(
&self,
probe_low: u64,
probe_high: u64,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Result<LocationRangeIter<'a, LoaderReader>> {
let (ctx, probe) = self.ctx(probe_low, arena_data, arena_mmap);
// TODO: handle ranges that cover multiple objects
let probe_high = probe + (probe_high - probe_low);
Ok(ctx.find_location_range(probe, probe_high)?)
}
fn find_frames(
&self,
probe: u64,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Result<FrameIter<'a, LoaderReader>> {
let (ctx, probe) = self.ctx(probe, arena_data, arena_mmap);
let mut frames = ctx.find_frames(probe);
loop {
let (load, continuation) = match frames {
LookupResult::Output(output) => return Ok(output?),
LookupResult::Load { load, continuation } => (load, continuation),
};
let r = self.load_dwo(load, arena_data, arena_mmap)?;
frames = continuation.resume(r);
}
}
fn load_dwo(
&self,
load: SplitDwarfLoad<LoaderReader<'a>>,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Result<Option<Arc<gimli::Dwarf<LoaderReader<'a>>>>> {
// Load the DWO file from the DWARF package, if available.
if let Some(dwp) = self.dwarf_package.as_ref() {
if let Some(cu) = dwp.find_cu(load.dwo_id, &load.parent)? {
return Ok(Some(Arc::new(cu)));
}
}
// Determine the path to the DWO file.
let mut path = PathBuf::new();
if let Some(p) = load.comp_dir.as_ref() {
path.push(convert_path(p.slice())?);
}
let Some(p) = load.path.as_ref() else {
return Ok(None);
};
path.push(convert_path(p.slice())?);
// Load the DWO file, ignoring errors.
let dwo = (|| {
let file = File::open(&path).ok()?;
let map = arena_mmap.alloc(unsafe { Mmap::map(&file) }.ok()?);
let object = object::File::parse(&**map).ok()?;
let endian = if object.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
let mut dwo_dwarf =
gimli::Dwarf::load(|id| load_section(id.dwo_name(), &object, endian, arena_data))
.ok()?;
let dwo_unit_header = dwo_dwarf.units().next().ok()??;
let dwo_unit = dwo_dwarf.unit(dwo_unit_header).ok()?;
if dwo_unit.dwo_id != Some(load.dwo_id) {
return None;
}
dwo_dwarf.make_dwo(&load.parent);
Some(Arc::new(dwo_dwarf))
})();
Ok(dwo)
}
}
struct ObjectContext<'a> {
ctx: Context<LoaderReader<'a>>,
symbols: SymbolMap<SymbolMapName<'a>>,
}
impl<'a> ObjectContext<'a> {
fn new(
object: &ObjectMapFile<'a>,
arena_data: &'a Arena<Vec<u8>>,
arena_mmap: &'a Arena<Mmap>,
) -> Option<Self> {
let file = File::open(convert_path(object.path()).ok()?).ok()?;
let map = &**arena_mmap.alloc(unsafe { Mmap::map(&file) }.ok()?);
let data = if let Some(member_name) = object.member() {
let archive = object::read::archive::ArchiveFile::parse(map).ok()?;
let member = archive.members().find_map(|member| {
let member = member.ok()?;
if member.name() == member_name {
Some(member)
} else {
None
}
})?;
member.data(map).ok()?
} else {
map
};
let object = object::File::parse(data).ok()?;
let endian = if object.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
let dwarf =
gimli::Dwarf::load(|id| load_section(Some(id.name()), &object, endian, arena_data))
.ok()?;
let ctx = Context::from_dwarf(dwarf).ok()?;
let symbols = object.symbol_map();
Some(ObjectContext { ctx, symbols })
}
fn ctx(&self, symbol_name: &[u8], probe: u64) -> Option<(&Context<LoaderReader<'a>>, u64)> {
self.symbols
.symbols()
.iter()
.find(|symbol| symbol.name().as_bytes() == symbol_name)
.map(|symbol| (&self.ctx, probe + symbol.address()))
}
}
fn load_section<'input, Endian: gimli::Endianity>(
name: Option<&'static str>,
file: &object::File<'input>,
endian: Endian,
arena_data: &'input Arena<Vec<u8>>,
) -> Result<gimli::EndianSlice<'input, Endian>> {
let data = match name.and_then(|name| file.section_by_name(name)) {
Some(section) => match section.uncompressed_data()? {
Cow::Borrowed(b) => b,
Cow::Owned(b) => arena_data.alloc(b),
},
None => &[],
};
Ok(gimli::EndianSlice::new(data, endian))
}
#[cfg(unix)]
fn convert_path(bytes: &[u8]) -> Result<PathBuf> {
use std::os::unix::ffi::OsStrExt;
let s = OsStr::from_bytes(bytes);
Ok(PathBuf::from(s))
}
#[cfg(not(unix))]
fn convert_path(bytes: &[u8]) -> Result<PathBuf> {
let s = std::str::from_utf8(bytes)?;
Ok(PathBuf::from(s))
}

pve-rs/vendor/addr2line/src/lookup.rs vendored Normal file

@@ -0,0 +1,261 @@
use alloc::sync::Arc;
use core::marker::PhantomData;
use core::ops::ControlFlow;
/// This struct contains the information needed to find split DWARF data
/// and to produce a `gimli::Dwarf<R>` for it.
pub struct SplitDwarfLoad<R> {
/// The dwo id, for looking up in a DWARF package, or for
/// verifying an unpacked dwo found on the file system
pub dwo_id: gimli::DwoId,
/// The compilation directory `path` is relative to.
pub comp_dir: Option<R>,
/// A path on the filesystem, relative to `comp_dir` to find this dwo.
pub path: Option<R>,
/// Once the split DWARF data is loaded, the loader is expected
/// to call [make_dwo(parent)](gimli::read::Dwarf::make_dwo) before
/// returning the data.
pub parent: Arc<gimli::Dwarf<R>>,
}
/// Operations that consult debug information may require additional files
/// to be loaded if split DWARF is being used. This enum returns the result
/// of the operation in the `Output` variant, or information about the split
/// DWARF that is required and a continuation to invoke once it is available
/// in the `Load` variant.
///
/// This enum is intended to be used in a loop like so:
/// ```no_run
/// # use addr2line::*;
/// # use std::sync::Arc;
/// # let ctx: Context<gimli::EndianSlice<gimli::RunTimeEndian>> = todo!();
/// # let do_split_dwarf_load = |load: SplitDwarfLoad<gimli::EndianSlice<gimli::RunTimeEndian>>| -> Option<Arc<gimli::Dwarf<gimli::EndianSlice<gimli::RunTimeEndian>>>> { None };
/// const ADDRESS: u64 = 0xdeadbeef;
/// let mut r = ctx.find_frames(ADDRESS);
/// let result = loop {
/// match r {
/// LookupResult::Output(result) => break result,
/// LookupResult::Load { load, continuation } => {
/// let dwo = do_split_dwarf_load(load);
/// r = continuation.resume(dwo);
/// }
/// }
/// };
/// ```
pub enum LookupResult<L: LookupContinuation> {
/// The lookup requires split DWARF data to be loaded.
Load {
/// The information needed to find the split DWARF data.
load: SplitDwarfLoad<<L as LookupContinuation>::Buf>,
/// The continuation to resume with the loaded split DWARF data.
continuation: L,
},
/// The lookup has completed and produced an output.
Output(<L as LookupContinuation>::Output),
}
/// This trait represents a partially complete operation that can be resumed
/// once a load of needed split DWARF data is completed or abandoned by the
/// API consumer.
pub trait LookupContinuation: Sized {
/// The final output of this operation.
type Output;
/// The type of reader used.
type Buf: gimli::Reader;
/// Resumes the operation with the provided data.
///
/// After the caller loads the split DWARF data required, call this
/// method to resume the operation. The return value of this method
/// indicates if the computation has completed or if further data is
/// required.
///
/// If the additional data cannot be located, or the caller does not
/// support split DWARF, `resume(None)` can be used to continue the
/// operation with the data that is available.
fn resume(self, input: Option<Arc<gimli::Dwarf<Self::Buf>>>) -> LookupResult<Self>;
}
impl<L: LookupContinuation> LookupResult<L> {
/// Callers that do not handle split DWARF can call `skip_all_loads`
/// to fast-forward to the end result. This result is produced with
/// the data that is available and may be less accurate than the results
/// that would be produced if the caller properly supported split DWARF.
pub fn skip_all_loads(mut self) -> L::Output {
loop {
self = match self {
LookupResult::Output(t) => return t,
LookupResult::Load { continuation, .. } => continuation.resume(None),
};
}
}
pub(crate) fn map<T, F: FnOnce(L::Output) -> T>(
self,
f: F,
) -> LookupResult<MappedLookup<T, L, F>> {
match self {
LookupResult::Output(t) => LookupResult::Output(f(t)),
LookupResult::Load { load, continuation } => LookupResult::Load {
load,
continuation: MappedLookup {
original: continuation,
mutator: f,
},
},
}
}
pub(crate) fn unwrap(self) -> L::Output {
match self {
LookupResult::Output(t) => t,
LookupResult::Load { .. } => unreachable!("Internal API misuse"),
}
}
}
pub(crate) struct SimpleLookup<T, R, F>
where
F: FnOnce(Option<Arc<gimli::Dwarf<R>>>) -> T,
R: gimli::Reader,
{
f: F,
phantom: PhantomData<(T, R)>,
}
impl<T, R, F> SimpleLookup<T, R, F>
where
F: FnOnce(Option<Arc<gimli::Dwarf<R>>>) -> T,
R: gimli::Reader,
{
pub(crate) fn new_complete(t: F::Output) -> LookupResult<SimpleLookup<T, R, F>> {
LookupResult::Output(t)
}
pub(crate) fn new_needs_load(
load: SplitDwarfLoad<R>,
f: F,
) -> LookupResult<SimpleLookup<T, R, F>> {
LookupResult::Load {
load,
continuation: SimpleLookup {
f,
phantom: PhantomData,
},
}
}
}
impl<T, R, F> LookupContinuation for SimpleLookup<T, R, F>
where
F: FnOnce(Option<Arc<gimli::Dwarf<R>>>) -> T,
R: gimli::Reader,
{
type Output = T;
type Buf = R;
fn resume(self, v: Option<Arc<gimli::Dwarf<Self::Buf>>>) -> LookupResult<Self> {
LookupResult::Output((self.f)(v))
}
}
pub(crate) struct MappedLookup<T, L, F>
where
L: LookupContinuation,
F: FnOnce(L::Output) -> T,
{
original: L,
mutator: F,
}
impl<T, L, F> LookupContinuation for MappedLookup<T, L, F>
where
L: LookupContinuation,
F: FnOnce(L::Output) -> T,
{
type Output = T;
type Buf = L::Buf;
fn resume(self, v: Option<Arc<gimli::Dwarf<Self::Buf>>>) -> LookupResult<Self> {
match self.original.resume(v) {
LookupResult::Output(t) => LookupResult::Output((self.mutator)(t)),
LookupResult::Load { load, continuation } => LookupResult::Load {
load,
continuation: MappedLookup {
original: continuation,
mutator: self.mutator,
},
},
}
}
}
/// Some functions (e.g. `find_frames`) require considering multiple
/// compilation units, each of which might require their own split DWARF
/// lookup (and thus produce a continuation).
///
/// We store the underlying continuation here as well as a mutator function
/// that will either a) decide that the result of this continuation is
/// what is needed and mutate it to the final result or b) produce another
/// `LookupResult`. `new_lookup` will in turn eagerly drive any non-continuation
/// `LookupResult` with successive invocations of the mutator, until a new
/// continuation or a final result is produced. And finally, the impl of
/// `LookupContinuation::resume` will call `new_lookup` each time the
/// computation is resumed.
pub(crate) struct LoopingLookup<T, L, F>
where
L: LookupContinuation,
F: FnMut(L::Output) -> ControlFlow<T, LookupResult<L>>,
{
continuation: L,
mutator: F,
}
impl<T, L, F> LoopingLookup<T, L, F>
where
L: LookupContinuation,
F: FnMut(L::Output) -> ControlFlow<T, LookupResult<L>>,
{
pub(crate) fn new_complete(t: T) -> LookupResult<Self> {
LookupResult::Output(t)
}
pub(crate) fn new_lookup(mut r: LookupResult<L>, mut mutator: F) -> LookupResult<Self> {
// Drive the loop eagerly so that we only ever have to represent one state
// (the r == ControlFlow::Continue state) in LoopingLookup.
loop {
match r {
LookupResult::Output(l) => match mutator(l) {
ControlFlow::Break(t) => return LookupResult::Output(t),
ControlFlow::Continue(r2) => {
r = r2;
}
},
LookupResult::Load { load, continuation } => {
return LookupResult::Load {
load,
continuation: LoopingLookup {
continuation,
mutator,
},
};
}
}
}
}
}
impl<T, L, F> LookupContinuation for LoopingLookup<T, L, F>
where
L: LookupContinuation,
F: FnMut(L::Output) -> ControlFlow<T, LookupResult<L>>,
{
type Output = T;
type Buf = L::Buf;
fn resume(self, v: Option<Arc<gimli::Dwarf<Self::Buf>>>) -> LookupResult<Self> {
let r = self.continuation.resume(v);
LoopingLookup::new_lookup(r, self.mutator)
}
}

pve-rs/vendor/addr2line/src/unit.rs vendored Normal file

@@ -0,0 +1,589 @@
use alloc::boxed::Box;
use alloc::sync::Arc;
use alloc::vec::Vec;
use core::cmp;
use crate::lazy::LazyResult;
use crate::{
Context, DebugFile, Error, Function, Functions, LazyFunctions, LazyLines,
LineLocationRangeIter, Lines, Location, LookupContinuation, LookupResult, RangeAttributes,
SimpleLookup, SplitDwarfLoad,
};
pub(crate) struct UnitRange {
unit_id: usize,
min_begin: u64,
range: gimli::Range,
}
pub(crate) struct ResUnit<R: gimli::Reader> {
offset: gimli::DebugInfoOffset<R::Offset>,
dw_unit: gimli::Unit<R>,
pub(crate) lang: Option<gimli::DwLang>,
lines: LazyLines,
functions: LazyFunctions<R>,
dwo: LazyResult<Option<Box<DwoUnit<R>>>>,
}
type UnitRef<'unit, R> = (DebugFile, gimli::UnitRef<'unit, R>);
impl<R: gimli::Reader> ResUnit<R> {
pub(crate) fn unit_ref<'a>(&'a self, sections: &'a gimli::Dwarf<R>) -> gimli::UnitRef<'a, R> {
gimli::UnitRef::new(sections, &self.dw_unit)
}
/// Returns the DWARF sections and the unit.
///
/// Loads the DWO unit if necessary.
pub(crate) fn dwarf_and_unit<'unit, 'ctx: 'unit>(
&'unit self,
ctx: &'ctx Context<R>,
) -> LookupResult<
SimpleLookup<
Result<UnitRef<'unit, R>, Error>,
R,
impl FnOnce(Option<Arc<gimli::Dwarf<R>>>) -> Result<UnitRef<'unit, R>, Error>,
>,
> {
let map_dwo = move |dwo: &'unit Result<Option<Box<DwoUnit<R>>>, Error>| match dwo {
Ok(Some(dwo)) => Ok((DebugFile::Dwo, dwo.unit_ref())),
Ok(None) => Ok((DebugFile::Primary, self.unit_ref(&*ctx.sections))),
Err(e) => Err(*e),
};
let complete = |dwo| SimpleLookup::new_complete(map_dwo(dwo));
if let Some(dwo) = self.dwo.borrow() {
return complete(dwo);
}
let dwo_id = match self.dw_unit.dwo_id {
None => {
return complete(self.dwo.borrow_with(|| Ok(None)));
}
Some(dwo_id) => dwo_id,
};
let comp_dir = self.dw_unit.comp_dir.clone();
let dwo_name = self.dw_unit.dwo_name().and_then(|s| {
if let Some(s) = s {
Ok(Some(ctx.sections.attr_string(&self.dw_unit, s)?))
} else {
Ok(None)
}
});
let path = match dwo_name {
Ok(v) => v,
Err(e) => {
return complete(self.dwo.borrow_with(|| Err(e)));
}
};
let process_dwo = move |dwo_dwarf: Option<Arc<gimli::Dwarf<R>>>| {
let dwo_dwarf = match dwo_dwarf {
None => return Ok(None),
Some(dwo_dwarf) => dwo_dwarf,
};
let mut dwo_units = dwo_dwarf.units();
let dwo_header = match dwo_units.next()? {
Some(dwo_header) => dwo_header,
None => return Ok(None),
};
let mut dwo_unit = dwo_dwarf.unit(dwo_header)?;
dwo_unit.copy_relocated_attributes(&self.dw_unit);
Ok(Some(Box::new(DwoUnit {
sections: dwo_dwarf,
dw_unit: dwo_unit,
})))
};
SimpleLookup::new_needs_load(
SplitDwarfLoad {
dwo_id,
comp_dir,
path,
parent: ctx.sections.clone(),
},
move |dwo_dwarf| map_dwo(self.dwo.borrow_with(|| process_dwo(dwo_dwarf))),
)
}
pub(crate) fn parse_lines(&self, sections: &gimli::Dwarf<R>) -> Result<Option<&Lines>, Error> {
// NB: line information is always stored in the main debug file so this does not need
// to handle DWOs.
let ilnp = match self.dw_unit.line_program {
Some(ref ilnp) => ilnp,
None => return Ok(None),
};
self.lines.borrow(self.unit_ref(sections), ilnp).map(Some)
}
pub(crate) fn parse_functions<'unit, 'ctx: 'unit>(
&'unit self,
ctx: &'ctx Context<R>,
) -> LookupResult<impl LookupContinuation<Output = Result<&'unit Functions<R>, Error>, Buf = R>>
{
self.dwarf_and_unit(ctx).map(move |r| {
let (_file, unit) = r?;
self.functions.borrow(unit)
})
}
pub(crate) fn parse_inlined_functions<'unit, 'ctx: 'unit>(
&'unit self,
ctx: &'ctx Context<R>,
) -> LookupResult<impl LookupContinuation<Output = Result<(), Error>, Buf = R> + 'unit> {
self.dwarf_and_unit(ctx).map(move |r| {
let (file, unit) = r?;
self.functions
.borrow(unit)?
.parse_inlined_functions(file, unit, ctx)
})
}
pub(crate) fn find_location(
&self,
probe: u64,
sections: &gimli::Dwarf<R>,
) -> Result<Option<Location<'_>>, Error> {
let Some(lines) = self.parse_lines(sections)? else {
return Ok(None);
};
lines.find_location(probe)
}
#[inline]
pub(crate) fn find_location_range(
&self,
probe_low: u64,
probe_high: u64,
sections: &gimli::Dwarf<R>,
) -> Result<Option<LineLocationRangeIter<'_>>, Error> {
let Some(lines) = self.parse_lines(sections)? else {
return Ok(None);
};
lines.find_location_range(probe_low, probe_high).map(Some)
}
pub(crate) fn find_function_or_location<'unit, 'ctx: 'unit>(
&'unit self,
probe: u64,
ctx: &'ctx Context<R>,
) -> LookupResult<
impl LookupContinuation<
Output = Result<(Option<&'unit Function<R>>, Option<Location<'unit>>), Error>,
Buf = R,
>,
> {
self.dwarf_and_unit(ctx).map(move |r| {
let (file, unit) = r?;
let functions = self.functions.borrow(unit)?;
let function = match functions.find_address(probe) {
Some(address) => {
let function_index = functions.addresses[address].function;
let function = &functions.functions[function_index];
Some(function.borrow(file, unit, ctx)?)
}
None => None,
};
let location = self.find_location(probe, unit.dwarf)?;
Ok((function, location))
})
}
}
pub(crate) struct ResUnits<R: gimli::Reader> {
ranges: Box<[UnitRange]>,
units: Box<[ResUnit<R>]>,
}
impl<R: gimli::Reader> ResUnits<R> {
pub(crate) fn parse(sections: &gimli::Dwarf<R>) -> Result<Self, Error> {
// Find all the references to compilation units in .debug_aranges.
// Note that we always also iterate through all of .debug_info to
// find compilation units, because .debug_aranges may be missing some.
let mut aranges = Vec::new();
let mut headers = sections.debug_aranges.headers();
while let Some(header) = headers.next()? {
aranges.push((header.debug_info_offset(), header.offset()));
}
aranges.sort_by_key(|i| i.0);
let mut unit_ranges = Vec::new();
let mut res_units = Vec::new();
let mut units = sections.units();
while let Some(header) = units.next()? {
let unit_id = res_units.len();
let offset = match header.offset().as_debug_info_offset() {
Some(offset) => offset,
None => continue,
};
// We mainly want compile units, but we may need to follow references to entries
// within other units for function names. We don't need anything from type units.
let mut need_unit_range = match header.type_() {
gimli::UnitType::Type { .. } | gimli::UnitType::SplitType { .. } => continue,
gimli::UnitType::Partial => {
// Partial units are only needed for references from other units.
// They shouldn't have any address ranges.
false
}
_ => true,
};
let dw_unit = match sections.unit(header) {
Ok(dw_unit) => dw_unit,
Err(_) => continue,
};
let dw_unit_ref = gimli::UnitRef::new(sections, &dw_unit);
let mut lang = None;
if need_unit_range {
let mut entries = dw_unit_ref.entries_raw(None)?;
let abbrev = match entries.read_abbreviation()? {
Some(abbrev) => abbrev,
None => continue,
};
let mut ranges = RangeAttributes::default();
for spec in abbrev.attributes() {
let attr = entries.read_attribute(*spec)?;
match attr.name() {
gimli::DW_AT_low_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => ranges.low_pc = Some(val),
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.low_pc = Some(dw_unit_ref.address(index)?);
}
_ => {}
},
gimli::DW_AT_high_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => ranges.high_pc = Some(val),
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.high_pc = Some(dw_unit_ref.address(index)?);
}
gimli::AttributeValue::Udata(val) => ranges.size = Some(val),
_ => {}
},
gimli::DW_AT_ranges => {
ranges.ranges_offset = dw_unit_ref.attr_ranges_offset(attr.value())?;
}
gimli::DW_AT_language => {
if let gimli::AttributeValue::Language(val) = attr.value() {
lang = Some(val);
}
}
_ => {}
}
}
// Find the address ranges for the CU, using in order of preference:
// - DW_AT_ranges
// - .debug_aranges
// - DW_AT_low_pc/DW_AT_high_pc
//
// Using DW_AT_ranges before .debug_aranges is possibly an arbitrary choice,
// but the feeling is that DW_AT_ranges is more likely to be reliable or complete
// if it is present.
//
// .debug_aranges must be used before DW_AT_low_pc/DW_AT_high_pc because
// it has been observed on macOS that DW_AT_ranges was not emitted even for
// discontiguous CUs.
let i = match ranges.ranges_offset {
Some(_) => None,
None => aranges.binary_search_by_key(&offset, |x| x.0).ok(),
};
if let Some(mut i) = i {
// There should be only one set per CU, but in practice multiple
// sets have been observed. This is probably a compiler bug, but
// either way we need to handle it.
while i > 0 && aranges[i - 1].0 == offset {
i -= 1;
}
for (_, aranges_offset) in aranges[i..].iter().take_while(|x| x.0 == offset) {
let aranges_header = sections.debug_aranges.header(*aranges_offset)?;
let mut aranges = aranges_header.entries();
while let Some(arange) = aranges.next()? {
if arange.length() != 0 {
unit_ranges.push(UnitRange {
range: arange.range(),
unit_id,
min_begin: 0,
});
need_unit_range = false;
}
}
}
} else {
need_unit_range &= !ranges.for_each_range(dw_unit_ref, |range| {
unit_ranges.push(UnitRange {
range,
unit_id,
min_begin: 0,
});
})?;
}
}
let lines = LazyLines::new();
if need_unit_range {
// The unit did not declare any ranges.
// Try to get some ranges from the line program sequences.
if let Some(ref ilnp) = dw_unit_ref.line_program {
if let Ok(lines) = lines.borrow(dw_unit_ref, ilnp) {
for range in lines.ranges() {
unit_ranges.push(UnitRange {
range,
unit_id,
min_begin: 0,
})
}
}
}
}
res_units.push(ResUnit {
offset,
dw_unit,
lang,
lines,
functions: LazyFunctions::new(),
dwo: LazyResult::new(),
});
}
// Sort this for faster lookup in `Self::find_range`.
unit_ranges.sort_by_key(|i| i.range.end);
// Calculate the `min_begin` field now that we've determined the order of
// CUs.
let mut min = !0;
for i in unit_ranges.iter_mut().rev() {
min = min.min(i.range.begin);
i.min_begin = min;
}
Ok(ResUnits {
ranges: unit_ranges.into_boxed_slice(),
units: res_units.into_boxed_slice(),
})
}
pub(crate) fn iter(&self) -> impl Iterator<Item = &ResUnit<R>> {
self.units.iter()
}
pub(crate) fn find_offset(
&self,
offset: gimli::DebugInfoOffset<R::Offset>,
) -> Result<&gimli::Unit<R>, Error> {
match self
.units
.binary_search_by_key(&offset.0, |unit| unit.offset.0)
{
// There is never a DIE at the unit offset or before the first unit.
Ok(_) | Err(0) => Err(gimli::Error::NoEntryAtGivenOffset),
Err(i) => Ok(&self.units[i - 1].dw_unit),
}
}
/// Finds the CUs for the function address given.
///
/// There might be multiple CUs whose range contains this address.
/// Weak symbols have shown up in the wild which cause this to happen,
/// but otherwise this can happen if the CU has non-contiguous functions
/// but only reports a single range.
///
/// Consequently we return an iterator for all CUs which may contain the
/// address, and the caller must check if there is actually a function or
/// location in the CU for that address.
pub(crate) fn find(&self, probe: u64) -> impl Iterator<Item = &ResUnit<R>> {
self.find_range(probe, probe + 1).map(|(unit, _range)| unit)
}
/// Finds the CUs covering the range of addresses given.
///
/// The range is [low, high) (i.e., the upper bound is exclusive). This can return multiple
/// ranges for the same unit.
#[inline]
pub(crate) fn find_range(
&self,
probe_low: u64,
probe_high: u64,
) -> impl Iterator<Item = (&ResUnit<R>, &gimli::Range)> {
// Find the position of the next range after a range which
// ends at `probe_low` or lower.
let pos = match self
.ranges
.binary_search_by_key(&probe_low, |i| i.range.end)
{
Ok(i) => i + 1, // Range `i` ends at exactly `probe_low`.
Err(i) => i, // Range `i - 1` ends at a lower address.
};
// Iterate from that position to find matching CUs.
self.ranges[pos..]
.iter()
.take_while(move |i| {
// We know that this CU's end is at least `probe_low` because
// of our sorted array.
debug_assert!(i.range.end >= probe_low);
// Each entry keeps track of the minimum begin address for the
// remainder of the array of unit ranges. If our probe is before
// the minimum range begin of this entry, then it's guaranteed
// to not fit in any subsequent entries, so we break out.
probe_high > i.min_begin
})
.filter_map(move |i| {
// If this CU doesn't actually contain this address, move to the
// next CU.
if probe_low >= i.range.end || probe_high <= i.range.begin {
return None;
}
Some((&self.units[i.unit_id], &i.range))
})
}
pub(crate) fn find_location_range<'a>(
&'a self,
probe_low: u64,
probe_high: u64,
sections: &'a gimli::Dwarf<R>,
) -> Result<LocationRangeIter<'a, R>, Error> {
let unit_iter = Box::new(self.find_range(probe_low, probe_high));
Ok(LocationRangeIter {
unit_iter,
iter: None,
probe_low,
probe_high,
sections,
})
}
}
/// A DWO unit has its own DWARF sections.
struct DwoUnit<R: gimli::Reader> {
sections: Arc<gimli::Dwarf<R>>,
dw_unit: gimli::Unit<R>,
}
impl<R: gimli::Reader> DwoUnit<R> {
fn unit_ref(&self) -> gimli::UnitRef<R> {
gimli::UnitRef::new(&self.sections, &self.dw_unit)
}
}
pub(crate) struct SupUnit<R: gimli::Reader> {
offset: gimli::DebugInfoOffset<R::Offset>,
dw_unit: gimli::Unit<R>,
}
pub(crate) struct SupUnits<R: gimli::Reader> {
units: Box<[SupUnit<R>]>,
}
impl<R: gimli::Reader> Default for SupUnits<R> {
fn default() -> Self {
SupUnits {
units: Box::default(),
}
}
}
impl<R: gimli::Reader> SupUnits<R> {
pub(crate) fn parse(sections: &gimli::Dwarf<R>) -> Result<Self, Error> {
let mut sup_units = Vec::new();
let mut units = sections.units();
while let Some(header) = units.next()? {
let offset = match header.offset().as_debug_info_offset() {
Some(offset) => offset,
None => continue,
};
let dw_unit = match sections.unit(header) {
Ok(dw_unit) => dw_unit,
Err(_) => continue,
};
sup_units.push(SupUnit { dw_unit, offset });
}
Ok(SupUnits {
units: sup_units.into_boxed_slice(),
})
}
pub(crate) fn find_offset(
&self,
offset: gimli::DebugInfoOffset<R::Offset>,
) -> Result<&gimli::Unit<R>, Error> {
match self
.units
.binary_search_by_key(&offset.0, |unit| unit.offset.0)
{
// There is never a DIE at the unit offset or before the first unit.
Ok(_) | Err(0) => Err(gimli::Error::NoEntryAtGivenOffset),
Err(i) => Ok(&self.units[i - 1].dw_unit),
}
}
}
/// Iterator over `Location`s in a range of addresses, returned by `Context::find_location_range`.
pub struct LocationRangeIter<'ctx, R: gimli::Reader> {
unit_iter: Box<dyn Iterator<Item = (&'ctx ResUnit<R>, &'ctx gimli::Range)> + 'ctx>,
iter: Option<LineLocationRangeIter<'ctx>>,
probe_low: u64,
probe_high: u64,
sections: &'ctx gimli::Dwarf<R>,
}
impl<'ctx, R: gimli::Reader> LocationRangeIter<'ctx, R> {
fn next_loc(&mut self) -> Result<Option<(u64, u64, Location<'ctx>)>, Error> {
loop {
let iter = self.iter.take();
match iter {
None => match self.unit_iter.next() {
Some((unit, range)) => {
self.iter = unit.find_location_range(
cmp::max(self.probe_low, range.begin),
cmp::min(self.probe_high, range.end),
self.sections,
)?;
}
None => return Ok(None),
},
Some(mut iter) => {
if let item @ Some(_) = iter.next() {
self.iter = Some(iter);
return Ok(item);
}
}
}
}
}
}
impl<'ctx, R> Iterator for LocationRangeIter<'ctx, R>
where
R: gimli::Reader + 'ctx,
{
type Item = (u64, u64, Location<'ctx>);
#[inline]
fn next(&mut self) -> Option<Self::Item> {
self.next_loc().unwrap_or_default()
}
}
#[cfg(feature = "fallible-iterator")]
impl<'ctx, R> fallible_iterator::FallibleIterator for LocationRangeIter<'ctx, R>
where
R: gimli::Reader + 'ctx,
{
type Item = (u64, u64, Location<'ctx>);
type Error = Error;
#[inline]
fn next(&mut self) -> Result<Option<Self::Item>, Self::Error> {
self.next_loc()
}
}
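
The `min_begin` bookkeeping built in `ResUnits::parse` and consumed in `find_range` above is worth seeing in isolation. Below is a minimal, self-contained sketch of the same pruning scheme; the `ToyRange` type and all values are made up for illustration and are not part of the vendored source.

```rust
// Standalone sketch of the range-pruning scheme used by `ResUnits` above.
// `ToyRange` stands in for `gimli::Range` plus the `min_begin` field.
#[derive(Debug, Clone, Copy)]
struct ToyRange {
    begin: u64,
    end: u64,
    min_begin: u64, // minimum `begin` of this entry and all entries after it
}

fn build(mut ranges: Vec<ToyRange>) -> Vec<ToyRange> {
    // Sort by end address so we can binary-search on `end`, as `parse` does.
    ranges.sort_by_key(|r| r.end);
    // Fill in the suffix minimum of `begin`.
    let mut min = u64::MAX;
    for r in ranges.iter_mut().rev() {
        min = min.min(r.begin);
        r.min_begin = min;
    }
    ranges
}

fn find_range(ranges: &[ToyRange], probe_low: u64, probe_high: u64) -> Vec<ToyRange> {
    // Position of the first range whose end is strictly greater than `probe_low`.
    let pos = match ranges.binary_search_by_key(&probe_low, |r| r.end) {
        Ok(i) => i + 1,
        Err(i) => i,
    };
    ranges[pos..]
        .iter()
        // Stop as soon as no remaining entry can begin below `probe_high`.
        .take_while(|r| probe_high > r.min_begin)
        // Skip entries that don't actually overlap [probe_low, probe_high).
        .filter(|r| probe_low < r.end && probe_high > r.begin)
        .copied()
        .collect()
}

fn main() {
    let mk = |begin, end| ToyRange { begin, end, min_begin: 0 };
    let ranges = build(vec![mk(0x1000, 0x2000), mk(0x1800, 0x3000), mk(0x5000, 0x6000)]);
    // A probe at 0x1900 hits the two overlapping ranges, and the scan
    // stops before ever touching the 0x5000..0x6000 entry.
    assert_eq!(find_range(&ranges, 0x1900, 0x1901).len(), 2);
}
```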

pve-rs/vendor/adler2/.cargo-checksum.json vendored Normal file

@@ -0,0 +1 @@
{"files":{"CHANGELOG.md":"52435caf085b428cdb6171a34f4980f52aaaf541a3dced226c92eb82f69a48a7","Cargo.toml":"56b9cca6450964cbe772b6519bc048c2f56cc80e9261de1126d789c5e1951136","LICENSE-0BSD":"861399f8c21c042b110517e76dc6b63a2b334276c8cf17412fc3c8908ca8dc17","LICENSE-APACHE":"8ada45cd9f843acf64e4722ae262c622a2b3b3007c7310ef36ac1061a30f6adb","LICENSE-MIT":"23f18e03dc49df91622fe2a76176497404e46ced8a715d9d2b67a7446571cca3","README.md":"cd955d5d6a49161e6f7a04df4a5963581b66ed43fd5096b2dedca8e295efe4f9","RELEASE_PROCESS.md":"a86cd10fc70f167f8d00e9e4ce0c6b4ebdfa1865058390dffd1e0ad4d3e68d9d","benches/bench.rs":"d67bef1c7f36ed300a8fbcf9d50b9dfdead1fd340bf87a4d47d99a0c1c042c04","src/algo.rs":"932c2bc591d13fe4470185125617b5aaa660a3898f23b553acc85df0bf49dded","src/lib.rs":"4acd41668fe30daffa37084e7e223f268957b816afc1864ffb3f5d6d7adf0890"},"package":"512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627"}

pve-rs/vendor/adler2/CHANGELOG.md vendored Normal file

@@ -0,0 +1,77 @@
# Changelog
All notable changes to this project will be documented in this file.
---
## [2.0.0](https://github.com/Frommi/miniz_oxide/compare/1.0.2..2.0.0) - 2024-08-04
First release of adler2 - fork of adler crate as the original is unmaintained and archived
##### Changes since last version of Adler:
### Bug Fixes
- **(core)** change to rust 2021 edition, update repository info and links, update author info - ([867b115](https://github.com/Frommi/miniz_oxide/commit/867b115bad79bf62098f2acccc81bf53ec5a125d)) - oyvindln
- **(core)** simplify some code and fix benches - ([128fb9c](https://github.com/Frommi/miniz_oxide/commit/128fb9cb6cad5c3a54fb0b6c68549d80b79a1fe0)) - oyvindln
### Changelog of original adler crate
---
## [1.0.2 - 2021-02-26](https://github.com/jonas-schievink/adler/releases/tag/v1.0.2)
- Fix doctest on big-endian systems ([#9]).
[#9]: https://github.com/jonas-schievink/adler/pull/9
## [1.0.1 - 2020-11-08](https://github.com/jonas-schievink/adler/releases/tag/v1.0.1)
### Fixes
- Fix documentation on docs.rs.
## [1.0.0 - 2020-11-08](https://github.com/jonas-schievink/adler/releases/tag/v1.0.0)
### Fixes
- Fix `cargo test --no-default-features` ([#5]).
### Improvements
- Extended and clarified documentation.
- Added more rustdoc examples.
- Extended CI to test the crate with `--no-default-features`.
### Breaking Changes
- `adler32_reader` now takes its generic argument by value instead of as a `&mut`.
- Renamed `adler32_reader` to `adler32`.
## [0.2.3 - 2020-07-11](https://github.com/jonas-schievink/adler/releases/tag/v0.2.3)
- Process 4 Bytes at a time, improving performance by up to 50% ([#2]).
## [0.2.2 - 2020-06-27](https://github.com/jonas-schievink/adler/releases/tag/v0.2.2)
- Bump MSRV to 1.31.0.
## [0.2.1 - 2020-06-27](https://github.com/jonas-schievink/adler/releases/tag/v0.2.1)
- Add a few `#[inline]` annotations to small functions.
- Fix CI badge.
- Allow integration into libstd.
## [0.2.0 - 2020-06-27](https://github.com/jonas-schievink/adler/releases/tag/v0.2.0)
- Support `#![no_std]` when using `default-features = false`.
- Improve performance by around 7x.
- Support Rust 1.8.0.
- Improve API naming.
## [0.1.0 - 2020-06-26](https://github.com/jonas-schievink/adler/releases/tag/v0.1.0)
Initial release.
[#2]: https://github.com/jonas-schievink/adler/pull/2
[#5]: https://github.com/jonas-schievink/adler/pull/5

pve-rs/vendor/adler2/Cargo.toml vendored Normal file

@@ -0,0 +1,97 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2021"
name = "adler2"
version = "2.0.0"
authors = [
"Jonas Schievink <jonasschievink@gmail.com>",
"oyvindln <oyvindln@users.noreply.github.com>",
]
build = false
autobins = false
autoexamples = false
autotests = false
autobenches = false
description = "A simple clean-room implementation of the Adler-32 checksum"
documentation = "https://docs.rs/adler2/"
readme = "README.md"
keywords = [
"checksum",
"integrity",
"hash",
"adler32",
"zlib",
]
categories = ["algorithms"]
license = "0BSD OR MIT OR Apache-2.0"
repository = "https://github.com/oyvindln/adler2"
[package.metadata.docs.rs]
rustdoc-args = ["--cfg=docsrs"]
[package.metadata.release]
no-dev-version = true
pre-release-commit-message = "Release {{version}}"
tag-message = "{{version}}"
[[package.metadata.release.pre-release-replacements]]
file = "CHANGELOG.md"
replace = """
## Unreleased
No changes.
## [{{version}} - {{date}}](https://github.com/jonas-schievink/adler/releases/tag/v{{version}})
"""
search = """
## Unreleased
"""
[[package.metadata.release.pre-release-replacements]]
file = "README.md"
replace = 'adler = "{{version}}"'
search = 'adler = "[a-z0-9\\.-]+"'
[[package.metadata.release.pre-release-replacements]]
file = "src/lib.rs"
replace = "https://docs.rs/adler/{{version}}"
search = 'https://docs.rs/adler/[a-z0-9\.-]+'
[lib]
name = "adler2"
path = "src/lib.rs"
[[bench]]
name = "bench"
path = "benches/bench.rs"
harness = false
[dependencies.compiler_builtins]
version = "0.1.2"
optional = true
[dependencies.core]
version = "1.0.0"
optional = true
package = "rustc-std-workspace-core"
[dev-dependencies.criterion]
version = "0.3.2"
[features]
default = ["std"]
rustc-dep-of-std = [
"core",
"compiler_builtins",
]
std = []

pve-rs/vendor/adler2/LICENSE-0BSD vendored Normal file

@@ -0,0 +1,12 @@
Copyright (C) Jonas Schievink <jonasschievink@gmail.com>
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

pve-rs/vendor/adler2/LICENSE-APACHE vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/LICENSE-2.0
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

pve-rs/vendor/adler2/LICENSE-MIT vendored Normal file

@@ -0,0 +1,23 @@
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

pve-rs/vendor/adler2/README.md vendored Normal file

@@ -0,0 +1,46 @@
# Adler-32 checksums for Rust
This is a fork of the adler crate, as the [original](https://github.com/jonas-schievink/adler) has been archived and is no longer updated by its author.
[![crates.io](https://img.shields.io/crates/v/adler.svg)](https://crates.io/crates/adler)
[![docs.rs](https://docs.rs/adler/badge.svg)](https://docs.rs/adler/)
![CI](https://github.com/jonas-schievink/adler/workflows/CI/badge.svg)
This crate provides a simple implementation of the Adler-32 checksum, used in
the zlib compression format.
Please refer to the [changelog](CHANGELOG.md) to see what changed in the last
releases.
## Features
- Permissively licensed (0BSD) clean-room implementation.
- Zero dependencies.
- Zero `unsafe`.
- Decent performance (3-4 GB/s) (see note).
- Supports `#![no_std]` (with `default-features = false`).
## Usage
Add an entry to your `Cargo.toml`:
```toml
[dependencies]
adler2 = "2.0.0"
```
Check the [API Documentation](https://docs.rs/adler2/) for how to use the
crate's functionality.
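A quick, illustrative sketch of the API (the functions are shown in `src/lib.rs` below; the expected value is the Wikipedia example checksum from the crate's own tests):

```rust
use adler2::{adler32_slice, Adler32};

fn main() {
    // One-shot checksum of a byte slice.
    assert_eq!(adler32_slice(b"Wikipedia"), 0x11E60398);

    // Incremental checksumming over several writes yields the same result.
    let mut adler = Adler32::new();
    adler.write_slice(b"Wiki");
    adler.write_slice(b"pedia");
    assert_eq!(adler.checksum(), 0x11E60398);
}
```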
## Rust version support
Currently, this crate supports all Rust versions starting at Rust 1.56.0.
Bumping the Minimum Supported Rust Version (MSRV) is *not* considered a breaking
change, but will not be done without good reasons. The latest 3 stable Rust
versions will always be supported no matter what.
## Performance
Due to the way the algorithm works and the fact that it's currently not possible to use explicit SIMD in safe Rust, this crate benefits drastically from being compiled with newer CPU instructions enabled (using e.g. ```RUSTFLAGS=-C target-feature=+sse4.1``` or ```-C target-cpu=x86-64-v2```/```-C target-cpu=x86-64-v3``` arguments, depending on what CPU support is being targeted).
Judging by the crate benchmarks on a Ryzen 5600, compiling with SSE 4.1 enabled (part of the x86-64-v2 feature level) can give a ~50-150% speedup, and enabling the LZCNT instruction (part of the x86-64-v3 feature level) can give a further ~50% speedup.

pve-rs/vendor/adler2/RELEASE_PROCESS.md vendored Normal file

@@ -0,0 +1,13 @@
# What to do to publish a new release
1. Ensure all notable changes are in the changelog under "Unreleased".
2. Execute `cargo release <level>` to bump version(s), tag and publish
everything. External subcommand, must be installed with `cargo install
cargo-release`.
`<level>` can be one of `major|minor|patch`. If this is the first release
(`0.1.0`), use `minor`, since the version starts out as `0.0.0`.
3. Go to the GitHub releases, edit the just-pushed tag. Copy the release notes
from the changelog.

pve-rs/vendor/adler2/benches/bench.rs vendored Normal file

@@ -0,0 +1,109 @@
extern crate adler2;
extern crate criterion;
use adler2::{adler32_slice, Adler32};
use criterion::{criterion_group, criterion_main, Criterion, Throughput};
fn simple(c: &mut Criterion) {
{
const SIZE: usize = 100;
let mut group = c.benchmark_group("simple-100b");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("zeroes-100", |bencher| {
bencher.iter(|| {
adler32_slice(&[0; SIZE]);
});
});
group.bench_function("ones-100", |bencher| {
bencher.iter(|| {
adler32_slice(&[0xff; SIZE]);
});
});
}
{
const SIZE: usize = 1024;
let mut group = c.benchmark_group("simple-1k");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("zeroes-1k", |bencher| {
bencher.iter(|| {
adler32_slice(&[0; SIZE]);
});
});
group.bench_function("ones-1k", |bencher| {
bencher.iter(|| {
adler32_slice(&[0xff; SIZE]);
});
});
}
{
const SIZE: usize = 1024 * 1024;
let mut group = c.benchmark_group("simple-1m");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("zeroes-1m", |bencher| {
bencher.iter(|| {
adler32_slice(&[0; SIZE]);
});
});
group.bench_function("ones-1m", |bencher| {
bencher.iter(|| {
adler32_slice(&[0xff; SIZE]);
});
});
}
}
fn chunked(c: &mut Criterion) {
const SIZE: usize = 16 * 1024 * 1024;
let data = vec![0xAB; SIZE];
let mut group = c.benchmark_group("chunked-16m");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("5552", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(5552) {
h.write_slice(chunk);
}
h.checksum()
});
});
group.bench_function("8k", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(8 * 1024) {
h.write_slice(chunk);
}
h.checksum()
});
});
group.bench_function("64k", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(64 * 1024) {
h.write_slice(chunk);
}
h.checksum()
});
});
group.bench_function("1m", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(1024 * 1024) {
h.write_slice(chunk);
}
h.checksum()
});
});
}
criterion_group!(benches, simple, chunked);
criterion_main!(benches);

pve-rs/vendor/adler2/src/algo.rs vendored Normal file

@@ -0,0 +1,155 @@
use crate::Adler32;
use std::ops::{AddAssign, MulAssign, RemAssign};
impl Adler32 {
pub(crate) fn compute(&mut self, bytes: &[u8]) {
// The basic algorithm is, for every byte:
// a = (a + byte) % MOD
// b = (b + a) % MOD
// where MOD = 65521.
//
// For efficiency, we can defer the `% MOD` operations as long as neither a nor b overflows:
// - Between calls to `write`, we ensure that a and b are always in range 0..MOD.
// - We use 32-bit arithmetic in this function.
// - Therefore, a and b must not increase by more than 2^32-MOD without performing a `% MOD`
// operation.
//
// According to Wikipedia, b is calculated as follows for non-incremental checksumming:
// b = n×D1 + (n−1)×D2 + (n−2)×D3 + ... + Dn + n*1 (mod 65521)
// Where n is the number of bytes and Di is the i-th Byte. We need to change this to account
// for the previous values of a and b, as well as treat every input Byte as being 255:
// b_inc = n×255 + (n-1)×255 + ... + 255 + n*65520
// Or in other words:
// b_inc = n*65520 + n(n+1)/2*255
// The max chunk size is thus the largest value of n so that b_inc <= 2^32-65521.
// 2^32-65521 = n*65520 + n(n+1)/2*255
// Plugging this into an equation solver since I can't math gives n = 5552.18..., so 5552.
//
// On top of the optimization outlined above, the algorithm can also be parallelized with a
// bit more work:
//
// Note that b is a linear combination of a vector of input bytes (D1, ..., Dn).
//
// If we fix some value k<N and rewrite indices 1, ..., N as
//
// 1_1, 1_2, ..., 1_k, 2_1, ..., 2_k, ..., (N/k)_k,
//
// then we can express a and b in terms of sums of smaller sequences kb and ka:
//
// ka(j) := D1_j + D2_j + ... + D(N/k)_j where j <= k
// kb(j) := (N/k)*D1_j + (N/k-1)*D2_j + ... + D(N/k)_j where j <= k
//
// a = ka(1) + ka(2) + ... + ka(k) + 1
// b = k*(kb(1) + kb(2) + ... + kb(k)) - 1*ka(2) - ... - (k-1)*ka(k) + N
//
// We use this insight to unroll the main loop and process k=4 bytes at a time.
// The resulting code is highly amenable to SIMD acceleration, although the immediate speedups
// stem from increased pipeline parallelism rather than auto-vectorization.
//
// This technique is described in-depth
// [here](https://software.intel.com/content/www/us/en/develop/articles/fast-computation-of-fletcher-checksums.html).
const MOD: u32 = 65521;
const CHUNK_SIZE: usize = 5552 * 4;
let mut a = u32::from(self.a);
let mut b = u32::from(self.b);
let mut a_vec = U32X4([0; 4]);
let mut b_vec = a_vec;
let (bytes, remainder) = bytes.split_at(bytes.len() - bytes.len() % 4);
// iterate over 4 bytes at a time
let chunk_iter = bytes.chunks_exact(CHUNK_SIZE);
let remainder_chunk = chunk_iter.remainder();
for chunk in chunk_iter {
for byte_vec in chunk.chunks_exact(4) {
let val = U32X4::from(byte_vec);
a_vec += val;
b_vec += a_vec;
}
b += CHUNK_SIZE as u32 * a;
a_vec %= MOD;
b_vec %= MOD;
b %= MOD;
}
// special-case the final chunk because it may be shorter than the rest
for byte_vec in remainder_chunk.chunks_exact(4) {
let val = U32X4::from(byte_vec);
a_vec += val;
b_vec += a_vec;
}
b += remainder_chunk.len() as u32 * a;
a_vec %= MOD;
b_vec %= MOD;
b %= MOD;
// combine the sub-sum results into the main sum
b_vec *= 4;
b_vec.0[1] += MOD - a_vec.0[1];
b_vec.0[2] += (MOD - a_vec.0[2]) * 2;
b_vec.0[3] += (MOD - a_vec.0[3]) * 3;
for &av in a_vec.0.iter() {
a += av;
}
for &bv in b_vec.0.iter() {
b += bv;
}
// iterate over the remaining few bytes in serial
for &byte in remainder.iter() {
a += u32::from(byte);
b += a;
}
self.a = (a % MOD) as u16;
self.b = (b % MOD) as u16;
}
}
#[derive(Copy, Clone)]
struct U32X4([u32; 4]);
impl U32X4 {
#[inline]
fn from(bytes: &[u8]) -> Self {
U32X4([
u32::from(bytes[0]),
u32::from(bytes[1]),
u32::from(bytes[2]),
u32::from(bytes[3]),
])
}
}
impl AddAssign<Self> for U32X4 {
#[inline]
fn add_assign(&mut self, other: Self) {
// Implement this in a primitive manner to help out the compiler a bit.
self.0[0] += other.0[0];
self.0[1] += other.0[1];
self.0[2] += other.0[2];
self.0[3] += other.0[3];
}
}
impl RemAssign<u32> for U32X4 {
#[inline]
fn rem_assign(&mut self, quotient: u32) {
self.0[0] %= quotient;
self.0[1] %= quotient;
self.0[2] %= quotient;
self.0[3] %= quotient;
}
}
impl MulAssign<u32> for U32X4 {
#[inline]
fn mul_assign(&mut self, rhs: u32) {
self.0[0] *= rhs;
self.0[1] *= rhs;
self.0[2] *= rhs;
self.0[3] *= rhs;
}
}
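
To make the `CHUNK_SIZE` derivation in the comment above concrete: between `% MOD` reductions, `b` may grow by at most `2^32 - 65521`, and `n` worst-case (all-`0xff`) bytes add at most `n*65520 + n(n+1)/2 * 255`. Below is a quick sketch (illustrative, not part of the crate) confirming that `n = 5552` is the largest count satisfying the bound, which is why the code uses `5552 * 4` bytes per chunk across the four lanes.

```rust
// Check the chunk-size bound derived in the comment in algo.rs above:
// find the largest n with n*65520 + n*(n+1)/2 * 255 <= 2^32 - 65521.
fn worst_case_growth(n: u64) -> u64 {
    n * 65520 + n * (n + 1) / 2 * 255
}

fn main() {
    const LIMIT: u64 = (1u64 << 32) - 65521;
    assert!(worst_case_growth(5552) <= LIMIT); // n = 5552 still fits...
    assert!(worst_case_growth(5553) > LIMIT); // ...but n = 5553 exceeds the bound.
    println!("max n = 5552, CHUNK_SIZE = {}", 5552 * 4);
}
```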

pve-rs/vendor/adler2/src/lib.rs vendored Normal file

@@ -0,0 +1,287 @@
//! Adler-32 checksum implementation.
//!
//! This implementation features:
//!
//! - Permissively licensed (0BSD) clean-room implementation.
//! - Zero dependencies.
//! - Zero `unsafe`.
//! - Decent performance (3-4 GB/s).
//! - `#![no_std]` support (with `default-features = false`).
#![doc(html_root_url = "https://docs.rs/adler2/2.0.0")]
// Deny a few warnings in doctests, since rustdoc `allow`s many warnings by default
#![doc(test(attr(deny(unused_imports, unused_must_use))))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![warn(missing_debug_implementations)]
#![forbid(unsafe_code)]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
extern crate core as std;
mod algo;
use std::hash::Hasher;
#[cfg(feature = "std")]
use std::io::{self, BufRead};
/// Adler-32 checksum calculator.
///
/// An instance of this type is equivalent to an Adler-32 checksum: It can be created in the default
/// state via [`new`] (or the provided `Default` impl), or from a precalculated checksum via
/// [`from_checksum`], and the currently stored checksum can be fetched via [`checksum`].
///
/// This type also implements `Hasher`, which makes it easy to calculate Adler-32 checksums of any
/// type that implements or derives `Hash`. This also allows using Adler-32 in a `HashMap`, although
/// that is not recommended (while every checksum is a hash function, they are not necessarily a
/// good one).
///
/// # Examples
///
/// Basic, piecewise checksum calculation:
///
/// ```
/// use adler2::Adler32;
///
/// let mut adler = Adler32::new();
///
/// adler.write_slice(&[0, 1, 2]);
/// adler.write_slice(&[3, 4, 5]);
///
/// assert_eq!(adler.checksum(), 0x00290010);
/// ```
///
/// Using `Hash` to process structures:
///
/// ```
/// use std::hash::Hash;
/// use adler2::Adler32;
///
/// #[derive(Hash)]
/// struct Data {
/// byte: u8,
/// word: u16,
/// big: u64,
/// }
///
/// let mut adler = Adler32::new();
///
/// let data = Data { byte: 0x1F, word: 0xABCD, big: !0 };
/// data.hash(&mut adler);
///
/// // hash value depends on architecture endianness
/// if cfg!(target_endian = "little") {
/// assert_eq!(adler.checksum(), 0x33410990);
/// }
/// if cfg!(target_endian = "big") {
/// assert_eq!(adler.checksum(), 0x331F0990);
/// }
///
/// ```
///
/// [`new`]: #method.new
/// [`from_checksum`]: #method.from_checksum
/// [`checksum`]: #method.checksum
#[derive(Debug, Copy, Clone)]
pub struct Adler32 {
a: u16,
b: u16,
}
impl Adler32 {
/// Creates a new Adler-32 instance with default state.
#[inline]
pub fn new() -> Self {
Self::default()
}
/// Creates an `Adler32` instance from a precomputed Adler-32 checksum.
///
/// This allows resuming checksum calculation without having to keep the `Adler32` instance
/// around.
///
/// # Example
///
/// ```
/// # use adler2::Adler32;
/// let parts = [
/// "rust",
/// "acean",
/// ];
/// let whole = adler2::adler32_slice(b"rustacean");
///
/// let mut sum = Adler32::new();
/// sum.write_slice(parts[0].as_bytes());
/// let partial = sum.checksum();
///
/// // ...later
///
/// let mut sum = Adler32::from_checksum(partial);
/// sum.write_slice(parts[1].as_bytes());
/// assert_eq!(sum.checksum(), whole);
/// ```
#[inline]
pub const fn from_checksum(sum: u32) -> Self {
Adler32 {
a: sum as u16,
b: (sum >> 16) as u16,
}
}
/// Returns the calculated checksum at this point in time.
#[inline]
pub fn checksum(&self) -> u32 {
(u32::from(self.b) << 16) | u32::from(self.a)
}
/// Adds `bytes` to the checksum calculation.
///
/// If efficiency matters, this should be called with Byte slices that contain at least a few
/// thousand Bytes.
pub fn write_slice(&mut self, bytes: &[u8]) {
self.compute(bytes);
}
}
impl Default for Adler32 {
#[inline]
fn default() -> Self {
Adler32 { a: 1, b: 0 }
}
}
impl Hasher for Adler32 {
#[inline]
fn finish(&self) -> u64 {
u64::from(self.checksum())
}
fn write(&mut self, bytes: &[u8]) {
self.write_slice(bytes);
}
}
/// Calculates the Adler-32 checksum of a byte slice.
///
/// This is a convenience function around the [`Adler32`] type.
///
/// [`Adler32`]: struct.Adler32.html
pub fn adler32_slice(data: &[u8]) -> u32 {
let mut h = Adler32::new();
h.write_slice(data);
h.checksum()
}
/// Calculates the Adler-32 checksum of a `BufRead`'s contents.
///
/// The passed `BufRead` implementor will be read until it reaches EOF (or until it reports an
/// error).
///
/// If you only have a `Read` implementor, you can wrap it in `std::io::BufReader` before calling
/// this function.
///
/// # Errors
///
/// Any errors returned by the reader are bubbled up by this function.
///
/// # Examples
///
/// ```no_run
/// # fn run() -> Result<(), Box<dyn std::error::Error>> {
/// use adler2::adler32;
///
/// use std::fs::File;
/// use std::io::BufReader;
///
/// let file = File::open("input.txt")?;
/// let mut file = BufReader::new(file);
///
/// adler32(&mut file)?;
/// # Ok(()) }
/// # fn main() { run().unwrap() }
/// ```
#[cfg(feature = "std")]
#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
pub fn adler32<R: BufRead>(mut reader: R) -> io::Result<u32> {
let mut h = Adler32::new();
loop {
let len = {
let buf = reader.fill_buf()?;
if buf.is_empty() {
return Ok(h.checksum());
}
h.write_slice(buf);
buf.len()
};
reader.consume(len);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn zeroes() {
assert_eq!(adler32_slice(&[]), 1);
assert_eq!(adler32_slice(&[0]), 1 | 1 << 16);
assert_eq!(adler32_slice(&[0, 0]), 1 | 2 << 16);
assert_eq!(adler32_slice(&[0; 100]), 0x00640001);
assert_eq!(adler32_slice(&[0; 1024]), 0x04000001);
assert_eq!(adler32_slice(&[0; 1024 * 1024]), 0x00f00001);
}
#[test]
fn ones() {
assert_eq!(adler32_slice(&[0xff; 1024]), 0x79a6fc2e);
assert_eq!(adler32_slice(&[0xff; 1024 * 1024]), 0x8e88ef11);
}
#[test]
fn mixed() {
assert_eq!(adler32_slice(&[1]), 2 | 2 << 16);
assert_eq!(adler32_slice(&[40]), 41 | 41 << 16);
assert_eq!(adler32_slice(&[0xA5; 1024 * 1024]), 0xd5009ab1);
}
/// Example calculation from https://en.wikipedia.org/wiki/Adler-32.
#[test]
fn wiki() {
assert_eq!(adler32_slice(b"Wikipedia"), 0x11E60398);
}
#[test]
fn resume() {
let mut adler = Adler32::new();
adler.write_slice(&[0xff; 1024]);
let partial = adler.checksum();
assert_eq!(partial, 0x79a6fc2e); // from above
adler.write_slice(&[0xff; 1024 * 1024 - 1024]);
assert_eq!(adler.checksum(), 0x8e88ef11); // from above
// Make sure that we can resume computing from the partial checksum via `from_checksum`.
let mut adler = Adler32::from_checksum(partial);
adler.write_slice(&[0xff; 1024 * 1024 - 1024]);
assert_eq!(adler.checksum(), 0x8e88ef11); // from above
}
#[cfg(feature = "std")]
#[test]
fn bufread() {
use std::io::BufReader;
fn test(data: &[u8], checksum: u32) {
// `BufReader` uses an 8 KB buffer, so this will test buffer refilling.
let mut buf = BufReader::new(data);
let real_sum = adler32(&mut buf).unwrap();
assert_eq!(checksum, real_sum);
}
test(&[], 1);
test(&[0; 1024], 0x04000001);
test(&[0; 1024 * 1024], 0x00f00001);
test(&[0xA5; 1024 * 1024], 0xd5009ab1);
}
}

pve-rs/vendor/ahash/.cargo-checksum.json vendored Normal file

@@ -0,0 +1 @@
{"files":{"Cargo.toml":"ddcbd9309cebf3ffd26f87e09bb8f971793535955ebfd9a7196eba31a53471f8","FAQ.md":"9eb41898523ee209a0a937f9bcb78afe45ad55ca0556f8a4d4063558098f6d1e","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"0444c6991eead6822f7b9102e654448d51624431119546492e8b231db42c48bb","README.md":"d7f74d616a751bcca23d5d3b58a6daf556356a526c5f0b6aa0504715d176549a","build.rs":"23cbf4cf1b742e2c4da8bc58d06d1d021479dec80cec6a0bc3704c7172e2864a","rustfmt.toml":"e090969e99df9360705680cc0097cfaddae10c22dc2e01470592cf3b9787fd36","src/aes_hash.rs":"013602aec42150e59ba9ed6135525a624a4b42c1b1328b9857ec238aa12c3178","src/convert.rs":"54e49f93d51665366923d4d815cfd67790d3c769e84ab4386ba97f928d17d1bd","src/fallback_hash.rs":"a82451f6458a6e7a7e7da82a3c982e9bb825a2092ab79c41459d8011775fb0b1","src/hash_map.rs":"5ee97baa64fa528ba9c01bd018332c4974846c4813c6f8c30cee9f3546598f1c","src/hash_quality_test.rs":"1a560a181a804791bc6ad797df5352cdd87123fed7f19f659de0c2d883248bed","src/hash_set.rs":"360e55d066b44624f06e49efa140c03fda635fb17a59622cc29a83830bd1f263","src/lib.rs":"e2f4e7bfcf2807c73e3b8d3b1bd83c6789313b6b55edd59e15e04146e55e01b6","src/operations.rs":"38ed2b48a13d826c48ede5f304c9c2572c0c8f64ac8ac5a1ed4e112e536f3a97","src/random_state.rs":"03f40a654cfca2e00a2dabd21c85368ee50b8b6289efe98ea1745b25c721b9c6","src/specialize.rs":"56354db8a0f7e6ee1340a08f2ab6f79a0ff439fd61badac5e7e59fe4f4a653ba","tests/bench.rs":"7a425f564201560f9a8fb6c77f91f29bb88ec815b10bd27d15740c922a4f928e","tests/map_tests.rs":"e56b6f700e3b1176210e4b266d7a42b3263e966e5e565d53b1bc27af7a87168e","tests/nopanic.rs":"0d28a46248d77283941db1d9fd154c68b965c81a0e3db1fe4a43e06fc448da8f"},"package":"e89da841a80418a9b391ebaea17f5c112ffaaa96f621d2c285b5174da76b9011"}

pve-rs/vendor/ahash/Cargo.toml vendored Normal file

@@ -0,0 +1,167 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
rust-version = "1.60.0"
name = "ahash"
version = "0.8.11"
authors = ["Tom Kaitchuck <Tom.Kaitchuck@gmail.com>"]
build = "./build.rs"
exclude = [
"/smhasher",
"/benchmark_tools",
]
description = "A non-cryptographic hash function using AES-NI for high performance"
documentation = "https://docs.rs/ahash"
readme = "README.md"
keywords = [
"hash",
"hasher",
"hashmap",
"aes",
"no-std",
]
categories = [
"algorithms",
"data-structures",
"no-std",
]
license = "MIT OR Apache-2.0"
repository = "https://github.com/tkaitchuck/ahash"
[package.metadata.docs.rs]
features = ["std"]
rustc-args = [
"-C",
"target-feature=+aes",
]
rustdoc-args = [
"-C",
"target-feature=+aes",
]
[profile.bench]
opt-level = 3
lto = "fat"
codegen-units = 1
debug = 0
debug-assertions = false
[profile.release]
opt-level = 3
lto = "fat"
codegen-units = 1
debug = 0
debug-assertions = false
[profile.test]
opt-level = 2
lto = "fat"
[lib]
name = "ahash"
path = "src/lib.rs"
test = true
doctest = true
bench = true
doc = true
[[bench]]
name = "ahash"
path = "tests/bench.rs"
harness = false
[[bench]]
name = "map"
path = "tests/map_tests.rs"
harness = false
[dependencies.atomic-polyfill]
version = "1.0.1"
optional = true
[dependencies.cfg-if]
version = "1.0"
[dependencies.const-random]
version = "0.1.17"
optional = true
[dependencies.getrandom]
version = "0.2.7"
optional = true
[dependencies.serde]
version = "1.0.117"
optional = true
[dependencies.zerocopy]
version = "0.7.31"
features = ["simd"]
default-features = false
[dev-dependencies.criterion]
version = "0.3.2"
features = ["html_reports"]
[dev-dependencies.fnv]
version = "1.0.5"
[dev-dependencies.fxhash]
version = "0.2.1"
[dev-dependencies.hashbrown]
version = "0.14.3"
[dev-dependencies.hex]
version = "0.4.2"
[dev-dependencies.no-panic]
version = "0.1.10"
[dev-dependencies.pcg-mwc]
version = "0.2.1"
[dev-dependencies.rand]
version = "0.8.5"
[dev-dependencies.seahash]
version = "4.0"
[dev-dependencies.serde_json]
version = "1.0.59"
[dev-dependencies.smallvec]
version = "1.13.1"
[build-dependencies.version_check]
version = "0.9.4"
[features]
atomic-polyfill = [
"dep:atomic-polyfill",
"once_cell/atomic-polyfill",
]
compile-time-rng = ["const-random"]
default = [
"std",
"runtime-rng",
]
nightly-arm-aes = []
no-rng = []
runtime-rng = ["getrandom"]
std = []
[target."cfg(not(all(target_arch = \"arm\", target_os = \"none\")))".dependencies.once_cell]
version = "1.18.0"
features = ["alloc"]
default-features = false

pve-rs/vendor/ahash/FAQ.md vendored Normal file

@@ -0,0 +1,118 @@
## How does aHash prevent DOS attacks
AHash is designed to [prevent an adversary that does not know the key from being able to create hash collisions or partial collisions.](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks)
If you are a cryptographer and would like to help review aHash's algorithm, please post a comment [here](https://github.com/tkaitchuck/aHash/issues/11).
In short, this is achieved by ensuring that:
* aHash is designed to [resist differential cryptanalysis](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks#differential-analysis), meaning it should not be possible to devise a scheme to "cancel" out a modification of the internal state from a block of input via some corresponding change in a subsequent block of input.
* This is achieved by not performing any "premixing": this reversible mixing gave previous hashes such as murmurhash confidence in their quality, but could be undone by a deliberate attack.
* Before it is used each chunk of input is "masked" such as by xoring it with an unpredictable value.
* aHash obeys the '[strict avalanche criterion](https://en.wikipedia.org/wiki/Avalanche_effect#Strict_avalanche_criterion)':
Each bit of input has the potential to flip every bit of the output.
* Similarly, each bit in the key can affect every bit in the output.
* Input bits never affect just one, or a very few, bits in intermediate state. This is specifically designed to prevent the sort of
[differential attacks launched by the sipHash authors](https://emboss.github.io/blog/2012/12/14/breaking-murmur-hash-flooding-dos-reloaded/) which cancel previous inputs.
* The `finish` call at the end of the hash is designed to not expose individual bits of the internal state.
* For example, in the main algorithm, 256 bits of state and 256 bits of keys are reduced to 64 total bits using 3 rounds of AES encryption.
Reversing this is more than non-trivial. Most of the information is by definition gone, and any given bit of the internal state is fully diffused across the output.
* In both aHash and its fallback the internal state is divided into two halves which are updated by two unrelated techniques using the same input. - This means that if there is a way to attack one of them it likely won't be able to attack both of them at the same time.
* It is deliberately difficult to 'chain' collisions. (This has been the major technique used to weaponize attacks on other hash functions)
More details are available on [the wiki](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks).
## Why not use a cryptographic hash in a hashmap
Cryptographic hashes are designed to make it nearly impossible to find two items that collide when the attacker has full control
over the input. This has several implications:
* They are very difficult to construct, and have to go to a lot of effort to ensure that collisions are not possible.
* They have no notion of a 'key'. Rather, they are fully deterministic and provide exactly one hash for a given input.
For a HashMap the requirements are different.
* Speed is very important, especially for short inputs. Often the key for a HashMap is a single `u32` or similar, and to be effective
the bucket that it should be hashed to needs to be computed in just a few CPU cycles.
* A hashmap does not need to provide a hard and fast guarantee that no two inputs will ever collide. Hence, hash codes are not 256 bits
but just 64 or 32 bits in length. Often the first thing done with the hash code is to truncate it further to compute which among a few buckets should be used for a key.
* Here collisions are expected and cheap to deal with, provided there is no systematic way to generate huge numbers of values that all
go to the same bucket.
* This also means that, unlike with a cryptographic hash, partial collisions matter. It doesn't do a hashmap any good to produce a unique 256-bit hash if
the lower 12 bits are all the same. This means that even a provably irreversible hash would not offer protection from a DOS attack in a hashmap,
because an attacker can easily just brute force the bottom N bits.
From a cryptography point of view, a hashmap needs something closer to a block cypher.
Where the input can be quickly mixed in a way that cannot be reversed without knowing a key.
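As a concrete sketch of the truncation point above (the hash values below are made up for illustration and are not aHash output): a hashmap with a power-of-two capacity typically keeps only the low bits of a hash, so two hashes differing only in their high bits still collide.

```rust
fn main() {
    // A map with 16 buckets keeps only the low 4 bits of each hash.
    let buckets: u64 = 16;
    let mask = buckets - 1;

    // Two very different 64-bit hashes (made-up values)...
    let h1: u64 = 0xDEAD_BEEF_0000_000A;
    let h2: u64 = 0x1234_5678_0000_000A;

    // ...land in the same bucket once truncated to an index.
    assert_eq!(h1 & mask, h2 & mask);
}
```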
## Why isn't aHash cryptographically secure
It is not designed to be.
Attempting to use aHash as a secure hash will likely fail to hold up for several reasons:
1. aHash relies on random keys which are assumed to not be observable by an attacker. For a cryptographic hash all inputs can be seen and controlled by the attacker.
2. aHash has not yet gone through peer review, which is a pre-requisite for security critical algorithms.
3. aHash uses reduced rounds of AES, as opposed to the standard of 10, so things like the SQUARE attack apply to part of the internal state.
(These are mitigated by other means to prevent producing collisions, but would be a problem in other contexts.)
4. Like any cypher based hash, it will show certain statistical deviations from truly random output when comparing a (VERY) large number of hashes.
(By definition cyphers have fewer collisions than truly random data.)
There are efforts to build a secure hash function that uses AES-NI for acceleration, but aHash is not one of them.
## How is aHash so fast
AHash uses a number of tricks.
One trick is taking advantage of specialization. If aHash is compiled on nightly it will take
advantage of specialized hash implementations for strings, slices, and primitives.
Another is taking advantage of hardware instructions.
When it is available, aHash uses AES rounds via the AES-NI instruction. AES-NI is very fast (on an Intel i7-6700 it
is as fast as a 64-bit multiplication) and handles 16 bytes of input at a time, while being a very strong permutation.
This is obviously much faster than most standard approaches to hashing, and does a better job of scrambling data than most non-secure hashes.
On an intel i7-6700 compiled on nightly Rust with flags `-C opt-level=3 -C target-cpu=native -C codegen-units=1`:
| Input | SipHash 1-3 time | FnvHash time|FxHash time| aHash time| aHash Fallback* |
|----------------|-----------|-----------|-----------|-----------|---------------|
| u8 | 9.3271 ns | 0.808 ns | **0.594 ns** | 0.7704 ns | 0.7664 ns |
| u16 | 9.5139 ns | 0.803 ns | **0.594 ns** | 0.7653 ns | 0.7704 ns |
| u32 | 9.1196 ns | 1.4424 ns | **0.594 ns** | 0.7637 ns | 0.7712 ns |
| u64 | 10.854 ns | 3.0484 ns | **0.628 ns** | 0.7788 ns | 0.7888 ns |
| u128 | 12.465 ns | 7.0728 ns | 0.799 ns | **0.6174 ns** | 0.6250 ns |
| 1 byte string | 11.745 ns | 2.4743 ns | 2.4000 ns | **1.4921 ns** | 1.5861 ns |
| 3 byte string | 12.066 ns | 3.5221 ns | 2.9253 ns | **1.4745 ns** | 1.8518 ns |
| 4 byte string | 11.634 ns | 4.0770 ns | 1.8818 ns | **1.5206 ns** | 1.8924 ns |
| 7 byte string | 14.762 ns | 5.9780 ns | 3.2282 ns | **1.5207 ns** | 1.8933 ns |
| 8 byte string | 13.442 ns | 4.0535 ns | 2.9422 ns | **1.6262 ns** | 1.8929 ns |
| 15 byte string | 16.880 ns | 8.3434 ns | 4.6070 ns | **1.6265 ns** | 1.7965 ns |
| 16 byte string | 15.155 ns | 7.5796 ns | 3.2619 ns | **1.6262 ns** | 1.8011 ns |
| 24 byte string | 16.521 ns | 12.492 ns | 3.5424 ns | **1.6266 ns** | 2.8311 ns |
| 68 byte string | 24.598 ns | 50.715 ns | 5.8312 ns | **4.8282 ns** | 5.4824 ns |
| 132 byte string| 39.224 ns | 119.96 ns | 11.777 ns | **6.5087 ns** | 9.1459 ns |
|1024 byte string| 254.00 ns | 1087.3 ns | 156.41 ns | **25.402 ns** | 54.566 ns |
* Fallback refers to the algorithm aHash would use if AES instructions are unavailable.
For reference, a hash that does nothing (it does not even read the input data) takes **0.520 ns**, so that represents the fastest
possible time.
As you can see above, aHash, like `FxHash`, provides a large speedup over `SipHash-1-3`, which is already nearly twice as fast as `SipHash-2-4`.
Rust's HashMap by default uses `SipHash-1-3` because faster hash functions such as `FxHash` are predictable and vulnerable to denial of
service attacks, while `aHash` has both very strong scrambling and very high performance.
AHash performs well when dealing with large inputs because it reads 8 or 16 bytes at a time (depending on the availability of AES-NI).
Because of this, and its optimized logic, `aHash` is able to outperform `FxHash` with strings.
It also provides especially good performance dealing with unaligned input.
(Notice the big performance gaps between 3 vs 4, 7 vs 8 and 15 vs 16 in `FxHash` above)
### Which CPUs can use the hardware acceleration
Hardware AES instructions are built into Intel processors built after 2010 and AMD processors after 2012.
It is also available on [many other CPUs](https://en.wikipedia.org/wiki/AES_instruction_set), which should eventually
be able to run aHash as well. However, x86 and x86-64 are the only supported architectures at the moment, as currently
they are the only architectures for which Rust provides an intrinsic.
aHash also uses `sse2` and `sse3` instructions. X86 processors that have `aesni` also have these instruction sets.
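To tie the FAQ above to code, here is a minimal usage sketch of the vendored crate, assuming the `ahash` 0.8 public API with default features (`AHashMap` is the crate's drop-in `HashMap` wrapper and `RandomState` its `BuildHasher`):

```rust
use std::collections::HashMap;

use ahash::{AHashMap, RandomState};

fn main() {
    // Drop-in HashMap wrapper that uses aHash with randomly generated keys.
    let mut counts: AHashMap<&str, u32> = AHashMap::new();
    *counts.entry("pve").or_insert(0) += 1;

    // Alternatively, plug aHash into std's HashMap via its BuildHasher.
    let mut map: HashMap<&str, u32, RandomState> = HashMap::with_hasher(RandomState::new());
    map.insert("adler2", 2);

    assert_eq!(counts["pve"], 1);
    assert_eq!(map["adler2"], 2);
}
```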

pve-rs/vendor/ahash/LICENSE-APACHE vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

25
pve-rs/vendor/ahash/LICENSE-MIT vendored Normal file

@@ -0,0 +1,25 @@
Copyright (c) 2018 Tom Kaitchuck
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

109
pve-rs/vendor/ahash/README.md vendored Normal file

@@ -0,0 +1,109 @@
# aHash ![Build Status](https://img.shields.io/github/actions/workflow/status/tkaitchuck/aHash/rust.yml?branch=master) ![Licence](https://img.shields.io/crates/l/ahash) ![Downloads](https://img.shields.io/crates/d/ahash)
AHash is the [fastest](https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md#Speed),
[DOS resistant hash](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks) currently available in Rust.
AHash is intended *exclusively* for use in in-memory hashmaps.
AHash's output is of [high quality](https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md#Quality) but aHash is **not** a cryptographically secure hash.
## Design
Because AHash is a keyed hash, each map will produce completely different hashes, which cannot be predicted without knowing the keys.
[This prevents DOS attacks where an attacker sends a large number of items whose hashes collide when used as keys in a hashmap.](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks)
It also avoids [accidentally quadratic behavior caused by reading from one map and writing to another.](https://accidentallyquadratic.tumblr.com/post/153545455987/rust-hash-iteration-reinsertion)
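As a small illustration (a sketch using `hash_one` from `std::hash::BuildHasher`, not an aHash-specific API), two independently created `RandomState`s will almost certainly hash the same value differently:

```rust
use ahash::RandomState;
use std::hash::BuildHasher;

let s1 = RandomState::new();
let s2 = RandomState::new();
// Each state is keyed with its own random seeds, so the same input is
// overwhelmingly likely to hash differently under the two states.
assert_ne!(s1.hash_one("example"), s2.hash_one("example"));
```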
## Goals and Non-Goals
AHash does *not* have a fixed standard for its output. This allows it to improve over time. For example,
if any faster algorithm is found, aHash will be updated to incorporate the technique.
Similarly, should any flaw in aHash's DOS resistance be found, aHash will be changed to correct the flaw.
Because it does not have a fixed standard, different computers or computers on different versions of the code will observe different hash values.
As such, aHash is not recommended for use other than in-memory maps. Specifically, aHash is not intended for network use or in applications which persist hashed values.
(In these cases `HighwayHash` would be a better choice)
Additionally, aHash is not intended to be cryptographically secure and should not be used as a MAC, or anywhere which requires a cryptographically secure hash.
(In these cases `SHA-3` would be a better choice)
## Usage
AHash is a drop-in replacement for the default implementation of the `Hasher` trait. To construct a `HashMap` using aHash
as its hasher, do the following:
```rust
use ahash::{AHasher, RandomState};
use std::collections::HashMap;
let mut map: HashMap<i32, i32, RandomState> = HashMap::default();
map.insert(12, 34);
```
For convenience, wrappers called `AHashMap` and `AHashSet` are also provided.
These do the same thing with slightly less typing.
```rust
use ahash::AHashMap;
let mut map: AHashMap<i32, i32> = AHashMap::new();
map.insert(12, 34);
map.insert(56, 78);
```
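`AHashSet` is used the same way; a minimal sketch:

```rust
use ahash::AHashSet;

let mut set: AHashSet<i32> = AHashSet::new();
set.insert(12);
assert!(set.contains(&12));
```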
## Flags
The aHash package has the following flags:
* `std`: This enables features which require the standard library. (On by default) This includes providing the utility classes `AHashMap` and `AHashSet`.
* `serde`: Enables `serde` support for the utility classes `AHashMap` and `AHashSet`.
* `runtime-rng`: Seeds hashers with randomness obtained from the operating system at runtime. (On by default)
This is done using the [getrandom](https://github.com/rust-random/getrandom) crate.
* `compile-time-rng`: For OS targets without access to a random number generator, `compile-time-rng` provides an alternative.
If `getrandom` is unavailable and `compile-time-rng` is enabled, aHash will generate random numbers at compile time and embed them in the binary.
This allows for DOS resistance even if there is no random number generator available at runtime (assuming the compiled binary is not public).
This makes the binary non-deterministic. (If non-determinism is a problem see [constrandom's documentation](https://github.com/tkaitchuck/constrandom#deterministic-builds))
* `nightly-arm-aes`: To use AES instructions on 32-bit ARM, which requires nightly. This is not needed on AArch64.
If both `runtime-rng` and `compile-time-rng` are enabled, `runtime-rng` takes precedence and `compile-time-rng` does nothing.
If neither flag is set, seeds can be supplied by the application. [Multiple APIs](https://docs.rs/ahash/latest/ahash/random_state/struct.RandomState.html)
are available to do this.
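
For example, one of those APIs is `RandomState::with_seeds` (it also appears in this crate's tests later in this diff); a minimal sketch of supplying fixed seeds, which trades DOS resistance for run-to-run determinism:

```rust
use ahash::RandomState;
use std::collections::HashMap;

// Fixed seeds make hashing deterministic across runs, at the cost of the
// DOS resistance that randomly chosen keys provide.
let build = RandomState::with_seeds(1, 2, 3, 4);
let mut map: HashMap<u32, &str, RandomState> = HashMap::with_hasher(build);
map.insert(1, "one");
```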
## Comparison with other hashers
A full comparison with other hashing algorithms can be found [here](https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md)
![Hasher performance](https://docs.google.com/spreadsheets/d/e/2PACX-1vSK7Li2nS-Bur9arAYF9IfT37MP-ohAe1v19lZu5fd9MajI1fSveLAQZyEie4Ea9k5-SWHTff7nL2DW/pubchart?oid=1323618938&format=image)
For a more representative performance comparison which includes the overhead of using a HashMap, see [HashBrown's benchmarks](https://github.com/rust-lang/hashbrown#performance)
as HashBrown now uses aHash as its hasher by default.
## Hash quality
AHash passes the full [SMHasher test suite](https://github.com/rurban/smhasher).
The code to reproduce the result, and the full output [are checked into the repo](https://github.com/tkaitchuck/aHash/tree/master/smhasher).
## Additional FAQ
A separate FAQ document is maintained [here](https://github.com/tkaitchuck/aHash/blob/master/FAQ.md).
If you have questions not covered there, open an issue [here](https://github.com/tkaitchuck/aHash/issues).
## License
Licensed under either of:
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.

20
pve-rs/vendor/ahash/build.rs vendored Normal file

@@ -0,0 +1,20 @@
#![deny(warnings)]
use std::env;
fn main() {
println!("cargo:rerun-if-changed=build.rs");
// Enable the "specialize" cfg when the compiler supports the unstable
// `specialize` feature (detected via the `version_check` crate).
if let Some(true) = version_check::supports_feature("specialize") {
println!("cargo:rustc-cfg=feature=\"specialize\"");
}
// Enable the "folded_multiply" cfg on 64-bit targets, where a wide
// 64x64 -> 128 bit multiply is expected to be cheap.
let arch = env::var("CARGO_CFG_TARGET_ARCH").expect("CARGO_CFG_TARGET_ARCH was not set");
if arch.eq_ignore_ascii_case("x86_64")
|| arch.eq_ignore_ascii_case("aarch64")
|| arch.eq_ignore_ascii_case("mips64")
|| arch.eq_ignore_ascii_case("powerpc64")
|| arch.eq_ignore_ascii_case("riscv64gc")
|| arch.eq_ignore_ascii_case("s390x")
{
println!("cargo:rustc-cfg=feature=\"folded_multiply\"");
}
}

1
pve-rs/vendor/ahash/rustfmt.toml vendored Normal file

@@ -0,0 +1 @@
max_width = 120

433
pve-rs/vendor/ahash/src/aes_hash.rs vendored Normal file

@@ -0,0 +1,433 @@
use crate::convert::*;
use crate::operations::*;
use crate::random_state::PI;
use crate::RandomState;
use core::hash::Hasher;
/// A `Hasher` for hashing an arbitrary stream of bytes.
///
/// Instances of [`AHasher`] represent state that is updated while hashing data.
///
/// Each method updates the internal state based on the new data provided. Once
/// all of the data has been provided, the resulting hash can be obtained by calling
/// `finish()`.
///
/// [Clone] is also provided in case you wish to calculate hashes for two different items that
/// start with the same data.
///
#[derive(Debug, Clone)]
pub struct AHasher {
enc: u128,
sum: u128,
key: u128,
}
impl AHasher {
/// Creates a new hasher keyed to the provided keys.
///
/// Normally hashers are created via `AHasher::default()` for fixed keys, `RandomState::new()` for randomly
/// generated keys, or `RandomState::with_seeds(a,b)` for seeds that are set and can be reused. All of these work at
/// map creation time (and hence don't have any overhead on a per-item basis).
///
/// This method directly creates the hasher instance and performs no transformation on the provided seeds. This may
/// be useful where a HashBuilder is not desired, such as for testing purposes.
///
/// # Example
///
/// ```
/// use std::hash::Hasher;
/// use ahash::AHasher;
///
/// let mut hasher = AHasher::new_with_keys(1234, 5678);
///
/// hasher.write_u32(1989);
/// hasher.write_u8(11);
/// hasher.write_u8(9);
/// hasher.write(b"Huh?");
///
/// println!("Hash is {:x}!", hasher.finish());
/// ```
#[inline]
pub(crate) fn new_with_keys(key1: u128, key2: u128) -> Self {
let pi: [u128; 2] = PI.convert();
let key1 = key1 ^ pi[0];
let key2 = key2 ^ pi[1];
Self {
enc: key1,
sum: key2,
key: key1 ^ key2,
}
}
#[allow(unused)] // False positive
pub(crate) fn test_with_keys(key1: u128, key2: u128) -> Self {
Self {
enc: key1,
sum: key2,
key: key1 ^ key2,
}
}
#[inline]
pub(crate) fn from_random_state(rand_state: &RandomState) -> Self {
let key1 = [rand_state.k0, rand_state.k1].convert();
let key2 = [rand_state.k2, rand_state.k3].convert();
Self {
enc: key1,
sum: key2,
key: key1 ^ key2,
}
}
#[inline(always)]
fn hash_in(&mut self, new_value: u128) {
self.enc = aesdec(self.enc, new_value);
self.sum = shuffle_and_add(self.sum, new_value);
}
#[inline(always)]
fn hash_in_2(&mut self, v1: u128, v2: u128) {
self.enc = aesdec(self.enc, v1);
self.sum = shuffle_and_add(self.sum, v1);
self.enc = aesdec(self.enc, v2);
self.sum = shuffle_and_add(self.sum, v2);
}
#[inline]
#[cfg(feature = "specialize")]
fn short_finish(&self) -> u64 {
let combined = aesenc(self.sum, self.enc);
let result: [u64; 2] = aesdec(combined, combined).convert();
result[0]
}
}
/// Provides [Hasher] methods to hash all of the primitive types.
///
/// [Hasher]: core::hash::Hasher
impl Hasher for AHasher {
#[inline]
fn write_u8(&mut self, i: u8) {
self.write_u64(i as u64);
}
#[inline]
fn write_u16(&mut self, i: u16) {
self.write_u64(i as u64);
}
#[inline]
fn write_u32(&mut self, i: u32) {
self.write_u64(i as u64);
}
#[inline]
fn write_u128(&mut self, i: u128) {
self.hash_in(i);
}
#[inline]
#[cfg(any(
target_pointer_width = "64",
target_pointer_width = "32",
target_pointer_width = "16"
))]
fn write_usize(&mut self, i: usize) {
self.write_u64(i as u64);
}
#[inline]
#[cfg(target_pointer_width = "128")]
fn write_usize(&mut self, i: usize) {
self.write_u128(i as u128);
}
#[inline]
fn write_u64(&mut self, i: u64) {
self.write_u128(i as u128);
}
#[inline]
#[allow(clippy::collapsible_if)]
fn write(&mut self, input: &[u8]) {
let mut data = input;
let length = data.len();
add_in_length(&mut self.enc, length as u64);
//A 'binary search' on sizes reduces the number of comparisons.
if data.len() <= 8 {
let value = read_small(data);
self.hash_in(value.convert());
} else {
if data.len() > 32 {
if data.len() > 64 {
let tail = data.read_last_u128x4();
let mut current: [u128; 4] = [self.key; 4];
current[0] = aesenc(current[0], tail[0]);
current[1] = aesdec(current[1], tail[1]);
current[2] = aesenc(current[2], tail[2]);
current[3] = aesdec(current[3], tail[3]);
let mut sum: [u128; 2] = [self.key, !self.key];
sum[0] = add_by_64s(sum[0].convert(), tail[0].convert()).convert();
sum[1] = add_by_64s(sum[1].convert(), tail[1].convert()).convert();
sum[0] = shuffle_and_add(sum[0], tail[2]);
sum[1] = shuffle_and_add(sum[1], tail[3]);
while data.len() > 64 {
let (blocks, rest) = data.read_u128x4();
current[0] = aesdec(current[0], blocks[0]);
current[1] = aesdec(current[1], blocks[1]);
current[2] = aesdec(current[2], blocks[2]);
current[3] = aesdec(current[3], blocks[3]);
sum[0] = shuffle_and_add(sum[0], blocks[0]);
sum[1] = shuffle_and_add(sum[1], blocks[1]);
sum[0] = shuffle_and_add(sum[0], blocks[2]);
sum[1] = shuffle_and_add(sum[1], blocks[3]);
data = rest;
}
self.hash_in_2(current[0], current[1]);
self.hash_in_2(current[2], current[3]);
self.hash_in_2(sum[0], sum[1]);
} else {
//len 33-64
let (head, _) = data.read_u128x2();
let tail = data.read_last_u128x2();
self.hash_in_2(head[0], head[1]);
self.hash_in_2(tail[0], tail[1]);
}
} else {
if data.len() > 16 {
//len 17-32
self.hash_in_2(data.read_u128().0, data.read_last_u128());
} else {
//len 9-16
let value: [u64; 2] = [data.read_u64().0, data.read_last_u64()];
self.hash_in(value.convert());
}
}
}
}
#[inline]
fn finish(&self) -> u64 {
let combined = aesenc(self.sum, self.enc);
let result: [u64; 2] = aesdec(aesdec(combined, self.key), combined).convert();
result[0]
}
}
#[cfg(feature = "specialize")]
pub(crate) struct AHasherU64 {
pub(crate) buffer: u64,
pub(crate) pad: u64,
}
/// A specialized hasher for only primitives under 64 bits.
#[cfg(feature = "specialize")]
impl Hasher for AHasherU64 {
#[inline]
fn finish(&self) -> u64 {
folded_multiply(self.buffer, self.pad)
}
#[inline]
fn write(&mut self, _bytes: &[u8]) {
unreachable!("Specialized hasher was called with a different type of object")
}
#[inline]
fn write_u8(&mut self, i: u8) {
self.write_u64(i as u64);
}
#[inline]
fn write_u16(&mut self, i: u16) {
self.write_u64(i as u64);
}
#[inline]
fn write_u32(&mut self, i: u32) {
self.write_u64(i as u64);
}
#[inline]
fn write_u64(&mut self, i: u64) {
self.buffer = folded_multiply(i ^ self.buffer, MULTIPLE);
}
#[inline]
fn write_u128(&mut self, _i: u128) {
unreachable!("Specialized hasher was called with a different type of object")
}
#[inline]
fn write_usize(&mut self, _i: usize) {
unreachable!("Specialized hasher was called with a different type of object")
}
}
#[cfg(feature = "specialize")]
pub(crate) struct AHasherFixed(pub AHasher);
/// A specialized hasher for fixed size primitives larger than 64 bits.
#[cfg(feature = "specialize")]
impl Hasher for AHasherFixed {
#[inline]
fn finish(&self) -> u64 {
self.0.short_finish()
}
#[inline]
fn write(&mut self, bytes: &[u8]) {
self.0.write(bytes)
}
#[inline]
fn write_u8(&mut self, i: u8) {
self.write_u64(i as u64);
}
#[inline]
fn write_u16(&mut self, i: u16) {
self.write_u64(i as u64);
}
#[inline]
fn write_u32(&mut self, i: u32) {
self.write_u64(i as u64);
}
#[inline]
fn write_u64(&mut self, i: u64) {
self.0.write_u64(i);
}
#[inline]
fn write_u128(&mut self, i: u128) {
self.0.write_u128(i);
}
#[inline]
fn write_usize(&mut self, i: usize) {
self.0.write_usize(i);
}
}
#[cfg(feature = "specialize")]
pub(crate) struct AHasherStr(pub AHasher);
/// A specialized hasher for strings
/// Note that the other types don't panic because the hash impl for String tacks on an unneeded call. (As does vec)
#[cfg(feature = "specialize")]
impl Hasher for AHasherStr {
#[inline]
fn finish(&self) -> u64 {
let result: [u64; 2] = self.0.enc.convert();
result[0]
}
#[inline]
fn write(&mut self, bytes: &[u8]) {
if bytes.len() > 8 {
self.0.write(bytes);
self.0.enc = aesenc(self.0.sum, self.0.enc);
self.0.enc = aesdec(aesdec(self.0.enc, self.0.key), self.0.enc);
} else {
add_in_length(&mut self.0.enc, bytes.len() as u64);
let value = read_small(bytes).convert();
self.0.sum = shuffle_and_add(self.0.sum, value);
self.0.enc = aesenc(self.0.sum, self.0.enc);
self.0.enc = aesdec(aesdec(self.0.enc, self.0.key), self.0.enc);
}
}
#[inline]
fn write_u8(&mut self, _i: u8) {}
#[inline]
fn write_u16(&mut self, _i: u16) {}
#[inline]
fn write_u32(&mut self, _i: u32) {}
#[inline]
fn write_u64(&mut self, _i: u64) {}
#[inline]
fn write_u128(&mut self, _i: u128) {}
#[inline]
fn write_usize(&mut self, _i: usize) {}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::convert::Convert;
use crate::operations::aesenc;
use crate::RandomState;
use std::hash::{BuildHasher, Hasher};
#[test]
fn test_sanity() {
let mut hasher = RandomState::with_seeds(1, 2, 3, 4).build_hasher();
hasher.write_u64(0);
let h1 = hasher.finish();
hasher.write(&[1, 0, 0, 0, 0, 0, 0, 0]);
let h2 = hasher.finish();
assert_ne!(h1, h2);
}
#[cfg(feature = "compile-time-rng")]
#[test]
fn test_builder() {
use std::collections::HashMap;
use std::hash::BuildHasherDefault;
let mut map = HashMap::<u32, u64, BuildHasherDefault<AHasher>>::default();
map.insert(1, 3);
}
#[cfg(feature = "compile-time-rng")]
#[test]
fn test_default() {
let hasher_a = AHasher::default();
let a_enc: [u64; 2] = hasher_a.enc.convert();
let a_sum: [u64; 2] = hasher_a.sum.convert();
assert_ne!(0, a_enc[0]);
assert_ne!(0, a_enc[1]);
assert_ne!(0, a_sum[0]);
assert_ne!(0, a_sum[1]);
assert_ne!(a_enc[0], a_enc[1]);
assert_ne!(a_sum[0], a_sum[1]);
assert_ne!(a_enc[0], a_sum[0]);
assert_ne!(a_enc[1], a_sum[1]);
let hasher_b = AHasher::default();
let b_enc: [u64; 2] = hasher_b.enc.convert();
let b_sum: [u64; 2] = hasher_b.sum.convert();
assert_eq!(a_enc[0], b_enc[0]);
assert_eq!(a_enc[1], b_enc[1]);
assert_eq!(a_sum[0], b_sum[0]);
assert_eq!(a_sum[1], b_sum[1]);
}
#[test]
fn test_hash() {
let mut result: [u64; 2] = [0x6c62272e07bb0142, 0x62b821756295c58d];
let value: [u64; 2] = [1 << 32, 0xFEDCBA9876543210];
result = aesenc(value.convert(), result.convert()).convert();
result = aesenc(result.convert(), result.convert()).convert();
let mut result2: [u64; 2] = [0x6c62272e07bb0142, 0x62b821756295c58d];
let value2: [u64; 2] = [1, 0xFEDCBA9876543210];
result2 = aesenc(value2.convert(), result2.convert()).convert();
result2 = aesenc(result2.convert(), result.convert()).convert();
let result: [u8; 16] = result.convert();
let result2: [u8; 16] = result2.convert();
assert_ne!(hex::encode(result), hex::encode(result2));
}
#[test]
fn test_conversion() {
let input: &[u8] = "dddddddd".as_bytes();
let bytes: u64 = as_array!(input, 8).convert();
assert_eq!(bytes, 0x6464646464646464);
}
}

162
pve-rs/vendor/ahash/src/convert.rs vendored Normal file

@@ -0,0 +1,162 @@
pub(crate) trait Convert<To> {
fn convert(self) -> To;
}
macro_rules! convert {
($a:ty, $b:ty) => {
impl Convert<$b> for $a {
#[inline(always)]
fn convert(self) -> $b {
zerocopy::transmute!(self)
}
}
impl Convert<$a> for $b {
#[inline(always)]
fn convert(self) -> $a {
zerocopy::transmute!(self)
}
}
};
}
convert!([u128; 4], [u64; 8]);
convert!([u128; 4], [u32; 16]);
convert!([u128; 4], [u16; 32]);
convert!([u128; 4], [u8; 64]);
convert!([u128; 2], [u64; 4]);
convert!([u128; 2], [u32; 8]);
convert!([u128; 2], [u16; 16]);
convert!([u128; 2], [u8; 32]);
convert!(u128, [u64; 2]);
convert!(u128, [u32; 4]);
convert!(u128, [u16; 8]);
convert!(u128, [u8; 16]);
convert!([u64; 8], [u32; 16]);
convert!([u64; 8], [u16; 32]);
convert!([u64; 8], [u8; 64]);
convert!([u64; 4], [u32; 8]);
convert!([u64; 4], [u16; 16]);
convert!([u64; 4], [u8; 32]);
convert!([u64; 2], [u32; 4]);
convert!([u64; 2], [u16; 8]);
convert!([u64; 2], [u8; 16]);
convert!([u32; 4], [u16; 8]);
convert!([u32; 4], [u8; 16]);
convert!([u16; 8], [u8; 16]);
convert!(u64, [u32; 2]);
convert!(u64, [u16; 4]);
convert!(u64, [u8; 8]);
convert!([u32; 2], [u16; 4]);
convert!([u32; 2], [u8; 8]);
convert!(u32, [u16; 2]);
convert!(u32, [u8; 4]);
convert!([u16; 2], [u8; 4]);
convert!(u16, [u8; 2]);
convert!([[u64; 4]; 2], [u8; 64]);
convert!([f64; 2], [u8; 16]);
convert!([f32; 4], [u8; 16]);
convert!(f64, [u8; 8]);
convert!([f32; 2], [u8; 8]);
convert!(f32, [u8; 4]);
macro_rules! as_array {
($input:expr, $len:expr) => {{
{
#[inline(always)]
fn as_array<T>(slice: &[T]) -> &[T; $len] {
core::convert::TryFrom::try_from(slice).unwrap()
}
as_array($input)
}
}};
}
pub(crate) trait ReadFromSlice {
fn read_u16(&self) -> (u16, &[u8]);
fn read_u32(&self) -> (u32, &[u8]);
fn read_u64(&self) -> (u64, &[u8]);
fn read_u128(&self) -> (u128, &[u8]);
fn read_u128x2(&self) -> ([u128; 2], &[u8]);
fn read_u128x4(&self) -> ([u128; 4], &[u8]);
fn read_last_u16(&self) -> u16;
fn read_last_u32(&self) -> u32;
fn read_last_u64(&self) -> u64;
fn read_last_u128(&self) -> u128;
fn read_last_u128x2(&self) -> [u128; 2];
fn read_last_u128x4(&self) -> [u128; 4];
}
impl ReadFromSlice for [u8] {
#[inline(always)]
fn read_u16(&self) -> (u16, &[u8]) {
let (value, rest) = self.split_at(2);
(as_array!(value, 2).convert(), rest)
}
#[inline(always)]
fn read_u32(&self) -> (u32, &[u8]) {
let (value, rest) = self.split_at(4);
(as_array!(value, 4).convert(), rest)
}
#[inline(always)]
fn read_u64(&self) -> (u64, &[u8]) {
let (value, rest) = self.split_at(8);
(as_array!(value, 8).convert(), rest)
}
#[inline(always)]
fn read_u128(&self) -> (u128, &[u8]) {
let (value, rest) = self.split_at(16);
(as_array!(value, 16).convert(), rest)
}
#[inline(always)]
fn read_u128x2(&self) -> ([u128; 2], &[u8]) {
let (value, rest) = self.split_at(32);
(as_array!(value, 32).convert(), rest)
}
#[inline(always)]
fn read_u128x4(&self) -> ([u128; 4], &[u8]) {
let (value, rest) = self.split_at(64);
(as_array!(value, 64).convert(), rest)
}
#[inline(always)]
fn read_last_u16(&self) -> u16 {
let (_, value) = self.split_at(self.len() - 2);
as_array!(value, 2).convert()
}
#[inline(always)]
fn read_last_u32(&self) -> u32 {
let (_, value) = self.split_at(self.len() - 4);
as_array!(value, 4).convert()
}
#[inline(always)]
fn read_last_u64(&self) -> u64 {
let (_, value) = self.split_at(self.len() - 8);
as_array!(value, 8).convert()
}
#[inline(always)]
fn read_last_u128(&self) -> u128 {
let (_, value) = self.split_at(self.len() - 16);
as_array!(value, 16).convert()
}
#[inline(always)]
fn read_last_u128x2(&self) -> [u128; 2] {
let (_, value) = self.split_at(self.len() - 32);
as_array!(value, 32).convert()
}
#[inline(always)]
fn read_last_u128x4(&self) -> [u128; 4] {
let (_, value) = self.split_at(self.len() - 64);
as_array!(value, 64).convert()
}
}

367
pve-rs/vendor/ahash/src/fallback_hash.rs vendored Normal file

@@ -0,0 +1,367 @@
use crate::convert::*;
use crate::operations::folded_multiply;
use crate::operations::read_small;
use crate::operations::MULTIPLE;
use crate::random_state::PI;
use crate::RandomState;
use core::hash::Hasher;
const ROT: u32 = 23; //17
/// A `Hasher` for hashing an arbitrary stream of bytes.
///
/// Instances of [`AHasher`] represent state that is updated while hashing data.
///
/// Each method updates the internal state based on the new data provided. Once
/// all of the data has been provided, the resulting hash can be obtained by calling
/// `finish()`.
///
/// [Clone] is also provided in case you wish to calculate hashes for two different items that
/// start with the same data.
///
#[derive(Debug, Clone)]
pub struct AHasher {
buffer: u64,
pad: u64,
extra_keys: [u64; 2],
}
impl AHasher {
/// Creates a new hasher keyed to the provided key.
#[inline]
#[allow(dead_code)] // Is not called if non-fallback hash is used.
pub(crate) fn new_with_keys(key1: u128, key2: u128) -> AHasher {
let pi: [u128; 2] = PI.convert();
let key1: [u64; 2] = (key1 ^ pi[0]).convert();
let key2: [u64; 2] = (key2 ^ pi[1]).convert();
AHasher {
buffer: key1[0],
pad: key1[1],
extra_keys: key2,
}
}
#[allow(unused)] // False positive
pub(crate) fn test_with_keys(key1: u128, key2: u128) -> Self {
let key1: [u64; 2] = key1.convert();
let key2: [u64; 2] = key2.convert();
Self {
buffer: key1[0],
pad: key1[1],
extra_keys: key2,
}
}
#[inline]
#[allow(dead_code)] // Is not called if non-fallback hash is used.
pub(crate) fn from_random_state(rand_state: &RandomState) -> AHasher {
AHasher {
buffer: rand_state.k1,
pad: rand_state.k0,
extra_keys: [rand_state.k2, rand_state.k3],
}
}
/// This update function has the goal of updating the buffer with a single multiply.
/// FxHash does this but is vulnerable to attack. To avoid this, the input needs to be masked with an
/// unpredictable value. Other hashes such as murmurhash have taken this approach but were found vulnerable
/// to attack. The attack was based on the idea of reversing the pre-mixing (which is necessarily
/// reversible, otherwise bits would be lost), then placing a difference in the highest bit before the
/// multiply used to mix the data. Because a multiply can never affect the bits to the right of it, a
/// subsequent update that also differed in this bit could result in a predictable collision.
///
/// This version avoids this vulnerability while still only using a single multiply. It takes advantage
/// of the fact that when a 64 bit multiply is performed the upper 64 bits are usually computed and thrown
/// away. Instead it creates two 128 bit values where the upper 64 bits are zeros and multiplies them.
/// (The compiler is smart enough to turn this into a 64 bit multiplication in the assembly)
/// Then the upper bits are xored with the lower bits to produce a single 64 bit result.
///
/// To understand why this is a good scrambling function it helps to understand multiply-with-carry PRNGs:
/// https://en.wikipedia.org/wiki/Multiply-with-carry_pseudorandom_number_generator
/// If the multiple is chosen well, this creates a long period, decent quality PRNG.
/// Notice that this function is equivalent to this except the `buffer`/`state` is being xored with each
/// new block of data. In the event that data is all zeros, it is exactly equivalent to a MWC PRNG.
///
/// This is impervious to attack because every bit of the buffer at the end is dependent on every bit in
/// `new_data ^ buffer`. For example, suppose two inputs differed in only the 5th bit. Then when the
/// multiplication is performed the `result` will differ in bits 5-69. More specifically it will differ by
/// 2^5 * MULTIPLE. However in the next step bits 65-128 are turned into a separate 64 bit value. So the
/// differing bits will be in the lower 6 bits of this value. The two intermediate values that differ in
/// bits 5-63 and in bits 0-5 respectively get added together, producing an output that differs in every
/// bit. The addition carries in the multiplication and at the end additionally mean that even if an
/// attacker somehow knew part of (but not all) the contents of the buffer beforehand,
/// they would not be able to predict any of the bits in the buffer at the end.
#[inline(always)]
fn update(&mut self, new_data: u64) {
self.buffer = folded_multiply(new_data ^ self.buffer, MULTIPLE);
}
/// Similar to the above, this function performs an update using a "folded multiply".
/// However it takes in 128 bits of data instead of 64. Both halves must be masked.
///
/// This makes it impossible for an attacker to place a single bit difference between
/// two blocks so as to cancel each other.
///
/// However this is not sufficient. To prevent (a,b) from hashing the same as (b,a), the buffer itself must
/// be updated between calls in a way that does not commute. To achieve this XOR and Rotate are used.
/// Add followed by xor is not the same as xor followed by add, and rotate ensures that the same out bits
/// can't be changed by the same set of input bits. To cancel this sequence with subsequent input would require
/// knowing the keys.
#[inline(always)]
fn large_update(&mut self, new_data: u128) {
let block: [u64; 2] = new_data.convert();
let combined = folded_multiply(block[0] ^ self.extra_keys[0], block[1] ^ self.extra_keys[1]);
self.buffer = (self.buffer.wrapping_add(self.pad) ^ combined).rotate_left(ROT);
}
#[inline]
#[cfg(feature = "specialize")]
fn short_finish(&self) -> u64 {
folded_multiply(self.buffer, self.pad)
}
}
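// Editorial aside (a sketch, not part of the original file): the "folded
// multiply" described in the comments above, written standalone. The operands
// are widened to 128 bits, multiplied, and the upper 64 bits of the product
// are folded back into the lower 64 bits with an xor.
#[allow(dead_code)]
fn folded_multiply_sketch(s: u64, by: u64) -> u64 {
let result = (s as u128).wrapping_mul(by as u128);
(result as u64) ^ ((result >> 64) as u64)
}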
/// Provides [Hasher] methods to hash all of the primitive types.
///
/// [Hasher]: core::hash::Hasher
impl Hasher for AHasher {
#[inline]
fn write_u8(&mut self, i: u8) {
self.update(i as u64);
}
#[inline]
fn write_u16(&mut self, i: u16) {
self.update(i as u64);
}
#[inline]
fn write_u32(&mut self, i: u32) {
self.update(i as u64);
}
#[inline]
fn write_u64(&mut self, i: u64) {
self.update(i as u64);
}
#[inline]
fn write_u128(&mut self, i: u128) {
self.large_update(i);
}
#[inline]
#[cfg(any(
target_pointer_width = "64",
target_pointer_width = "32",
target_pointer_width = "16"
))]
fn write_usize(&mut self, i: usize) {
self.write_u64(i as u64);
}
#[inline]
#[cfg(target_pointer_width = "128")]
fn write_usize(&mut self, i: usize) {
self.write_u128(i as u128);
}
#[inline]
#[allow(clippy::collapsible_if)]
fn write(&mut self, input: &[u8]) {
let mut data = input;
let length = data.len() as u64;
//Needs to be an add rather than an xor because otherwise it could be canceled with carefully formed input.
self.buffer = self.buffer.wrapping_add(length).wrapping_mul(MULTIPLE);
//A 'binary search' on sizes reduces the number of comparisons.
if data.len() > 8 {
if data.len() > 16 {
let tail = data.read_last_u128();
self.large_update(tail);
while data.len() > 16 {
let (block, rest) = data.read_u128();
self.large_update(block);
data = rest;
}
} else {
self.large_update([data.read_u64().0, data.read_last_u64()].convert());
}
} else {
let value = read_small(data);
self.large_update(value.convert());
}
}
#[inline]
fn finish(&self) -> u64 {
let rot = (self.buffer & 63) as u32;
folded_multiply(self.buffer, self.pad).rotate_left(rot)
}
}
#[cfg(feature = "specialize")]
pub(crate) struct AHasherU64 {
pub(crate) buffer: u64,
pub(crate) pad: u64,
}
/// A specialized hasher for only primitives under 64 bits.
#[cfg(feature = "specialize")]
impl Hasher for AHasherU64 {
#[inline]
fn finish(&self) -> u64 {
folded_multiply(self.buffer, self.pad)
//self.buffer
}
#[inline]
fn write(&mut self, _bytes: &[u8]) {
unreachable!("Specialized hasher was called with a different type of object")
}
#[inline]
fn write_u8(&mut self, i: u8) {
self.write_u64(i as u64);
}
#[inline]
fn write_u16(&mut self, i: u16) {
self.write_u64(i as u64);
}
#[inline]
fn write_u32(&mut self, i: u32) {
self.write_u64(i as u64);
}
#[inline]
fn write_u64(&mut self, i: u64) {
self.buffer = folded_multiply(i ^ self.buffer, MULTIPLE);
}
#[inline]
fn write_u128(&mut self, _i: u128) {
unreachable!("Specialized hasher was called with a different type of object")
}
#[inline]
fn write_usize(&mut self, _i: usize) {
unreachable!("Specialized hasher was called with a different type of object")
}
}
#[cfg(feature = "specialize")]
pub(crate) struct AHasherFixed(pub AHasher);
/// A specialized hasher for fixed size primitives larger than 64 bits.
#[cfg(feature = "specialize")]
impl Hasher for AHasherFixed {
#[inline]
fn finish(&self) -> u64 {
self.0.short_finish()
}
#[inline]
fn write(&mut self, bytes: &[u8]) {
self.0.write(bytes)
}
#[inline]
fn write_u8(&mut self, i: u8) {
self.write_u64(i as u64);
}
#[inline]
fn write_u16(&mut self, i: u16) {
self.write_u64(i as u64);
}
#[inline]
fn write_u32(&mut self, i: u32) {
self.write_u64(i as u64);
}
#[inline]
fn write_u64(&mut self, i: u64) {
self.0.write_u64(i);
}
#[inline]
fn write_u128(&mut self, i: u128) {
self.0.write_u128(i);
}
#[inline]
fn write_usize(&mut self, i: usize) {
self.0.write_usize(i);
}
}
#[cfg(feature = "specialize")]
pub(crate) struct AHasherStr(pub AHasher);
/// A specialized hasher for a single string
/// Note that the other types don't panic because the hash impl for String tacks on an unneeded call. (As does vec)
#[cfg(feature = "specialize")]
impl Hasher for AHasherStr {
#[inline]
fn finish(&self) -> u64 {
self.0.finish()
}
#[inline]
fn write(&mut self, bytes: &[u8]) {
if bytes.len() > 8 {
self.0.write(bytes)
} else {
let value = read_small(bytes);
self.0.buffer = folded_multiply(value[0] ^ self.0.buffer, value[1] ^ self.0.extra_keys[1]);
self.0.pad = self.0.pad.wrapping_add(bytes.len() as u64);
}
}
#[inline]
fn write_u8(&mut self, _i: u8) {}
#[inline]
fn write_u16(&mut self, _i: u16) {}
#[inline]
fn write_u32(&mut self, _i: u32) {}
#[inline]
fn write_u64(&mut self, _i: u64) {}
#[inline]
fn write_u128(&mut self, _i: u128) {}
#[inline]
fn write_usize(&mut self, _i: usize) {}
}
#[cfg(test)]
mod tests {
use crate::fallback_hash::*;
#[test]
fn test_hash() {
let mut hasher = AHasher::new_with_keys(0, 0);
let value: u64 = 1 << 32;
hasher.update(value);
let result = hasher.buffer;
let mut hasher = AHasher::new_with_keys(0, 0);
let value2: u64 = 1;
hasher.update(value2);
let result2 = hasher.buffer;
let result: [u8; 8] = result.convert();
let result2: [u8; 8] = result2.convert();
assert_ne!(hex::encode(result), hex::encode(result2));
}
#[test]
fn test_conversion() {
let input: &[u8] = "dddddddd".as_bytes();
let bytes: u64 = as_array!(input, 8).convert();
assert_eq!(bytes, 0x6464646464646464);
}
}

501
pve-rs/vendor/ahash/src/hash_map.rs vendored Normal file

@@ -0,0 +1,501 @@
use std::borrow::Borrow;
use std::collections::hash_map::{IntoKeys, IntoValues};
use std::collections::{hash_map, HashMap};
use std::fmt::{self, Debug};
use std::hash::{BuildHasher, Hash};
use std::iter::FromIterator;
use std::ops::{Deref, DerefMut, Index};
use std::panic::UnwindSafe;
#[cfg(feature = "serde")]
use serde::{
de::{Deserialize, Deserializer},
ser::{Serialize, Serializer},
};
use crate::RandomState;
/// A [`HashMap`](std::collections::HashMap) using [`RandomState`](crate::RandomState) to hash the items.
/// (Requires the `std` feature to be enabled.)
#[derive(Clone)]
pub struct AHashMap<K, V, S = crate::RandomState>(HashMap<K, V, S>);
impl<K, V> From<HashMap<K, V, crate::RandomState>> for AHashMap<K, V> {
fn from(item: HashMap<K, V, crate::RandomState>) -> Self {
AHashMap(item)
}
}
impl<K, V, const N: usize> From<[(K, V); N]> for AHashMap<K, V>
where
K: Eq + Hash,
{
/// # Examples
///
/// ```
/// use ahash::AHashMap;
///
/// let map1 = AHashMap::from([(1, 2), (3, 4)]);
/// let map2: AHashMap<_, _> = [(1, 2), (3, 4)].into();
/// assert_eq!(map1, map2);
/// ```
fn from(arr: [(K, V); N]) -> Self {
Self::from_iter(arr)
}
}
impl<K, V> Into<HashMap<K, V, crate::RandomState>> for AHashMap<K, V> {
fn into(self) -> HashMap<K, V, crate::RandomState> {
self.0
}
}
impl<K, V> AHashMap<K, V, RandomState> {
/// This creates a hashmap using [RandomState::new], which obtains its keys from [RandomSource].
/// See the documentation in [RandomSource] for notes about key strength.
pub fn new() -> Self {
AHashMap(HashMap::with_hasher(RandomState::new()))
}
/// This creates a hashmap with the specified capacity using [RandomState::new].
/// See the documentation in [RandomSource] for notes about key strength.
pub fn with_capacity(capacity: usize) -> Self {
AHashMap(HashMap::with_capacity_and_hasher(capacity, RandomState::new()))
}
}
impl<K, V, S> AHashMap<K, V, S>
where
S: BuildHasher,
{
pub fn with_hasher(hash_builder: S) -> Self {
AHashMap(HashMap::with_hasher(hash_builder))
}
pub fn with_capacity_and_hasher(capacity: usize, hash_builder: S) -> Self {
AHashMap(HashMap::with_capacity_and_hasher(capacity, hash_builder))
}
}
impl<K, V, S> AHashMap<K, V, S>
where
K: Hash + Eq,
S: BuildHasher,
{
/// Returns a reference to the value corresponding to the key.
///
/// The key may be any borrowed form of the map's key type, but
/// [`Hash`] and [`Eq`] on the borrowed form *must* match those for
/// the key type.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let mut map = HashMap::new();
/// map.insert(1, "a");
/// assert_eq!(map.get(&1), Some(&"a"));
/// assert_eq!(map.get(&2), None);
/// ```
#[inline]
pub fn get<Q: ?Sized>(&self, k: &Q) -> Option<&V>
where
K: Borrow<Q>,
Q: Hash + Eq,
{
self.0.get(k)
}
/// Returns the key-value pair corresponding to the supplied key.
///
/// The supplied key may be any borrowed form of the map's key type, but
/// [`Hash`] and [`Eq`] on the borrowed form *must* match those for
/// the key type.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let mut map = HashMap::new();
/// map.insert(1, "a");
/// assert_eq!(map.get_key_value(&1), Some((&1, &"a")));
/// assert_eq!(map.get_key_value(&2), None);
/// ```
#[inline]
pub fn get_key_value<Q: ?Sized>(&self, k: &Q) -> Option<(&K, &V)>
where
K: Borrow<Q>,
Q: Hash + Eq,
{
self.0.get_key_value(k)
}
/// Returns a mutable reference to the value corresponding to the key.
///
/// The key may be any borrowed form of the map's key type, but
/// [`Hash`] and [`Eq`] on the borrowed form *must* match those for
/// the key type.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let mut map = HashMap::new();
/// map.insert(1, "a");
/// if let Some(x) = map.get_mut(&1) {
/// *x = "b";
/// }
/// assert_eq!(map[&1], "b");
/// ```
#[inline]
pub fn get_mut<Q: ?Sized>(&mut self, k: &Q) -> Option<&mut V>
where
K: Borrow<Q>,
Q: Hash + Eq,
{
self.0.get_mut(k)
}
/// Inserts a key-value pair into the map.
///
/// If the map did not have this key present, [`None`] is returned.
///
/// If the map did have this key present, the value is updated, and the old
/// value is returned. The key is not updated, though; this matters for
/// types that can be `==` without being identical. See the [module-level
/// documentation] for more.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let mut map = HashMap::new();
/// assert_eq!(map.insert(37, "a"), None);
/// assert_eq!(map.is_empty(), false);
///
/// map.insert(37, "b");
/// assert_eq!(map.insert(37, "c"), Some("b"));
/// assert_eq!(map[&37], "c");
/// ```
#[inline]
pub fn insert(&mut self, k: K, v: V) -> Option<V> {
self.0.insert(k, v)
}
/// Creates a consuming iterator visiting all the keys in arbitrary order.
/// The map cannot be used after calling this.
/// The iterator element type is `K`.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let map = HashMap::from([
/// ("a", 1),
/// ("b", 2),
/// ("c", 3),
/// ]);
///
/// let mut vec: Vec<&str> = map.into_keys().collect();
/// // The `IntoKeys` iterator produces keys in arbitrary order, so the
/// // keys must be sorted to test them against a sorted array.
/// vec.sort_unstable();
/// assert_eq!(vec, ["a", "b", "c"]);
/// ```
///
/// # Performance
///
/// In the current implementation, iterating over keys takes O(capacity) time
/// instead of O(len) because it internally visits empty buckets too.
#[inline]
pub fn into_keys(self) -> IntoKeys<K, V> {
self.0.into_keys()
}
/// Creates a consuming iterator visiting all the values in arbitrary order.
/// The map cannot be used after calling this.
/// The iterator element type is `V`.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let map = HashMap::from([
/// ("a", 1),
/// ("b", 2),
/// ("c", 3),
/// ]);
///
/// let mut vec: Vec<i32> = map.into_values().collect();
/// // The `IntoValues` iterator produces values in arbitrary order, so
/// // the values must be sorted to test them against a sorted array.
/// vec.sort_unstable();
/// assert_eq!(vec, [1, 2, 3]);
/// ```
///
/// # Performance
///
/// In the current implementation, iterating over values takes O(capacity) time
/// instead of O(len) because it internally visits empty buckets too.
#[inline]
pub fn into_values(self) -> IntoValues<K, V> {
self.0.into_values()
}
/// Removes a key from the map, returning the value at the key if the key
/// was previously in the map.
///
/// The key may be any borrowed form of the map's key type, but
/// [`Hash`] and [`Eq`] on the borrowed form *must* match those for
/// the key type.
///
/// # Examples
///
/// ```
/// use std::collections::HashMap;
///
/// let mut map = HashMap::new();
/// map.insert(1, "a");
/// assert_eq!(map.remove(&1), Some("a"));
/// assert_eq!(map.remove(&1), None);
/// ```
#[inline]
pub fn remove<Q: ?Sized>(&mut self, k: &Q) -> Option<V>
where
K: Borrow<Q>,
Q: Hash + Eq,
{
self.0.remove(k)
}
}
impl<K, V, S> Deref for AHashMap<K, V, S> {
type Target = HashMap<K, V, S>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<K, V, S> DerefMut for AHashMap<K, V, S> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl<K, V, S> UnwindSafe for AHashMap<K, V, S>
where
K: UnwindSafe,
V: UnwindSafe,
{
}
impl<K, V, S> PartialEq for AHashMap<K, V, S>
where
K: Eq + Hash,
V: PartialEq,
S: BuildHasher,
{
fn eq(&self, other: &AHashMap<K, V, S>) -> bool {
self.0.eq(&other.0)
}
}
impl<K, V, S> Eq for AHashMap<K, V, S>
where
K: Eq + Hash,
V: Eq,
S: BuildHasher,
{
}
impl<K, Q: ?Sized, V, S> Index<&Q> for AHashMap<K, V, S>
where
K: Eq + Hash + Borrow<Q>,
Q: Eq + Hash,
S: BuildHasher,
{
type Output = V;
/// Returns a reference to the value corresponding to the supplied key.
///
/// # Panics
///
/// Panics if the key is not present in the `HashMap`.
#[inline]
fn index(&self, key: &Q) -> &V {
self.0.index(key)
}
}
impl<K, V, S> Debug for AHashMap<K, V, S>
where
K: Debug,
V: Debug,
S: BuildHasher,
{
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
self.0.fmt(fmt)
}
}
impl<K, V> FromIterator<(K, V)> for AHashMap<K, V, RandomState>
where
K: Eq + Hash,
{
/// This creates a hashmap from the provided iterator using [RandomState::new].
/// See the documentation in [RandomSource] for notes about key strength.
fn from_iter<T: IntoIterator<Item = (K, V)>>(iter: T) -> Self {
let mut inner = HashMap::with_hasher(RandomState::new());
inner.extend(iter);
AHashMap(inner)
}
}
impl<'a, K, V, S> IntoIterator for &'a AHashMap<K, V, S> {
type Item = (&'a K, &'a V);
type IntoIter = hash_map::Iter<'a, K, V>;
fn into_iter(self) -> Self::IntoIter {
(&self.0).iter()
}
}
impl<'a, K, V, S> IntoIterator for &'a mut AHashMap<K, V, S> {
type Item = (&'a K, &'a mut V);
type IntoIter = hash_map::IterMut<'a, K, V>;
fn into_iter(self) -> Self::IntoIter {
(&mut self.0).iter_mut()
}
}
impl<K, V, S> IntoIterator for AHashMap<K, V, S> {
type Item = (K, V);
type IntoIter = hash_map::IntoIter<K, V>;
fn into_iter(self) -> Self::IntoIter {
self.0.into_iter()
}
}
impl<K, V, S> Extend<(K, V)> for AHashMap<K, V, S>
where
K: Eq + Hash,
S: BuildHasher,
{
#[inline]
fn extend<T: IntoIterator<Item = (K, V)>>(&mut self, iter: T) {
self.0.extend(iter)
}
}
impl<'a, K, V, S> Extend<(&'a K, &'a V)> for AHashMap<K, V, S>
where
K: Eq + Hash + Copy + 'a,
V: Copy + 'a,
S: BuildHasher,
{
#[inline]
fn extend<T: IntoIterator<Item = (&'a K, &'a V)>>(&mut self, iter: T) {
self.0.extend(iter)
}
}
/// NOTE: For safety this trait impl is only available if either of the flags `runtime-rng` (on by default) or
/// `compile-time-rng` is enabled. This is to prevent weakly keyed maps from being accidentally created. Instead one of
/// the constructors for [RandomState] must be used.
#[cfg(any(feature = "compile-time-rng", feature = "runtime-rng", feature = "no-rng"))]
impl<K, V> Default for AHashMap<K, V, RandomState> {
#[inline]
fn default() -> AHashMap<K, V, RandomState> {
AHashMap(HashMap::default())
}
}
#[cfg(feature = "serde")]
impl<K, V> Serialize for AHashMap<K, V>
where
K: Serialize + Eq + Hash,
V: Serialize,
{
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
self.deref().serialize(serializer)
}
}
#[cfg(feature = "serde")]
impl<'de, K, V> Deserialize<'de> for AHashMap<K, V>
where
K: Deserialize<'de> + Eq + Hash,
V: Deserialize<'de>,
{
fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
let hash_map = HashMap::deserialize(deserializer);
hash_map.map(|hash_map| Self(hash_map))
}
fn deserialize_in_place<D: Deserializer<'de>>(deserializer: D, place: &mut Self) -> Result<(), D::Error> {
use serde::de::{MapAccess, Visitor};
struct MapInPlaceVisitor<'a, K: 'a, V: 'a>(&'a mut AHashMap<K, V>);
impl<'a, 'de, K, V> Visitor<'de> for MapInPlaceVisitor<'a, K, V>
where
K: Deserialize<'de> + Eq + Hash,
V: Deserialize<'de>,
{
type Value = ();
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
formatter.write_str("a map")
}
fn visit_map<A>(self, mut map: A) -> Result<Self::Value, A::Error>
where
A: MapAccess<'de>,
{
self.0.clear();
self.0.reserve(map.size_hint().unwrap_or(0).min(4096));
while let Some((key, value)) = map.next_entry()? {
self.0.insert(key, value);
}
Ok(())
}
}
deserializer.deserialize_map(MapInPlaceVisitor(place))
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_borrow() {
let mut map: AHashMap<String, String> = AHashMap::new();
map.insert("foo".to_string(), "Bar".to_string());
map.insert("Bar".to_string(), map.get("foo").unwrap().to_owned());
}
#[cfg(feature = "serde")]
#[test]
fn test_serde() {
let mut map = AHashMap::new();
map.insert("for".to_string(), 0);
map.insert("bar".to_string(), 1);
let mut serialization = serde_json::to_string(&map).unwrap();
let mut deserialization: AHashMap<String, u64> = serde_json::from_str(&serialization).unwrap();
assert_eq!(deserialization, map);
map.insert("baz".to_string(), 2);
serialization = serde_json::to_string(&map).unwrap();
let mut deserializer = serde_json::Deserializer::from_str(&serialization);
AHashMap::deserialize_in_place(&mut deserializer, &mut deserialization).unwrap();
assert_eq!(deserialization, map);
}
}


@@ -0,0 +1,534 @@
use core::hash::{Hash, Hasher};
use std::collections::HashMap;
fn assert_sufficiently_different(a: u64, b: u64, tolerance: i32) {
let (same_byte_count, same_nibble_count) = count_same_bytes_and_nibbles(a, b);
assert!(same_byte_count <= tolerance, "{:x} vs {:x}: {:}", a, b, same_byte_count);
assert!(
same_nibble_count <= tolerance * 3,
"{:x} vs {:x}: {:}",
a,
b,
same_nibble_count
);
let flipped_bits = (a ^ b).count_ones();
assert!(
flipped_bits > 12 && flipped_bits < 52,
"{:x} and {:x}: {:}",
a,
b,
flipped_bits
);
for rotate in 0..64 {
let flipped_bits2 = (a ^ (b.rotate_left(rotate))).count_ones();
assert!(
flipped_bits2 > 10 && flipped_bits2 < 54,
"{:x} and {:x}: {:}",
a,
b.rotate_left(rotate),
flipped_bits2
);
}
}
fn count_same_bytes_and_nibbles(a: u64, b: u64) -> (i32, i32) {
let mut same_byte_count = 0;
let mut same_nibble_count = 0;
for byte in 0..8 {
let ba = (a >> (8 * byte)) as u8;
let bb = (b >> (8 * byte)) as u8;
if ba == bb {
same_byte_count += 1;
}
if ba & 0xF0u8 == bb & 0xF0u8 {
same_nibble_count += 1;
}
if ba & 0x0Fu8 == bb & 0x0Fu8 {
same_nibble_count += 1;
}
}
(same_byte_count, same_nibble_count)
}
fn gen_combinations(options: &[u32; 11], depth: u32, so_far: Vec<u32>, combinations: &mut Vec<Vec<u32>>) {
if depth == 0 {
return;
}
for option in options {
let mut next = so_far.clone();
next.push(*option);
combinations.push(next.clone());
gen_combinations(options, depth - 1, next, combinations);
}
}
fn test_no_full_collisions<T: Hasher>(gen_hash: impl Fn() -> T) {
let options: [u32; 11] = [
0x00000000, 0x10000000, 0x20000000, 0x40000000, 0x80000000, 0xF0000000, 1, 2, 4, 8, 15,
];
let mut combinations = Vec::new();
gen_combinations(&options, 7, Vec::new(), &mut combinations);
let mut map: HashMap<u64, Vec<u8>> = HashMap::new();
for combination in combinations {
use zerocopy::AsBytes;
let array = combination.as_slice().as_bytes().to_vec();
let mut hasher = gen_hash();
hasher.write(&array);
let hash = hasher.finish();
if let Some(value) = map.get(&hash) {
assert_eq!(
value, &array,
"Found a collision between {:x?} and {:x?}. Hash: {:x?}",
value, &array, &hash
);
} else {
map.insert(hash, array);
}
}
assert_eq!(21435887, map.len()); //11^7 + 11^6 ...
}
fn test_keys_change_output<T: Hasher>(constructor: impl Fn(u128, u128) -> T) {
let mut a = constructor(1, 1);
let mut b = constructor(1, 2);
let mut c = constructor(2, 1);
let mut d = constructor(2, 2);
"test".hash(&mut a);
"test".hash(&mut b);
"test".hash(&mut c);
"test".hash(&mut d);
assert_sufficiently_different(a.finish(), b.finish(), 1);
assert_sufficiently_different(a.finish(), c.finish(), 1);
assert_sufficiently_different(a.finish(), d.finish(), 1);
assert_sufficiently_different(b.finish(), c.finish(), 1);
assert_sufficiently_different(b.finish(), d.finish(), 1);
assert_sufficiently_different(c.finish(), d.finish(), 1);
}
fn test_input_affect_every_byte<T: Hasher>(constructor: impl Fn(u128, u128) -> T) {
let base = hash_with(&0, constructor(0, 0));
for shift in 0..16 {
let mut alternatives = vec![];
for v in 0..256 {
let input = (v as u128) << (shift * 8);
let hasher = constructor(0, 0);
alternatives.push(hash_with(&input, hasher));
}
assert_each_byte_differs(shift, base, alternatives);
}
}
///Ensures that for every bit in the output there is some value for each byte in the key that flips it.
fn test_keys_affect_every_byte<H: Hash, T: Hasher>(item: H, constructor: impl Fn(u128, u128) -> T) {
let base = hash_with(&item, constructor(0, 0));
for shift in 0..16 {
let mut alternatives1 = vec![];
let mut alternatives2 = vec![];
for v in 0..256 {
let input = (v as u128) << (shift * 8);
let hasher1 = constructor(input, 0);
let hasher2 = constructor(0, input);
let h1 = hash_with(&item, hasher1);
let h2 = hash_with(&item, hasher2);
alternatives1.push(h1);
alternatives2.push(h2);
}
assert_each_byte_differs(shift, base, alternatives1);
assert_each_byte_differs(shift, base, alternatives2);
}
}
fn assert_each_byte_differs(num: u64, base: u64, alternatives: Vec<u64>) {
let mut changed_bits = 0_u64;
for alternative in alternatives {
changed_bits |= base ^ alternative
}
assert_eq!(
core::u64::MAX,
changed_bits,
"Bits changed: {:x} on num: {:?}. base {:x}",
changed_bits,
num,
base
);
}
fn test_finish_is_consistent<T: Hasher>(constructor: impl Fn(u128, u128) -> T) {
let mut hasher = constructor(1, 2);
"Foo".hash(&mut hasher);
let a = hasher.finish();
let b = hasher.finish();
assert_eq!(a, b);
}
fn test_single_key_bit_flip<T: Hasher>(constructor: impl Fn(u128, u128) -> T) {
for bit in 0..128 {
let mut a = constructor(0, 0);
let mut b = constructor(0, 1 << bit);
let mut c = constructor(1 << bit, 0);
"1234".hash(&mut a);
"1234".hash(&mut b);
"1234".hash(&mut c);
assert_sufficiently_different(a.finish(), b.finish(), 2);
assert_sufficiently_different(a.finish(), c.finish(), 2);
assert_sufficiently_different(b.finish(), c.finish(), 2);
let mut a = constructor(0, 0);
let mut b = constructor(0, 1 << bit);
let mut c = constructor(1 << bit, 0);
"12345678".hash(&mut a);
"12345678".hash(&mut b);
"12345678".hash(&mut c);
assert_sufficiently_different(a.finish(), b.finish(), 2);
assert_sufficiently_different(a.finish(), c.finish(), 2);
assert_sufficiently_different(b.finish(), c.finish(), 2);
let mut a = constructor(0, 0);
let mut b = constructor(0, 1 << bit);
let mut c = constructor(1 << bit, 0);
"1234567812345678".hash(&mut a);
"1234567812345678".hash(&mut b);
"1234567812345678".hash(&mut c);
assert_sufficiently_different(a.finish(), b.finish(), 2);
assert_sufficiently_different(a.finish(), c.finish(), 2);
assert_sufficiently_different(b.finish(), c.finish(), 2);
}
}
fn test_all_bytes_matter<T: Hasher>(hasher: impl Fn() -> T) {
let mut item = vec![0; 256];
let base_hash = hash(&item, &hasher);
for pos in 0..256 {
item[pos] = 255;
let hash = hash(&item, &hasher);
assert_ne!(base_hash, hash, "Position {} did not affect output", pos);
item[pos] = 0;
}
}
fn test_no_pair_collisions<T: Hasher>(hasher: impl Fn() -> T) {
let base = [0_u64, 0_u64];
let base_hash = hash(&base, &hasher);
for bitpos1 in 0..64 {
let a = 1_u64 << bitpos1;
for bitpos2 in 0..bitpos1 {
let b = 1_u64 << bitpos2;
let aa = hash(&[a, a], &hasher);
let ab = hash(&[a, b], &hasher);
let ba = hash(&[b, a], &hasher);
let bb = hash(&[b, b], &hasher);
assert_sufficiently_different(base_hash, aa, 3);
assert_sufficiently_different(base_hash, ab, 3);
assert_sufficiently_different(base_hash, ba, 3);
assert_sufficiently_different(base_hash, bb, 3);
assert_sufficiently_different(aa, ab, 3);
assert_sufficiently_different(ab, ba, 3);
assert_sufficiently_different(ba, bb, 3);
assert_sufficiently_different(aa, ba, 3);
assert_sufficiently_different(ab, bb, 3);
assert_sufficiently_different(aa, bb, 3);
}
}
}
fn hash<H: Hash, T: Hasher>(b: &H, hash_builder: &dyn Fn() -> T) -> u64 {
let mut hasher = hash_builder();
b.hash(&mut hasher);
hasher.finish()
}
fn hash_with<H: Hash, T: Hasher>(b: &H, mut hasher: T) -> u64 {
b.hash(&mut hasher);
hasher.finish()
}
fn test_single_bit_flip<T: Hasher>(hasher: impl Fn() -> T) {
let size = 32;
let compare_value = hash(&0u32, &hasher);
for pos in 0..size {
let test_value = hash(&(1u32 << pos), &hasher);
assert_sufficiently_different(compare_value, test_value, 2);
}
let size = 64;
let compare_value = hash(&0u64, &hasher);
for pos in 0..size {
let test_value = hash(&(1u64 << pos), &hasher);
assert_sufficiently_different(compare_value, test_value, 2);
}
let size = 128;
let compare_value = hash(&0u128, &hasher);
for pos in 0..size {
let test_value = hash(&(1u128 << pos), &hasher);
dbg!(compare_value, test_value);
assert_sufficiently_different(compare_value, test_value, 2);
}
}
fn test_padding_does_not_collide<T: Hasher>(hasher: impl Fn() -> T) {
for c in 0..128u8 {
for string in ["", "\0", "\x01", "1234", "12345678", "1234567812345678"].iter() {
let mut short = hasher();
string.hash(&mut short);
let value = short.finish();
let mut padded = string.to_string();
for num in 1..=128 {
let mut long = hasher();
padded.push(c as char);
padded.hash(&mut long);
let (same_bytes, same_nibbles) = count_same_bytes_and_nibbles(value, long.finish());
assert!(
same_bytes <= 3,
"{} bytes of {} -> {:x} vs {:x}",
num,
c,
value,
long.finish()
);
assert!(
same_nibbles <= 8,
"{} bytes of {} -> {:x} vs {:x}",
num,
c,
value,
long.finish()
);
let flipped_bits = (value ^ long.finish()).count_ones();
assert!(flipped_bits > 10);
}
if string.len() > 0 {
let mut padded = string[1..].to_string();
padded.push(c as char);
for num in 2..=128 {
let mut long = hasher();
padded.push(c as char);
padded.hash(&mut long);
let (same_bytes, same_nibbles) = count_same_bytes_and_nibbles(value, long.finish());
assert!(
same_bytes <= 3,
"string {:?} + {} bytes of {} -> {:x} vs {:x}",
string,
num,
c,
value,
long.finish()
);
assert!(
same_nibbles <= 8,
"string {:?} + {} bytes of {} -> {:x} vs {:x}",
string,
num,
c,
value,
long.finish()
);
let flipped_bits = (value ^ long.finish()).count_ones();
assert!(flipped_bits > 10);
}
}
}
}
}
fn test_length_extension<T: Hasher>(hasher: impl Fn(u128, u128) -> T) {
for key in 0..256 {
let h1 = hasher(key, key);
let v1 = hash_with(&[0_u8, 0, 0, 0, 0, 0, 0, 0], h1);
let h2 = hasher(key, key);
let v2 = hash_with(&[1_u8, 0, 0, 0, 0, 0, 0, 0, 0], h2);
assert_ne!(v1, v2);
}
}
fn test_sparse<T: Hasher>(hasher: impl Fn() -> T) {
use smallvec::SmallVec;
let mut buf = [0u8; 256];
let mut hashes = HashMap::new();
for idx_1 in 0..255_u8 {
for idx_2 in idx_1 + 1..=255_u8 {
for value_1 in [1, 2, 4, 8, 16, 32, 64, 128] {
for value_2 in [
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 16, 17, 18, 20, 24, 31, 32, 33, 48, 64, 96, 127, 128, 129,
192, 254, 255,
] {
buf[idx_1 as usize] = value_1;
buf[idx_2 as usize] = value_2;
let hash_value = hash_with(&buf, &mut hasher());
let keys = hashes.entry(hash_value).or_insert(SmallVec::<[[u8; 4]; 1]>::new());
keys.push([idx_1, value_1, idx_2, value_2]);
buf[idx_1 as usize] = 0;
buf[idx_2 as usize] = 0;
}
}
}
}
hashes.retain(|_key, value| value.len() != 1);
assert_eq!(0, hashes.len(), "Collision with: {:?}", hashes);
}
#[cfg(test)]
mod fallback_tests {
use crate::fallback_hash::*;
use crate::hash_quality_test::*;
#[test]
fn fallback_single_bit_flip() {
test_single_bit_flip(|| AHasher::new_with_keys(0, 0))
}
#[test]
fn fallback_single_key_bit_flip() {
test_single_key_bit_flip(AHasher::new_with_keys)
}
#[test]
fn fallback_all_bytes_matter() {
test_all_bytes_matter(|| AHasher::new_with_keys(0, 0));
}
#[test]
fn fallback_test_no_pair_collisions() {
test_no_pair_collisions(|| AHasher::new_with_keys(0, 0));
}
#[test]
fn fallback_test_no_full_collisions() {
test_no_full_collisions(|| AHasher::new_with_keys(0, 0));
}
#[test]
fn fallback_keys_change_output() {
test_keys_change_output(AHasher::new_with_keys);
}
#[test]
fn fallback_input_affect_every_byte() {
test_input_affect_every_byte(AHasher::new_with_keys);
}
#[test]
fn fallback_keys_affect_every_byte() {
//For the fallback hasher, the second key is not used in every hash.
#[cfg(all(not(feature = "specialize"), feature = "folded_multiply"))]
test_keys_affect_every_byte(0, |a, b| AHasher::new_with_keys(a ^ b, a));
test_keys_affect_every_byte("", |a, b| AHasher::new_with_keys(a ^ b, a));
test_keys_affect_every_byte((0, 0), |a, b| AHasher::new_with_keys(a ^ b, a));
}
#[test]
fn fallback_finish_is_consistent() {
test_finish_is_consistent(AHasher::test_with_keys)
}
#[test]
fn fallback_padding_does_not_collide() {
test_padding_does_not_collide(|| AHasher::new_with_keys(0, 0));
test_padding_does_not_collide(|| AHasher::new_with_keys(0, 2));
test_padding_does_not_collide(|| AHasher::new_with_keys(2, 0));
test_padding_does_not_collide(|| AHasher::new_with_keys(2, 2));
}
#[test]
fn fallback_length_extension() {
test_length_extension(|a, b| AHasher::new_with_keys(a, b));
}
#[test]
fn test_no_sparse_collisions() {
test_sparse(|| AHasher::new_with_keys(0, 0));
test_sparse(|| AHasher::new_with_keys(1, 2));
}
}
///Basic sanity tests of the crypto properties of aHash.
#[cfg(any(
all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "aes", not(miri)),
all(target_arch = "aarch64", target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "arm", target_feature = "aes", not(miri)),
))]
#[cfg(test)]
mod aes_tests {
use crate::aes_hash::*;
use crate::hash_quality_test::*;
use std::hash::{Hash, Hasher};
//This encrypts to 0.
const BAD_KEY2: u128 = 0x6363_6363_6363_6363_6363_6363_6363_6363;
//This decrypts to 0.
const BAD_KEY: u128 = 0x5252_5252_5252_5252_5252_5252_5252_5252;
#[test]
fn test_single_bit_in_byte() {
let mut hasher1 = AHasher::test_with_keys(0, 0);
8_u32.hash(&mut hasher1);
let mut hasher2 = AHasher::test_with_keys(0, 0);
0_u32.hash(&mut hasher2);
assert_sufficiently_different(hasher1.finish(), hasher2.finish(), 1);
}
#[test]
fn aes_single_bit_flip() {
test_single_bit_flip(|| AHasher::test_with_keys(BAD_KEY, BAD_KEY));
test_single_bit_flip(|| AHasher::test_with_keys(BAD_KEY2, BAD_KEY2));
}
#[test]
fn aes_single_key_bit_flip() {
test_single_key_bit_flip(AHasher::test_with_keys)
}
#[test]
fn aes_all_bytes_matter() {
test_all_bytes_matter(|| AHasher::test_with_keys(BAD_KEY, BAD_KEY));
test_all_bytes_matter(|| AHasher::test_with_keys(BAD_KEY2, BAD_KEY2));
}
#[test]
fn aes_test_no_pair_collisions() {
test_no_pair_collisions(|| AHasher::test_with_keys(BAD_KEY, BAD_KEY));
test_no_pair_collisions(|| AHasher::test_with_keys(BAD_KEY2, BAD_KEY2));
}
#[test]
fn aes_test_no_full_collisions() {
test_no_full_collisions(|| AHasher::test_with_keys(12345, 67890));
}
#[test]
fn aes_keys_change_output() {
test_keys_change_output(AHasher::test_with_keys);
}
#[test]
fn aes_input_affect_every_byte() {
test_input_affect_every_byte(AHasher::test_with_keys);
}
#[test]
fn aes_keys_affect_every_byte() {
#[cfg(not(feature = "specialize"))]
test_keys_affect_every_byte(0, AHasher::test_with_keys);
test_keys_affect_every_byte("", AHasher::test_with_keys);
test_keys_affect_every_byte((0, 0), AHasher::test_with_keys);
}
#[test]
fn aes_finish_is_consistent() {
test_finish_is_consistent(AHasher::test_with_keys)
}
#[test]
fn aes_padding_does_not_collide() {
test_padding_does_not_collide(|| AHasher::test_with_keys(BAD_KEY, BAD_KEY));
test_padding_does_not_collide(|| AHasher::test_with_keys(BAD_KEY2, BAD_KEY2));
}
#[test]
fn aes_length_extension() {
test_length_extension(|a, b| AHasher::test_with_keys(a, b));
}
#[test]
fn aes_no_sparse_collisions() {
test_sparse(|| AHasher::test_with_keys(0, 0));
test_sparse(|| AHasher::test_with_keys(1, 2));
}
}

pve-rs/vendor/ahash/src/hash_set.rs vendored Normal file

@@ -0,0 +1,352 @@
use crate::RandomState;
use std::collections::{hash_set, HashSet};
use std::fmt::{self, Debug};
use std::hash::{BuildHasher, Hash};
use std::iter::FromIterator;
use std::ops::{BitAnd, BitOr, BitXor, Deref, DerefMut, Sub};
#[cfg(feature = "serde")]
use serde::{
de::{Deserialize, Deserializer},
ser::{Serialize, Serializer},
};
/// A [`HashSet`](std::collections::HashSet) using [`RandomState`](crate::RandomState) to hash the items.
/// (Requires the `std` feature to be enabled.)
#[derive(Clone)]
pub struct AHashSet<T, S = RandomState>(HashSet<T, S>);
impl<T> From<HashSet<T, RandomState>> for AHashSet<T> {
fn from(item: HashSet<T, RandomState>) -> Self {
AHashSet(item)
}
}
impl<T, const N: usize> From<[T; N]> for AHashSet<T>
where
T: Eq + Hash,
{
/// # Examples
///
/// ```
/// use ahash::AHashSet;
///
/// let set1 = AHashSet::from([1, 2, 3, 4]);
/// let set2: AHashSet<_> = [1, 2, 3, 4].into();
/// assert_eq!(set1, set2);
/// ```
fn from(arr: [T; N]) -> Self {
Self::from_iter(arr)
}
}
impl<T> Into<HashSet<T, RandomState>> for AHashSet<T> {
fn into(self) -> HashSet<T, RandomState> {
self.0
}
}
impl<T> AHashSet<T, RandomState> {
/// This creates a hashset using [RandomState::new].
/// See the documentation in [RandomSource] for notes about key strength.
pub fn new() -> Self {
AHashSet(HashSet::with_hasher(RandomState::new()))
}
/// This creates a hashset with the specified capacity using [RandomState::new].
/// See the documentation in [RandomSource] for notes about key strength.
pub fn with_capacity(capacity: usize) -> Self {
AHashSet(HashSet::with_capacity_and_hasher(capacity, RandomState::new()))
}
}
impl<T, S> AHashSet<T, S>
where
S: BuildHasher,
{
pub fn with_hasher(hash_builder: S) -> Self {
AHashSet(HashSet::with_hasher(hash_builder))
}
pub fn with_capacity_and_hasher(capacity: usize, hash_builder: S) -> Self {
AHashSet(HashSet::with_capacity_and_hasher(capacity, hash_builder))
}
}
impl<T, S> Deref for AHashSet<T, S> {
type Target = HashSet<T, S>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<T, S> DerefMut for AHashSet<T, S> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl<T, S> PartialEq for AHashSet<T, S>
where
T: Eq + Hash,
S: BuildHasher,
{
fn eq(&self, other: &AHashSet<T, S>) -> bool {
self.0.eq(&other.0)
}
}
impl<T, S> Eq for AHashSet<T, S>
where
T: Eq + Hash,
S: BuildHasher,
{
}
impl<T, S> BitOr<&AHashSet<T, S>> for &AHashSet<T, S>
where
T: Eq + Hash + Clone,
S: BuildHasher + Default,
{
type Output = AHashSet<T, S>;
/// Returns the union of `self` and `rhs` as a new `AHashSet<T, S>`.
///
/// # Examples
///
/// ```
/// use ahash::AHashSet;
///
/// let a: AHashSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: AHashSet<_> = vec![3, 4, 5].into_iter().collect();
///
/// let set = &a | &b;
///
/// let mut i = 0;
/// let expected = [1, 2, 3, 4, 5];
/// for x in &set {
/// assert!(expected.contains(x));
/// i += 1;
/// }
/// assert_eq!(i, expected.len());
/// ```
fn bitor(self, rhs: &AHashSet<T, S>) -> AHashSet<T, S> {
AHashSet(self.0.bitor(&rhs.0))
}
}
impl<T, S> BitAnd<&AHashSet<T, S>> for &AHashSet<T, S>
where
T: Eq + Hash + Clone,
S: BuildHasher + Default,
{
type Output = AHashSet<T, S>;
/// Returns the intersection of `self` and `rhs` as a new `AHashSet<T, S>`.
///
/// # Examples
///
/// ```
/// use ahash::AHashSet;
///
/// let a: AHashSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: AHashSet<_> = vec![2, 3, 4].into_iter().collect();
///
/// let set = &a & &b;
///
/// let mut i = 0;
/// let expected = [2, 3];
/// for x in &set {
/// assert!(expected.contains(x));
/// i += 1;
/// }
/// assert_eq!(i, expected.len());
/// ```
fn bitand(self, rhs: &AHashSet<T, S>) -> AHashSet<T, S> {
AHashSet(self.0.bitand(&rhs.0))
}
}
impl<T, S> BitXor<&AHashSet<T, S>> for &AHashSet<T, S>
where
T: Eq + Hash + Clone,
S: BuildHasher + Default,
{
type Output = AHashSet<T, S>;
/// Returns the symmetric difference of `self` and `rhs` as a new `AHashSet<T, S>`.
///
/// # Examples
///
/// ```
/// use ahash::AHashSet;
///
/// let a: AHashSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: AHashSet<_> = vec![3, 4, 5].into_iter().collect();
///
/// let set = &a ^ &b;
///
/// let mut i = 0;
/// let expected = [1, 2, 4, 5];
/// for x in &set {
/// assert!(expected.contains(x));
/// i += 1;
/// }
/// assert_eq!(i, expected.len());
/// ```
fn bitxor(self, rhs: &AHashSet<T, S>) -> AHashSet<T, S> {
AHashSet(self.0.bitxor(&rhs.0))
}
}
impl<T, S> Sub<&AHashSet<T, S>> for &AHashSet<T, S>
where
T: Eq + Hash + Clone,
S: BuildHasher + Default,
{
type Output = AHashSet<T, S>;
/// Returns the difference of `self` and `rhs` as a new `AHashSet<T, S>`.
///
/// # Examples
///
/// ```
/// use ahash::AHashSet;
///
/// let a: AHashSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: AHashSet<_> = vec![3, 4, 5].into_iter().collect();
///
/// let set = &a - &b;
///
/// let mut i = 0;
/// let expected = [1, 2];
/// for x in &set {
/// assert!(expected.contains(x));
/// i += 1;
/// }
/// assert_eq!(i, expected.len());
/// ```
fn sub(self, rhs: &AHashSet<T, S>) -> AHashSet<T, S> {
AHashSet(self.0.sub(&rhs.0))
}
}
impl<T, S> Debug for AHashSet<T, S>
where
T: Debug,
S: BuildHasher,
{
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
self.0.fmt(fmt)
}
}
impl<T> FromIterator<T> for AHashSet<T, RandomState>
where
T: Eq + Hash,
{
/// This creates a hashset from the provided iterator using [RandomState::new].
/// See the documentation in [RandomSource] for notes about key strength.
#[inline]
fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> AHashSet<T> {
let mut inner = HashSet::with_hasher(RandomState::new());
inner.extend(iter);
AHashSet(inner)
}
}
impl<'a, T, S> IntoIterator for &'a AHashSet<T, S> {
type Item = &'a T;
type IntoIter = hash_set::Iter<'a, T>;
fn into_iter(self) -> Self::IntoIter {
(&self.0).iter()
}
}
impl<T, S> IntoIterator for AHashSet<T, S> {
type Item = T;
type IntoIter = hash_set::IntoIter<T>;
fn into_iter(self) -> Self::IntoIter {
self.0.into_iter()
}
}
impl<T, S> Extend<T> for AHashSet<T, S>
where
T: Eq + Hash,
S: BuildHasher,
{
#[inline]
fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
self.0.extend(iter)
}
}
impl<'a, T, S> Extend<&'a T> for AHashSet<T, S>
where
T: 'a + Eq + Hash + Copy,
S: BuildHasher,
{
#[inline]
fn extend<I: IntoIterator<Item = &'a T>>(&mut self, iter: I) {
self.0.extend(iter)
}
}
/// NOTE: For safety this trait impl is only available if either of the flags `runtime-rng` (on by default) or
/// `compile-time-rng` are enabled. This is to prevent weakly keyed maps from being accidentally created. Instead one of
/// the constructors for [RandomState] must be used.
#[cfg(any(feature = "compile-time-rng", feature = "runtime-rng", feature = "no-rng"))]
impl<T> Default for AHashSet<T, RandomState> {
/// Creates an empty `AHashSet<T, S>` with the `Default` value for the hasher.
#[inline]
fn default() -> AHashSet<T, RandomState> {
AHashSet(HashSet::default())
}
}
#[cfg(feature = "serde")]
impl<T> Serialize for AHashSet<T>
where
T: Serialize + Eq + Hash,
{
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
self.deref().serialize(serializer)
}
}
#[cfg(feature = "serde")]
impl<'de, T> Deserialize<'de> for AHashSet<T>
where
T: Deserialize<'de> + Eq + Hash,
{
fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
let hash_set = HashSet::deserialize(deserializer);
hash_set.map(|hash_set| Self(hash_set))
}
fn deserialize_in_place<D: Deserializer<'de>>(deserializer: D, place: &mut Self) -> Result<(), D::Error> {
HashSet::deserialize_in_place(deserializer, place)
}
}
#[cfg(all(test, feature = "serde"))]
mod test {
use super::*;
#[test]
fn test_serde() {
let mut set = AHashSet::new();
set.insert("for".to_string());
set.insert("bar".to_string());
let mut serialization = serde_json::to_string(&set).unwrap();
let mut deserialization: AHashSet<String> = serde_json::from_str(&serialization).unwrap();
assert_eq!(deserialization, set);
set.insert("baz".to_string());
serialization = serde_json::to_string(&set).unwrap();
let mut deserializer = serde_json::Deserializer::from_str(&serialization);
AHashSet::deserialize_in_place(&mut deserializer, &mut deserialization).unwrap();
assert_eq!(deserialization, set);
}
}

pve-rs/vendor/ahash/src/lib.rs vendored Normal file

@@ -0,0 +1,396 @@
//! AHash is a high performance keyed hash function.
//!
//! It quickly provides a high quality hash where the result is not predictable without knowing the Key.
//! AHash works with `HashMap` to hash keys, but without allowing for the possibility that a malicious user can
//! induce a collision.
//!
//! # How aHash works
//!
//! When it is available aHash uses the hardware AES instructions to provide a keyed hash function.
//! When it is not, aHash falls back on a slightly slower alternative algorithm.
//!
//! Because aHash does not have a fixed standard for its output, it is able to improve over time.
//! But this also means that different computers or computers using different versions of ahash may observe different
//! hash values for the same input.
#![cfg_attr(
all(
feature = "std",
any(feature = "compile-time-rng", feature = "runtime-rng", feature = "no-rng")
),
doc = r##"
# Basic Usage
AHash provides an implementation of the [Hasher] trait.
To construct a HashMap using aHash as its hasher do the following:
```
use ahash::{AHasher, RandomState};
use std::collections::HashMap;
let mut map: HashMap<i32, i32, RandomState> = HashMap::default();
map.insert(12, 34);
```
### Randomness
The above requires a source of randomness to generate keys for the hashmap. By default this is obtained from the OS.
It is also possible to have randomness supplied via the `compile-time-rng` flag, or manually.
### If randomness is not available
[AHasher::default()] can be used to hash using fixed keys. This works with
[BuildHasherDefault](std::hash::BuildHasherDefault). For example:
```
use std::hash::BuildHasherDefault;
use std::collections::HashMap;
use ahash::AHasher;
let mut m: HashMap<_, _, BuildHasherDefault<AHasher>> = HashMap::default();
# m.insert(12, 34);
```
It is also possible to instantiate [RandomState] directly:
```
use ahash::HashMap;
use ahash::RandomState;
let mut m = HashMap::with_hasher(RandomState::with_seed(42));
# m.insert(1, 2);
```
Or for uses besides a hashmap:
```
use std::hash::BuildHasher;
use ahash::RandomState;
let hash_builder = RandomState::with_seed(42);
let hash = hash_builder.hash_one("Some Data");
```
There are several constructors for [RandomState] with different ways to supply seeds.
# Convenience wrappers
For convenience, both new-type wrappers and type aliases are provided.
The new type wrappers are called `AHashMap` and `AHashSet`.
```
use ahash::AHashMap;
let mut map: AHashMap<i32, i32> = AHashMap::new();
map.insert(12, 34);
```
This avoids the need to type "RandomState". (For convenience `From`, `Into`, and `Deref` are provided).
# Aliases
For even less typing and better interop with existing libraries (such as rayon) which require a `std::collections::HashMap`,
the type aliases [HashMap], [HashSet] are provided.
```
use ahash::{HashMap, HashMapExt};
let mut map: HashMap<i32, i32> = HashMap::new();
map.insert(12, 34);
```
Note the import of [HashMapExt]. This is needed for the constructor.
"##
)]
#![deny(clippy::correctness, clippy::complexity, clippy::perf)]
#![allow(clippy::pedantic, clippy::cast_lossless, clippy::unreadable_literal)]
#![cfg_attr(all(not(test), not(feature = "std")), no_std)]
#![cfg_attr(feature = "specialize", feature(min_specialization))]
#![cfg_attr(feature = "nightly-arm-aes", feature(stdarch_arm_neon_intrinsics))]
#[macro_use]
mod convert;
mod fallback_hash;
cfg_if::cfg_if! {
if #[cfg(any(
all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "aarch64", target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "arm", target_feature = "aes", not(miri)),
))] {
mod aes_hash;
pub use crate::aes_hash::AHasher;
} else {
pub use crate::fallback_hash::AHasher;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "std")] {
mod hash_map;
mod hash_set;
pub use crate::hash_map::AHashMap;
pub use crate::hash_set::AHashSet;
/// [Hasher]: std::hash::Hasher
/// [HashMap]: std::collections::HashMap
/// Type alias for [HashMap]<K, V, ahash::RandomState>
pub type HashMap<K, V> = std::collections::HashMap<K, V, crate::RandomState>;
/// Type alias for [HashSet]<K, ahash::RandomState>
pub type HashSet<K> = std::collections::HashSet<K, crate::RandomState>;
}
}
#[cfg(test)]
mod hash_quality_test;
mod operations;
pub mod random_state;
mod specialize;
pub use crate::random_state::RandomState;
use core::hash::BuildHasher;
use core::hash::Hash;
use core::hash::Hasher;
#[cfg(feature = "std")]
/// A convenience trait that can be used together with the type aliases defined to
/// get access to the `new()` and `with_capacity()` methods for the HashMap type alias.
pub trait HashMapExt {
/// Constructs a new HashMap
fn new() -> Self;
/// Constructs a new HashMap with a given initial capacity
fn with_capacity(capacity: usize) -> Self;
}
#[cfg(feature = "std")]
/// A convenience trait that can be used together with the type aliases defined to
/// get access to the `new()` and `with_capacity()` methods for the HashSet type aliases.
pub trait HashSetExt {
/// Constructs a new HashSet
fn new() -> Self;
/// Constructs a new HashSet with a given initial capacity
fn with_capacity(capacity: usize) -> Self;
}
#[cfg(feature = "std")]
impl<K, V, S> HashMapExt for std::collections::HashMap<K, V, S>
where
S: BuildHasher + Default,
{
fn new() -> Self {
std::collections::HashMap::with_hasher(S::default())
}
fn with_capacity(capacity: usize) -> Self {
std::collections::HashMap::with_capacity_and_hasher(capacity, S::default())
}
}
#[cfg(feature = "std")]
impl<K, S> HashSetExt for std::collections::HashSet<K, S>
where
S: BuildHasher + Default,
{
fn new() -> Self {
std::collections::HashSet::with_hasher(S::default())
}
fn with_capacity(capacity: usize) -> Self {
std::collections::HashSet::with_capacity_and_hasher(capacity, S::default())
}
}
/// Provides a default [Hasher] with fixed keys.
/// This is typically used in conjunction with [BuildHasherDefault] to create
/// [AHasher]s in order to hash the keys of the map.
///
/// Generally it is preferable to use [RandomState] instead, so that different
/// hashmaps will have different keys. However if fixed keys are desirable this
/// may be used instead.
///
/// # Example
/// ```
/// use std::hash::BuildHasherDefault;
/// use ahash::{AHasher, RandomState};
/// use std::collections::HashMap;
///
/// let mut map: HashMap<i32, i32, BuildHasherDefault<AHasher>> = HashMap::default();
/// map.insert(12, 34);
/// ```
///
/// [BuildHasherDefault]: std::hash::BuildHasherDefault
/// [Hasher]: std::hash::Hasher
/// [HashMap]: std::collections::HashMap
impl Default for AHasher {
/// Constructs a new [AHasher] with fixed keys.
/// If `std` is enabled these will be generated upon first invocation.
/// Otherwise if the `compile-time-rng` feature is enabled these will be generated at compile time.
/// If neither of these features are available, hardcoded constants will be used.
///
/// Because the values are fixed, different hashers will all hash elements the same way.
/// This makes hash values predictable, which is a problem if DoS attacks are a concern. If this behaviour is
/// not required, it may be preferable to use [RandomState] instead.
///
/// # Examples
///
/// ```
/// use ahash::AHasher;
/// use std::hash::Hasher;
///
/// let mut hasher_1 = AHasher::default();
/// let mut hasher_2 = AHasher::default();
///
/// hasher_1.write_u32(1234);
/// hasher_2.write_u32(1234);
///
/// assert_eq!(hasher_1.finish(), hasher_2.finish());
/// ```
#[inline]
fn default() -> AHasher {
RandomState::with_fixed_keys().build_hasher()
}
}
/// Used for specialization. (Sealed)
pub(crate) trait BuildHasherExt: BuildHasher {
#[doc(hidden)]
fn hash_as_u64<T: Hash + ?Sized>(&self, value: &T) -> u64;
#[doc(hidden)]
fn hash_as_fixed_length<T: Hash + ?Sized>(&self, value: &T) -> u64;
#[doc(hidden)]
fn hash_as_str<T: Hash + ?Sized>(&self, value: &T) -> u64;
}
impl<B: BuildHasher> BuildHasherExt for B {
#[inline]
#[cfg(feature = "specialize")]
default fn hash_as_u64<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = self.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
#[cfg(not(feature = "specialize"))]
fn hash_as_u64<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = self.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
#[cfg(feature = "specialize")]
default fn hash_as_fixed_length<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = self.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
#[cfg(not(feature = "specialize"))]
fn hash_as_fixed_length<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = self.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
#[cfg(feature = "specialize")]
default fn hash_as_str<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = self.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
#[cfg(not(feature = "specialize"))]
fn hash_as_str<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = self.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
}
// #[inline(never)]
// #[doc(hidden)]
// pub fn hash_test(input: &[u8]) -> u64 {
// let a = RandomState::with_seeds(11, 22, 33, 44);
// <[u8]>::get_hash(input, &a)
// }
#[cfg(feature = "std")]
#[cfg(test)]
mod test {
use crate::convert::Convert;
use crate::specialize::CallHasher;
use crate::*;
use std::collections::HashMap;
#[test]
fn test_ahash_alias_map_construction() {
let mut map = super::HashMap::with_capacity(1234);
map.insert(1, "test");
}
#[test]
fn test_ahash_alias_set_construction() {
let mut set = super::HashSet::with_capacity(1234);
set.insert(1);
}
#[test]
fn test_default_builder() {
use core::hash::BuildHasherDefault;
let mut map = HashMap::<u32, u64, BuildHasherDefault<AHasher>>::default();
map.insert(1, 3);
}
#[test]
fn test_builder() {
let mut map = HashMap::<u32, u64, RandomState>::default();
map.insert(1, 3);
}
#[test]
fn test_conversion() {
let input: &[u8] = b"dddddddd";
let bytes: u64 = as_array!(input, 8).convert();
assert_eq!(bytes, 0x6464646464646464);
}
#[test]
fn test_non_zero() {
let mut hasher1 = AHasher::new_with_keys(0, 0);
let mut hasher2 = AHasher::new_with_keys(0, 0);
"foo".hash(&mut hasher1);
"bar".hash(&mut hasher2);
assert_ne!(hasher1.finish(), 0);
assert_ne!(hasher2.finish(), 0);
assert_ne!(hasher1.finish(), hasher2.finish());
let mut hasher1 = AHasher::new_with_keys(0, 0);
let mut hasher2 = AHasher::new_with_keys(0, 0);
3_u64.hash(&mut hasher1);
4_u64.hash(&mut hasher2);
assert_ne!(hasher1.finish(), 0);
assert_ne!(hasher2.finish(), 0);
assert_ne!(hasher1.finish(), hasher2.finish());
}
#[test]
fn test_non_zero_specialized() {
let hasher_build = RandomState::with_seeds(0, 0, 0, 0);
let h1 = str::get_hash("foo", &hasher_build);
let h2 = str::get_hash("bar", &hasher_build);
assert_ne!(h1, 0);
assert_ne!(h2, 0);
assert_ne!(h1, h2);
let h1 = u64::get_hash(&3_u64, &hasher_build);
let h2 = u64::get_hash(&4_u64, &hasher_build);
assert_ne!(h1, 0);
assert_ne!(h2, 0);
assert_ne!(h1, h2);
}
#[test]
fn test_ahasher_construction() {
let _ = AHasher::new_with_keys(1234, 5678);
}
}

pve-rs/vendor/ahash/src/operations.rs vendored Normal file

@@ -0,0 +1,372 @@
use crate::convert::*;
#[allow(unused)]
use zerocopy::transmute;
///This constant comes from Knuth's PRNG (empirically it works better than those from splitmix32).
pub(crate) const MULTIPLE: u64 = 6364136223846793005;
/// This is a constant with a lot of special properties found by automated search.
/// See the unit tests below. (Below are alternative values)
#[cfg(all(target_feature = "ssse3", not(miri)))]
const SHUFFLE_MASK: u128 = 0x020a0700_0c01030e_050f0d08_06090b04_u128;
//const SHUFFLE_MASK: u128 = 0x000d0702_0a040301_05080f0c_0e0b0609_u128;
//const SHUFFLE_MASK: u128 = 0x040A0700_030E0106_0D050F08_020B0C09_u128;
#[inline(always)]
#[cfg(feature = "folded_multiply")]
pub(crate) const fn folded_multiply(s: u64, by: u64) -> u64 {
let result = (s as u128).wrapping_mul(by as u128);
((result & 0xffff_ffff_ffff_ffff) as u64) ^ ((result >> 64) as u64)
}
#[inline(always)]
#[cfg(not(feature = "folded_multiply"))]
pub(crate) const fn folded_multiply(s: u64, by: u64) -> u64 {
let b1 = s.wrapping_mul(by.swap_bytes());
let b2 = s.swap_bytes().wrapping_mul(!by);
b1 ^ b2.swap_bytes()
}
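// Illustrative sketch (editor's addition, not part of upstream ahash): with
// the `folded_multiply` feature the full 128-bit product is XOR-folded into
// 64 bits, so the high product bits are mixed back in instead of truncated.
#[cfg(all(test, feature = "folded_multiply"))]
#[test]
fn folded_multiply_demo() {
let wide = (u64::MAX as u128).wrapping_mul(MULTIPLE as u128);
assert_eq!(folded_multiply(u64::MAX, MULTIPLE), (wide as u64) ^ ((wide >> 64) as u64));
}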
/// Given a small slice (8 bytes or fewer) returns the same data stored in two u64s.
/// (order and non-duplication of bytes are NOT guaranteed)
#[inline(always)]
pub(crate) fn read_small(data: &[u8]) -> [u64; 2] {
debug_assert!(data.len() <= 8);
if data.len() >= 2 {
if data.len() >= 4 {
//len 4-8
[data.read_u32().0 as u64, data.read_last_u32() as u64]
} else {
//len 2-3
[data.read_u16().0 as u64, data[data.len() - 1] as u64]
}
} else {
if data.len() > 0 {
[data[0] as u64, data[0] as u64]
} else {
[0, 0]
}
}
}
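// Illustrative sketch (editor's addition, not part of upstream ahash): for a
// 4..=8 byte slice the two halves are the first and last four bytes, read as
// native-endian u32s, so they overlap when the length is under 8.
#[cfg(all(test, target_endian = "little"))]
#[test]
fn read_small_demo() {
// b"abcde" = [0x61, 0x62, 0x63, 0x64, 0x65]; first four bytes, then last four.
assert_eq!([0x6463_6261, 0x6564_6362], read_small(b"abcde"));
}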
#[inline(always)]
pub(crate) fn shuffle(a: u128) -> u128 {
#[cfg(all(target_feature = "ssse3", not(miri)))]
{
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;
unsafe { transmute!(_mm_shuffle_epi8(transmute!(a), transmute!(SHUFFLE_MASK))) }
}
#[cfg(not(all(target_feature = "ssse3", not(miri))))]
{
a.swap_bytes()
}
}
#[allow(unused)] //not used by fallback
#[inline(always)]
pub(crate) fn add_and_shuffle(a: u128, b: u128) -> u128 {
let sum = add_by_64s(a.convert(), b.convert());
shuffle(sum.convert())
}
#[allow(unused)] //not used by fallback
#[inline(always)]
pub(crate) fn shuffle_and_add(base: u128, to_add: u128) -> u128 {
let shuffled: [u64; 2] = shuffle(base).convert();
add_by_64s(shuffled, to_add.convert()).convert()
}
#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "sse2", not(miri)))]
#[inline(always)]
pub(crate) fn add_by_64s(a: [u64; 2], b: [u64; 2]) -> [u64; 2] {
unsafe {
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;
transmute!(_mm_add_epi64(transmute!(a), transmute!(b)))
}
}
#[cfg(not(all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "sse2", not(miri))))]
#[inline(always)]
pub(crate) fn add_by_64s(a: [u64; 2], b: [u64; 2]) -> [u64; 2] {
[a[0].wrapping_add(b[0]), a[1].wrapping_add(b[1])]
}
#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "aes", not(miri)))]
#[allow(unused)]
#[inline(always)]
pub(crate) fn aesenc(value: u128, xor: u128) -> u128 {
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;
unsafe {
let value = transmute!(value);
transmute!(_mm_aesenc_si128(value, transmute!(xor)))
}
}
#[cfg(any(
all(feature = "nightly-arm-aes", target_arch = "aarch64", target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "arm", target_feature = "aes", not(miri)),
))]
#[allow(unused)]
#[inline(always)]
pub(crate) fn aesenc(value: u128, xor: u128) -> u128 {
#[cfg(target_arch = "aarch64")]
use core::arch::aarch64::*;
#[cfg(target_arch = "arm")]
use core::arch::arm::*;
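// ARM's AES instructions XOR the round key before SubBytes/ShiftRows, while
// x86's AESENC XORs it afterwards; encrypting against a zero key and XORing
// `xor` at the end reproduces the x86 `aesenc` semantics.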
let res = unsafe { vaesmcq_u8(vaeseq_u8(transmute!(value), transmute!(0u128))) };
let value: u128 = transmute!(res);
xor ^ value
}
#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "aes", not(miri)))]
#[allow(unused)]
#[inline(always)]
pub(crate) fn aesdec(value: u128, xor: u128) -> u128 {
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;
unsafe {
let value = transmute!(value);
transmute!(_mm_aesdec_si128(value, transmute!(xor)))
}
}
#[cfg(any(
all(feature = "nightly-arm-aes", target_arch = "aarch64", target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "arm", target_feature = "aes", not(miri)),
))]
#[allow(unused)]
#[inline(always)]
pub(crate) fn aesdec(value: u128, xor: u128) -> u128 {
#[cfg(target_arch = "aarch64")]
use core::arch::aarch64::*;
#[cfg(target_arch = "arm")]
use core::arch::arm::*;
let res = unsafe { vaesimcq_u8(vaesdq_u8(transmute!(value), transmute!(0u128))) };
let value: u128 = transmute!(res);
xor ^ value
}
#[allow(unused)]
#[inline(always)]
pub(crate) fn add_in_length(enc: &mut u128, len: u64) {
#[cfg(all(target_arch = "x86_64", target_feature = "sse2", not(miri)))]
{
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;
unsafe {
let enc = enc as *mut u128;
let len = _mm_cvtsi64_si128(len as i64);
let data = _mm_loadu_si128(enc.cast());
let sum = _mm_add_epi64(data, len);
_mm_storeu_si128(enc.cast(), sum);
}
}
#[cfg(not(all(target_arch = "x86_64", target_feature = "sse2", not(miri))))]
{
let mut t: [u64; 2] = enc.convert();
t[0] = t[0].wrapping_add(len);
*enc = t.convert();
}
}
#[cfg(test)]
mod test {
use super::*;
// This is code to search for the shuffle constant
//
//thread_local! { static MASK: Cell<u128> = Cell::new(0); }
//
// fn shuffle(a: u128) -> u128 {
// use std::intrinsics::transmute;
// #[cfg(target_arch = "x86")]
// use core::arch::x86::*;
// #[cfg(target_arch = "x86_64")]
// use core::arch::x86_64::*;
// MASK.with(|mask| {
// unsafe { transmute!(_mm_shuffle_epi8(transmute!(a), transmute!(mask.get()))) }
// })
// }
//
// #[test]
// fn find_shuffle() {
// use rand::prelude::*;
// use SliceRandom;
// use std::panic;
// use std::io::Write;
//
// let mut value: [u8; 16] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ,13, 14, 15];
// let mut rand = thread_rng();
// let mut successful_list = HashMap::new();
// for _attempt in 0..10000000 {
// rand.shuffle(&mut value);
// let test_val = value.convert();
// MASK.with(|mask| {
// mask.set(test_val);
// });
// if let Ok(successful) = panic::catch_unwind(|| {
// test_shuffle_does_not_collide_with_aes();
// test_shuffle_moves_high_bits();
// test_shuffle_moves_every_value();
// //test_shuffle_does_not_loop();
// value
// }) {
// let successful: u128 = successful.convert();
// successful_list.insert(successful, iters_before_loop());
// }
// }
// let write_file = File::create("/tmp/output").unwrap();
// let mut writer = BufWriter::new(&write_file);
//
// for success in successful_list {
// writeln!(writer, "Found successful: {:x?} - {:?}", success.0, success.1);
// }
// }
//
// fn iters_before_loop() -> u32 {
// let numbered = 0x00112233_44556677_8899AABB_CCDDEEFF;
// let mut shuffled = shuffle(numbered);
// let mut count = 0;
// loop {
// // println!("{:>16x}", shuffled);
// if numbered == shuffled {
// break;
// }
// count += 1;
// shuffled = shuffle(shuffled);
// }
// count
// }
#[cfg(all(
any(target_arch = "x86", target_arch = "x86_64"),
target_feature = "ssse3",
target_feature = "aes",
not(miri)
))]
#[test]
fn test_shuffle_does_not_collide_with_aes() {
let mut value: [u8; 16] = [0; 16];
let zero_mask_enc = aesenc(0, 0);
let zero_mask_dec = aesdec(0, 0);
for index in 0..16 {
value[index] = 1;
let excluded_positions_enc: [u8; 16] = aesenc(value.convert(), zero_mask_enc).convert();
let excluded_positions_dec: [u8; 16] = aesdec(value.convert(), zero_mask_dec).convert();
let actual_location: [u8; 16] = shuffle(value.convert()).convert();
for pos in 0..16 {
if actual_location[pos] != 0 {
assert_eq!(
0, excluded_positions_enc[pos],
"Forward Overlap between {:?} and {:?} at {}",
excluded_positions_enc, actual_location, index
);
assert_eq!(
0, excluded_positions_dec[pos],
"Reverse Overlap between {:?} and {:?} at {}",
excluded_positions_dec, actual_location, index
);
}
}
value[index] = 0;
}
}
#[test]
fn test_shuffle_contains_each_value() {
let value: [u8; 16] = 0x00010203_04050607_08090A0B_0C0D0E0F_u128.convert();
let shuffled: [u8; 16] = shuffle(value.convert()).convert();
for index in 0..16_u8 {
assert!(shuffled.contains(&index), "Value is missing {}", index);
}
}
#[test]
fn test_shuffle_moves_every_value() {
let mut value: [u8; 16] = [0; 16];
for index in 0..16 {
value[index] = 1;
let shuffled: [u8; 16] = shuffle(value.convert()).convert();
assert_eq!(0, shuffled[index], "Value is not moved {}", index);
value[index] = 0;
}
}
#[test]
fn test_shuffle_moves_high_bits() {
assert!(
shuffle(1) > (1_u128 << 80),
"Low bits must be moved to other half {:?} -> {:?}",
0,
shuffle(1)
);
assert!(
shuffle(1_u128 << 58) >= (1_u128 << 64),
"High bits must be moved to other half {:?} -> {:?}",
7,
shuffle(1_u128 << 58)
);
assert!(
shuffle(1_u128 << 58) < (1_u128 << 112),
"High bits must not remain high {:?} -> {:?}",
7,
shuffle(1_u128 << 58)
);
assert!(
shuffle(1_u128 << 64) < (1_u128 << 64),
"Low bits must be moved to other half {:?} -> {:?}",
8,
shuffle(1_u128 << 64)
);
assert!(
shuffle(1_u128 << 64) >= (1_u128 << 16),
"Low bits must not remain low {:?} -> {:?}",
8,
shuffle(1_u128 << 64)
);
assert!(
shuffle(1_u128 << 120) < (1_u128 << 50),
"High bits must be moved to low half {:?} -> {:?}",
15,
shuffle(1_u128 << 120)
);
}
#[cfg(all(
any(target_arch = "x86", target_arch = "x86_64"),
target_feature = "ssse3",
not(miri)
))]
#[test]
fn test_shuffle_does_not_loop() {
let numbered = 0x00112233_44556677_8899AABB_CCDDEEFF;
let mut shuffled = shuffle(numbered);
for count in 0..100 {
// println!("{:>16x}", shuffled);
assert_ne!(numbered, shuffled, "Equal after {} vs {:x}", count, shuffled);
shuffled = shuffle(shuffled);
}
}
#[test]
fn test_add_length() {
let mut enc = (u64::MAX as u128) << 64 | 50;
add_in_length(&mut enc, u64::MAX);
assert_eq!(enc >> 64, u64::MAX as u128);
assert_eq!(enc as u64, 49);
}
}

pve-rs/vendor/ahash/src/random_state.rs vendored Normal file

@@ -0,0 +1,528 @@
use core::hash::Hash;
cfg_if::cfg_if! {
if #[cfg(any(
all(any(target_arch = "x86", target_arch = "x86_64"), target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "aarch64", target_feature = "aes", not(miri)),
all(feature = "nightly-arm-aes", target_arch = "arm", target_feature = "aes", not(miri)),
))] {
use crate::aes_hash::*;
} else {
use crate::fallback_hash::*;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "specialize")]{
use crate::BuildHasherExt;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "std")] {
extern crate std as alloc;
} else {
extern crate alloc;
}
}
#[cfg(feature = "atomic-polyfill")]
use atomic_polyfill as atomic;
#[cfg(not(feature = "atomic-polyfill"))]
use core::sync::atomic;
use alloc::boxed::Box;
use atomic::{AtomicUsize, Ordering};
use core::any::{Any, TypeId};
use core::fmt;
use core::hash::BuildHasher;
use core::hash::Hasher;
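// Nothing-up-my-sleeve constants: PI and PI2 are consecutive 64-bit words of
// the fractional part of pi (the same values as Blowfish's P-array), used as
// fallback keys when no runtime randomness is available.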
pub(crate) const PI: [u64; 4] = [
0x243f_6a88_85a3_08d3,
0x1319_8a2e_0370_7344,
0xa409_3822_299f_31d0,
0x082e_fa98_ec4e_6c89,
];
pub(crate) const PI2: [u64; 4] = [
0x4528_21e6_38d0_1377,
0xbe54_66cf_34e9_0c6c,
0xc0ac_29b7_c97c_50dd,
0x3f84_d5b5_b547_0917,
];
cfg_if::cfg_if! {
if #[cfg(all(feature = "compile-time-rng", any(test, fuzzing)))] {
#[inline]
fn get_fixed_seeds() -> &'static [[u64; 4]; 2] {
use const_random::const_random;
const RAND: [[u64; 4]; 2] = [
[
const_random!(u64),
const_random!(u64),
const_random!(u64),
const_random!(u64),
], [
const_random!(u64),
const_random!(u64),
const_random!(u64),
const_random!(u64),
]
];
&RAND
}
} else if #[cfg(all(feature = "runtime-rng", not(fuzzing)))] {
#[inline]
fn get_fixed_seeds() -> &'static [[u64; 4]; 2] {
use crate::convert::Convert;
static SEEDS: OnceBox<[[u64; 4]; 2]> = OnceBox::new();
SEEDS.get_or_init(|| {
let mut result: [u8; 64] = [0; 64];
getrandom::getrandom(&mut result).expect("getrandom::getrandom() failed.");
Box::new(result.convert())
})
}
} else if #[cfg(feature = "compile-time-rng")] {
#[inline]
fn get_fixed_seeds() -> &'static [[u64; 4]; 2] {
use const_random::const_random;
const RAND: [[u64; 4]; 2] = [
[
const_random!(u64),
const_random!(u64),
const_random!(u64),
const_random!(u64),
], [
const_random!(u64),
const_random!(u64),
const_random!(u64),
const_random!(u64),
]
];
&RAND
}
} else {
#[inline]
fn get_fixed_seeds() -> &'static [[u64; 4]; 2] {
&[PI, PI2]
}
}
}
cfg_if::cfg_if! {
if #[cfg(not(all(target_arch = "arm", target_os = "none")))] {
use once_cell::race::OnceBox;
static RAND_SOURCE: OnceBox<Box<dyn RandomSource + Send + Sync>> = OnceBox::new();
}
}
/// A supplier of Randomness used for different hashers.
/// See [set_random_source].
///
/// If [set_random_source] is not invoked, aHash will default to the best available source of randomness.
/// In order this is:
/// 1. OS provided random number generator (available if the `runtime-rng` flag is enabled which it is by default) - This should be very strong.
/// 2. Strong compile time random numbers used to permute a static "counter". (available if `compile-time-rng` is enabled.
/// __Enabling this is recommended if `runtime-rng` is not possible__)
/// 3. A static counter that adds the memory address of each [RandomState] created permuted with fixed constants.
/// (Similar to above but with fixed keys) - This is the weakest option. The strength of this heavily depends on whether or not ASLR is enabled.
/// (Rust enables ASLR by default)
pub trait RandomSource {
fn gen_hasher_seed(&self) -> usize;
}
struct DefaultRandomSource {
counter: AtomicUsize,
}
impl DefaultRandomSource {
fn new() -> DefaultRandomSource {
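// Seed the counter with the load address of the PI constant; with ASLR
// enabled this address, and therefore the seed, differs between processes.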
DefaultRandomSource {
counter: AtomicUsize::new(&PI as *const _ as usize),
}
}
#[cfg(all(target_arch = "arm", target_os = "none"))]
const fn default() -> DefaultRandomSource {
DefaultRandomSource {
counter: AtomicUsize::new(PI[3] as usize),
}
}
}
impl RandomSource for DefaultRandomSource {
cfg_if::cfg_if! {
if #[cfg(all(target_arch = "arm", target_os = "none"))] {
fn gen_hasher_seed(&self) -> usize {
let stack = self as *const _ as usize;
let previous = self.counter.load(Ordering::Relaxed);
let new = previous.wrapping_add(stack);
self.counter.store(new, Ordering::Relaxed);
new
}
} else {
fn gen_hasher_seed(&self) -> usize {
let stack = self as *const _ as usize;
self.counter.fetch_add(stack, Ordering::Relaxed)
}
}
}
}
cfg_if::cfg_if! {
if #[cfg(all(target_arch = "arm", target_os = "none"))] {
#[inline]
fn get_src() -> &'static dyn RandomSource {
static RAND_SOURCE: DefaultRandomSource = DefaultRandomSource::default();
&RAND_SOURCE
}
} else {
/// Provides an optional way to manually supply a source of randomness for Hasher keys.
///
/// The provided [RandomSource] will be used as a source of randomness by [RandomState] to generate new states.
/// If this method is not invoked the standard source of randomness is used as described in the Readme.
///
/// The source of randomness can only be set once, and must be set before the first RandomState is created.
/// If the source has already been specified `Err` is returned with a `bool` indicating if the set failed because
/// the method was previously invoked (true) or because the default source is already being used (false).
#[cfg(not(all(target_arch = "arm", target_os = "none")))]
pub fn set_random_source(source: impl RandomSource + Send + Sync + 'static) -> Result<(), bool> {
RAND_SOURCE.set(Box::new(Box::new(source))).map_err(|s| s.as_ref().type_id() != TypeId::of::<&DefaultRandomSource>())
}
#[inline]
fn get_src() -> &'static dyn RandomSource {
RAND_SOURCE.get_or_init(|| Box::new(Box::new(DefaultRandomSource::new()))).as_ref()
}
}
}
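// Illustrative sketch (editor's addition, not part of upstream ahash): a
// custom entropy source can be installed with `set_random_source` before the
// first RandomState is created. The counting source below is deliberately
// weak and purely for demonstration.
#[cfg(all(test, not(all(target_arch = "arm", target_os = "none"))))]
mod custom_source_demo {
use super::*;
struct CountingSource(AtomicUsize);
impl RandomSource for CountingSource {
fn gen_hasher_seed(&self) -> usize {
self.0.fetch_add(1, Ordering::Relaxed)
}
}
#[test]
fn can_attempt_to_install_a_source() {
// Ok(()) only if no RandomState has been built yet in this process;
// otherwise Err reports that a source is already in place.
let _ = set_random_source(CountingSource(AtomicUsize::new(0)));
}
}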
/// Provides a [Hasher] factory. This is typically used (e.g. by [HashMap]) to create
/// [AHasher]s in order to hash the keys of the map. See `build_hasher` below.
///
/// [build_hasher]: ahash::
/// [Hasher]: std::hash::Hasher
/// [BuildHasher]: std::hash::BuildHasher
/// [HashMap]: std::collections::HashMap
///
/// There are multiple constructors; each is documented in more detail below:
///
/// | Constructor | Dynamically random? | Seed |
/// |---------------|---------------------|------|
/// |`new` | Each instance unique|_[RandomSource]_|
/// |`generate_with`| Each instance unique|`u64` x 4 + [RandomSource]|
/// |`with_seed` | Fixed per process |`u64` + static random number|
/// |`with_seeds` | Fixed |`u64` x 4|
///
#[derive(Clone)]
pub struct RandomState {
pub(crate) k0: u64,
pub(crate) k1: u64,
pub(crate) k2: u64,
pub(crate) k3: u64,
}
impl fmt::Debug for RandomState {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.pad("RandomState { .. }")
}
}
impl RandomState {
/// Create a new `RandomState` `BuildHasher` using random keys.
///
/// Each instance will have a unique set of keys derived from [RandomSource].
///
#[inline]
pub fn new() -> RandomState {
let src = get_src();
let fixed = get_fixed_seeds();
Self::from_keys(&fixed[0], &fixed[1], src.gen_hasher_seed())
}
/// Create a new `RandomState` `BuildHasher` based on the provided seeds, but in such a way
/// that each time it is called the resulting state will be different and of high quality.
/// This allows fixed constant or poor quality seeds to be provided without the problem of different
/// `BuildHasher`s being identical or weak.
///
/// This is done via permuting the provided values with the value of a static counter and memory address.
/// (This makes this method somewhat more expensive than `with_seeds` below which does not do this).
///
/// The provided values (k0-k3) do not need to be of high quality but they should not all be the same value.
#[inline]
pub fn generate_with(k0: u64, k1: u64, k2: u64, k3: u64) -> RandomState {
let src = get_src();
let fixed = get_fixed_seeds();
RandomState::from_keys(&fixed[0], &[k0, k1, k2, k3], src.gen_hasher_seed())
}
fn from_keys(a: &[u64; 4], b: &[u64; 4], c: usize) -> RandomState {
let &[k0, k1, k2, k3] = a;
let mut hasher = AHasher::from_random_state(&RandomState { k0, k1, k2, k3 });
hasher.write_usize(c);
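// Derive each output key by hashing a distinct pair of the caller's words
// with a hasher keyed by the fixed seeds plus the per-instance counter, so
// identical inputs still yield different, well-mixed states.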
let mix = |l: u64, r: u64| {
let mut h = hasher.clone();
h.write_u64(l);
h.write_u64(r);
h.finish()
};
RandomState {
k0: mix(b[0], b[2]),
k1: mix(b[1], b[3]),
k2: mix(b[2], b[1]),
k3: mix(b[3], b[0]),
}
}
/// Internal. Used by Default.
#[inline]
pub(crate) fn with_fixed_keys() -> RandomState {
let [k0, k1, k2, k3] = get_fixed_seeds()[0];
RandomState { k0, k1, k2, k3 }
}
/// Build a `RandomState` from a single key. The provided key does not need to be of high quality,
/// but all `RandomState`s created from the same key will produce identical hashers.
/// (In contrast to `generate_with` above)
///
/// This allows for explicitly setting the seed to be used.
///
/// Note: This method does not require the provided seed to be strong.
#[inline]
pub fn with_seed(key: usize) -> RandomState {
let fixed = get_fixed_seeds();
RandomState::from_keys(&fixed[0], &fixed[1], key)
}
/// Allows for explicitly setting the seeds to be used.
/// All `RandomState`s created with the same set of keys will produce identical hashers.
/// (In contrast to `generate_with` above)
///
/// Note: If DOS resistance is desired one of these should be a decent quality random number.
/// If 4 high quality random numbers are not cheaply available this method is robust against 0s being passed for
/// one or more of the parameters or the same value being passed for more than one parameter.
/// It is recommended to pass numbers in order from highest to lowest quality (if there is any difference).
#[inline]
pub const fn with_seeds(k0: u64, k1: u64, k2: u64, k3: u64) -> RandomState {
RandomState {
k0: k0 ^ PI2[0],
k1: k1 ^ PI2[1],
k2: k2 ^ PI2[2],
k3: k3 ^ PI2[3],
}
}
/// Calculates the hash of a single value. This provides a more convenient (and faster) way to obtain a hash:
/// For example:
#[cfg_attr(
feature = "std",
doc = r##" # Examples
```
use std::hash::BuildHasher;
use ahash::RandomState;
let hash_builder = RandomState::new();
let hash = hash_builder.hash_one("Some Data");
```
"##
)]
/// This is similar to:
#[cfg_attr(
feature = "std",
doc = r##" # Examples
```
use std::hash::{BuildHasher, Hash, Hasher};
use ahash::RandomState;
let hash_builder = RandomState::new();
let mut hasher = hash_builder.build_hasher();
"Some Data".hash(&mut hasher);
let hash = hasher.finish();
```
"##
)]
/// (Note that these two ways to get a hash may not produce the same value for the same data)
///
/// This is intended as a convenience for code which *consumes* hashes, such
/// as the implementation of a hash table or in unit tests that check
/// whether a custom [`Hash`] implementation behaves as expected.
///
/// This must not be used in any code which *creates* hashes, such as in an
/// implementation of [`Hash`]. The way to create a combined hash of
/// multiple values is to call [`Hash::hash`] multiple times using the same
/// [`Hasher`], not to call this method repeatedly and combine the results.
#[inline]
pub fn hash_one<T: Hash>(&self, x: T) -> u64
where
Self: Sized,
{
use crate::specialize::CallHasher;
T::get_hash(&x, self)
}
}
/// Creates an instance of RandomState using keys obtained from the random number generator.
/// Each instance created in this way will have a unique set of keys. (But the resulting instance
/// can be used to create many hashers, each of which will have the same keys.)
///
/// This is the same as [RandomState::new()]
///
/// NOTE: For safety this trait impl is only available if either of the flags `runtime-rng` (on by default) or
/// `compile-time-rng` are enabled. This is to prevent weakly keyed maps from being accidentally created. Instead one of
/// the constructors for [RandomState] must be used.
#[cfg(any(feature = "compile-time-rng", feature = "runtime-rng", feature = "no-rng"))]
impl Default for RandomState {
#[inline]
fn default() -> Self {
Self::new()
}
}
impl BuildHasher for RandomState {
type Hasher = AHasher;
/// Constructs a new [AHasher] with keys based on this [RandomState] object.
/// This means that two different [RandomState]s will generate
/// [AHasher]s that will return different hashcodes, but [Hasher]s created from the same [BuildHasher]
/// will generate the same hashes for the same input data.
///
#[cfg_attr(
feature = "std",
doc = r##" # Examples
```
use ahash::{AHasher, RandomState};
use std::hash::{Hasher, BuildHasher};
let build_hasher = RandomState::new();
let mut hasher_1 = build_hasher.build_hasher();
let mut hasher_2 = build_hasher.build_hasher();
hasher_1.write_u32(1234);
hasher_2.write_u32(1234);
assert_eq!(hasher_1.finish(), hasher_2.finish());
let other_build_hasher = RandomState::new();
let mut different_hasher = other_build_hasher.build_hasher();
different_hasher.write_u32(1234);
assert_ne!(different_hasher.finish(), hasher_1.finish());
```
"##
)]
/// [Hasher]: std::hash::Hasher
/// [BuildHasher]: std::hash::BuildHasher
/// [HashMap]: std::collections::HashMap
#[inline]
fn build_hasher(&self) -> AHasher {
AHasher::from_random_state(self)
}
/// Calculates the hash of a single value. This provides a more convenient (and faster) way to obtain a hash:
/// For example:
#[cfg_attr(
feature = "std",
doc = r##" # Examples
```
use std::hash::BuildHasher;
use ahash::RandomState;
let hash_builder = RandomState::new();
let hash = hash_builder.hash_one("Some Data");
```
"##
)]
/// This is similar to:
#[cfg_attr(
feature = "std",
doc = r##" # Examples
```
use std::hash::{BuildHasher, Hash, Hasher};
use ahash::RandomState;
let hash_builder = RandomState::new();
let mut hasher = hash_builder.build_hasher();
"Some Data".hash(&mut hasher);
let hash = hasher.finish();
```
"##
)]
/// (Note that these two ways to get a hash may not produce the same value for the same data)
///
/// This is intended as a convenience for code which *consumes* hashes, such
/// as the implementation of a hash table or in unit tests that check
/// whether a custom [`Hash`] implementation behaves as expected.
///
/// This must not be used in any code which *creates* hashes, such as in an
/// implementation of [`Hash`]. The way to create a combined hash of
/// multiple values is to call [`Hash::hash`] multiple times using the same
/// [`Hasher`], not to call this method repeatedly and combine the results.
#[cfg(feature = "specialize")]
#[inline]
fn hash_one<T: Hash>(&self, x: T) -> u64 {
RandomState::hash_one(self, x)
}
}
#[cfg(feature = "specialize")]
impl BuildHasherExt for RandomState {
#[inline]
fn hash_as_u64<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = AHasherU64 {
buffer: self.k1,
pad: self.k0,
};
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
fn hash_as_fixed_length<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = AHasherFixed(self.build_hasher());
value.hash(&mut hasher);
hasher.finish()
}
#[inline]
fn hash_as_str<T: Hash + ?Sized>(&self, value: &T) -> u64 {
let mut hasher = AHasherStr(self.build_hasher());
value.hash(&mut hasher);
hasher.finish()
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_unique() {
let a = RandomState::generate_with(1, 2, 3, 4);
let b = RandomState::generate_with(1, 2, 3, 4);
assert_ne!(a.build_hasher().finish(), b.build_hasher().finish());
}
#[cfg(all(feature = "runtime-rng", not(all(feature = "compile-time-rng", test))))]
#[test]
fn test_not_pi() {
assert_ne!(PI, get_fixed_seeds()[0]);
}
#[cfg(all(feature = "compile-time-rng", any(not(feature = "runtime-rng"), test)))]
#[test]
fn test_not_pi_const() {
assert_ne!(PI, get_fixed_seeds()[0]);
}
#[cfg(all(not(feature = "runtime-rng"), not(feature = "compile-time-rng")))]
#[test]
fn test_pi() {
assert_eq!(PI, get_fixed_seeds()[0]);
}
#[test]
fn test_with_seeds_const() {
const _CONST_RANDOM_STATE: RandomState = RandomState::with_seeds(17, 19, 21, 23);
}
}

218
pve-rs/vendor/ahash/src/specialize.rs vendored Normal file
View File

@@ -0,0 +1,218 @@
use core::hash::BuildHasher;
use core::hash::Hash;
use core::hash::Hasher;
#[cfg(not(feature = "std"))]
extern crate alloc;
#[cfg(feature = "std")]
extern crate std as alloc;
#[cfg(feature = "specialize")]
use crate::BuildHasherExt;
#[cfg(feature = "specialize")]
use alloc::string::String;
#[cfg(feature = "specialize")]
use alloc::vec::Vec;
/// Provides a way to get an optimized hasher for a given data type.
/// Rather than using a Hasher generically which can hash any value, this provides a way to get a specialized hash
/// for a specific type. So this may be faster for primitive types.
pub(crate) trait CallHasher {
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64;
}
#[cfg(not(feature = "specialize"))]
impl<T> CallHasher for T
where
T: Hash + ?Sized,
{
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
let mut hasher = build_hasher.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
}
#[cfg(feature = "specialize")]
impl<T> CallHasher for T
where
T: Hash + ?Sized,
{
#[inline]
default fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
let mut hasher = build_hasher.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
}
macro_rules! call_hasher_impl {
($typ:ty) => {
#[cfg(feature = "specialize")]
impl CallHasher for $typ {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_u64(value)
}
}
};
}
call_hasher_impl!(u8);
call_hasher_impl!(u16);
call_hasher_impl!(u32);
call_hasher_impl!(u64);
call_hasher_impl!(i8);
call_hasher_impl!(i16);
call_hasher_impl!(i32);
call_hasher_impl!(i64);
#[cfg(feature = "specialize")]
impl CallHasher for u128 {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_fixed_length(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for i128 {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_fixed_length(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for usize {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_fixed_length(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for isize {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_fixed_length(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for [u8] {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_str(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for Vec<u8> {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_str(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for str {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_str(value)
}
}
#[cfg(feature = "specialize")]
impl CallHasher for String {
#[inline]
fn get_hash<H: Hash + ?Sized, B: BuildHasher>(value: &H, build_hasher: &B) -> u64 {
build_hasher.hash_as_str(value)
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::*;
#[test]
#[cfg(feature = "specialize")]
pub fn test_specialized_invoked() {
let build_hasher = RandomState::with_seeds(1, 2, 3, 4);
let shortened = u64::get_hash(&0, &build_hasher);
let mut hasher = AHasher::new_with_keys(1, 2);
0_u64.hash(&mut hasher);
assert_ne!(hasher.finish(), shortened);
}
/// Tests that some non-trivial transformation takes place.
#[test]
pub fn test_input_processed() {
let build_hasher = RandomState::with_seeds(2, 2, 2, 2);
assert_ne!(0, u64::get_hash(&0, &build_hasher));
assert_ne!(1, u64::get_hash(&0, &build_hasher));
assert_ne!(2, u64::get_hash(&0, &build_hasher));
assert_ne!(3, u64::get_hash(&0, &build_hasher));
assert_ne!(4, u64::get_hash(&0, &build_hasher));
assert_ne!(5, u64::get_hash(&0, &build_hasher));
assert_ne!(0, u64::get_hash(&1, &build_hasher));
assert_ne!(1, u64::get_hash(&1, &build_hasher));
assert_ne!(2, u64::get_hash(&1, &build_hasher));
assert_ne!(3, u64::get_hash(&1, &build_hasher));
assert_ne!(4, u64::get_hash(&1, &build_hasher));
assert_ne!(5, u64::get_hash(&1, &build_hasher));
let xored = u64::get_hash(&0, &build_hasher) ^ u64::get_hash(&1, &build_hasher);
assert_ne!(0, xored);
assert_ne!(1, xored);
assert_ne!(2, xored);
assert_ne!(3, xored);
assert_ne!(4, xored);
assert_ne!(5, xored);
}
#[test]
pub fn test_ref_independent() {
let build_hasher = RandomState::with_seeds(1, 2, 3, 4);
assert_eq!(u8::get_hash(&&1, &build_hasher), u8::get_hash(&1, &build_hasher));
assert_eq!(u16::get_hash(&&2, &build_hasher), u16::get_hash(&2, &build_hasher));
assert_eq!(u32::get_hash(&&3, &build_hasher), u32::get_hash(&3, &build_hasher));
assert_eq!(u64::get_hash(&&4, &build_hasher), u64::get_hash(&4, &build_hasher));
assert_eq!(u128::get_hash(&&5, &build_hasher), u128::get_hash(&5, &build_hasher));
assert_eq!(
str::get_hash(&"test", &build_hasher),
str::get_hash("test", &build_hasher)
);
assert_eq!(
str::get_hash(&"test", &build_hasher),
String::get_hash(&"test".to_string(), &build_hasher)
);
#[cfg(feature = "specialize")]
assert_eq!(
str::get_hash(&"test", &build_hasher),
<[u8]>::get_hash("test".as_bytes(), &build_hasher)
);
let build_hasher = RandomState::with_seeds(10, 20, 30, 40);
assert_eq!(u8::get_hash(&&&1, &build_hasher), u8::get_hash(&1, &build_hasher));
assert_eq!(u16::get_hash(&&&2, &build_hasher), u16::get_hash(&2, &build_hasher));
assert_eq!(u32::get_hash(&&&3, &build_hasher), u32::get_hash(&3, &build_hasher));
assert_eq!(u64::get_hash(&&&4, &build_hasher), u64::get_hash(&4, &build_hasher));
assert_eq!(u128::get_hash(&&&5, &build_hasher), u128::get_hash(&5, &build_hasher));
assert_eq!(
str::get_hash(&&"test", &build_hasher),
str::get_hash("test", &build_hasher)
);
assert_eq!(
str::get_hash(&&"test", &build_hasher),
String::get_hash(&"test".to_string(), &build_hasher)
);
#[cfg(feature = "specialize")]
assert_eq!(
str::get_hash(&&"test", &build_hasher),
<[u8]>::get_hash(&"test".to_string().into_bytes(), &build_hasher)
);
}
}

199
pve-rs/vendor/ahash/tests/bench.rs vendored Normal file
View File

@@ -0,0 +1,199 @@
#![cfg_attr(feature = "specialize", feature(build_hasher_simple_hash_one))]
use ahash::{AHasher, RandomState};
use criterion::*;
use fxhash::FxHasher;
use rand::Rng;
use std::collections::hash_map::DefaultHasher;
use std::hash::{BuildHasherDefault, Hash, Hasher};
// Needs to be in sync with `src/lib.rs`
const AHASH_IMPL: &str = if cfg!(any(
all(
any(target_arch = "x86", target_arch = "x86_64"),
target_feature = "aes",
not(miri),
),
all(feature = "nightly-arm-aes", target_arch = "aarch64", target_feature = "aes", not(miri)),
all(
feature = "nightly-arm-aes",
target_arch = "arm",
target_feature = "aes",
not(miri)
),
)) {
"aeshash"
} else {
"fallbackhash"
};
fn ahash<H: Hash>(b: &H) -> u64 {
let build_hasher = RandomState::with_seeds(1, 2, 3, 4);
build_hasher.hash_one(b)
}
fn fnvhash<H: Hash>(b: &H) -> u64 {
let mut hasher = fnv::FnvHasher::default();
b.hash(&mut hasher);
hasher.finish()
}
fn siphash<H: Hash>(b: &H) -> u64 {
let mut hasher = DefaultHasher::default();
b.hash(&mut hasher);
hasher.finish()
}
fn fxhash<H: Hash>(b: &H) -> u64 {
let mut hasher = FxHasher::default();
b.hash(&mut hasher);
hasher.finish()
}
fn seahash<H: Hash>(b: &H) -> u64 {
let mut hasher = seahash::SeaHasher::default();
b.hash(&mut hasher);
hasher.finish()
}
const STRING_LENGTHS: [u32; 12] = [1, 3, 4, 7, 8, 15, 16, 24, 33, 68, 132, 1024];
fn gen_strings() -> Vec<String> {
STRING_LENGTHS
.iter()
.map(|len| {
let mut string = String::default();
for pos in 1..=*len {
let c = (48 + (pos % 10) as u8) as char;
string.push(c);
}
string
})
.collect()
}
macro_rules! bench_inputs {
($group:ident, $hash:ident) => {
// Number of iterations per batch should be high enough to hide timing overhead.
let size = BatchSize::NumIterations(50_000);
let mut rng = rand::thread_rng();
$group.bench_function("u8", |b| b.iter_batched(|| rng.gen::<u8>(), |v| $hash(&v), size));
$group.bench_function("u16", |b| b.iter_batched(|| rng.gen::<u16>(), |v| $hash(&v), size));
$group.bench_function("u32", |b| b.iter_batched(|| rng.gen::<u32>(), |v| $hash(&v), size));
$group.bench_function("u64", |b| b.iter_batched(|| rng.gen::<u64>(), |v| $hash(&v), size));
$group.bench_function("u128", |b| b.iter_batched(|| rng.gen::<u128>(), |v| $hash(&v), size));
$group.bench_with_input("strings", &gen_strings(), |b, s| b.iter(|| $hash(black_box(s))));
};
}
fn bench_ahash(c: &mut Criterion) {
let mut group = c.benchmark_group(AHASH_IMPL);
bench_inputs!(group, ahash);
}
fn bench_fx(c: &mut Criterion) {
let mut group = c.benchmark_group("fx");
bench_inputs!(group, fxhash);
}
fn bench_fnv(c: &mut Criterion) {
let mut group = c.benchmark_group("fnv");
bench_inputs!(group, fnvhash);
}
fn bench_sea(c: &mut Criterion) {
let mut group = c.benchmark_group("sea");
bench_inputs!(group, seahash);
}
fn bench_sip(c: &mut Criterion) {
let mut group = c.benchmark_group("sip");
bench_inputs!(group, siphash);
}
fn bench_map(c: &mut Criterion) {
#[cfg(feature = "std")]
{
let mut group = c.benchmark_group("map");
group.bench_function("aHash-alias", |b| {
b.iter(|| {
let hm: ahash::HashMap<i32, i32> = (0..1_000_000).map(|i| (i, i)).collect();
let mut sum = 0;
for i in 0..1_000_000 {
if let Some(x) = hm.get(&i) {
sum += x;
}
}
})
});
group.bench_function("aHash-hashBrown", |b| {
b.iter(|| {
let hm: hashbrown::HashMap<i32, i32> = (0..1_000_000).map(|i| (i, i)).collect();
let mut sum = 0;
for i in 0..1_000_000 {
if let Some(x) = hm.get(&i) {
sum += x;
}
}
})
});
group.bench_function("aHash-hashBrown-explicit", |b| {
b.iter(|| {
let hm: hashbrown::HashMap<i32, i32, RandomState> = (0..1_000_000).map(|i| (i, i)).collect();
let mut sum = 0;
for i in 0..1_000_000 {
if let Some(x) = hm.get(&i) {
sum += x;
}
}
})
});
group.bench_function("aHash-wrapper", |b| {
b.iter(|| {
let hm: ahash::AHashMap<i32, i32> = (0..1_000_000).map(|i| (i, i)).collect();
let mut sum = 0;
for i in 0..1_000_000 {
if let Some(x) = hm.get(&i) {
sum += x;
}
}
})
});
group.bench_function("aHash-rand", |b| {
b.iter(|| {
let hm: std::collections::HashMap<i32, i32, RandomState> = (0..1_000_000).map(|i| (i, i)).collect();
let mut sum = 0;
for i in 0..1_000_000 {
if let Some(x) = hm.get(&i) {
sum += x;
}
}
})
});
group.bench_function("aHash-default", |b| {
b.iter(|| {
let hm: std::collections::HashMap<i32, i32, BuildHasherDefault<AHasher>> =
(0..1_000_000).map(|i| (i, i)).collect();
let mut sum = 0;
for i in 0..1_000_000 {
if let Some(x) = hm.get(&i) {
sum += x;
}
}
})
});
}
}
criterion_main!(benches);
criterion_group!(
benches,
bench_ahash,
bench_fx,
bench_fnv,
bench_sea,
bench_sip,
bench_map
);

310
pve-rs/vendor/ahash/tests/map_tests.rs vendored Normal file
View File

@@ -0,0 +1,310 @@
#![cfg_attr(feature = "specialize", feature(build_hasher_simple_hash_one))]
use std::hash::{BuildHasher, Hash, Hasher};
use ahash::RandomState;
use criterion::*;
use fxhash::FxHasher;
fn gen_word_pairs() -> Vec<String> {
let words: Vec<_> = r#"
a, ability, able, about, above, accept, according, account, across, act, action,
activity, actually, add, address, administration, admit, adult, affect, after,
again, against, age, agency, agent, ago, agree, agreement, ahead, air, all,
allow, almost, alone, along, already, also, although, always, American, among,
amount, analysis, and, animal, another, answer, any, anyone, anything, appear,
apply, approach, area, argue, arm, around, arrive, art, article, artist, as,
ask, assume, at, attack, attention, attorney, audience, author, authority,
available, avoid, away, baby, back, bad, bag, ball, bank, bar, base, be, beat,
beautiful, because, become, bed, before, begin, behavior, behind, believe,
benefit, best, better, between, beyond, big, bill, billion, bit, black, blood,
blue, board, body, book, born, both, box, boy, break, bring, brother, budget,
build, building, business, but, buy, by, call, camera, campaign, can, cancer,
candidate, capital, car, card, care, career, carry, case, catch, cause, cell,
center, central, century, certain, certainly, chair, challenge, chance, change,
character, charge, check, child, choice, choose, church, citizen, city, civil,
claim, class, clear, clearly, close, coach, cold, collection, college, color,
come, commercial, common, community, company, compare, computer, concern,
condition, conference, Congress, consider, consumer, contain, continue, control,
cost, could, country, couple, course, court, cover, create, crime, cultural,
culture, cup, current, customer, cut, dark, data, daughter, day, dead, deal,
death, debate, decade, decide, decision, deep, defense, degree, Democrat,
democratic, describe, design, despite, detail, determine, develop, development,
die, difference, different, difficult, dinner, direction, director, discover,
discuss, discussion, disease, do, doctor, dog, door, down, draw, dream, drive,
drop, drug, during, each, early, east, easy, eat, economic, economy, edge,
education, effect, effort, eight, either, election, else, employee, end, energy,
enjoy, enough, enter, entire, environment, environmental, especially, establish,
even, evening, event, ever, every, everybody, everyone, everything, evidence,
exactly, example, executive, exist, expect, experience, expert, explain, eye,
face, fact, factor, fail, fall, family, far, fast, father, fear, federal, feel,
feeling, few, field, fight, figure, fill, film, final, finally, financial, find,
fine, finger, finish, fire, firm, first, fish, five, floor, fly, focus, follow,
food, foot, for, force, foreign, forget, form, former, forward, four, free,
friend, from, front, full, fund, future, game, garden, gas, general, generation,
get, girl, give, glass, go, goal, good, government, great, green, ground, group,
grow, growth, guess, gun, guy, hair, half, hand, hang, happen, happy, hard,
have, he, head, health, hear, heart, heat, heavy, help, her, here, herself,
high, him, himself, his, history, hit, hold, home, hope, hospital, hot, hotel,
hour, house, how, however, huge, human, hundred, husband, I, idea, identify, if,
image, imagine, impact, important, improve, in, include, including, increase,
indeed, indicate, individual, industry, information, inside, instead,
institution, interest, interesting, international, interview, into, investment,
involve, issue, it, item, its, itself, job, join, just, keep, key, kid, kill,
kind, kitchen, know, knowledge, land, language, large, last, late, later, laugh,
law, lawyer, lay, lead, leader, learn, least, leave, left, leg, legal, less,
let, letter, level, lie, life, light, like, likely, line, list, listen, little,
live, local, long, look, lose, loss, lot, love, low, machine, magazine, main,
maintain, major, majority, make, man, manage, management, manager, many, market,
marriage, material, matter, may, maybe, me, mean, measure, media, medical, meet,
meeting, member, memory, mention, message, method, middle, might, military,
million, mind, minute, miss, mission, model, modern, moment, money, month, more,
morning, most, mother, mouth, move, movement, movie, Mr, Mrs, much, music, must,
my, myself, name, nation, national, natural, nature, near, nearly, necessary,
need, network, never, new, news, newspaper, next, nice, night, no, none, nor,
north, not, note, nothing, notice, now, n't, number, occur, of, off, offer,
office, officer, official, often, oh, oil, ok, old, on, once, one, only, onto,
open, operation, opportunity, option, or, order, organization, other, others,
our, out, outside, over, own, owner, page, pain, painting, paper, parent, part,
participant, particular, particularly, partner, party, pass, past, patient,
pattern, pay, peace, people, per, perform, performance, perhaps, period, person,
personal, phone, physical, pick, picture, piece, place, plan, plant, play,
player, PM, point, police, policy, political, politics, poor, popular,
population, position, positive, possible, power, practice, prepare, present,
president, pressure, pretty, prevent, price, private, probably, problem,
process, produce, product, production, professional, professor, program,
project, property, protect, prove, provide, public, pull, purpose, push, put,
quality, question, quickly, quite, race, radio, raise, range, rate, rather,
reach, read, ready, real, reality, realize, really, reason, receive, recent,
recently, recognize, record, red, reduce, reflect, region, relate, relationship,
religious, remain, remember, remove, report, represent, Republican, require,
research, resource, respond, response, responsibility, rest, result, return,
reveal, rich, right, rise, risk, road, rock, role, room, rule, run, safe, same,
save, say, scene, school, science, scientist, score, sea, season, seat, second,
section, security, see, seek, seem, sell, send, senior, sense, series, serious,
serve, service, set, seven, several, sex, sexual, shake, share, she, shoot,
short, shot, should, shoulder, show, side, sign, significant, similar, simple,
simply, since, sing, single, sister, sit, site, situation, six, size, skill,
skin, small, smile, so, social, society, soldier, some, somebody, someone,
something, sometimes, son, song, soon, sort, sound, source, south, southern,
space, speak, special, specific, speech, spend, sport, spring, staff, stage,
stand, standard, star, start, state, statement, station, stay, step, still,
stock, stop, store, story, strategy, street, strong, structure, student, study,
stuff, style, subject, success, successful, such, suddenly, suffer, suggest,
summer, support, sure, surface, system, table, take, talk, task, tax, teach,
teacher, team, technology, television, tell, ten, tend, term, test, than, thank,
that, the, their, them, themselves, then, theory, there, these, they, thing,
think, third, this, those, though, thought, thousand, threat, three, through,
throughout, throw, thus, time, to, today, together, tonight, too, top, total,
tough, toward, town, trade, traditional, training, travel, treat, treatment,
tree, trial, trip, trouble, true, truth, try, turn, TV, two, type, under,
understand, unit, until, up, upon, us, use, usually, value, various, very,
victim, view, violence, visit, voice, vote, wait, walk, wall, want, war, watch,
water, way, we, weapon, wear, week, weight, well, west, western, what, whatever,
when, where, whether, which, while, white, who, whole, whom, whose, why, wide,
wife, will, win, wind, window, wish, with, within, without, woman, wonder, word,
work, worker, world, worry, would, write, writer, wrong, yard, yeah, year, yes,
yet, you, young, your, yourself"#
.split(',')
.map(|word| word.trim())
.collect();
let mut word_pairs: Vec<_> = Vec::new();
for word in &words {
for other_word in &words {
word_pairs.push(word.to_string() + " " + other_word);
}
}
assert_eq!(1_000_000, word_pairs.len());
word_pairs
}
#[allow(unused)] // False positive
fn test_hash_common_words<B: BuildHasher>(build_hasher: &B) {
let word_pairs: Vec<_> = gen_word_pairs();
check_for_collisions(build_hasher, &word_pairs, 32);
}
#[allow(unused)] // False positive
fn check_for_collisions<H: Hash, B: BuildHasher>(build_hasher: &B, items: &[H], bucket_count: usize) {
let mut buckets = vec![0; bucket_count];
for item in items {
let value = hash(item, build_hasher) as usize;
buckets[value % bucket_count] += 1;
}
let mean = items.len() / bucket_count;
let max = *buckets.iter().max().unwrap();
let min = *buckets.iter().min().unwrap();
assert!(
(min as f64) > (mean as f64) * 0.95,
"min: {}, max:{}, {:?}",
min,
max,
buckets
);
assert!(
(max as f64) < (mean as f64) * 1.05,
"min: {}, max:{}, {:?}",
min,
max,
buckets
);
}
#[cfg(feature = "specialize")]
#[allow(unused)] // False positive
fn hash<H: Hash, B: BuildHasher>(b: &H, build_hasher: &B) -> u64 {
build_hasher.hash_one(b)
}
#[cfg(not(feature = "specialize"))]
#[allow(unused)] // False positive
fn hash<H: Hash, B: BuildHasher>(b: &H, build_hasher: &B) -> u64 {
let mut hasher = build_hasher.build_hasher();
b.hash(&mut hasher);
hasher.finish()
}
#[test]
fn test_bucket_distribution() {
let build_hasher = RandomState::with_seeds(1, 2, 3, 4);
test_hash_common_words(&build_hasher);
let sequence: Vec<_> = (0..320000).collect();
check_for_collisions(&build_hasher, &sequence, 32);
let sequence: Vec<_> = (0..2560000).collect();
check_for_collisions(&build_hasher, &sequence, 256);
let sequence: Vec<_> = (0..320000).map(|i| i * 1024).collect();
check_for_collisions(&build_hasher, &sequence, 32);
let sequence: Vec<_> = (0..2560000_u64).map(|i| i * 1024).collect();
check_for_collisions(&build_hasher, &sequence, 256);
}
#[cfg(feature = "std")]
#[test]
fn test_ahash_alias_map_construction() {
let mut map = ahash::HashMap::default();
map.insert(1, "test");
use ahash::HashMapExt;
let mut map = ahash::HashMap::with_capacity(1234);
map.insert(1, "test");
}
#[cfg(feature = "std")]
#[test]
fn test_ahash_alias_set_construction() {
let mut set = ahash::HashSet::default();
set.insert(1);
use ahash::HashSetExt;
let mut set = ahash::HashSet::with_capacity(1235);
set.insert(1);
}
#[cfg(feature = "std")]
#[test]
fn test_key_ref() {
let mut map = ahash::HashMap::default();
map.insert(1, "test");
assert_eq!(Some((1, "test")), map.remove_entry(&1));
let mut map = ahash::HashMap::default();
map.insert(&1, "test");
assert_eq!(Some((&1, "test")), map.remove_entry(&&1));
let mut m = ahash::HashSet::<Box<String>>::default();
m.insert(Box::from("hello".to_string()));
assert!(m.contains(&"hello".to_string()));
let mut m = ahash::HashSet::<String>::default();
m.insert("hello".to_string());
assert!(m.contains("hello"));
let mut m = ahash::HashSet::<Box<[u8]>>::default();
m.insert(Box::from(&b"hello"[..]));
assert!(m.contains(&b"hello"[..]));
}
#[cfg(feature = "std")]
#[test]
fn test_byte_dist() {
use rand::{SeedableRng, Rng, RngCore};
use pcg_mwc::Mwc256XXA64;
let mut r = Mwc256XXA64::seed_from_u64(0xe786_c22b_119c_1479);
let mut lowest = 2.541;
let mut highest = 2.541;
for _round in 0..100 {
let mut table: [bool; 256 * 8] = [false; 256 * 8];
let hasher = RandomState::with_seeds(r.gen(), r.gen(), r.gen(), r.gen());
for i in 0..128 {
let mut keys: [u8; 8] = hasher.hash_one((i as u64) << 30).to_ne_bytes();
//let mut keys = r.next_u64().to_ne_bytes(); //This is a control to test assert sensitivity.
for idx in 0..8 {
while table[idx * 256 + keys[idx] as usize] {
keys[idx] = keys[idx].wrapping_add(1);
}
table[idx * 256 + keys[idx] as usize] = true;
}
}
for idx in 0..8 {
let mut len = 0;
let mut total_len = 0;
let mut num_seq = 0;
for i in 0..256 {
if table[idx * 256 + i] {
len += 1;
} else if len != 0 {
num_seq += 1;
total_len += len;
len = 0;
}
}
let mean = total_len as f32 / num_seq as f32;
println!("Mean sequence length = {}", mean);
if mean > highest {
highest = mean;
}
if mean < lowest {
lowest = mean;
}
}
}
assert!(lowest > 1.9, "Lowest = {}", lowest);
assert!(highest < 3.9, "Highest = {}", highest);
}
fn ahash_vec<H: Hash>(b: &Vec<H>) -> u64 {
let mut total: u64 = 0;
for item in b {
let mut hasher = RandomState::with_seeds(12, 34, 56, 78).build_hasher();
item.hash(&mut hasher);
total = total.wrapping_add(hasher.finish());
}
total
}
fn fxhash_vec<H: Hash>(b: &Vec<H>) -> u64 {
let mut total: u64 = 0;
for item in b {
let mut hasher = FxHasher::default();
item.hash(&mut hasher);
total = total.wrapping_add(hasher.finish());
}
total
}
fn bench_ahash_words(c: &mut Criterion) {
let words = gen_word_pairs();
c.bench_function("aes_words", |b| b.iter(|| black_box(ahash_vec(&words))));
}
fn bench_fx_words(c: &mut Criterion) {
let words = gen_word_pairs();
c.bench_function("fx_words", |b| b.iter(|| black_box(fxhash_vec(&words))));
}
criterion_main!(benches);
criterion_group!(benches, bench_ahash_words, bench_fx_words,);

81
pve-rs/vendor/ahash/tests/nopanic.rs vendored Normal file
View File

@@ -0,0 +1,81 @@
use ahash::{AHasher, RandomState};
use std::hash::{BuildHasher, Hash, Hasher};
#[macro_use]
extern crate no_panic;
#[inline(never)]
#[no_panic]
fn hash_test_final(num: i32, string: &str) -> (u64, u64) {
use core::hash::Hasher;
let mut hasher1 = RandomState::with_seeds(1, 2, 3, 4).build_hasher();
let mut hasher2 = RandomState::with_seeds(3, 4, 5, 6).build_hasher();
hasher1.write_i32(num);
hasher2.write(string.as_bytes());
(hasher1.finish(), hasher2.finish())
}
#[inline(never)]
fn hash_test_final_wrapper(num: i32, string: &str) {
hash_test_final(num, string);
}
struct SimpleBuildHasher {
hasher: AHasher,
}
impl SimpleBuildHasher {
fn hash_one<T: Hash>(&self, x: T) -> u64
where
Self: Sized,
{
let mut hasher = self.build_hasher();
x.hash(&mut hasher);
hasher.finish()
}
}
impl BuildHasher for SimpleBuildHasher {
type Hasher = AHasher;
fn build_hasher(&self) -> Self::Hasher {
self.hasher.clone()
}
}
#[inline(never)]
#[no_panic]
fn hash_test_specialize(num: i32, string: &str) -> (u64, u64) {
let hasher1 = RandomState::with_seeds(1, 2, 3, 4).build_hasher();
let hasher2 = RandomState::with_seeds(1, 2, 3, 4).build_hasher();
(
SimpleBuildHasher { hasher: hasher1 }.hash_one(num),
SimpleBuildHasher { hasher: hasher2 }.hash_one(string.as_bytes()),
)
}
#[inline(never)]
fn hash_test_random_wrapper(num: i32, string: &str) {
hash_test_random(num, string);
}
#[inline(never)]
#[no_panic]
fn hash_test_random(num: i32, string: &str) -> (u64, u64) {
let build_hasher1 = RandomState::with_seeds(1, 2, 3, 4);
let build_hasher2 = RandomState::with_seeds(1, 2, 3, 4);
(build_hasher1.hash_one(&num), build_hasher2.hash_one(string.as_bytes()))
}
#[inline(never)]
fn hash_test_specialize_wrapper(num: i32, string: &str) {
hash_test_specialize(num, string);
}
#[test]
fn test_no_panic() {
hash_test_final_wrapper(2, "Foo");
hash_test_specialize_wrapper(2, "Bar");
hash_test_random(2, "Baz");
hash_test_random_wrapper(2, "Bat");
}

1
pve-rs/vendor/aho-corasick/.cargo-checksum.json vendored Normal file
View File

@@ -0,0 +1 @@
{"files":{"COPYING":"01c266bced4a434da0051174d6bee16a4c82cf634e2679b6155d40d75012390f","Cargo.toml":"88c12a803c6c06c47cd9dabc8bcdba81f35d3bab637221d2106a86a543532731","DESIGN.md":"59c960e1b73b1d7fb41e4df6c0c1b1fcf44dd2ebc8a349597a7d0595f8cb5130","LICENSE-MIT":"0f96a83840e146e43c0ec96a22ec1f392e0680e6c1226e6f3ba87e0740af850f","README.md":"afc4d559a98cf190029af0bf320fc0022725e349cd2a303aac860254e28f3c53","UNLICENSE":"7e12e5df4bae12cb21581ba157ced20e1986a0508dd10d0e8a4ab9a4cf94e85c","rustfmt.toml":"1ca600239a27401c4a43f363cf3f38183a212affc1f31bff3ae93234bbaec228","src/ahocorasick.rs":"c699c07df70be45c666e128509ad571a7649d2073e4ae16ac1efd6793c9c6890","src/automaton.rs":"22258a3e118672413119f8f543a9b912cce954e63524575c0ebfdf9011f9c2dd","src/dfa.rs":"bfef1a94c5e7410584b1beb4e857b40d1ae2031b881cbc06fb1300409bbd555f","src/lib.rs":"2a92d5c5e930f2d306508802e8a929135e1f41c9f5f8deda8f7eb98947179dd2","src/macros.rs":"c6c52ae05b24433cffaca7b78b3645d797862c5d5feffddf9f54909095ed6e05","src/nfa/contiguous.rs":"aeb6ee5fd80eea04decbc4b46aa27d1ab270b78d416a644da25b7934f009ee66","src/nfa/mod.rs":"ee7b3109774d14bbad5239c16bb980dd6b8185ec136d94fbaf2f0dc27d5ffa15","src/nfa/noncontiguous.rs":"de94f02b04efd8744fb096759a8897c22012b0e0ca3ace161fd87c71befefe04","src/packed/api.rs":"160d3b10823316f7b0924e13c3afd222c8a7db5c0a00432401f311ef27d6a1b7","src/packed/ext.rs":"66be06fde8558429da23a290584d4b9fae665bf64c2578db4fe5f5f3ee864869","src/packed/mod.rs":"0020cd6f07ba5c8955923a9516d7f758864260eda53a6b6f629131c45ddeec62","src/packed/pattern.rs":"1e3a289a730c141fc30b295811e372d046c6619c7fd670308299b889a06c7673","src/packed/rabinkarp.rs":"403146eb1d838a84601d171393542340513cd1ee7ff750f2372161dd47746586","src/packed/teddy/README.md":"3a43194b64e221543d885176aba3beb1224a927385a20eca842daf6b0ea2f342","src/packed/teddy/builder.rs":"08ec116a4a842a2bb1221d296a2515ef3672c54906bed588fb733364c07855d3","src/packed/teddy/generic.rs":"ea252ab05b32cea7dd9d71e332071d243db7dd0362e049252a27e5881ba2bf39","src/packed/teddy/mod.rs":"17d741f7e2fb9dbac5ba7d1bd4542cf1e35e9f146ace728e23fe6bbed20028b2","src/packed/tests.rs":"8e2f56eb3890ed3876ecb47d3121996e416563127b6430110d7b516df3f83b4b","src/packed/vector.rs":"70c325cfa6f7c5c4c9a6af7b133b75a29e65990a7fe0b9a4c4ce3c3d5a0fe587","src/tests.rs":"c68192ab97b6161d0d6ee96fefd80cc7d14e4486ddcd8d1f82b5c92432c24ed5","src/transducer.rs":"02daa33a5d6dac41dcfd67f51df7c0d4a91c5131c781fb54c4de3520c585a6e1","src/util/alphabet.rs":"6dc22658a38deddc0279892035b18870d4585069e35ba7c7e649a24509acfbcc","src/util/buffer.rs":"f9e37f662c46c6ecd734458dedbe76c3bb0e84a93b6b0117c0d4ad3042413891","src/util/byte_frequencies.rs":"2fb85b381c038c1e44ce94294531cdcd339dca48b1e61f41455666e802cbbc9e","src/util/debug.rs":"ab301ad59aa912529cb97233a54a05914dd3cb2ec43e6fec7334170b97ac5998","src/util/error.rs":"ecccd60e7406305023efcc6adcc826eeeb083ab8f7fbfe3d97469438cd4c4e5c","src/util/int.rs":"e264e6abebf5622b59f6500210773db36048371c4e509c930263334095959a52","src/util/mod.rs":"7ab28d11323ecdbd982087f32eb8bceeee84f1a2583f3aae27039c36d58cf12c","src/util/prefilter.rs":"9fa4498f18bf70478b1996c1a013698b626d15f119aa81dbc536673c9f045718","src/util/primitives.rs":"f89f3fa1d8db4e37de9ca767c6d05e346404837cade6d063bba68972fafa610b","src/util/remapper.rs":"9f12d911583a325c11806eeceb46d0dfec863cfcfa241aed84d31af73da746e5","src/util/search.rs":"6af803e08b8b8c8a33db100623f1621b0d741616524ce40893d8316897f27ffe","src/util/special.rs":"7d2f9cb9dd9771f59816e829b2d96b1239996f32939ba98764e121696c52b146"},"package":"8e60d3430d3a69478ad0993f19238d2df97c507009a5
2b3c10addcd7f6bcb916"}

3
pve-rs/vendor/aho-corasick/COPYING vendored Normal file
View File

@@ -0,0 +1,3 @@
This project is dual-licensed under the Unlicense and MIT licenses.
You may use this code under the terms of either license.

74
pve-rs/vendor/aho-corasick/Cargo.toml vendored Normal file
View File

@@ -0,0 +1,74 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2021"
rust-version = "1.60.0"
name = "aho-corasick"
version = "1.1.3"
authors = ["Andrew Gallant <jamslam@gmail.com>"]
exclude = [
"/aho-corasick-debug",
"/benchmarks",
"/tmp",
]
autotests = false
description = "Fast multiple substring searching."
homepage = "https://github.com/BurntSushi/aho-corasick"
readme = "README.md"
keywords = [
"string",
"search",
"text",
"pattern",
"multi",
]
categories = ["text-processing"]
license = "Unlicense OR MIT"
repository = "https://github.com/BurntSushi/aho-corasick"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = [
"--cfg",
"docsrs",
"--generate-link-to-definition",
]
[profile.bench]
debug = 2
[profile.release]
debug = 2
[lib]
name = "aho_corasick"
[dependencies.log]
version = "0.4.17"
optional = true
[dependencies.memchr]
version = "2.4.0"
optional = true
default-features = false
[dev-dependencies.doc-comment]
version = "0.3.3"
[features]
default = [
"std",
"perf-literal",
]
logging = ["dep:log"]
perf-literal = ["dep:memchr"]
std = ["memchr?/std"]

481
pve-rs/vendor/aho-corasick/DESIGN.md vendored Normal file
View File

@@ -0,0 +1,481 @@
This document describes the internal design of this crate, which is an object
lesson in what happens when you take a fairly simple old algorithm like
Aho-Corasick and make it fast and production ready.
The target audience of this document is Rust programmers who have some
familiarity with string searching; however, one does not need to know the
Aho-Corasick algorithm in order to read this (it is explained below). One
should, however, know what a trie is. (If you don't, go read its Wikipedia
article.)
The center-piece of this crate is an implementation of Aho-Corasick. On its
own, Aho-Corasick isn't that complicated. The complex pieces come from the
different variants of Aho-Corasick implemented in this crate. Specifically,
they are:
* Aho-Corasick as a noncontiguous NFA. States have their transitions
represented sparsely, and each state puts its transitions in its own separate
allocation. Hence the name "noncontiguous."
* Aho-Corasick as a contiguous NFA. This NFA uses a single allocation to
represent the transitions of all states. That is, transitions are laid out
contiguously in memory. Moreover, states near the starting state are
represented densely, such that finding the next state ID takes a constant
number of instructions.
* Aho-Corasick as a DFA. In this case, all states are represented densely in
a transition table that uses one allocation.
* Supporting "standard" match semantics, along with its overlapping variant,
in addition to leftmost-first and leftmost-longest semantics. The "standard"
semantics are typically what you see in a textbook description of
Aho-Corasick. However, Aho-Corasick is also useful as an optimization in
regex engines, which often use leftmost-first or leftmost-longest semantics.
Thus, it is useful to implement those semantics here. The "standard" and
"leftmost" search algorithms are subtly different, and also require slightly
different construction algorithms.
* Support for ASCII case insensitive matching.
* Support for accelerating searches when the patterns all start with a small
number of fixed bytes. Or alternatively, when the patterns all contain a
small number of rare bytes. (Searching for these bytes uses SIMD vectorized
code courtesy of `memchr`.)
* Transparent support for alternative SIMD vectorized search routines for
smaller number of literals, such as the Teddy algorithm. We called these
"packed" search routines because they use SIMD. They can often be an order of
magnitude faster than just Aho-Corasick, but don't scale as well.
* Support for searching streams. This can reuse most of the underlying code,
but does require careful buffering support.
* Support for anchored searches, which permit efficient "is prefix" checks for
a large number of patterns.
When you combine all of this together along with trying to make everything as
fast as possible, what you end up with is entirely too much code with too much
`unsafe`. Alas, I was not smart enough to figure out how to reduce it. Instead,
we will explain it.
# Basics
The fundamental problem this crate is trying to solve is to determine the
occurrences of possibly many patterns in a haystack. The naive way to solve
this is to look for a match for each pattern at each position in the haystack:
    for i in 0..haystack.len():
        for p in patterns.iter():
            if haystack[i..].starts_with(p.bytes()):
                return Match(p.id(), i, i + p.bytes().len())
Those four lines are effectively all this crate does. The problem with those
four lines is that they are very slow, especially when you're searching for a
large number of patterns.
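For concreteness, here is a direct Rust transcription of that naive loop (a
sketch only; the `Match` struct and the pattern representation are
illustrative, not this crate's actual types):

    struct Match {
        pattern_id: usize,
        start: usize,
        end: usize,
    }

    fn naive_find(haystack: &[u8], patterns: &[&[u8]]) -> Option<Match> {
        for i in 0..haystack.len() {
            for (pattern_id, p) in patterns.iter().enumerate() {
                if haystack[i..].starts_with(p) {
                    return Some(Match { pattern_id, start: i, end: i + p.len() });
                }
            }
        }
        None
    }

Every haystack position is re-examined for every pattern, so the work grows
with the number of positions times the number of patterns.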
While there are many different algorithms available to solve this, a popular
one is Aho-Corasick. It's a common solution because it's not too hard to
implement, scales quite well even when searching for thousands of patterns and
is generally pretty fast. Aho-Corasick does well here because, regardless of
the number of patterns you're searching for, it always visits each byte in the
haystack exactly once. This means, generally speaking, adding more patterns to
an Aho-Corasick automaton does not make it slower. (Strictly speaking, however,
this is not true, since a larger automaton will make less effective use of the
CPU's cache.)
Aho-Corasick can be succinctly described as a trie with state transitions
between some of the nodes that efficiently instruct the search algorithm to
try matching alternative keys in the trie. The trick is that these state
transitions are arranged such that each byte of input needs to be inspected
only once. These state transitions are typically called "failure transitions,"
because they instruct the searcher (the thing traversing the automaton while
reading from the haystack) what to do when a byte in the haystack does not
correspond to a valid transition in the current state of the trie.
More formally, a failure transition points to a state in the automaton that may
lead to a match whose prefix is a proper suffix of the path traversed through
the trie so far. (If no such proper suffix exists, then the failure transition
points back to the start state of the trie, effectively restarting the search.)
This is perhaps simpler to explain pictorially. For example, let's say we built
an Aho-Corasick automaton with the following patterns: 'abcd' and 'cef'. The
trie looks like this:
         a - S1 - b - S2 - c - S3 - d - S4*
        /
    S0 - c - S5 - e - S6 - f - S7*
where states marked with a `*` are match states (meaning, the search algorithm
should stop and report a match to the caller).
So given this trie, it should be somewhat straight-forward to see how it can
be used to determine whether any particular haystack *starts* with either
`abcd` or `cef`. It's easy to express this in code:
    fn has_prefix(trie: &Trie, haystack: &[u8]) -> bool {
        let mut state_id = trie.start();
        // If the empty pattern is in trie, then state_id is a match state.
        if trie.is_match(state_id) {
            return true;
        }
        for (i, &b) in haystack.iter().enumerate() {
            state_id = match trie.next_state(state_id, b) {
                Some(id) => id,
                // If there was no transition for this state and byte, then we know
                // the haystack does not start with one of the patterns in our trie.
                None => return false,
            };
            if trie.is_match(state_id) {
                return true;
            }
        }
        false
    }
And that's pretty much it. All we do is move through the trie starting with the
bytes at the beginning of the haystack. If we find ourselves in a position
where we can't move, or if we've looked through the entire haystack without
seeing a match state, then we know the haystack does not start with any of the
patterns in the trie.
The meat of the Aho-Corasick algorithm is in how we add failure transitions to
our trie to keep searching efficient. Specifically, it permits us to not only
check whether a haystack *starts* with any one of a number of patterns, but
rather, whether the haystack contains any of a number of patterns *anywhere* in
the haystack.
As mentioned before, failure transitions connect a proper suffix of the path
traversed through the trie before, with a path that leads to a match that has a
prefix corresponding to that proper suffix. So in our case, for patterns `abcd`
and `cef`, with a haystack `abcef`, we want to transition to state `S5` (from
the diagram above) from `S3` upon seeing that the byte following `c` is not
`d`. Namely, the proper suffix in this example is `c`, which is a prefix of
`cef`. So the modified diagram looks like this:
         a - S1 - b - S2 - c - S3 - d - S4*
        /                    /
       /    ----------------
      /    /
    S0 - c - S5 - e - S6 - f - S7*
One thing that isn't shown in this diagram is that *all* states have a failure
transition, but only `S3` has a *non-trivial* failure transition. That is, all
other states have a failure transition back to the start state. So if our
haystack was `abzabcd`, then the searcher would transition back to `S0` after
seeing `z`, which effectively restarts the search. (Because there is no pattern
in our trie that has a prefix of `bz` or `z`.)
The code for traversing this *automaton* or *finite state machine* (it is no
longer just a trie) is not that much different from the `has_prefix` code
above:
    fn contains(fsm: &FiniteStateMachine, haystack: &[u8]) -> bool {
        let mut state_id = fsm.start();
        // If the empty pattern is in fsm, then state_id is a match state.
        if fsm.is_match(state_id) {
            return true;
        }
        for (i, &b) in haystack.iter().enumerate() {
            // While the diagram above doesn't show this, we may wind up needing
            // to follow multiple failure transitions before we land on a state
            // in which we can advance. Therefore, when searching for the next
            // state, we need to loop until we don't see a failure transition.
            //
            // This loop terminates because the start state has no empty
            // transitions. Every transition from the start state either points to
            // another state, or loops back to the start state.
            loop {
                match fsm.next_state(state_id, b) {
                    Some(id) => {
                        state_id = id;
                        break;
                    }
                    // Unlike our code above, if there was no transition for this
                    // state, then we don't quit. Instead, we look for this state's
                    // failure transition and follow that instead.
                    None => {
                        state_id = fsm.next_fail_state(state_id);
                    }
                };
            }
            if fsm.is_match(state_id) {
                return true;
            }
        }
        false
    }
Other than the complication around traversing failure transitions, this code
is still roughly "traverse the automaton with bytes from the haystack, and quit
when a match is seen."
And that concludes our section on the basics. While we didn't go deep into how
the automaton is built (see `src/nfa/noncontiguous.rs`, which has detailed
comments about that), the basic structure of Aho-Corasick should be reasonably
clear.
# NFAs and DFAs
There are generally two types of finite automata: non-deterministic finite
automata (NFA) and deterministic finite automata (DFA). The difference between
them is, principally, that an NFA can be in multiple states at once. This is
typically accomplished by things called _epsilon_ transitions, where one could
move to a new state without consuming any bytes from the input. (The other
mechanism by which NFAs can be in more than one state is where the same byte in
a particular state transitions to multiple distinct states.) In contrast, a DFA
can only ever be in one state at a time. A DFA has no epsilon transitions, and
for any given state, a byte transitions to at most one other state.
By this formulation, the Aho-Corasick automaton described in the previous
section is an NFA. This is because failure transitions are, effectively,
epsilon transitions. That is, whenever the automaton is in state `S`, it is
actually in the set of states that are reachable by recursively following
failure transitions from `S` until you reach the start state. (This means
that, for example, the start state is always active since the start state is
reachable via failure transitions from any state in the automaton.)
NFAs have a lot of nice properties. They tend to be easier to construct, and
also tend to use less memory. However, their primary downside is that they are
typically slower to execute a search with. For example, the code above showing
how to search with an Aho-Corasick automaton needs to potentially iterate
through many failure transitions for every byte of input. While this is a
fairly small amount of overhead, this can add up, especially if the automaton
has a lot of overlapping patterns with a lot of failure transitions.
A DFA's search code, by contrast, looks like this:
    fn contains(dfa: &DFA, haystack: &[u8]) -> bool {
        let mut state_id = dfa.start();
        // If the empty pattern is in dfa, then state_id is a match state.
        if dfa.is_match(state_id) {
            return true;
        }
        for (i, &b) in haystack.iter().enumerate() {
            // An Aho-Corasick DFA *never* has a missing state that requires
            // failure transitions to be followed. One byte of input advances the
            // automaton by one state. Always.
            state_id = dfa.next_state(state_id, b);
            if dfa.is_match(state_id) {
                return true;
            }
        }
        false
    }
The search logic here is much simpler than for the NFA, and this tends to
translate into significant performance benefits as well, since there's a lot
less work being done for each byte in the haystack. How is this accomplished?
It's done by pre-following all failure transitions for all states for all bytes
in the alphabet, and then building a single state transition table. Building
this DFA can be much more costly than building the NFA, and use much more
memory, but the better performance can be worth it.
Users of this crate can actually choose between using one of two possible NFAs
(noncontiguous or contiguous) or a DFA. By default, a contiguous NFA is used,
in most circumstances, but if the number of patterns is small enough a DFA will
be used. A contiguous NFA is chosen because it uses orders of magnitude less
memory than a DFA, takes only a little longer to build than a noncontiguous
NFA and usually gets pretty close to the search speed of a DFA. (Callers can
override this automatic selection via the `AhoCorasickBuilder::kind`
configuration.)
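As a sketch, overriding that heuristic looks like this (the pattern set is
arbitrary, chosen only for illustration):

    use aho_corasick::{AhoCorasick, AhoCorasickKind};

    // Force a full DFA even when the automatic heuristic would have picked
    // a contiguous NFA, trading memory and build time for search speed.
    let ac = AhoCorasick::builder()
        .kind(Some(AhoCorasickKind::DFA))
        .build(&["Sherlock", "Moriarty", "Watson"])
        .unwrap();
    assert!(ac.is_match("Dr. Watson, I presume"));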
# More DFA tricks
As described in the previous section, one of the downsides of using a DFA
is that it uses more memory and can take longer to build. One small way of
mitigating these concerns is to map the alphabet used by the automaton into
a smaller space. Typically, the alphabet of a DFA has 256 elements in it:
one element for each possible value that fits into a byte. However, in many
cases, one does not need the full alphabet. For example, if all patterns in an
Aho-Corasick automaton are ASCII letters, then this only uses up 52 distinct
bytes. As far as the automaton is concerned, the rest of the 204 bytes are
indistinguishable from one another: they will never disrciminate between a
match or a non-match. Therefore, in cases like that, the alphabet can be shrunk
to just 53 elements. One for each ASCII letter, and then another to serve as a
placeholder for every other unused byte.
In practice, this library doesn't quite compute the optimal set of equivalence
classes, but it's close enough in most cases. The key idea is that this then
allows the transition table for the DFA to be potentially much smaller. The
downside of doing this, however, is that since the transition table is defined
in terms of this smaller alphabet space, every byte in the haystack must be
re-mapped to this smaller space. This requires an additional 256-byte table.
In practice, this can lead to a small search time hit, but it can be difficult
to measure. Moreover, it can sometimes lead to faster search times for bigger
automata, since it could be the difference between more parts of the automaton
staying in the CPU cache or not.
One other trick for DFAs employed by this crate is the notion of premultiplying
state identifiers. Specifically, the normal way to compute the next transition
in a DFA is via the following (assuming that the transition table is laid out
sequentially in memory, in row-major order, where the rows are states):
    next_state_id = dfa.transitions[current_state_id * 256 + current_byte]
However, since the value `256` is a fixed constant, we can actually premultiply
the state identifiers in the table when we build the table initially. Then, the
next transition computation simply becomes:
    next_state_id = dfa.transitions[current_state_id + current_byte]
This doesn't seem like much, but when this is being executed for every byte of
input that you're searching, saving that extra multiplication instruction can
add up.
The same optimization works even when equivalence classes are enabled, as
described above. The only difference is that the premultiplication is by the
total number of equivalence classes instead of 256.
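Putting both tricks together, the hot loop of a DFA search conceptually looks
like the following (a simplified sketch, not this crate's actual internal
types):

    struct Dfa {
        // Maps each possible byte value to its equivalence class.
        classes: [u8; 256],
        // Row-major transition table, one row of `num_classes` entries per
        // state. Entries are premultiplied state identifiers.
        transitions: Vec<u32>,
    }

    impl Dfa {
        #[inline]
        fn next_state(&self, premultiplied_id: u32, byte: u8) -> u32 {
            let class = self.classes[byte as usize] as u32;
            // One add and two loads; no multiplication in the hot path.
            self.transitions[(premultiplied_id + class) as usize]
        }
    }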
There isn't much downside to premultiplying state identifiers, other than it
imposes a smaller limit on the total number of states in the DFA. Namely, with
premultiplied state identifiers, you run out of room in your state identifier
representation more rapidly than if the identifiers are just state indices.
Both equivalence classes and premultiplication are always enabled. There is an
`AhoCorasickBuilder::byte_classes` configuration, but disabling this just makes
it so there are always 256 equivalence classes, i.e., every class corresponds
to precisely one byte. When it's disabled, the equivalence class map itself is
still used. The purpose of disabling it is when one is debugging the underlying
automaton. It can be easier to comprehend when it uses actual byte values for
its transitions instead of equivalence classes.
# Match semantics
One of the more interesting things about this implementation of Aho-Corasick
that (as far as this author knows) separates it from other implementations, is
that it natively supports leftmost-first and leftmost-longest match semantics.
Briefly, match semantics refer to the decision procedure by which searching
will disambiguate matches when there are multiple to choose from:
* **standard** match semantics emits matches as soon as they are detected by
the automaton. This is typically equivalent to the textbook non-overlapping
formulation of Aho-Corasick.
* **leftmost-first** match semantics means that 1) the next match is the match
starting at the leftmost position and 2) among multiple matches starting at
the same leftmost position, the match corresponding to the pattern provided
first by the caller is reported.
* **leftmost-longest** is like leftmost-first, except when there are multiple
matches starting at the same leftmost position, the pattern corresponding to
the longest match is returned.
(The crate API documentation discusses these differences, with examples, in
more depth on the `MatchKind` type.)
The reason why supporting these match semantics is important is because it
gives the user more control over the match procedure. For example,
leftmost-first permits users to implement match priority by simply putting the
higher priority patterns first. Leftmost-longest, on the other hand, permits
finding the longest possible match, which might be useful when trying to find
words matching a dictionary. Additionally, regex engines often want to use
Aho-Corasick as an optimization when searching for an alternation of literals.
In order to preserve correct match semantics, regex engines typically can't use
the standard textbook definition directly, since regex engines will implement
either leftmost-first (Perl-like) or leftmost-longest (POSIX) match semantics.
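A small sketch of opting into leftmost-first semantics via the `MatchKind`
configuration mentioned above:

    use aho_corasick::{AhoCorasick, MatchKind};

    // With leftmost-first semantics, "Samwise" wins because it is listed
    // first, even though "Sam" also matches at position 0.
    let ac = AhoCorasick::builder()
        .match_kind(MatchKind::LeftmostFirst)
        .build(&["Samwise", "Sam"])
        .unwrap();
    let mat = ac.find("Samwise Gamgee").unwrap();
    assert_eq!(0..7, mat.range());

Under the default standard semantics, the same search would instead report
`Sam`, since its end position is seen first.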
Supporting leftmost semantics requires a couple key changes:
* Constructing the Aho-Corasick automaton changes a bit in both how the trie is
constructed and how failure transitions are found. Namely, only a subset
of the failure transitions are added. Specifically, only the failure
transitions that either do not occur after a match or do occur after a match
but preserve that match are kept. (More details on this can be found in
`src/nfa/noncontiguous.rs`.)
* The search algorithm changes slightly. Since we are looking for the leftmost
match, we cannot quit as soon as a match is detected. Instead, after a match
is detected, we must keep searching until either the end of the input or
until a dead state is seen. (Dead states are not used for standard match
semantics. Dead states mean that searching should stop after a match has been
found.)
Most other implementations of Aho-Corasick do support leftmost match semantics,
but they do it with more overhead at search time, or even worse, with a queue
of matches and sophisticated hijinks to disambiguate the matches. While our
construction algorithm becomes a bit more complicated, the correct match
semantics fall out from the structure of the automaton itself.
# Overlapping matches
One of the nice properties of an Aho-Corasick automaton is that it can report
all possible matches, even when they overlap with one another. In this mode,
the match semantics don't matter, since all possible matches are reported.
Overlapping searches work just like regular searches, except the state
identifier at which the previous search left off is carried over to the next
search, so that it can pick up where it left off. If there are additional
matches at that state, then they are reported before resuming the search.
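A short sketch of the overlapping API (using the default standard semantics,
as required):

    use aho_corasick::{AhoCorasick, PatternID};

    let ac = AhoCorasick::new(&["abc", "b"]).unwrap();
    let pats: Vec<PatternID> = ac
        .find_overlapping_iter("abc")
        .map(|m| m.pattern())
        .collect();
    // "b" is reported first because its end position is reached first.
    assert_eq!(vec![PatternID::must(1), PatternID::must(0)], pats);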
Enabling leftmost-first or leftmost-longest match semantics causes the
automaton to use a subset of all failure transitions, which means that
overlapping searches cannot be used. Therefore, if leftmost match semantics are
used, attempting to do an overlapping search will return an error (or panic
when using the infallible APIs). Thus, to get overlapping searches, the caller
must use the default standard match semantics. This behavior was chosen because
there are only two alternatives, which were deemed worse:
* Compile two automatons internally, one for standard semantics and one for
the semantics requested by the caller (if not standard).
* Create a new type, distinct from the `AhoCorasick` type, which has different
capabilities based on the configuration options.
The first is untenable because of the amount of memory used by the automaton.
The second increases the complexity of the API too much by adding too many
types that do similar things. It is conceptually much simpler to keep all
searching isolated to a single type.
# Stream searching
Since Aho-Corasick is an automaton, it is possible to do partial searches on
partial parts of the haystack, and then resume that search on subsequent pieces
of the haystack. This is useful when the haystack you're trying to search is
not stored contiguously in memory, or if one does not want to read the entire
haystack into memory at once.
Currently, only standard semantics are supported for stream searching. This is
some of the more complicated code in this crate, and is something I would very
much like to improve. In particular, it currently has the restriction that it
must buffer at least enough of the haystack in memory in order to fit the
longest possible match. The difficulty in getting stream searching right is
that the implementation choices (such as the buffer size) often impact what the
API looks like and what it's allowed to do.
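As a small sketch, the streaming entry point reads from any `std::io::Read`
and yields matches as it buffers through the stream (standard semantics only;
a `Cursor` stands in for a file or socket here):

    use aho_corasick::AhoCorasick;
    use std::io::Cursor;

    let ac = AhoCorasick::new(&["Moriarty"]).unwrap();
    let rdr = Cursor::new("no sign of Moriarty yet");
    let count = ac.stream_find_iter(rdr).filter_map(Result::ok).count();
    assert_eq!(1, count);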
# Prefilters
In some cases, Aho-Corasick is not the fastest way to find matches containing
multiple patterns. Sometimes, the search can be accelerated using highly
optimized SIMD routines. For example, consider searching the following
patterns:
    Sherlock
    Moriarty
    Watson
It is plausible that it would be much faster to quickly look for occurrences of
the leading bytes, `S`, `M` or `W`, before trying to start searching via the
automaton. Indeed, this is exactly what this crate will do.
When there are more than three distinct starting bytes, then this crate will
look for three distinct bytes occurring at any position in the patterns, while
preferring bytes that are heuristically determined to be rare over others. For
example:
    Abuzz
    Sanchez
    Vasquez
    Topaz
    Waltz
Here, we have more than 3 distinct starting bytes, but all of the patterns
contain `z`, which is typically a rare byte. In this case, the prefilter will
scan for `z`, back up a bit, and then execute the Aho-Corasick automaton.
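The idea can be sketched with the `memchr` crate directly: find each
occurrence of the rare byte, back up by the furthest offset at which that byte
occurs in any pattern, and hand the candidate position to the automaton for
verification. (This helper is hypothetical, for illustration; it is not this
crate's internal prefilter.)

    /// Yields candidate start positions for the automaton to verify.
    fn candidate_starts<'h>(
        haystack: &'h [u8],
        rare_byte: u8,
        max_offset_in_patterns: usize,
    ) -> impl Iterator<Item = usize> + 'h {
        memchr::memchr_iter(rare_byte, haystack)
            .map(move |i| i.saturating_sub(max_offset_in_patterns))
    }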
If all of that fails, then a packed multiple substring algorithm will be
attempted. Currently, the only algorithm available for this is Teddy, but more
may be added in the future. Teddy is unlike the above prefilters in that it
confirms its own matches, so when Teddy is active, it might not be necessary
for Aho-Corasick to run at all. However, the current Teddy implementation
only works in `x86_64` when SSSE3 or AVX2 are available or in `aarch64`
(using NEON), and moreover, only works _well_ when there are a small number
of patterns (say, less than 100). Teddy also requires the haystack to be of a
certain length (more than 16-34 bytes). When the haystack is shorter than that,
Rabin-Karp is used instead. (See `src/packed/rabinkarp.rs`.)
There is a more thorough description of Teddy at
[`src/packed/teddy/README.md`](src/packed/teddy/README.md).

21
pve-rs/vendor/aho-corasick/LICENSE-MIT vendored Normal file
View File

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 Andrew Gallant
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

pve-rs/vendor/aho-corasick/README.md vendored Normal file

@@ -0,0 +1,174 @@
aho-corasick
============
A library for finding occurrences of many patterns at once with SIMD
acceleration in some cases. This library provides multiple pattern
search principally through an implementation of the
[Aho-Corasick algorithm](https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm),
which builds a finite state machine for executing searches in linear time.
Features include case insensitive matching, overlapping matches, fast
searching via SIMD, optional full DFA construction, and search & replace in
streams.
[![Build status](https://github.com/BurntSushi/aho-corasick/workflows/ci/badge.svg)](https://github.com/BurntSushi/aho-corasick/actions)
[![crates.io](https://img.shields.io/crates/v/aho-corasick.svg)](https://crates.io/crates/aho-corasick)
Dual-licensed under MIT or the [UNLICENSE](https://unlicense.org/).
### Documentation
https://docs.rs/aho-corasick
### Usage
Run `cargo add aho-corasick` to automatically add this crate as a dependency
in your `Cargo.toml` file.
### Example: basic searching
This example shows how to search for occurrences of multiple patterns
simultaneously. Each match includes the pattern that matched along with the
byte offsets of the match.
```rust
use aho_corasick::{AhoCorasick, PatternID};
let patterns = &["apple", "maple", "Snapple"];
let haystack = "Nobody likes maple in their apple flavored Snapple.";
let ac = AhoCorasick::new(patterns).unwrap();
let mut matches = vec![];
for mat in ac.find_iter(haystack) {
matches.push((mat.pattern(), mat.start(), mat.end()));
}
assert_eq!(matches, vec![
(PatternID::must(1), 13, 18),
(PatternID::must(0), 28, 33),
(PatternID::must(2), 43, 50),
]);
```
### Example: ASCII case insensitivity
This is like the previous example, but matches `Snapple` case insensitively
using `AhoCorasickBuilder`:
```rust
use aho_corasick::{AhoCorasick, PatternID};
let patterns = &["apple", "maple", "snapple"];
let haystack = "Nobody likes maple in their apple flavored Snapple.";
let ac = AhoCorasick::builder()
.ascii_case_insensitive(true)
.build(patterns)
.unwrap();
let mut matches = vec![];
for mat in ac.find_iter(haystack) {
matches.push((mat.pattern(), mat.start(), mat.end()));
}
assert_eq!(matches, vec![
(PatternID::must(1), 13, 18),
(PatternID::must(0), 28, 33),
(PatternID::must(2), 43, 50),
]);
```
### Example: replacing matches in a stream
This example shows how to execute a search and replace on a stream without
loading the entire stream into memory first.
```rust,ignore
use aho_corasick::AhoCorasick;
let patterns = &["fox", "brown", "quick"];
let replace_with = &["sloth", "grey", "slow"];
// In a real example, these might be `std::fs::File`s instead. All you need to
// do is supply a pair of `std::io::Read` and `std::io::Write` implementations.
let rdr = "The quick brown fox.";
let mut wtr = vec![];
let ac = AhoCorasick::new(patterns).unwrap();
ac.stream_replace_all(rdr.as_bytes(), &mut wtr, replace_with)
.expect("stream_replace_all failed");
assert_eq!(b"The slow grey sloth.".to_vec(), wtr);
```
### Example: finding the leftmost first match
In the textbook description of Aho-Corasick, its formulation is typically
structured such that it reports all possible matches, even when they overlap
with one another. In many cases, overlapping matches may not be desired, such
as when finding all successive non-overlapping matches as you would with a
standard regular expression.
Unfortunately the "obvious" way to modify the Aho-Corasick algorithm to do
this doesn't always work in the expected way, since it will report matches as
soon as they are seen. For example, consider matching the regex `Samwise|Sam`
against the text `Samwise`. Most regex engines (that are Perl-like, or
non-POSIX) will report `Samwise` as a match, but the standard Aho-Corasick
algorithm modified for reporting non-overlapping matches will report `Sam`.
A novel contribution of this library is the ability to change the match
semantics of Aho-Corasick (without additional search time overhead) such that
`Samwise` is reported instead. For example, here's the standard approach:
```rust
use aho_corasick::AhoCorasick;
let patterns = &["Samwise", "Sam"];
let haystack = "Samwise";
let ac = AhoCorasick::new(patterns).unwrap();
let mat = ac.find(haystack).expect("should have a match");
assert_eq!("Sam", &haystack[mat.start()..mat.end()]);
```
And now here's the leftmost-first version, which matches how a Perl-like
regex will work:
```rust
use aho_corasick::{AhoCorasick, MatchKind};
let patterns = &["Samwise", "Sam"];
let haystack = "Samwise";
let ac = AhoCorasick::builder()
.match_kind(MatchKind::LeftmostFirst)
.build(patterns)
.unwrap();
let mat = ac.find(haystack).expect("should have a match");
assert_eq!("Samwise", &haystack[mat.start()..mat.end()]);
```
In addition to leftmost-first semantics, this library also supports
leftmost-longest semantics, which match the POSIX behavior of a regular
expression alternation. See `MatchKind` in the docs for more details.
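For completeness, here is a small sketch of the leftmost-longest variant,
where pattern order stops mattering and the longest match at the leftmost
position wins:

```rust
use aho_corasick::{AhoCorasick, MatchKind};

// Note the shorter pattern is listed first; leftmost-longest still reports
// `Samwise`, because it is the longest match starting at that position.
let patterns = &["Sam", "Samwise"];
let haystack = "Samwise";
let ac = AhoCorasick::builder()
    .match_kind(MatchKind::LeftmostLongest)
    .build(patterns)
    .unwrap();
let mat = ac.find(haystack).expect("should have a match");
assert_eq!("Samwise", &haystack[mat.start()..mat.end()]);
```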
### Minimum Rust version policy
This crate's minimum supported `rustc` version is `1.60.0`.
The current policy is that the minimum Rust version required to use this crate
can be increased in minor version updates. For example, if `crate 1.0` requires
Rust 1.20.0, then `crate 1.0.z` for all values of `z` will also require Rust
1.20.0 or newer. However, `crate 1.y` for `y > 0` may require a newer minimum
version of Rust.
In general, this crate will be conservative with respect to the minimum
supported version of Rust.
### FFI bindings
* [G-Research/ahocorasick_rs](https://github.com/G-Research/ahocorasick_rs/)
is a Python wrapper for this library.
* [tmikus/ahocorasick_rs](https://github.com/tmikus/ahocorasick_rs) is a Go
wrapper for this library.

pve-rs/vendor/aho-corasick/UNLICENSE vendored Normal file

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>

Some files were not shown because too many files have changed in this diff.