Mirror of https://gitlab.com/libvirt/libvirt.git (synced 2025-09-21 09:44:54 +03:00)
Compare commits: v8.2.0 ... v8.4.0-rc2 (566 commits)
@@ -62,7 +62,8 @@ website:
   stage: builds
   image: $CI_REGISTRY_IMAGE/ci-almalinux-8:latest
   needs:
-    - x86_64-almalinux-8-container
+    - job: x86_64-almalinux-8-container
+      optional: true
   before_script:
     - *script_variables
   script:

@@ -80,9 +81,10 @@ website:

 codestyle:
   stage: sanity_checks
-  image: $CI_REGISTRY_IMAGE/ci-opensuse-leap-152:latest
+  image: $CI_REGISTRY_IMAGE/ci-opensuse-leap-153:latest
   needs:
-    - x86_64-opensuse-leap-152-container
+    - job: x86_64-opensuse-leap-153-container
+      optional: true
   before_script:
     - *script_variables
   script:

@@ -98,7 +100,8 @@ potfile:
   stage: builds
   image: $CI_REGISTRY_IMAGE/ci-almalinux-8:latest
   needs:
-    - x86_64-almalinux-8-container
+    - job: x86_64-almalinux-8-container
+      optional: true
   rules:
     - if: "$CI_COMMIT_BRANCH == 'master'"
   before_script:

@@ -120,7 +123,8 @@ potfile:

 coverity:
   image: $CI_REGISTRY_IMAGE/ci-almalinux-8:latest
   needs:
-    - x86_64-almalinux-8-container
+    - job: x86_64-almalinux-8-container
+      optional: true
   stage: builds
   script:
     - curl https://scan.coverity.com/download/linux64 --form project=$COVERITY_SCAN_PROJECT_NAME --form token=$COVERITY_SCAN_TOKEN -o /tmp/cov-analysis-linux64.tgz
@@ -1,4 +1,4 @@
-<!-- See https://libvirt.org/bugs.html#quality for guidance -->
+<!-- See https://libvirt.org/bugs.html#how-to-file-high-quality-bug-reports -->

 ## Software environment
 - Operating system:
@@ -25,7 +25,7 @@ The primary maintainers and people with commit access rights:
 * Laine Stump <laine@redhat.com>
 * Martin Kletzander <mkletzan@redhat.com>
 * Michal Prívozník <mprivozn@redhat.com>
-* Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
+* Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
 * Pavel Hrdina <phrdina@redhat.com>
 * Peter Krempa <pkrempa@redhat.com>
 * Pino Toscano <ptoscano@redhat.com>
148
NEWS.rst
@@ -8,6 +8,66 @@ the changes introduced by each of them.
For a more fine-grained view, use the `git log`_.


v8.4.0 (unreleased)
===================

* **Security**

* **Removed features**

* **New features**

  * qemu: D-Bus display

    Libvirt is now able to set up a D-Bus display export, either with a
    private bus or in p2p mode. This display is available in QEMU 7.0.0.
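As a sketch of what such a configuration could look like (element and attribute names per the libvirt domain XML format; the values are illustrative, not taken from this changelog):

```xml
<!-- hypothetical domain XML fragment: D-Bus display export in p2p mode -->
<graphics type='dbus' p2p='yes'/>
```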
  * qemu: ppc64 Power10 processor support

    Support for the recently released IBM Power10 processor was added.

  * qemu: Introduce ``absolute`` clock offset

    The ``absolute`` clock offset type allows setting the guest clock to an
    arbitrary epoch timestamp at each start. This is useful if a VM needs
    to be kept at an arbitrary time, e.g. for testing or working around
    broken software.
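A minimal sketch of this setting in domain XML (the timestamp value is invented for illustration):

```xml
<!-- hypothetical example: pin the guest clock to a fixed epoch timestamp at each start -->
<clock offset='absolute' start='1654041600'/>
```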
* **Improvements**

* **Bug fixes**

  * Improve heuristics for computing baseline CPU models

    Both ``virConnectBaselineHypervisorCPU`` and ``virConnectBaselineCPU``
    were in some cases computing the result using a CPU model which was newer
    than some of the input models. For example, ``Cascadelake-Server`` was
    used as a baseline for ``Skylake-Server-IBRS`` and ``Cascadelake-Server``.
    The CPU model selection heuristic was improved to choose a more
    appropriate model.


v8.3.0 (2022-05-02)
===================

* **Removed features**

  * qemu: Remove support for QEMU < 3.1

    In accordance with our platform support policy, the oldest supported QEMU
    version is now bumped from 2.11 to 3.1.

* **New features**

  * qemu: Introduce support for virtio-iommu

    This IOMMU device can be used with both Q35 and ARM virt guests.
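A minimal sketch of the corresponding domain XML (assuming a Q35 guest; surrounding elements omitted):

```xml
<!-- hypothetical example: virtio IOMMU device -->
<devices>
  <iommu model='virtio'/>
</devices>
```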
  * qemu: Introduce attributes rss and rss_hash_report for net interface

    They can enable in-qemu/ebpf RSS and in-qemu RSS hash report for virtio
    NICs. Requires QEMU >= 5.1.
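A hypothetical interface definition using these attributes (placing them on the interface's ``<driver>`` element is an assumption based on the libvirt domain XML format; the network name is invented):

```xml
<!-- hypothetical example: virtio NIC with RSS and RSS hash report enabled -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver rss='on' rss_hash_report='on'/>
</interface>
```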
v8.2.0 (2022-04-01)
===================

@@ -432,6 +492,20 @@ v7.8.0 (2021-10-01)
    active. This information can also be retrieved with the new virsh command
    ``nodedev-info``.

  * qemu: Add attribute ``queue_size`` for virtio-blk devices
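A hypothetical disk definition using this attribute (file path and queue size are invented for illustration):

```xml
<!-- hypothetical example: virtio-blk disk with an enlarged virtqueue -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' queue_size='256'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```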
* **Improvements**

  * api: Add XML validation for creating of: networkport, nwfilter-binding,
    network

    * Add flag ``VIR_NETWORK_PORT_CREATE_VALIDATE`` to validate the input XML
      when creating a network port.
    * Add flag ``VIR_NETWORK_CREATE_VALIDATE`` to validate the input XML when
      creating a network.
    * Add flag ``VIR_NWFILTER_BINDING_CREATE_VALIDATE`` to validate the input
      XML when creating an nwfilter binding.


v7.7.0 (2021-09-01)
===================

@@ -503,6 +577,8 @@ v7.7.0 (2021-09-01)
    forbidden for older qemus which don't support the update API, as the
    guest could still reboot and execute some instructions until it was
    terminated.

  * virsh: Support vhostuser in attach-interface

* **Bug fixes**

  * qemu: Open chardev logfile on behalf of QEMU

@@ -978,6 +1054,14 @@ v7.0.0 (2021-01-15)
    powered off or undefined. Add per-TPM emulator option
    ``persistent_state`` for keeping TPM state.

  * cpu_map: Add Snowridge CPU model

    It's supported in QEMU 4.1 and newer.

  * qemu: Add support for NFS disk protocol

    Implement support for the 'nfs' native protocol driver in the qemu
    driver.

* **Improvements**

  * qemu: Discourage users from polling ``virDomainGetBlockJobInfo`` for block

@@ -1068,6 +1152,12 @@ v6.10.0 (2020-12-01)
    option is missing are now '1'. This ensures that only legitimate clients
    access servers which don't have any additional form of authentication.

  * qemu: Introduce "migrate_tls_force" qemu.conf option

    The ``migrate_tls_force`` configuration option allows administrators to
    always force connections used for migration to be TLS secured, as if the
    ``VIR_MIGRATE_TLS`` flag had been used.
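A sketch of the corresponding host-side configuration (file location shown for orientation; check your distribution's layout):

```ini
# /etc/libvirt/qemu.conf (illustrative): force TLS for all migration connections
migrate_tls_force = 1
```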
* **New features**

  * qemu: Implement OpenSSH authorized key file management APIs

@@ -1086,6 +1176,18 @@
    ``virDomainSetVcpus()``, and ``virDomainSetVcpusFlags()`` APIs have been
    implemented in the Hyper-V driver.

  * qemu: Add 'fmode' and 'dmode' options for 9pfs

    Expose QEMU's 9pfs 'fmode' and 'dmode' options via attributes on the
    'filesystem' node in the domain XML. These options control the creation
    mode of files and directories, respectively, when using accessmode=mapped.
    It requires QEMU 2.10 or above.
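A hypothetical filesystem definition using these attributes (the export path, target tag, and mode values are invented for illustration):

```xml
<!-- hypothetical example: 9pfs export with explicit file/directory creation modes -->
<filesystem type='mount' accessmode='mapped' fmode='644' dmode='755'>
  <source dir='/export/shared'/>
  <target dir='shared'/>
</filesystem>
```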
  * qemu: support kvm-poll-control performance hint

    Implement the new KVM feature 'poll-control' to set this performance
    hint for KVM guests. It requires QEMU 4.2 or above.
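A minimal sketch of this hint in domain XML:

```xml
<!-- hypothetical example: enable the KVM poll-control performance hint -->
<features>
  <kvm>
    <poll-control state='on'/>
  </kvm>
</features>
```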
* **Improvements**

  * virsh: Support network disks in ``virsh attach-disk``

@@ -1154,6 +1256,52 @@ v6.9.0 (2020-11-02)
    using ``<interface type='vdpa'>``. The node device APIs also now
    list and provide XML descriptions for vDPA devices.
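A sketch of such an interface definition (the device node path is a typical-looking example, not taken from this changelog):

```xml
<!-- hypothetical example: network interface backed by a vDPA device node -->
<interface type='vdpa'>
  <source dev='/dev/vhost-vdpa-0'/>
</interface>
```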
  * cpu_map: Add EPYC-Rome CPU model

    It's supported in QEMU 5.0.0 and newer.

  * cpu: Add a flag for XML validation in CPU comparison

    The ``virConnectCompareCPU`` and ``virConnectCompareHypervisorCPU`` APIs
    now support the ``VIR_CONNECT_COMPARE_CPU_VALIDATE_XML`` flag, which
    enables XML validation. For virsh, this feature is enabled by passing
    the ``--validate`` option to the ``cpu-compare`` and
    ``hypervisor-cpu-compare`` subcommands.

  * qemu: Introduce virtio-balloon free page reporting feature

    Introduce the optional attribute ``free-page-reporting`` for the virtio
    memballoon device. It enables/disables the ability of the QEMU virtio
    memory balloon to return unused pages back to the hypervisor. QEMU 5.1
    and newer support this feature.
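A sketch of the memballoon definition with this feature turned on. The changelog names the attribute ``free-page-reporting``; in domain XML the spelling is assumed here to be camel-case ``freePageReporting``:

```xml
<!-- hypothetical example: memballoon with free page reporting enabled -->
<memballoon model='virtio' freePageReporting='on'/>
```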
* **Improvements**

  * qemu: Make 'cbitpos' & 'reducedPhysBits' attrs optional

    Libvirt probes the underlying platform in order to fill in these SEV
    attributes automatically before launching a guest.

  * util: support device stats collection for SR-IOV VF hostdev

    For SR-IOV VF hostdevs, libvirt now supports retrieving device traffic
    stats via the ``virDomainInterfaceStats`` API and ``virsh domifstat``.

  * logging: Allow disabling log rollover

    Set ``max_len=0`` in ``virtlogd.conf`` to disable log rollover.
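The setting described above, sketched in place (file location shown for orientation; check your distribution's layout):

```ini
# /etc/libvirt/virtlogd.conf (illustrative): disable log rollover entirely
max_len = 0
```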
  * qemu: Set noqueue qdisc for TAP devices

    Set the ``noqueue`` queueing discipline instead of the former
    ``pfifo_fast`` for TAP devices. This avoids a needless cost in host CPU
    cycles and thus improves performance.

  * qemu: virtiofs can be used without NUMA nodes

    Virtiofs is now supported for VMs without NUMA nodes, as long as they
    are configured with shared memory.

* **Bug fixes**

  * hyperv: ensure WQL queries work in all locales
@@ -7,6 +7,7 @@ RUNUTF8 = @runutf8@
 PYTHON = @PYTHON3@
 GREP = @GREP@
 SED = @SED@
+AWK = @AWK@

 # include syntax-check.mk file
 include $(top_srcdir)/build-aux/syntax-check.mk
@@ -1,14 +1,7 @@
-syntax_check_conf = configuration_data()
-syntax_check_conf.set('top_srcdir', meson.source_root())
-syntax_check_conf.set('top_builddir', meson.build_root())
-
 flake8_path = ''
 if flake8_prog.found()
   flake8_path = flake8_prog.path()
 endif
-syntax_check_conf.set('flake8_path', flake8_path)
-syntax_check_conf.set('runutf8', ' '.join(runutf8))
-syntax_check_conf.set('PYTHON3', python3_prog.path())

 if host_machine.system() == 'freebsd' or host_machine.system() == 'darwin'
   make_prog = find_program('gmake')

@@ -33,8 +26,18 @@ else
   grep_prog = find_program('grep')
 endif

-syntax_check_conf.set('GREP', grep_prog.path())
-syntax_check_conf.set('SED', sed_prog.path())
+awk_prog = find_program('awk')
+
+syntax_check_conf = configuration_data({
+  'top_srcdir': meson.source_root(),
+  'top_builddir': meson.build_root(),
+  'flake8_path': flake8_path,
+  'runutf8': ' '.join(runutf8),
+  'PYTHON3': python3_prog.path(),
+  'GREP': grep_prog.path(),
+  'SED': sed_prog.path(),
+  'AWK': awk_prog.path(),
+})

 configure_file(
   input: 'Makefile.in',

@@ -44,7 +47,7 @@ configure_file(

 rc = run_command(
   'sed', '-n',
-  's/^\\(sc_[a-zA-Z0-9_-]*\\):.*/\\1/p',
+  's/^sc_\\([a-zA-Z0-9_-]*\\):.*/\\1/p',
   meson.current_source_dir() / 'syntax-check.mk',
   check: true,
 )

@@ -59,7 +62,7 @@ if git
   test(
     target,
     make_prog,
-    args: [ '-C', meson.current_build_dir(), target ],
+    args: [ '-C', meson.current_build_dir(), 'sc_@0@'.format(target) ],
     depends: [
       potfiles_dep,
     ],
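The sed change above moves the ``sc_`` prefix out of the capture group, so rule names are extracted without it. A quick way to see the effect of the new expression (the file path and rule names are invented for illustration):

```shell
# write a tiny stand-in for syntax-check.mk with two rules
printf 'sc_avoid_if_before_free:\n\t@cmd\nsc_trailing_blank:\n\t@cmd\n' > /tmp/syntax-check.mk

# extract the rule names, dropping the sc_ prefix
sed -n 's/^sc_\([a-zA-Z0-9_-]*\):.*/\1/p' /tmp/syntax-check.mk
# prints:
#   avoid_if_before_free
#   trailing_blank
```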
File diff suppressed because it is too large
@@ -140,7 +140,7 @@ endif
 CI_GIT_ARGS = \
 	-c advice.detachedHead=false \
-	-q \
+	--local \
 	$(NULL)

 # Args to use when running the container
@@ -10,8 +10,8 @@ Cirrus CI integration
=====================

 libvirt currently supports three non-Linux operating systems: Windows, FreeBSD
-and macOS. Windows cross-builds can be prepared on Linux by using `MinGW`_, but
-for both FreeBSD and macOS we need to use the actual operating system, and
+and macOS. Windows cross-builds can be prepared on Linux by using `MinGW-w64`_,
+but for both FreeBSD and macOS we need to use the actual operating system, and
 unfortunately GitLab shared runners are currently not available for either.

 To work around this limitation, we take advantage of `Cirrus CI`_'s free

@@ -61,7 +61,7 @@ repository as usual and you'll automatically get the additional CI coverage.
 .. _Cirrus CI GitHub app: https://github.com/marketplace/cirrus-ci
 .. _Cirrus CI settings: https://cirrus-ci.com/settings/profile/
 .. _Cirrus CI: https://cirrus-ci.com/
-.. _MinGW: http://mingw.org/
+.. _MinGW-w64: https://www.mingw-w64.org/
 .. _cirrus-run: https://github.com/sio/cirrus-run/
@@ -26,4 +26,4 @@ build_task:
     - meson setup build
     - meson dist -C build --no-tests
     - meson compile -C build
-    - meson test -C build --no-suite syntax-check
+    - meson test -C build --no-suite syntax-check --print-errorlogs || (cat ~/Library/Logs/DiagnosticReports/*.crash && exit 1)
@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
 NINJA='/usr/local/bin/ninja'
 PACKAGING_COMMAND='pkg'
 PIP3='/usr/local/bin/pip-3.8'
-PKGS='augeas bash-completion ca_root_nss ccache codespell cppi curl cyrus-sasl diffutils diskscrub dnsmasq fusefs-libs gettext git glib gmake gnugrep gnutls gsed libpcap libpciaccess libssh libssh2 libxml2 libxslt meson ninja perl5 pkgconf polkit py38-docutils py38-flake8 python3 qemu readline yajl'
+PKGS='augeas bash-completion ca_root_nss ccache codespell cppi curl cyrus-sasl diffutils diskscrub fusefs-libs gettext git glib gmake gnugrep gnutls gsed libpcap libpciaccess libssh libssh2 libxml2 libxslt meson ninja perl5 pkgconf polkit py38-docutils py38-flake8 python3 qemu readline yajl'
 PYPI_PKGS=''
 PYTHON='/usr/local/bin/python3'
@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
 NINJA='/usr/local/bin/ninja'
 PACKAGING_COMMAND='pkg'
 PIP3='/usr/local/bin/pip-3.8'
-PKGS='augeas bash-completion ca_root_nss ccache codespell cppi curl cyrus-sasl diffutils diskscrub dnsmasq fusefs-libs gettext git glib gmake gnugrep gnutls gsed libpcap libpciaccess libssh libssh2 libxml2 libxslt meson ninja perl5 pkgconf polkit py38-docutils py38-flake8 python3 qemu readline yajl'
+PKGS='augeas bash-completion ca_root_nss ccache codespell cppi curl cyrus-sasl diffutils diskscrub fusefs-libs gettext git glib gmake gnugrep gnutls gsed libpcap libpciaccess libssh libssh2 libxml2 libxslt meson ninja perl5 pkgconf polkit py38-docutils py38-flake8 python3 qemu readline yajl'
 PYPI_PKGS=''
 PYTHON='/usr/local/bin/python3'
@@ -1,16 +0,0 @@
-# THIS FILE WAS AUTO-GENERATED
-#
-#  $ lcitool manifest ci/manifest.yml
-#
-# https://gitlab.com/libvirt/libvirt-ci
-
-CCACHE='/usr/local/bin/ccache'
-CPAN_PKGS=''
-CROSS_PKGS=''
-MAKE='/usr/local/bin/gmake'
-NINJA='/usr/local/bin/ninja'
-PACKAGING_COMMAND='pkg'
-PIP3='/usr/local/bin/pip-3.8'
-PKGS='augeas bash-completion ca_root_nss ccache codespell cppi curl cyrus-sasl diffutils diskscrub dnsmasq fusefs-libs gettext git glib gmake gnugrep gnutls gsed libpcap libpciaccess libssh libssh2 libxml2 libxslt meson ninja perl5 pkgconf polkit py38-docutils py38-flake8 python3 qemu readline yajl'
-PYPI_PKGS=''
-PYTHON='/usr/local/bin/python3'
@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
 NINJA='/usr/local/bin/ninja'
 PACKAGING_COMMAND='brew'
 PIP3='/usr/local/bin/pip3'
-PKGS='augeas bash-completion ccache codespell cppi curl diffutils dnsmasq docutils flake8 gettext git glib gnu-sed gnutls grep libiscsi libpcap libssh libssh2 libxml2 libxslt make meson ninja perl pkg-config python3 qemu readline rpcgen scrub yajl'
+PKGS='augeas bash-completion ccache codespell cppi curl diffutils docutils flake8 gettext git glib gnu-sed gnutls grep libiscsi libpcap libssh libssh2 libxml2 libxslt make meson ninja perl pkg-config python3 qemu readline rpcgen scrub yajl'
 PYPI_PKGS=''
 PYTHON='/usr/local/bin/python3'
@@ -22,7 +22,6 @@ RUN dnf update -y && \
         cyrus-sasl-devel \
         device-mapper-devel \
         diffutils \
-        dnsmasq \
         dwarves \
         ebtables \
         firewalld-filesystem \
@@ -21,7 +21,6 @@ RUN apk update && \
         curl-dev \
         cyrus-sasl-dev \
         diffutils \
-        dnsmasq \
         eudev-dev \
         fuse-dev \
         gcc \
80
ci/containers/alpine-315.Dockerfile
Normal file
@@ -0,0 +1,80 @@
# THIS FILE WAS AUTO-GENERATED
#
#  $ lcitool manifest ci/manifest.yml
#
# https://gitlab.com/libvirt/libvirt-ci

FROM docker.io/library/alpine:3.15

RUN apk update && \
    apk upgrade && \
    apk add \
        acl-dev \
        attr-dev \
        audit-dev \
        augeas \
        bash-completion \
        ca-certificates \
        ccache \
        ceph-dev \
        clang \
        curl-dev \
        cyrus-sasl-dev \
        diffutils \
        eudev-dev \
        fuse-dev \
        gcc \
        gettext \
        git \
        glib-dev \
        gnutls-dev \
        grep \
        iproute2 \
        iptables \
        kmod \
        libcap-ng-dev \
        libnl3-dev \
        libpcap-dev \
        libpciaccess-dev \
        libselinux-dev \
        libssh-dev \
        libssh2-dev \
        libtirpc-dev \
        libxml2-dev \
        libxml2-utils \
        libxslt \
        lvm2 \
        lvm2-dev \
        make \
        meson \
        musl-dev \
        netcf-dev \
        nfs-utils \
        numactl-dev \
        open-iscsi \
        parted-dev \
        perl \
        pkgconf \
        polkit \
        py3-docutils \
        py3-flake8 \
        python3 \
        qemu-img \
        readline-dev \
        samurai \
        sed \
        util-linux-dev \
        wireshark-dev \
        xen-dev \
        yajl-dev && \
    apk list | sort > /packages.txt && \
    mkdir -p /usr/libexec/ccache-wrappers && \
    ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \
    ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/clang && \
    ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/gcc

ENV LANG "en_US.UTF-8"
ENV MAKE "/usr/bin/make"
ENV NINJA "/usr/bin/ninja"
ENV PYTHON "/usr/bin/python3"
ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
@@ -21,7 +21,6 @@ RUN apk update && \
         curl-dev \
         cyrus-sasl-dev \
         diffutils \
-        dnsmasq \
         eudev-dev \
         fuse-dev \
         gcc \
@@ -62,6 +61,7 @@ RUN apk update && \
         python3 \
         qemu-img \
         readline-dev \
+        rpcgen \
         samurai \
         sed \
         util-linux-dev \
@@ -6,7 +6,7 @@

 FROM quay.io/centos/centos:stream8

-RUN dnf update -y && \
+RUN dnf distro-sync -y && \
     dnf install 'dnf-command(config-manager)' -y && \
     dnf config-manager --set-enabled -y powertools && \
     dnf install -y centos-release-advanced-virtualization && \
@@ -22,7 +22,6 @@ RUN dnf update -y && \
         cyrus-sasl-devel \
         device-mapper-devel \
         diffutils \
-        dnsmasq \
         dwarves \
         ebtables \
         firewalld-filesystem \
@@ -6,7 +6,7 @@

 FROM quay.io/centos/centos:stream9

-RUN dnf update -y && \
+RUN dnf distro-sync -y && \
     dnf install 'dnf-command(config-manager)' -y && \
     dnf config-manager --set-enabled -y crb && \
     dnf install -y \
@@ -22,7 +22,6 @@ RUN dnf update -y && \
         cyrus-sasl-devel \
         device-mapper-devel \
         diffutils \
-        dnsmasq \
         dwarves \
         ebtables \
         firewalld-filesystem \
@@ -19,7 +19,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
|
||||
codespell \
|
||||
cpp \
|
||||
diffutils \
|
||||
dnsmasq-base \
|
||||
dwarves \
|
||||
ebtables \
|
||||
flake8 \
|
||||
|
@@ -30,7 +30,6 @@ exec "$@"' > /usr/bin/nosync && \
|
||||
cyrus-sasl-devel \
|
||||
device-mapper-devel \
|
||||
diffutils \
|
||||
dnsmasq \
|
||||
dwarves \
|
||||
ebtables \
|
||||
firewalld-filesystem \
|
||||
|
@@ -4,7 +4,7 @@
 #
 # https://gitlab.com/libvirt/libvirt-ci

-FROM registry.fedoraproject.org/fedora:35
+FROM registry.fedoraproject.org/fedora:36

 RUN dnf install -y nosync && \
     echo -e '#!/bin/sh\n\
@@ -26,7 +26,6 @@ exec "$@"' > /usr/bin/nosync && \
      cpp \
      cppi \
      diffutils \
      dnsmasq \
      dwarves \
      ebtables \
      firewalld-filesystem \
@@ -4,7 +4,7 @@
 #
 # https://gitlab.com/libvirt/libvirt-ci

-FROM registry.fedoraproject.org/fedora:34
+FROM registry.fedoraproject.org/fedora:36

 RUN dnf install -y nosync && \
     echo -e '#!/bin/sh\n\
@@ -30,7 +30,6 @@ exec "$@"' > /usr/bin/nosync && \
      cyrus-sasl-devel \
      device-mapper-devel \
      diffutils \
      dnsmasq \
      dwarves \
      ebtables \
      firewalld-filesystem \
@@ -70,7 +69,6 @@ exec "$@"' > /usr/bin/nosync && \
      lvm2 \
      make \
      meson \
      netcf-devel \
      nfs-utils \
      ninja-build \
      numactl-devel \
@@ -27,7 +27,6 @@ exec "$@"' > /usr/bin/nosync && \
      cpp \
      cppi \
      diffutils \
      dnsmasq \
      dwarves \
      ebtables \
      firewalld-filesystem \
@@ -31,7 +31,6 @@ exec "$@"' > /usr/bin/nosync && \
      cyrus-sasl-devel \
      device-mapper-devel \
      diffutils \
      dnsmasq \
      dwarves \
      ebtables \
      firewalld-filesystem \
@@ -4,7 +4,7 @@
 #
 # https://gitlab.com/libvirt/libvirt-ci

-FROM registry.opensuse.org/opensuse/leap:15.2
+FROM registry.opensuse.org/opensuse/leap:15.3

 RUN zypper update -y && \
     zypper install -y \
@@ -21,7 +21,6 @@ RUN zypper update -y && \
      cyrus-sasl-devel \
      device-mapper-devel \
      diffutils \
      dnsmasq \
      dwarves \
      ebtables \
      fuse-devel \
@@ -21,7 +21,6 @@ RUN zypper dist-upgrade -y && \
      cyrus-sasl-devel \
      device-mapper-devel \
      diffutils \
      dnsmasq \
      dwarves \
      ebtables \
      fuse-devel \
@@ -20,7 +20,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
      codespell \
      cpp \
      diffutils \
      dnsmasq-base \
      dwarves \
      ebtables \
      flake8 \
@@ -4,7 +4,7 @@
 #
 # https://gitlab.com/libvirt/libvirt-ci

-FROM docker.io/library/ubuntu:18.04
+FROM docker.io/library/ubuntu:22.04

 RUN export DEBIAN_FRONTEND=noninteractive && \
     apt-get update && \
@@ -20,14 +20,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
      codespell \
      cpp \
      diffutils \
      dnsmasq-base \
      dwarves \
      ebtables \
      flake8 \
      gcc \
      gettext \
      git \
      glusterfs-common \
      grep \
      iproute2 \
      iptables \
@@ -44,9 +42,9 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
      libdevmapper-dev \
      libfuse-dev \
      libglib2.0-dev \
      libglusterfs-dev \
      libgnutls28-dev \
      libiscsi-dev \
      libnetcf-dev \
      libnl-3-dev \
      libnl-route-3-dev \
      libnuma-dev \
@@ -70,6 +68,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
      locales \
      lvm2 \
      make \
      meson \
      nfs-common \
      ninja-build \
      numad \
@@ -79,13 +78,9 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
      policykit-1 \
      python3 \
      python3-docutils \
      python3-pip \
      python3-setuptools \
      python3-wheel \
      qemu-utils \
      scrub \
      sed \
      sheepdog \
      systemtap-sdt-dev \
      wireshark-dev \
      xsltproc && \
@@ -99,8 +94,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
     ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/clang && \
     ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/gcc

 RUN pip3 install meson==0.56.0

 ENV LANG "en_US.UTF-8"
 ENV MAKE "/usr/bin/make"
 ENV NINJA "/usr/bin/ninja"
751
ci/gitlab.yml
@@ -4,748 +4,9 @@
 #
 # https://gitlab.com/libvirt/libvirt-ci


.container_job:
  image: docker:stable
  stage: containers
  needs: []
  services:
    - docker:dind
  before_script:
    - export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest"
    - export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest"
    - docker info
    - docker login registry.gitlab.com -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
  script:
    - docker pull "$TAG" || docker pull "$COMMON_TAG" || true
    - docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f "ci/containers/$NAME.Dockerfile" ci/containers
    - docker push "$TAG"
  after_script:
    - docker logout
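The `docker pull "$TAG" || docker pull "$COMMON_TAG" || true` line in `.container_job` relies on shell `||` short-circuiting: the upstream cache tag is only fetched when the project-local tag is missing, and the trailing `|| true` keeps the job alive when neither exists yet. A minimal sketch of that fallback, with hypothetical `fetch_local`/`fetch_upstream` functions standing in for the two pulls:

```shell
#!/bin/sh
# Sketch of the cache-priming fallback: try the project-local tag
# first, fall back to the shared upstream tag, and never fail the
# pipeline step if both caches are absent ("|| true").
fetch_local()    { false; }                    # stand-in: local tag not published yet
fetch_upstream() { echo "using upstream cache"; }

fetch_local || fetch_upstream || true
echo "exit status: $?"
```

The same structure means a brand-new container name (no tag anywhere) still builds from scratch instead of aborting the job.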
.gitlab_native_build_job:
  image: $CI_REGISTRY_IMAGE/ci-$NAME:latest
  stage: builds


.gitlab_cross_build_job:
  image: $CI_REGISTRY_IMAGE/ci-$NAME-cross-$CROSS:latest
  stage: builds


.cirrus_build_job:
  stage: builds
  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
  needs: []
  script:
    - source ci/cirrus/$NAME.vars
    - sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
          -e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
          -e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
          -e "s|[@]CIRRUS_VM_INSTANCE_TYPE@|$CIRRUS_VM_INSTANCE_TYPE|g"
          -e "s|[@]CIRRUS_VM_IMAGE_SELECTOR@|$CIRRUS_VM_IMAGE_SELECTOR|g"
          -e "s|[@]CIRRUS_VM_IMAGE_NAME@|$CIRRUS_VM_IMAGE_NAME|g"
          -e "s|[@]UPDATE_COMMAND@|$UPDATE_COMMAND|g"
          -e "s|[@]UPGRADE_COMMAND@|$UPGRADE_COMMAND|g"
          -e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g"
          -e "s|[@]PATH@|$PATH_EXTRA${PATH_EXTRA:+:}\$PATH|g"
          -e "s|[@]PKG_CONFIG_PATH@|$PKG_CONFIG_PATH|g"
          -e "s|[@]PKGS@|$PKGS|g"
          -e "s|[@]MAKE@|$MAKE|g"
          -e "s|[@]PYTHON@|$PYTHON|g"
          -e "s|[@]PIP3@|$PIP3|g"
          -e "s|[@]PYPI_PKGS@|$PYPI_PKGS|g"
          -e "s|[@]XML_CATALOG_FILES@|$XML_CATALOG_FILES|g"
      <ci/cirrus/build.yml >ci/cirrus/$NAME.yml
    - cat ci/cirrus/$NAME.yml
    - cirrus-run -v --show-build-log always ci/cirrus/$NAME.yml
  rules:
    - if: "$CIRRUS_GITHUB_REPO && $CIRRUS_API_TOKEN"
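The `.cirrus_build_job` script instantiates the `ci/cirrus/build.yml` template by substituting `@VAR@` placeholders with environment values; writing the pattern as `[@]VAR@` keeps sed from matching its own expressions if the script itself is ever run through the same templating. A self-contained sketch of the substitution, with illustrative file names and sample values:

```shell
#!/bin/sh
# Expand @NAME@-style placeholders in a template, mirroring the sed
# invocation used by .cirrus_build_job. template.yml/instance.yml and
# the variable values are illustrative.
NAME=freebsd-13
INSTALL_COMMAND='pkg install -y'

printf 'name: @NAME@\ninstall: @INSTALL_COMMAND@\n' > template.yml

sed -e "s|[@]NAME@|$NAME|g" \
    -e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g" \
    < template.yml > instance.yml

cat instance.yml
```

Using `|` as the sed delimiter avoids escaping the `/` characters that appear in URLs and paths fed into the template.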
check-dco:
  stage: sanity_checks
  needs: []
  image: registry.gitlab.com/libvirt/libvirt-ci/check-dco:master
  script:
    - /check-dco libvirt
  except:
    variables:
      - $CI_PROJECT_NAMESPACE == 'libvirt'
  variables:
    GIT_DEPTH: 1000
# Native container jobs

x86_64-almalinux-8-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: almalinux-8


x86_64-alpine-314-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: alpine-314


x86_64-alpine-edge-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: alpine-edge


x86_64-centos-stream-8-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: centos-stream-8


x86_64-centos-stream-9-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: centos-stream-9


x86_64-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10


x86_64-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11


x86_64-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid


x86_64-fedora-34-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-34


x86_64-fedora-35-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-35


x86_64-fedora-rawhide-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-rawhide


x86_64-opensuse-leap-152-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: opensuse-leap-152


x86_64-opensuse-tumbleweed-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: opensuse-tumbleweed


x86_64-ubuntu-1804-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: ubuntu-1804


x86_64-ubuntu-2004-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: ubuntu-2004


# Cross container jobs

aarch64-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-aarch64


armv6l-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10-cross-armv6l


armv7l-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-armv7l


i686-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-i686


mips-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10-cross-mips


mips64el-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-mips64el


mipsel-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10-cross-mipsel


ppc64le-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-ppc64le


s390x-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-s390x


aarch64-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-aarch64


armv6l-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-armv6l


armv7l-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11-cross-armv7l


i686-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-i686


mips64el-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11-cross-mips64el


mipsel-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-mipsel


ppc64le-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11-cross-ppc64le


s390x-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-s390x


aarch64-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-aarch64


armv6l-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-armv6l


armv7l-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-armv7l


i686-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-i686


mips64el-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-mips64el


mipsel-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-mipsel


ppc64le-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-ppc64le


s390x-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-s390x


mingw32-fedora-35-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-35-cross-mingw32


mingw64-fedora-35-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-35-cross-mingw64


mingw32-fedora-rawhide-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-rawhide-cross-mingw32


mingw64-fedora-rawhide-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-rawhide-cross-mingw64
# Native build jobs

x86_64-almalinux-8:
  extends: .native_build_job
  needs:
    - x86_64-almalinux-8-container
  allow_failure: false
  variables:
    NAME: almalinux-8
    RPM: skip


x86_64-almalinux-8-clang:
  extends: .native_build_job
  needs:
    - x86_64-almalinux-8-container
  allow_failure: false
  variables:
    CC: clang
    NAME: almalinux-8
    RPM: skip


x86_64-alpine-314:
  extends: .native_build_job
  needs:
    - x86_64-alpine-314-container
  allow_failure: false
  variables:
    NAME: alpine-314


x86_64-alpine-edge:
  extends: .native_build_job
  needs:
    - x86_64-alpine-edge-container
  allow_failure: true
  variables:
    NAME: alpine-edge


x86_64-centos-stream-8:
  extends: .native_build_job
  needs:
    - x86_64-centos-stream-8-container
  allow_failure: false
  variables:
    NAME: centos-stream-8
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-centos-stream-9:
  extends: .native_build_job
  needs:
    - x86_64-centos-stream-9-container
  allow_failure: false
  variables:
    NAME: centos-stream-9
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-debian-10:
  extends: .native_build_job
  needs:
    - x86_64-debian-10-container
  allow_failure: false
  variables:
    NAME: debian-10


x86_64-debian-11:
  extends: .native_build_job
  needs:
    - x86_64-debian-11-container
  allow_failure: false
  variables:
    NAME: debian-11


x86_64-debian-11-clang:
  extends: .native_build_job
  needs:
    - x86_64-debian-11-container
  allow_failure: false
  variables:
    NAME: debian-11


x86_64-debian-sid:
  extends: .native_build_job
  needs:
    - x86_64-debian-sid-container
  allow_failure: true
  variables:
    NAME: debian-sid


x86_64-fedora-34:
  extends: .native_build_job
  needs:
    - x86_64-fedora-34-container
  allow_failure: false
  variables:
    NAME: fedora-34
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-fedora-35:
  extends: .native_build_job
  needs:
    - x86_64-fedora-35-container
  allow_failure: false
  variables:
    NAME: fedora-35
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-fedora-rawhide:
  extends: .native_build_job
  needs:
    - x86_64-fedora-rawhide-container
  allow_failure: true
  variables:
    NAME: fedora-rawhide


x86_64-fedora-rawhide-clang:
  extends: .native_build_job
  needs:
    - x86_64-fedora-rawhide-container
  allow_failure: true
  variables:
    CC: clang
    NAME: fedora-rawhide
    RPM: skip


x86_64-opensuse-leap-152:
  extends: .native_build_job
  needs:
    - x86_64-opensuse-leap-152-container
  allow_failure: false
  variables:
    NAME: opensuse-leap-152
    RPM: skip


x86_64-opensuse-tumbleweed:
  extends: .native_build_job
  needs:
    - x86_64-opensuse-tumbleweed-container
  allow_failure: true
  variables:
    NAME: opensuse-tumbleweed
    RPM: skip


x86_64-ubuntu-1804:
  extends: .native_build_job
  needs:
    - x86_64-ubuntu-1804-container
  allow_failure: false
  variables:
    NAME: ubuntu-1804


x86_64-ubuntu-2004:
  extends: .native_build_job
  needs:
    - x86_64-ubuntu-2004-container
  allow_failure: false
  variables:
    ASAN_OPTIONS: verify_asan_link_order=0
    MESON_ARGS: -Db_lundef=false -Db_sanitize=address,undefined
    NAME: ubuntu-2004
    UBSAN_OPTIONS: print_stacktrace=1:halt_on_error=1


x86_64-ubuntu-2004-clang:
  extends: .native_build_job
  needs:
    - x86_64-ubuntu-2004-container
  allow_failure: false
  variables:
    CC: clang
    MESON_ARGS: -Db_lundef=false -Db_sanitize=address,undefined
    NAME: ubuntu-2004
    UBSAN_OPTIONS: print_stacktrace=1:halt_on_error=1


# Cross build jobs

armv6l-debian-10:
  extends: .cross_build_job
  needs:
    - armv6l-debian-10-container
  allow_failure: false
  variables:
    CROSS: armv6l
    NAME: debian-10


mips-debian-10:
  extends: .cross_build_job
  needs:
    - mips-debian-10-container
  allow_failure: false
  variables:
    CROSS: mips
    NAME: debian-10


mipsel-debian-10:
  extends: .cross_build_job
  needs:
    - mipsel-debian-10-container
  allow_failure: false
  variables:
    CROSS: mipsel
    NAME: debian-10


armv7l-debian-11:
  extends: .cross_build_job
  needs:
    - armv7l-debian-11-container
  allow_failure: false
  variables:
    CROSS: armv7l
    NAME: debian-11


mips64el-debian-11:
  extends: .cross_build_job
  needs:
    - mips64el-debian-11-container
  allow_failure: false
  variables:
    CROSS: mips64el
    NAME: debian-11


ppc64le-debian-11:
  extends: .cross_build_job
  needs:
    - ppc64le-debian-11-container
  allow_failure: false
  variables:
    CROSS: ppc64le
    NAME: debian-11


aarch64-debian-sid:
  extends: .cross_build_job
  needs:
    - aarch64-debian-sid-container
  allow_failure: true
  variables:
    CROSS: aarch64
    NAME: debian-sid


i686-debian-sid:
  extends: .cross_build_job
  needs:
    - i686-debian-sid-container
  allow_failure: true
  variables:
    CROSS: i686
    NAME: debian-sid


s390x-debian-sid:
  extends: .cross_build_job
  needs:
    - s390x-debian-sid-container
  allow_failure: true
  variables:
    CROSS: s390x
    NAME: debian-sid


mingw64-fedora-35:
  extends: .cross_build_job
  needs:
    - mingw64-fedora-35-container
  allow_failure: false
  variables:
    CROSS: mingw64
    NAME: fedora-35


mingw32-fedora-rawhide:
  extends: .cross_build_job
  needs:
    - mingw32-fedora-rawhide-container
  allow_failure: true
  variables:
    CROSS: mingw32
    NAME: fedora-rawhide


# Native cirrus build jobs

x86_64-freebsd-12:
  extends: .cirrus_build_job
  needs: []
  allow_failure: false
  variables:
    CIRRUS_VM_IMAGE_NAME: freebsd-12-2
    CIRRUS_VM_IMAGE_SELECTOR: image_family
    CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
    INSTALL_COMMAND: pkg install -y
    NAME: freebsd-12
    UPDATE_COMMAND: pkg update
    UPGRADE_COMMAND: pkg upgrade -y


x86_64-freebsd-13:
  extends: .cirrus_build_job
  needs: []
  allow_failure: false
  variables:
    CIRRUS_VM_IMAGE_NAME: freebsd-13-0
    CIRRUS_VM_IMAGE_SELECTOR: image_family
    CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
    INSTALL_COMMAND: pkg install -y
    NAME: freebsd-13
    UPDATE_COMMAND: pkg update
    UPGRADE_COMMAND: pkg upgrade -y


x86_64-macos-11:
  extends: .cirrus_build_job
  needs: []
  allow_failure: false
  variables:
    CIRRUS_VM_IMAGE_NAME: big-sur-base
    CIRRUS_VM_IMAGE_SELECTOR: image
    CIRRUS_VM_INSTANCE_TYPE: osx_instance
    INSTALL_COMMAND: brew install
    NAME: macos-11
    PATH_EXTRA: /usr/local/opt/ccache/libexec:/usr/local/opt/gettext/bin:/usr/local/opt/libpcap/bin:/usr/local/opt/libxslt/bin:/usr/local/opt/rpcgen/bin
    PKG_CONFIG_PATH: /usr/local/opt/curl/lib/pkgconfig:/usr/local/opt/libpcap/lib/pkgconfig:/usr/local/opt/libxml2/lib/pkgconfig:/usr/local/opt/ncurses/lib/pkgconfig:/usr/local/opt/readline/lib/pkgconfig
    UPDATE_COMMAND: brew update
    UPGRADE_COMMAND: brew upgrade


include:
  - local: '/ci/gitlab/container-templates.yml'
  - local: '/ci/gitlab/build-templates.yml'
  - local: '/ci/gitlab/sanity-checks.yml'
  - local: '/ci/gitlab/containers.yml'
  - local: '/ci/gitlab/builds.yml'
45
ci/gitlab/build-templates.yml
Normal file
@@ -0,0 +1,45 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool manifest ci/manifest.yml
#
# https://gitlab.com/libvirt/libvirt-ci


.gitlab_native_build_job:
  image: $CI_REGISTRY_IMAGE/ci-$NAME:latest
  stage: builds


.gitlab_cross_build_job:
  image: $CI_REGISTRY_IMAGE/ci-$NAME-cross-$CROSS:latest
  stage: builds


.cirrus_build_job:
  stage: builds
  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
  needs: []
  script:
    - source ci/cirrus/$NAME.vars
    - sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
          -e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
          -e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
          -e "s|[@]CIRRUS_VM_INSTANCE_TYPE@|$CIRRUS_VM_INSTANCE_TYPE|g"
          -e "s|[@]CIRRUS_VM_IMAGE_SELECTOR@|$CIRRUS_VM_IMAGE_SELECTOR|g"
          -e "s|[@]CIRRUS_VM_IMAGE_NAME@|$CIRRUS_VM_IMAGE_NAME|g"
          -e "s|[@]UPDATE_COMMAND@|$UPDATE_COMMAND|g"
          -e "s|[@]UPGRADE_COMMAND@|$UPGRADE_COMMAND|g"
          -e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g"
          -e "s|[@]PATH@|$PATH_EXTRA${PATH_EXTRA:+:}\$PATH|g"
          -e "s|[@]PKG_CONFIG_PATH@|$PKG_CONFIG_PATH|g"
          -e "s|[@]PKGS@|$PKGS|g"
          -e "s|[@]MAKE@|$MAKE|g"
          -e "s|[@]PYTHON@|$PYTHON|g"
          -e "s|[@]PIP3@|$PIP3|g"
          -e "s|[@]PYPI_PKGS@|$PYPI_PKGS|g"
          -e "s|[@]XML_CATALOG_FILES@|$XML_CATALOG_FILES|g"
      <ci/cirrus/build.yml >ci/cirrus/$NAME.yml
    - cat ci/cirrus/$NAME.yml
    - cirrus-run -v --show-build-log always ci/cirrus/$NAME.yml
  rules:
    - if: "$CIRRUS_GITHUB_REPO && $CIRRUS_API_TOKEN"
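The PATH substitution in the sed expression above uses `${PATH_EXTRA:+:}` so the `:` separator is emitted only when `PATH_EXTRA` is non-empty; otherwise the result would start with a bare `:`, which some shells treat as the current directory on PATH. A small demonstration of that parameter expansion (`join_path` is a hypothetical helper for illustration):

```shell
#!/bin/sh
# ${VAR:+word} expands to "word" only when VAR is set and non-empty,
# so the ":" separator disappears together with the extra path entry.
join_path() {
    PATH_EXTRA=$1
    echo "$PATH_EXTRA${PATH_EXTRA:+:}/usr/bin"
}

join_path /opt/ccache/libexec   # -> /opt/ccache/libexec:/usr/bin
join_path ""                    # -> /usr/bin
```

This is why the macOS job can set a long `PATH_EXTRA` while other platforms leave it unset without producing a malformed PATH.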
402
ci/gitlab/builds.yml
Normal file
@@ -0,0 +1,402 @@
|
||||
# THIS FILE WAS AUTO-GENERATED
|
||||
#
|
||||
# $ lcitool manifest ci/manifest.yml
|
||||
#
|
||||
# https://gitlab.com/libvirt/libvirt-ci
|
||||
|
||||
|
||||
# Native build jobs
|
||||
|
||||
x86_64-almalinux-8:
|
||||
extends: .native_build_job
|
||||
needs:
|
||||
    - job: x86_64-almalinux-8-container
      optional: true
  allow_failure: false
  variables:
    NAME: almalinux-8
    RPM: skip


x86_64-almalinux-8-clang:
  extends: .native_build_job
  needs:
    - job: x86_64-almalinux-8-container
      optional: true
  allow_failure: false
  variables:
    CC: clang
    NAME: almalinux-8
    RPM: skip


x86_64-alpine-314:
  extends: .native_build_job
  needs:
    - job: x86_64-alpine-314-container
      optional: true
  allow_failure: false
  variables:
    NAME: alpine-314


x86_64-alpine-315:
  extends: .native_build_job
  needs:
    - job: x86_64-alpine-315-container
      optional: true
  allow_failure: false
  variables:
    NAME: alpine-315


x86_64-alpine-edge:
  extends: .native_build_job
  needs:
    - job: x86_64-alpine-edge-container
      optional: true
  allow_failure: true
  variables:
    NAME: alpine-edge


x86_64-centos-stream-8:
  extends: .native_build_job
  needs:
    - job: x86_64-centos-stream-8-container
      optional: true
  allow_failure: false
  variables:
    NAME: centos-stream-8
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-centos-stream-9:
  extends: .native_build_job
  needs:
    - job: x86_64-centos-stream-9-container
      optional: true
  allow_failure: false
  variables:
    NAME: centos-stream-9
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-debian-10:
  extends: .native_build_job
  needs:
    - job: x86_64-debian-10-container
      optional: true
  allow_failure: false
  variables:
    NAME: debian-10


x86_64-debian-11:
  extends: .native_build_job
  needs:
    - job: x86_64-debian-11-container
      optional: true
  allow_failure: false
  variables:
    NAME: debian-11
x86_64-debian-11-clang:
  extends: .native_build_job
  needs:
    - job: x86_64-debian-11-container
      optional: true
  allow_failure: false
  variables:
    CC: clang
    NAME: debian-11
x86_64-debian-sid:
  extends: .native_build_job
  needs:
    - job: x86_64-debian-sid-container
      optional: true
  allow_failure: true
  variables:
    NAME: debian-sid


x86_64-fedora-35:
  extends: .native_build_job
  needs:
    - job: x86_64-fedora-35-container
      optional: true
  allow_failure: false
  variables:
    NAME: fedora-35
  artifacts:
    expire_in: 1 day
    paths:
      - libvirt-rpms


x86_64-fedora-36:
  extends: .native_build_job
  needs:
    - job: x86_64-fedora-36-container
      optional: true
  allow_failure: false
  variables:
    NAME: fedora-36


x86_64-fedora-rawhide:
  extends: .native_build_job
  needs:
    - job: x86_64-fedora-rawhide-container
      optional: true
  allow_failure: true
  variables:
    NAME: fedora-rawhide


x86_64-fedora-rawhide-clang:
  extends: .native_build_job
  needs:
    - job: x86_64-fedora-rawhide-container
      optional: true
  allow_failure: true
  variables:
    CC: clang
    NAME: fedora-rawhide
    RPM: skip


x86_64-opensuse-leap-153:
  extends: .native_build_job
  needs:
    - job: x86_64-opensuse-leap-153-container
      optional: true
  allow_failure: false
  variables:
    NAME: opensuse-leap-153
    RPM: skip


x86_64-opensuse-tumbleweed:
  extends: .native_build_job
  needs:
    - job: x86_64-opensuse-tumbleweed-container
      optional: true
  allow_failure: true
  variables:
    NAME: opensuse-tumbleweed
    RPM: skip


x86_64-ubuntu-2004:
  extends: .native_build_job
  needs:
    - job: x86_64-ubuntu-2004-container
      optional: true
  allow_failure: false
  variables:
    NAME: ubuntu-2004


x86_64-ubuntu-2204:
  extends: .native_build_job
  needs:
    - job: x86_64-ubuntu-2204-container
      optional: true
  allow_failure: false
  variables:
    ASAN_OPTIONS: verify_asan_link_order=0
    MESON_ARGS: -Db_lundef=false -Db_sanitize=address,undefined
    NAME: ubuntu-2204
    UBSAN_OPTIONS: print_stacktrace=1:halt_on_error=1


x86_64-ubuntu-2204-clang:
  extends: .native_build_job
  needs:
    - job: x86_64-ubuntu-2204-container
      optional: true
  allow_failure: false
  variables:
    CC: clang
    MESON_ARGS: -Db_lundef=false -Db_sanitize=address,undefined
    NAME: ubuntu-2204
    UBSAN_OPTIONS: print_stacktrace=1:halt_on_error=1


# Cross build jobs

armv6l-debian-10:
  extends: .cross_build_job
  needs:
    - job: armv6l-debian-10-container
      optional: true
  allow_failure: false
  variables:
    CROSS: armv6l
    NAME: debian-10


mips-debian-10:
  extends: .cross_build_job
  needs:
    - job: mips-debian-10-container
      optional: true
  allow_failure: false
  variables:
    CROSS: mips
    NAME: debian-10


mipsel-debian-10:
  extends: .cross_build_job
  needs:
    - job: mipsel-debian-10-container
      optional: true
  allow_failure: false
  variables:
    CROSS: mipsel
    NAME: debian-10


armv7l-debian-11:
  extends: .cross_build_job
  needs:
    - job: armv7l-debian-11-container
      optional: true
  allow_failure: false
  variables:
    CROSS: armv7l
    NAME: debian-11


mips64el-debian-11:
  extends: .cross_build_job
  needs:
    - job: mips64el-debian-11-container
      optional: true
  allow_failure: false
  variables:
    CROSS: mips64el
    NAME: debian-11


ppc64le-debian-11:
  extends: .cross_build_job
  needs:
    - job: ppc64le-debian-11-container
      optional: true
  allow_failure: false
  variables:
    CROSS: ppc64le
    NAME: debian-11


aarch64-debian-sid:
  extends: .cross_build_job
  needs:
    - job: aarch64-debian-sid-container
      optional: true
  allow_failure: true
  variables:
    CROSS: aarch64
    NAME: debian-sid


i686-debian-sid:
  extends: .cross_build_job
  needs:
    - job: i686-debian-sid-container
      optional: true
  allow_failure: true
  variables:
    CROSS: i686
    NAME: debian-sid


s390x-debian-sid:
  extends: .cross_build_job
  needs:
    - job: s390x-debian-sid-container
      optional: true
  allow_failure: true
  variables:
    CROSS: s390x
    NAME: debian-sid


mingw64-fedora-36:
  extends: .cross_build_job
  needs:
    - job: mingw64-fedora-36-container
      optional: true
  allow_failure: false
  variables:
    CROSS: mingw64
    NAME: fedora-36


mingw32-fedora-rawhide:
  extends: .cross_build_job
  needs:
    - job: mingw32-fedora-rawhide-container
      optional: true
  allow_failure: true
  variables:
    CROSS: mingw32
    NAME: fedora-rawhide


# Native cirrus build jobs

x86_64-freebsd-12:
  extends: .cirrus_build_job
  needs: []
  allow_failure: false
  variables:
    CIRRUS_VM_IMAGE_NAME: freebsd-12-3
    CIRRUS_VM_IMAGE_SELECTOR: image_family
    CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
    INSTALL_COMMAND: pkg install -y
    NAME: freebsd-12
    UPDATE_COMMAND: pkg update
    UPGRADE_COMMAND: pkg upgrade -y


x86_64-freebsd-13:
  extends: .cirrus_build_job
  needs: []
  allow_failure: false
  variables:
    CIRRUS_VM_IMAGE_NAME: freebsd-13-0
    CIRRUS_VM_IMAGE_SELECTOR: image_family
    CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
    INSTALL_COMMAND: pkg install -y
    NAME: freebsd-13
    UPDATE_COMMAND: pkg update
    UPGRADE_COMMAND: pkg upgrade -y


x86_64-macos-11:
  extends: .cirrus_build_job
  needs: []
  allow_failure: false
  variables:
    CIRRUS_VM_IMAGE_NAME: big-sur-base
    CIRRUS_VM_IMAGE_SELECTOR: image
    CIRRUS_VM_INSTANCE_TYPE: osx_instance
    INSTALL_COMMAND: brew install
    NAME: macos-11
    PATH_EXTRA: /usr/local/opt/ccache/libexec:/usr/local/opt/gettext/bin:/usr/local/opt/libpcap/bin:/usr/local/opt/libxslt/bin:/usr/local/opt/rpcgen/bin
    PKG_CONFIG_PATH: /usr/local/opt/curl/lib/pkgconfig:/usr/local/opt/libpcap/lib/pkgconfig:/usr/local/opt/libxml2/lib/pkgconfig:/usr/local/opt/ncurses/lib/pkgconfig:/usr/local/opt/readline/lib/pkgconfig
    UPDATE_COMMAND: brew update
    UPGRADE_COMMAND: brew upgrade
52
ci/gitlab/container-templates.yml
Normal file
@@ -0,0 +1,52 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool manifest ci/manifest.yml
#
# https://gitlab.com/libvirt/libvirt-ci


# For upstream
#
# - Push to default branch:
#     -> rebuild if dockerfile changed, no cache
# - Otherwise
#     -> rebuild if LIBVIRT_CI_CONTAINERS=1, no cache,
#        to pick up new published distro packages or
#        recover from deleted tag
#
# For forks
#   - Always rebuild, with cache
#
.container_job:
  image: docker:stable
  stage: containers
  needs: []
  services:
    - docker:dind
  before_script:
    - export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest"
    - export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest"
    - docker info
    - docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
  script:
    - if test $CI_PROJECT_NAMESPACE = "libvirt";
      then
        docker build --tag "$TAG" -f "ci/containers/$NAME.Dockerfile" ci/containers ;
      else
        docker pull "$TAG" || docker pull "$COMMON_TAG" || true ;
        docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f "ci/containers/$NAME.Dockerfile" ci/containers ;
      fi
    - docker push "$TAG"
  after_script:
    - docker logout
  rules:
    - if: '$CI_PROJECT_NAMESPACE == "libvirt" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: on_success
      changes:
        - ci/gitlab/container-templates.yml
        - ci/containers/$NAME.Dockerfile
    - if: '$CI_PROJECT_NAMESPACE == "libvirt" && $LIBVIRT_CI_CONTAINERS == "1"'
      when: on_success
    - if: '$CI_PROJECT_NAMESPACE == "libvirt"'
      when: never
    - when: on_success
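The `before_script` above derives two image tags: `TAG` points at the current project's own container registry, while `COMMON_TAG` always points at the upstream `libvirt/libvirt` registry, which is what lets forks seed their Docker build cache from upstream images. A minimal sketch of the tag derivation, with hypothetical values standing in for GitLab's predefined CI variables:

```shell
# Hypothetical stand-ins for GitLab's predefined CI variables
# (in a real pipeline these are injected by the runner)
CI_REGISTRY="registry.gitlab.com"
CI_REGISTRY_IMAGE="registry.gitlab.com/alice/libvirt"   # a fork's image path
NAME="fedora-36"

# Same derivation as the job's before_script
TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest"
COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest"

echo "$TAG"
echo "$COMMON_TAG"
```

In the fork branch of the script, `docker pull "$TAG" || docker pull "$COMMON_TAG" || true` then primes the cache with whichever of the two images exists, and `--cache-from` lets the rebuild reuse their layers.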
325
ci/gitlab/containers.yml
Normal file
@@ -0,0 +1,325 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool manifest ci/manifest.yml
#
# https://gitlab.com/libvirt/libvirt-ci


# Native container jobs

x86_64-almalinux-8-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: almalinux-8


x86_64-alpine-314-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: alpine-314


x86_64-alpine-315-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: alpine-315


x86_64-alpine-edge-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: alpine-edge


x86_64-centos-stream-8-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: centos-stream-8


x86_64-centos-stream-9-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: centos-stream-9


x86_64-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10


x86_64-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11


x86_64-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid


x86_64-fedora-35-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-35


x86_64-fedora-36-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-36


x86_64-fedora-rawhide-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-rawhide


x86_64-opensuse-leap-153-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: opensuse-leap-153


x86_64-opensuse-tumbleweed-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: opensuse-tumbleweed


x86_64-ubuntu-2004-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: ubuntu-2004


x86_64-ubuntu-2204-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: ubuntu-2204


# Cross container jobs

aarch64-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-aarch64


armv6l-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10-cross-armv6l


armv7l-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-armv7l


i686-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-i686


mips-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10-cross-mips


mips64el-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-mips64el


mipsel-debian-10-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-10-cross-mipsel


ppc64le-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-ppc64le


s390x-debian-10-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-10-cross-s390x


aarch64-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-aarch64


armv6l-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-armv6l


armv7l-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11-cross-armv7l


i686-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-i686


mips64el-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11-cross-mips64el


mipsel-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-mipsel


ppc64le-debian-11-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: debian-11-cross-ppc64le


s390x-debian-11-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-11-cross-s390x


aarch64-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-aarch64


armv6l-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-armv6l


armv7l-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-armv7l


i686-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-i686


mips64el-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-mips64el


mipsel-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-mipsel


ppc64le-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-ppc64le


s390x-debian-sid-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: debian-sid-cross-s390x


mingw32-fedora-36-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-36-cross-mingw32


mingw64-fedora-36-container:
  extends: .container_job
  allow_failure: false
  variables:
    NAME: fedora-36-cross-mingw64


mingw32-fedora-rawhide-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-rawhide-cross-mingw32


mingw64-fedora-rawhide-container:
  extends: .container_job
  allow_failure: true
  variables:
    NAME: fedora-rawhide-cross-mingw64
18
ci/gitlab/sanity-checks.yml
Normal file
@@ -0,0 +1,18 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool manifest ci/manifest.yml
#
# https://gitlab.com/libvirt/libvirt-ci


check-dco:
  stage: sanity_checks
  needs: []
  image: registry.gitlab.com/libvirt/libvirt-ci/check-dco:master
  script:
    - /check-dco libvirt
  except:
    variables:
      - $CI_PROJECT_NAMESPACE == 'libvirt'
  variables:
    GIT_DEPTH: 1000
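The `check-dco` job verifies that commits carry a Developer Certificate of Origin sign-off. The real `/check-dco` script from libvirt-ci walks the git history; the core of what it enforces is just the presence of a `Signed-off-by:` trailer in each commit message, which a toy approximation can show (the message text here is illustrative):

```shell
# Toy approximation of a DCO check: a commit message must carry a
# Signed-off-by trailer (the real /check-dco script inspects git history)
msg="qemu: fix capability probing

Signed-off-by: Jane Developer <jane@example.com>"

if printf '%s\n' "$msg" | grep -q '^Signed-off-by: '; then
    echo "DCO ok"
else
    echo "DCO missing"
fi
```

`GIT_DEPTH: 1000` ensures enough history is cloned for the check to see the commits on the branch under review.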
100
ci/integration-template.yml
Normal file
@@ -0,0 +1,100 @@
.qemu-build-template: &qemu-build-template
  - git clone --depth 1 https://gitlab.com/qemu-project/qemu.git
  - cd qemu
  #
  # inspired by upstream QEMU's buildtest-template.yml
  - export JOBS="$(expr $(nproc) + 1)"
  - mkdir build
  - cd build
  - ../configure --prefix=/usr
                 --enable-werror
                 --disable-tcg
                 --disable-docs
                 --target-list=x86_64-softmmu || (cat config.log meson-logs/meson-log.txt && exit 1)
  - make -j"$JOBS"
  - if test -n "$MAKE_CHECK_ARGS";
    then
      make -j"$JOBS" check-build;
    fi
  - sudo make install


.install-deps: &install-deps
  - sudo dnf install -y libvirt-rpms/* libvirt-perl-rpms/*
  - sudo pip3 install --prefix=/usr avocado-framework


.enable-core-dumps: &enable-core-dumps
  - sudo sh -c "echo DefaultLimitCORE=infinity >> /etc/systemd/system.conf" # Explicitly allow storing cores globally
  - sudo systemctl daemon-reexec # need to reexec systemd after changing config


.enable-libvirt-debugging: &enable-libvirt-debugging
  - source /etc/os-release  # in order to query the vendor-provided variables
  - if test "$ID" = "centos" && test "$VERSION_ID" -lt 9 ||
       test "$ID" = "fedora" && test "$VERSION_ID" -lt 35;
    then
      DAEMONS="libvirtd virtlogd virtlockd";
    else
      DAEMONS="virtproxyd virtqemud virtinterfaced virtsecretd virtstoraged virtnwfilterd virtnodedevd virtlogd virtlockd";
    fi
  - for daemon in $DAEMONS;
    do
      LOG_OUTPUTS="1:file:/var/log/libvirt/${daemon}.log";
      LOG_FILTERS="3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 1:*";
      sudo augtool set /files/etc/libvirt/${daemon}.conf/log_filters "$LOG_FILTERS" &>/dev/null;
      sudo augtool set /files/etc/libvirt/${daemon}.conf/log_outputs "$LOG_OUTPUTS" &>/dev/null;
      sudo systemctl --quiet stop ${daemon}.service;
      sudo systemctl restart ${daemon}.socket;
    done


.collect-logs: &collect-logs
  - mkdir logs
  - test -e "$SCRATCH_DIR"/avocado && sudo mv "$SCRATCH_DIR"/avocado/latest/test-results logs/avocado;
  - sudo coredumpctl info --no-pager > logs/coredumpctl.txt
  - sudo mv /var/log/libvirt logs/libvirt
  - sudo chown -R $(whoami):$(whoami) logs
  # rename all Avocado stderr/stdout logs to *.log so that GitLab's web UI doesn't mangle the MIME type
  - find logs/avocado/ -type f ! -name "*.log" -exec
      sh -c 'DIR=$(dirname {}); NAME=$(basename {}); mv $DIR/$NAME{,.log}' \;


.integration_tests:
  stage: integration_tests
  before_script:
    - mkdir "$SCRATCH_DIR"
    - *install-deps
    - *enable-core-dumps
    - *enable-libvirt-debugging
    - sudo virsh net-start default &>/dev/null || true;
  script:
    - cd "$SCRATCH_DIR"
    - git clone --depth 1 https://gitlab.com/libvirt/libvirt-tck.git
    - cd libvirt-tck
    - sudo avocado --config avocado.config run --job-results-dir "$SCRATCH_DIR"/avocado
  after_script:
    - test "$CI_JOB_STATUS" = "success" && exit 0;
    - *collect-logs
  variables:
    SCRATCH_DIR: "/tmp/scratch"
  artifacts:
    name: logs
    paths:
      - logs
    when: on_failure
  rules:
    - if: '$LIBVIRT_CI_INTEGRATION'
      when: on_success
    - when: never


# YAML anchors don't work with Shell conditions so we can't use a variable
# to conditionally build+install QEMU from source.
# Instead, create a new test job template for this scenario.
.integration_tests_upstream_qemu:
  extends: .integration_tests
  before_script:
    - !reference [.integration_tests, before_script]
    - cd "$SCRATCH_DIR"
    - *qemu-build-template
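The rename step in `.collect-logs` leans on two behaviors: GNU find substitutes `{}` wherever it appears in the `-exec` arguments, and `mv $DIR/$NAME{,.log}` relies on `/bin/sh` performing brace expansion (true when `sh` is bash, as on the Fedora/CentOS runners this template targets). A small sketch of the same idea on a throwaway directory, spelled out without the brace expansion (file names here are illustrative, not the CI's real layout):

```shell
# Illustrative layout standing in for the Avocado results tree
mkdir -p demo/avocado/job-01
touch demo/avocado/job-01/stdout demo/avocado/job-01/stderr demo/avocado/job-01/debug.log

# Append .log to every file that doesn't already have it, as .collect-logs does;
# already-renamed files are excluded by the ! -name "*.log" filter
find demo/avocado/ -type f ! -name "*.log" -exec \
    sh -c 'DIR=$(dirname {}); NAME=$(basename {}); mv $DIR/$NAME $DIR/$NAME.log' \;
```

After this, `stdout` and `stderr` become `stdout.log` and `stderr.log`, so GitLab's artifact browser serves them as plain text instead of guessing a MIME type from the missing extension.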
@@ -1,55 +1,5 @@
.integration_tests:
  stage: integration_tests
  before_script:
    - mkdir "$SCRATCH_DIR"
    - sudo sh -c "echo DefaultLimitCORE=infinity >> /etc/systemd/system.conf" # Explicitly allow storing cores globally
    - sudo systemctl daemon-reexec # need to reexec systemd after changing config
    - sudo dnf install -y libvirt-rpms/* libvirt-perl-rpms/*
    - sudo pip3 install --prefix=/usr avocado-framework
    - source /etc/os-release  # in order to query the vendor-provided variables
    - if test "$ID" = "centos" && test "$VERSION_ID" -lt 9 ||
         test "$ID" = "fedora" && test "$VERSION_ID" -lt 35;
      then
        DAEMONS="libvirtd virtlogd virtlockd";
      else
        DAEMONS="virtproxyd virtqemud virtinterfaced virtsecretd virtstoraged virtnwfilterd virtnodedevd virtlogd virtlockd";
      fi
    - for daemon in $DAEMONS;
      do
        LOG_OUTPUTS="1:file:/var/log/libvirt/${daemon}.log";
        LOG_FILTERS="3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 1:*";
        sudo augtool set /files/etc/libvirt/${daemon}.conf/log_filters "$LOG_FILTERS" &>/dev/null;
        sudo augtool set /files/etc/libvirt/${daemon}.conf/log_outputs "$LOG_OUTPUTS" &>/dev/null;
        sudo systemctl --quiet stop ${daemon}.service;
        sudo systemctl restart ${daemon}.socket;
      done
    - sudo virsh net-start default &>/dev/null || true;
  script:
    - mkdir logs
    - cd "$SCRATCH_DIR"
    - git clone --depth 1 https://gitlab.com/libvirt/libvirt-tck.git
    - cd libvirt-tck
    - sudo avocado --config avocado.config run --job-results-dir "$SCRATCH_DIR"/avocado
  after_script:
    - test "$CI_JOB_STATUS" = "success" && exit 0;
    - test -e "$SCRATCH_DIR"/avocado && sudo mv "$SCRATCH_DIR"/avocado/latest/test-results logs/avocado;
    - sudo coredumpctl info --no-pager > logs/coredumpctl.txt
    - sudo mv /var/log/libvirt logs/libvirt
    - sudo chown -R $(whoami):$(whoami) logs
    # rename all Avocado stderr/stdout logs to *.log so that GitLab's web UI doesn't mangle the MIME type
    - find logs/avocado/ -type f ! -name "*.log" -exec
        sh -c 'DIR=$(dirname {}); NAME=$(basename {}); mv $DIR/$NAME{,.log}' \;
  variables:
    SCRATCH_DIR: "/tmp/scratch"
  artifacts:
    name: logs
    paths:
      - logs
    when: on_failure
  rules:
    - if: '$LIBVIRT_CI_INTEGRATION'
      when: on_success
    - when: never
include:
  - 'ci/integration-template.yml'

centos-stream-8-tests:
  extends: .integration_tests
@@ -83,22 +33,6 @@ centos-stream-9-tests:
  tags:
    - $LIBVIRT_CI_INTEGRATION_RUNNER_TAG

fedora-34-tests:
  extends: .integration_tests
  needs:
    - x86_64-fedora-34
    - project: libvirt/libvirt-perl
      job: x86_64-fedora-34
      ref: master
      artifacts: true
  variables:
    # needed by libvirt-gitlab-executor
    DISTRO: fedora-34
    # can be overridden in forks to set a different runner tag
    LIBVIRT_CI_INTEGRATION_RUNNER_TAG: redhat-vm-host
  tags:
    - $LIBVIRT_CI_INTEGRATION_RUNNER_TAG

fedora-35-tests:
  extends: .integration_tests
  needs:
@@ -114,3 +48,19 @@ fedora-35-tests:
    LIBVIRT_CI_INTEGRATION_RUNNER_TAG: redhat-vm-host
  tags:
    - $LIBVIRT_CI_INTEGRATION_RUNNER_TAG

fedora-35-upstream-qemu-tests:
  extends: .integration_tests_upstream_qemu
  needs:
    - x86_64-fedora-35
    - project: libvirt/libvirt-perl
      job: x86_64-fedora-35
      ref: master
      artifacts: true
  variables:
    # needed by libvirt-gitlab-executor
    DISTRO: fedora-35
    # can be overridden in forks to set a different runner tag
    LIBVIRT_CI_INTEGRATION_RUNNER_TAG: redhat-vm-host
  tags:
    - $LIBVIRT_CI_INTEGRATION_RUNNER_TAG
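The distro check in the script above selects the monolithic daemon set on CentOS Stream older than 9 and Fedora older than 35, and the modular per-driver daemons everywhere else. The same branch can be sketched as a standalone function (`pick_daemons` is a hypothetical helper; in the CI script `ID` and `VERSION_ID` come from sourcing `/etc/os-release`):

```shell
# Hypothetical helper mirroring the daemon-selection branch in the CI script
pick_daemons() {
    ID=$1
    VERSION_ID=$2
    if test "$ID" = "centos" && test "$VERSION_ID" -lt 9 ||
       test "$ID" = "fedora" && test "$VERSION_ID" -lt 35; then
        # older releases ship the monolithic libvirtd
        echo "libvirtd virtlogd virtlockd"
    else
        # newer releases ship the modular per-driver daemons
        echo "virtproxyd virtqemud virtinterfaced virtsecretd virtstoraged virtnwfilterd virtnodedevd virtlogd virtlockd"
    fi
}

pick_daemons centos 8   # monolithic set
pick_daemons fedora 36  # modular set
```

The per-daemon loop then applies the same `log_filters`/`log_outputs` settings to whichever set was chosen, so debug logs end up in `/var/log/libvirt/<daemon>.log` regardless of the daemon layout.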
@@ -18,9 +18,9 @@ targets:
      RPM: skip
      CC: clang

  alpine-314:
    jobs:
      - arch: x86_64
  alpine-314: x86_64

  alpine-315: x86_64

  alpine-edge:
    jobs:
@@ -144,14 +144,6 @@ targets:
      - arch: s390x
        allow-failure: true

  fedora-34:
    jobs:
      - arch: x86_64
        artifacts:
          expire_in: 1 day
          paths:
            - libvirt-rpms

  fedora-35:
    jobs:
      - arch: x86_64
@@ -160,8 +152,11 @@ targets:
        paths:
          - libvirt-rpms

  fedora-36:
    jobs:
      - arch: x86_64

      - arch: mingw32
        allow-failure: true
        builds: false

      - arch: mingw64
@@ -189,13 +184,7 @@ targets:

  freebsd-13: x86_64

  freebsd-current:
    jobs:
      - arch: x86_64
        allow-failure: true
        builds: False

  opensuse-leap-152:
  opensuse-leap-153:
    jobs:
      - arch: x86_64
        variables:
@@ -215,9 +204,9 @@ targets:
      PATH_EXTRA: /usr/local/opt/ccache/libexec:/usr/local/opt/gettext/bin:/usr/local/opt/libpcap/bin:/usr/local/opt/libxslt/bin:/usr/local/opt/rpcgen/bin
      PKG_CONFIG_PATH: /usr/local/opt/curl/lib/pkgconfig:/usr/local/opt/libpcap/lib/pkgconfig:/usr/local/opt/libxml2/lib/pkgconfig:/usr/local/opt/ncurses/lib/pkgconfig:/usr/local/opt/readline/lib/pkgconfig

  ubuntu-1804: x86_64
  ubuntu-2004: x86_64

  ubuntu-2004:
  ubuntu-2204:
    jobs:
      - arch: x86_64
        variables:
@@ -219,7 +219,7 @@ Daemon and Remote Access

Access to libvirt drivers is primarily handled by the libvirtd daemon
through the `remote <remote.html>`__ driver via an
`RPC <internals/rpc.html>`__. Some hypervisors do support client-side
`RPC <kbase/internals/rpc.html>`__. Some hypervisors do support client-side
connections and responses, such as Test, OpenVZ, VMware, VirtualBox
(vbox), ESX, Hyper-V, Xen, and Virtuozzo. The libvirtd daemon service is
started on the host at system boot time and can also be restarted at any
@@ -234,8 +234,9 @@ The libvirt client `applications <apps.html>`__ use a `URI <uri.html>`__
to obtain the ``virConnectPtr``. The ``virConnectPtr`` keeps track of
the driver connection plus a variety of other connections (network,
interface, storage, etc.). The ``virConnectPtr`` is then used as a
parameter to other virtualization `functions <#Functions>`__. Depending
upon the driver being used, calls will be routed through the remote
parameter to other virtualization functions
(see `Functions and Naming Conventions`_).
Depending upon the driver being used, calls will be routed through the remote
driver to the libvirtd daemon. The daemon will reference the connection
specific driver in order to retrieve the requested information and then
pass back status and/or data through the connection back to the
@@ -143,30 +143,22 @@ Desktop applications
|
||||
or text console associated with a virtual machine or container.
|
||||
`qt-remote-viewer <https://f1ash.github.io/qt-virt-manager/#virtual-machines-viewer>`__
|
||||
The Qt VNC/SPICE viewer for access to remote desktops or VMs.
|
||||
`GNOME Boxes <https://gnomeboxes.org/>`__
|
||||
A GNOME application to access virtual machines.
|
||||
|
||||
Infrastructure as a Service (IaaS)
|
||||
----------------------------------
|
||||
|
||||
`Cracow Cloud One <http://cc1.ifj.edu.pl>`__
|
||||
The CC1 system provides a complete solution for Private Cloud
|
||||
Computing. An intuitive web access interface with an administration
|
||||
module and simple installation procedure make it easy to benefit from
|
||||
private Cloud Computing technology.
|
||||
`Eucalyptus <https://github.com/eucalyptus/eucalyptus>`__
|
||||
Eucalyptus is an on-premise Infrastructure as a Service cloud
|
||||
software platform that is open source and AWS-compatible. Eucalyptus
|
||||
uses libvirt virtualization API to directly interact with Xen and KVM
|
||||
hypervisors.
|
||||
`Nimbus <http://www.nimbusproject.org>`__
|
||||
`Nimbus <https://www.nimbusproject.org/>`__
|
||||
Nimbus is an open-source toolkit focused on providing
|
||||
Infrastructure-as-a-Service (IaaS) capabilities to the scientific
|
||||
community. It uses libvirt for communication with all KVM and Xen
|
||||
virtual machines.
|
||||
`Snooze <http://snooze.inria.fr>`__
|
||||
Snooze is an open-source scalable, autonomic, and energy-efficient
|
||||
virtual machine (VM) management framework for private clouds. It
|
||||
integrates libvirt for VM monitoring, live migration, and life-cycle
|
||||
management.
|
||||
`OpenStack <https://www.openstack.org>`__
|
||||
OpenStack is a "cloud operating system" usable for both public and
|
||||
private clouds. Its various parts take care of compute, storage and
|
||||
@@ -232,14 +224,14 @@ Monitoring
|
||||
for each guest without installing collectd on the guest systems. For
|
||||
a full description, please refer to the libvirt section in the
|
||||
collectd.conf(5) manual page.
|
||||
`Host sFlow <https://www.sflow.net/>`__
|
||||
`Host sFlow <https://sflow.net/>`__
|
||||
Host sFlow is a lightweight agent running on KVM hypervisors that
|
||||
links to libvirt library and exports standardized cpu, memory,
|
||||
network and disk metrics for all virtual machines.
|
||||
`Munin <https://honk.sigxcpu.org/projects/libvirt/#munin>`__
|
||||
The plugins provided by Guido Günther allow to monitor various things
|
||||
like network and block I/O with
|
||||
`Munin <http://munin.projects.linpro.no/>`__.
|
||||
`Munin <https://munin-monitoring.org/>`__.
|
||||
`Nagios-virt <https://people.redhat.com/rjones/nagios-virt/>`__
|
||||
Nagios-virt is a configuration tool to add monitoring of your
|
||||
virtualised domains to `Nagios <https://www.nagios.org/>`__. You can
|
||||
@@ -256,12 +248,6 @@ Monitoring
|
||||
Provisioning
|
||||
------------
|
||||
|
||||
`Tivoli Provisioning Manager <https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli+Provisioning+Manager>`__
|
||||
Part of the IBM Tivoli family, Tivoli Provisioning Manager (TPM) is
|
||||
an IT lifecycle automation product. It `uses
|
||||
libvirt <http://publib.boulder.ibm.com/infocenter/tivihelp/v38r1/index.jsp?topic=/com.ibm.tivoli.tpm.apk.doc/libvirt_package.html>`__
|
||||
for communication with virtualization hosts and guest domains.
|
||||
|
||||
`Foreman <https://theforeman.org>`__
|
||||
Foreman is an open source web-based application aimed at being a single
|
||||
address for all machines' life cycle management. Foreman:
|
||||
@@ -331,6 +317,10 @@ Web applications
|
||||
Secrets
|
||||
- Create and launch VMs
|
||||
- Configure VMs with easy panels or go pro and edit the VM's XML
|
||||
`Cockpit <https://cockpit-project.org/>`__
|
||||
Cockpit is a web-based graphical interface for servers. With
|
||||
`cockpit-machines <https://github.com/cockpit-project/cockpit-machines>`__
|
||||
it can create and manage virtual machines via libvirt.
|
||||
|
||||
Other
|
||||
-----
|
||||
|
@@ -1,6 +1,3 @@
|
||||
.. role:: anchor(raw)
|
||||
:format: html
|
||||
|
||||
=============
|
||||
Bug reporting
|
||||
=============
|
||||
@@ -79,8 +76,6 @@ Linux Distribution specific bug reports
|
||||
like to have your procedure for filing bugs mentioned here, please mail the
|
||||
libvirt development list.
|
||||
|
||||
:anchor:`<a id="quality"/>`
|
||||
|
||||
How to file high quality bug reports
|
||||
------------------------------------
|
||||
|
||||
|
@@ -1,424 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE html>
|
||||
<html xmlns="http://www.w3.org/1999/xhtml">
|
||||
<body>
|
||||
<h1>Control Groups Resource Management</h1>
|
||||
|
||||
<ul id="toc"></ul>
|
||||
|
||||
<p>
|
||||
The QEMU and LXC drivers make use of the Linux "Control Groups" facility
|
||||
for applying resource management to their virtual machines and containers.
|
||||
</p>
|
||||
|
||||
<h2><a id="requiredControllers">Required controllers</a></h2>
|
||||
|
||||
<p>
|
||||
The control groups filesystem supports multiple "controllers". By default
|
||||
the init system (such as systemd) should mount all controllers compiled
|
||||
into the kernel at <code>/sys/fs/cgroup/$CONTROLLER-NAME</code>. Libvirt
|
||||
will never attempt to mount any controllers itself, merely detect where
|
||||
they are mounted.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
The QEMU driver is capable of using the <code>cpuset</code>,
|
||||
<code>cpu</code>, <code>cpuacct</code>, <code>memory</code>,
|
||||
<code>blkio</code> and <code>devices</code> controllers.
|
||||
None of them are compulsory. If any controller is not mounted,
|
||||
the resource management APIs which use it will cease to operate.
|
||||
It is possible to explicitly turn off use of a controller,
|
||||
even when mounted, via the <code>/etc/libvirt/qemu.conf</code>
|
||||
configuration file.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
The LXC driver is capable of using the <code>cpuset</code>,
|
||||
<code>cpu</code>, <code>cpuacct</code>, <code>freezer</code>,
|
||||
<code>memory</code>, <code>blkio</code> and <code>devices</code>
|
||||
controllers. The <code>cpuacct</code>, <code>devices</code>
|
||||
and <code>memory</code> controllers are compulsory. Without
|
||||
them mounted, no containers can be started. If any of the
|
||||
other controllers are not mounted, the resource management APIs
|
||||
which use them will cease to operate.
|
||||
</p>
|
||||
|
||||
<h2><a id="currentLayout">Current cgroups layout</a></h2>
|
||||
|
||||
<p>
|
||||
As of libvirt 1.0.5 or later, the cgroups layout created by libvirt has been
|
||||
simplified, in order to facilitate the setup of resource control policies by
|
||||
administrators / management applications. The new layout is based on the concepts
|
||||
of "partitions" and "consumers". A "consumer" is a cgroup which holds the
|
||||
processes for a single virtual machine or container. A "partition" is a cgroup
|
||||
which does not contain any processes, but can have resource controls applied.
|
||||
A "partition" will have zero or more child directories which may be either
|
||||
"consumer" or "partition".
|
||||
</p>
|
||||
|
||||
<p>
|
||||
As of libvirt 1.1.1 or later, the cgroups layout will have some slight
|
||||
differences when running on a host with systemd 205 or later. The overall
|
||||
tree structure is the same, but there are some differences in the naming
|
||||
conventions for the cgroup directories. Thus the following docs are split
|
||||
in two, one describing systemd hosts and the other non-systemd hosts.
|
||||
</p>
|
||||
|
||||
<h3><a id="currentLayoutSystemd">Systemd cgroups integration</a></h3>
|
||||
|
||||
<p>
|
||||
On hosts which use systemd, each consumer maps to a systemd scope unit,
|
||||
while partitions map to a systemd slice unit.
|
||||
</p>
|
||||
|
||||
<h4><a id="systemdScope">Systemd scope naming</a></h4>
|
||||
|
||||
<p>
|
||||
The systemd convention is for the scope name of virtual machines / containers
|
||||
to be of the general format <code>machine-$NAME.scope</code>. Libvirt forms the
|
||||
<code>$NAME</code> part of this by concatenating the driver type with the id
|
||||
and truncated name of the guest, and then escaping any systemd reserved
|
||||
characters.
|
||||
So for a guest <code>demo</code> running under the <code>lxc</code> driver,
|
||||
we get a <code>$NAME</code> of <code>lxc-12345-demo</code> which when escaped
|
||||
is <code>lxc\x2d12345\x2ddemo</code>. So the complete scope name is
|
||||
<code>machine-lxc\x2d12345\x2ddemo.scope</code>.
|
||||
The scope names map directly to the cgroup directory names.
|
||||
</p>
|
||||
|
||||
<h4><a id="systemdSlice">Systemd slice naming</a></h4>
|
||||
|
||||
<p>
|
||||
The systemd convention for slice naming is that a slice should include the
|
||||
name of all of its parents prepended on its own name. So for a libvirt
|
||||
partition <code>/machine/engineering/testing</code>, the slice name will
|
||||
be <code>machine-engineering-testing.slice</code>. Again the slice names
|
||||
map directly to the cgroup directory names. Systemd creates three top level
|
||||
slices by default, <code>system.slice</code>, <code>user.slice</code> and
|
||||
<code>machine.slice</code>. All virtual machines or containers created
|
||||
by libvirt will be associated with <code>machine.slice</code> by default.
|
||||
</p>
|
||||
|
||||
<h4><a id="systemdLayout">Systemd cgroup layout</a></h4>
|
||||
|
||||
<p>
|
||||
Given this, a possible systemd cgroups layout involving 3 qemu guests,
|
||||
3 lxc containers and 3 custom child slices, would be:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
$ROOT
|
||||
|
|
||||
+- system.slice
|
||||
| |
|
||||
| +- libvirtd.service
|
||||
|
|
||||
+- machine.slice
|
||||
|
|
||||
+- machine-qemu\x2d1\x2dvm1.scope
|
||||
| |
|
||||
| +- libvirt
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- machine-qemu\x2d2\x2dvm2.scope
|
||||
| |
|
||||
| +- libvirt
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- machine-qemu\x2d3\x2dvm3.scope
|
||||
| |
|
||||
| +- libvirt
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- machine-engineering.slice
|
||||
| |
|
||||
| +- machine-engineering-testing.slice
|
||||
| | |
|
||||
| | +- machine-lxc\x2d11111\x2dcontainer1.scope
|
||||
| |
|
||||
| +- machine-engineering-production.slice
|
||||
| |
|
||||
| +- machine-lxc\x2d22222\x2dcontainer2.scope
|
||||
|
|
||||
+- machine-marketing.slice
|
||||
|
|
||||
+- machine-lxc\x2d33333\x2dcontainer3.scope
|
||||
</pre>
|
||||
|
||||
<p>
|
||||
Prior to libvirt 7.1.0 the topology did not have the extra
|
||||
<code>libvirt</code> directory.
|
||||
</p>
|
||||
|
||||
<h3><a id="currentLayoutGeneric">Non-systemd cgroups layout</a></h3>
|
||||
|
||||
<p>
|
||||
On hosts which do not use systemd, each consumer has a corresponding cgroup
|
||||
named <code>$VMNAME.libvirt-{qemu,lxc}</code>. Each consumer is associated
|
||||
with exactly one partition, which also has a corresponding cgroup, usually
|
||||
named <code>$PARTNAME.partition</code>. The exception to this naming rule
|
||||
is the top level default partition for virtual machines and containers
|
||||
<code>/machine</code>.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
Given this, a possible non-systemd cgroups layout involving 3 qemu guests,
|
||||
3 lxc containers and 2 custom child partitions, would be:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
$ROOT
|
||||
|
|
||||
+- machine
|
||||
|
|
||||
+- qemu-1-vm1.libvirt-qemu
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- qemu-2-vm2.libvirt-qemu
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- qemu-3-vm3.libvirt-qemu
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- engineering.partition
|
||||
| |
|
||||
| +- testing.partition
|
||||
| | |
|
||||
| | +- lxc-11111-container1.libvirt-lxc
|
||||
| |
|
||||
| +- production.partition
|
||||
| |
|
||||
| +- lxc-22222-container2.libvirt-lxc
|
||||
|
|
||||
+- marketing.partition
|
||||
|
|
||||
+- lxc-33333-container3.libvirt-lxc
|
||||
</pre>
|
||||
|
||||
<h2><a id="customPartiton">Using custom partitions</a></h2>
|
||||
|
||||
<p>
|
||||
If there is a need to apply resource constraints to groups of
|
||||
virtual machines or containers, then the single default
|
||||
partition <code>/machine</code> may not be sufficiently
|
||||
flexible. The administrator may wish to sub-divide the
|
||||
default partition, for example into "testing" and "production"
|
||||
partitions, and then assign each guest to a specific
|
||||
sub-partition. This is achieved via a small element addition
|
||||
to the guest domain XML config, just below the main <code>domain</code>
|
||||
element:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<resource>
|
||||
<partition>/machine/production</partition>
|
||||
</resource>
|
||||
...
|
||||
</pre>
|
||||
|
||||
<p>
|
||||
Note that the partition names in the guest XML use a
|
||||
generic naming format, not the low level naming convention
|
||||
required by the underlying host OS. That is, you should not include
|
||||
any of the <code>.partition</code> or <code>.slice</code>
|
||||
suffixes in the XML config. Given a partition name
|
||||
<code>/machine/production</code>, libvirt will automatically
|
||||
apply the platform specific translation required to get
|
||||
<code>/machine/production.partition</code> (non-systemd)
|
||||
or <code>/machine.slice/machine-production.slice</code>
|
||||
(systemd) as the underlying cgroup name.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
Libvirt will not auto-create the cgroups directory to back
|
||||
this partition. In the future, libvirt / virsh will provide
|
||||
APIs / commands to create custom partitions, but currently
|
||||
this is left as an exercise for the administrator.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
<strong>Note:</strong> the ability to place guests in custom
|
||||
partitions is only available with libvirt >= 1.0.5, using
|
||||
the new cgroup layout. The legacy cgroups layout described
|
||||
later in this document did not support customization per guest.
|
||||
</p>
|
||||
|
||||
<h3><a id="createSystemd">Creating custom partitions (systemd)</a></h3>
|
||||
|
||||
<p>
|
||||
Given the XML config above, the admin on a systemd based host would
|
||||
need to create a unit file <code>/etc/systemd/system/machine-production.slice</code>
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
# cat > /etc/systemd/system/machine-production.slice <<EOF
|
||||
[Unit]
|
||||
Description=VM production slice
|
||||
Before=slices.target
|
||||
Wants=machine.slice
|
||||
EOF
|
||||
# systemctl start machine-production.slice
|
||||
</pre>
|
||||
|
||||
<h3><a id="createNonSystemd">Creating custom partitions (non-systemd)</a></h3>
|
||||
|
||||
<p>
|
||||
Given the XML config above, the admin on a non-systemd based host
|
||||
would need to create a cgroup named <code>/machine/production.partition</code>
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
# cd /sys/fs/cgroup
|
||||
# for i in blkio cpu,cpuacct cpuset devices freezer memory net_cls perf_event
|
||||
do
|
||||
mkdir $i/machine/production.partition
|
||||
done
|
||||
# for i in cpuset.cpus cpuset.mems
|
||||
do
|
||||
cat cpuset/machine/$i > cpuset/machine/production.partition/$i
|
||||
done
|
||||
</pre>
|
||||
|
||||
<h2><a id="resourceAPIs">Resource management APIs/commands</a></h2>
|
||||
|
||||
<p>
|
||||
Since libvirt aims to provide an API which is portable across
|
||||
hypervisors, the concept of cgroups is not exposed directly
|
||||
in the API or XML configuration. It is considered to be an
|
||||
internal implementation detail. Instead libvirt provides a
|
||||
set of APIs for applying resource controls, which are then
|
||||
mapped to corresponding cgroup tunables.
|
||||
</p>
|
||||
|
||||
<h3>Scheduler tuning</h3>
|
||||
|
||||
<p>
|
||||
Parameters from the "cpu" controller are exposed via the
|
||||
<code>schedinfo</code> command in virsh.
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
# virsh schedinfo demo
|
||||
Scheduler : posix
|
||||
cpu_shares : 1024
|
||||
vcpu_period : 100000
|
||||
vcpu_quota : -1
|
||||
emulator_period: 100000
|
||||
emulator_quota : -1</pre>
|
||||
|
||||
|
||||
<h3>Block I/O tuning</h3>
|
||||
|
||||
<p>
|
||||
Parameters from the "blkio" controller are exposed via the
|
||||
<code>blkiotune</code> command in virsh.
|
||||
</p>
|
||||
|
||||
|
||||
<pre>
|
||||
# virsh blkiotune demo
|
||||
weight : 500
|
||||
device_weight : </pre>
|
||||
|
||||
<h3>Memory tuning</h3>
|
||||
|
||||
<p>
|
||||
Parameters from the "memory" controller are exposed via the
|
||||
<code>memtune</code> command in virsh.
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
# virsh memtune demo
|
||||
hard_limit : 580192
|
||||
soft_limit : unlimited
|
||||
swap_hard_limit: unlimited
|
||||
</pre>
|
||||
|
||||
<h3>Network tuning</h3>
|
||||
|
||||
<p>
|
||||
The <code>net_cls</code> controller is not currently used. Instead, traffic
|
||||
filter policies are set directly against individual virtual
|
||||
network interfaces.
|
||||
</p>
|
||||
|
||||
<h2><a id="legacyLayout">Legacy cgroups layout</a></h2>
|
||||
|
||||
<p>
|
||||
Prior to libvirt 1.0.5, the cgroups layout created by libvirt was different
|
||||
from that described above, and did not allow for administrator customization.
|
||||
Libvirt used a fixed, 3-level hierarchy <code>libvirt/{qemu,lxc}/$VMNAME</code>
|
||||
which was rooted at the point in the hierarchy where libvirtd itself was
|
||||
located. So if libvirtd was placed at <code>/system/libvirtd.service</code>
|
||||
by systemd, the groups for each virtual machine / container would be located
|
||||
at <code>/system/libvirtd.service/libvirt/{qemu,lxc}/$VMNAME</code>. In addition
|
||||
to this, the QEMU driver creates further child groups for each vCPU thread and
|
||||
the emulator thread(s). This led to a hierarchy that looked like:
|
||||
</p>
|
||||
|
||||
|
||||
<pre>
|
||||
$ROOT
|
||||
|
|
||||
+- system
|
||||
|
|
||||
+- libvirtd.service
|
||||
|
|
||||
+- libvirt
|
||||
|
|
||||
+- qemu
|
||||
| |
|
||||
| +- vm1
|
||||
| | |
|
||||
| | +- emulator
|
||||
| | +- vcpu0
|
||||
| | +- vcpu1
|
||||
| |
|
||||
| +- vm2
|
||||
| | |
|
||||
| | +- emulator
|
||||
| | +- vcpu0
|
||||
| | +- vcpu1
|
||||
| |
|
||||
| +- vm3
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- lxc
|
||||
|
|
||||
+- container1
|
||||
|
|
||||
+- container2
|
||||
|
|
||||
+- container3
|
||||
</pre>
|
||||
|
||||
<p>
|
||||
Although current releases are much improved, historically the use of deep
|
||||
hierarchies has had a significant negative impact on kernel scalability.
|
||||
The legacy libvirt cgroups layout highlighted these problems, to the detriment
|
||||
of the performance of virtual machines and containers.
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
364
docs/cgroups.rst
Normal file
@@ -0,0 +1,364 @@
|
||||
==================================
|
||||
Control Groups Resource Management
|
||||
==================================
|
||||
|
||||
.. contents::
|
||||
|
||||
The QEMU and LXC drivers make use of the Linux "Control Groups" facility for
|
||||
applying resource management to their virtual machines and containers.
|
||||
|
||||
Required controllers
|
||||
--------------------
|
||||
|
||||
The control groups filesystem supports multiple "controllers". By default the
|
||||
init system (such as systemd) should mount all controllers compiled into the
|
||||
kernel at ``/sys/fs/cgroup/$CONTROLLER-NAME``. Libvirt will never attempt to
|
||||
mount any controllers itself, merely detect where they are mounted.
|
||||
|
||||
The QEMU driver is capable of using the ``cpuset``, ``cpu``, ``cpuacct``,
|
||||
``memory``, ``blkio`` and ``devices`` controllers. None of them are compulsory.
|
||||
If any controller is not mounted, the resource management APIs which use it will
|
||||
cease to operate. It is possible to explicitly turn off use of a controller,
|
||||
even when mounted, via the ``/etc/libvirt/qemu.conf`` configuration file.
|
||||
|
||||
The LXC driver is capable of using the ``cpuset``, ``cpu``, ``cpuacct``,
|
||||
``freezer``, ``memory``, ``blkio`` and ``devices`` controllers. The ``cpuacct``,
|
||||
``devices`` and ``memory`` controllers are compulsory. Without them mounted, no
|
||||
containers can be started. If any of the other controllers are not mounted, the
|
||||
resource management APIs which use them will cease to operate.
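As a hedged illustration of the detection described above (libvirt never mounts controllers itself, it only detects where they are mounted), the following self-contained sketch parses text in ``/proc/self/mounts`` format to locate cgroup v1 controller mount points. The function name and the sample mount table are invented for this example and are not libvirt code:

```python
# Illustrative sketch, not libvirt's actual implementation: find where
# cgroup v1 controllers are mounted by parsing mount-table text.
def mounted_controllers(mounts_text, wanted=("cpuset", "cpu", "cpuacct",
                                             "memory", "blkio", "devices",
                                             "freezer")):
    found = {}
    for line in mounts_text.splitlines():
        fields = line.split()
        # fields: device, mount point, filesystem type, mount options, ...
        if len(fields) >= 4 and fields[2] == "cgroup":
            opts = set(fields[3].split(","))
            for ctrl in wanted:
                if ctrl in opts:
                    found[ctrl] = fields[1]
    return found

# Hypothetical sample in /proc/self/mounts format
sample = """\
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,memory 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,cpu,cpuacct 0 0
"""
print(mounted_controllers(sample))
```

If a controller (here, ``cpuset``) does not appear in any mount line, it is simply absent from the result, mirroring how the corresponding resource management APIs cease to operate when a controller is not mounted.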
|
||||
|
||||
Current cgroups layout
|
||||
----------------------
|
||||
|
||||
As of libvirt 1.0.5 or later, the cgroups layout created by libvirt has been
|
||||
simplified, in order to facilitate the setup of resource control policies by
|
||||
administrators / management applications. The new layout is based on the
|
||||
concepts of "partitions" and "consumers". A "consumer" is a cgroup which holds
|
||||
the processes for a single virtual machine or container. A "partition" is a
|
||||
cgroup which does not contain any processes, but can have resource controls
|
||||
applied. A "partition" will have zero or more child directories which may be
|
||||
either "consumer" or "partition".
|
||||
|
||||
As of libvirt 1.1.1 or later, the cgroups layout will have some slight
|
||||
differences when running on a host with systemd 205 or later. The overall tree
|
||||
structure is the same, but there are some differences in the naming conventions
|
||||
for the cgroup directories. Thus the following docs are split in two, one describing
|
||||
systemd hosts and the other non-systemd hosts.
|
||||
|
||||
Systemd cgroups integration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
On hosts which use systemd, each consumer maps to a systemd scope unit, while
|
||||
partitions map to a systemd slice unit.
|
||||
|
||||
Systemd scope naming
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The systemd convention is for the scope name of virtual machines / containers to
|
||||
be of the general format ``machine-$NAME.scope``. Libvirt forms the ``$NAME``
|
||||
part of this by concatenating the driver type with the id and truncated name of
|
||||
the guest, and then escaping any systemd reserved characters. So for a guest
|
||||
``demo`` running under the ``lxc`` driver, we get a ``$NAME`` of
|
||||
``lxc-12345-demo`` which when escaped is ``lxc\x2d12345\x2ddemo``. So the
|
||||
complete scope name is ``machine-lxc\x2d12345\x2ddemo.scope``. The scope names
|
||||
map directly to the cgroup directory names.
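The escaping step described above can be sketched as follows. This is a hedged approximation for illustration only: real systemd escaping (see systemd-escape(1)) has additional rules (leading dots, ``/`` mapped to ``-``, etc.) that are omitted here, and the function names are invented:

```python
# Rough sketch of systemd-style escaping: keep ASCII alphanumerics and "_",
# replace every other byte with its \xNN form. Not the full systemd algorithm.
def systemd_escape(name):
    out = []
    for ch in name:
        if (ch.isalnum() and ch.isascii()) or ch == "_":
            out.append(ch)
        else:
            out.append("".join("\\x%02x" % b for b in ch.encode("utf-8")))
    return "".join(out)

def machine_scope(driver, vm_id, vm_name):
    # $NAME is the driver type concatenated with the id and the guest name
    return "machine-%s.scope" % systemd_escape("%s-%d-%s" % (driver, vm_id, vm_name))

print(machine_scope("lxc", 12345, "demo"))  # -> machine-lxc\x2d12345\x2ddemo.scope
```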
|
||||
|
||||
Systemd slice naming
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The systemd convention for slice naming is that a slice should include the name
|
||||
of all of its parents prepended on its own name. So for a libvirt partition
|
||||
``/machine/engineering/testing``, the slice name will be
|
||||
``machine-engineering-testing.slice``. Again the slice names map directly to the
|
||||
cgroup directory names. Systemd creates three top level slices by default,
|
||||
``system.slice``, ``user.slice`` and ``machine.slice``. All virtual machines or
|
||||
containers created by libvirt will be associated with ``machine.slice`` by
|
||||
default.
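The slice naming convention above (each slice repeats the names of all of its parents) can be illustrated with a short sketch; the function name is invented, and systemd character escaping of individual components is deliberately ignored:

```python
# Illustration of the systemd slice naming convention: join all path
# components with "-" and append ".slice". Ignores character escaping.
def slice_name(partition_path):
    parts = [p for p in partition_path.split("/") if p]
    return "-".join(parts) + ".slice"

print(slice_name("/machine/engineering/testing"))  # -> machine-engineering-testing.slice
```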
|
||||
|
||||
Systemd cgroup layout
|
||||
^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Given this, a possible systemd cgroups layout involving 3 qemu guests, 3 lxc
|
||||
containers and 3 custom child slices, would be:
|
||||
|
||||
::
|
||||
|
||||
$ROOT
|
||||
|
|
||||
+- system.slice
|
||||
| |
|
||||
| +- libvirtd.service
|
||||
|
|
||||
+- machine.slice
|
||||
|
|
||||
+- machine-qemu\x2d1\x2dvm1.scope
|
||||
| |
|
||||
| +- libvirt
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- machine-qemu\x2d2\x2dvm2.scope
|
||||
| |
|
||||
| +- libvirt
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- machine-qemu\x2d3\x2dvm3.scope
|
||||
| |
|
||||
| +- libvirt
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- machine-engineering.slice
|
||||
| |
|
||||
| +- machine-engineering-testing.slice
|
||||
| | |
|
||||
| | +- machine-lxc\x2d11111\x2dcontainer1.scope
|
||||
| |
|
||||
| +- machine-engineering-production.slice
|
||||
| |
|
||||
| +- machine-lxc\x2d22222\x2dcontainer2.scope
|
||||
|
|
||||
+- machine-marketing.slice
|
||||
|
|
||||
+- machine-lxc\x2d33333\x2dcontainer3.scope
|
||||
|
||||
Prior to libvirt 7.1.0 the topology did not have the extra ``libvirt`` directory.
|
||||
|
||||
Non-systemd cgroups layout
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
On hosts which do not use systemd, each consumer has a corresponding cgroup
|
||||
named ``$VMNAME.libvirt-{qemu,lxc}``. Each consumer is associated with exactly
|
||||
one partition, which also has a corresponding cgroup, usually named
|
||||
``$PARTNAME.partition``. The exception to this naming rule is the top level
|
||||
default partition for virtual machines and containers ``/machine``.
|
||||
|
||||
Given this, a possible non-systemd cgroups layout involving 3 qemu guests, 3 lxc
|
||||
containers and 2 custom child partitions, would be:
|
||||
|
||||
::
|
||||
|
||||
$ROOT
|
||||
|
|
||||
+- machine
|
||||
|
|
||||
+- qemu-1-vm1.libvirt-qemu
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- qemu-2-vm2.libvirt-qemu
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- qemu-3-vm3.libvirt-qemu
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- engineering.partition
|
||||
| |
|
||||
| +- testing.partition
|
||||
| | |
|
||||
| | +- lxc-11111-container1.libvirt-lxc
|
||||
| |
|
||||
| +- production.partition
|
||||
| |
|
||||
| +- lxc-22222-container2.libvirt-lxc
|
||||
|
|
||||
+- marketing.partition
|
||||
|
|
||||
+- lxc-33333-container3.libvirt-lxc
|
||||
|
||||
Using custom partitions
|
||||
-----------------------
|
||||
|
||||
If there is a need to apply resource constraints to groups of virtual machines
|
||||
or containers, then the single default partition ``/machine`` may not be
|
||||
sufficiently flexible. The administrator may wish to sub-divide the default
|
||||
partition, for example into "testing" and "production" partitions, and then
|
||||
assign each guest to a specific sub-partition. This is achieved via a small
|
||||
element addition to the guest domain XML config, just below the main ``domain``
|
||||
element:
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<resource>
|
||||
<partition>/machine/production</partition>
|
||||
</resource>
|
||||
...
|
||||
|
||||
Note that the partition names in the guest XML use a generic naming
|
||||
format, not the low level naming convention required by the underlying host OS.
|
||||
That is, you should not include any of the ``.partition`` or ``.slice`` suffixes
|
||||
in the XML config. Given a partition name ``/machine/production``, libvirt will
|
||||
automatically apply the platform specific translation required to get
|
||||
``/machine/production.partition`` (non-systemd) or
|
||||
``/machine.slice/machine-production.slice`` (systemd) as the underlying cgroup
|
||||
name.
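The translation just described can be sketched in a few lines of Python. This is an illustrative approximation, not libvirt's actual code, and it omits the systemd character escaping covered earlier:

```python
# Translate a generic partition name from the guest XML into the
# platform specific cgroup path. Illustration only.
def cgroup_path(partition, systemd):
    parts = [p for p in partition.strip("/").split("/")]
    if systemd:
        # each slice name repeats the names of all of its parents
        return "/" + "/".join("-".join(parts[:i + 1]) + ".slice"
                              for i in range(len(parts)))
    # non-systemd: the top level /machine partition keeps its bare name,
    # sub-partitions gain a ".partition" suffix
    return "/" + "/".join([parts[0]] + [p + ".partition" for p in parts[1:]])

print(cgroup_path("/machine/production", systemd=True))   # -> /machine.slice/machine-production.slice
print(cgroup_path("/machine/production", systemd=False))  # -> /machine/production.partition
```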
|
||||
|
||||
Libvirt will not auto-create the cgroups directory to back this partition. In
|
||||
the future, libvirt / virsh will provide APIs / commands to create custom
|
||||
partitions, but currently this is left as an exercise for the administrator.
|
||||
|
||||
**Note:** the ability to place guests in custom partitions is only available
|
||||
with libvirt >= 1.0.5, using the new cgroup layout. The legacy cgroups layout
|
||||
described later in this document did not support customization per guest.
|
||||
|
||||
Creating custom partitions (systemd)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Given the XML config above, the admin on a systemd based host would need to
|
||||
create a unit file ``/etc/systemd/system/machine-production.slice``
|
||||
|
||||
::
|
||||
|
||||
# cat > /etc/systemd/system/machine-production.slice <<EOF
|
||||
[Unit]
|
||||
Description=VM production slice
|
||||
Before=slices.target
|
||||
Wants=machine.slice
|
||||
EOF
|
||||
# systemctl start machine-production.slice
|
||||
|
||||
Creating custom partitions (non-systemd)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Given the XML config above, the admin on a non-systemd based host would need to
|
||||
create a cgroup named ``/machine/production.partition``
|
||||
|
||||
::
|
||||
|
||||
# cd /sys/fs/cgroup
|
||||
# for i in blkio cpu,cpuacct cpuset devices freezer memory net_cls perf_event
|
||||
do
|
||||
mkdir $i/machine/production.partition
|
||||
done
|
||||
# for i in cpuset.cpus cpuset.mems
|
||||
do
|
||||
cat cpuset/machine/$i > cpuset/machine/production.partition/$i
|
||||
done
|
||||
|
||||
Resource management APIs/commands
|
||||
---------------------------------
|
||||
|
||||
Since libvirt aims to provide an API which is portable across hypervisors, the
|
||||
concept of cgroups is not exposed directly in the API or XML configuration. It
|
||||
is considered to be an internal implementation detail. Instead libvirt provides
|
||||
a set of APIs for applying resource controls, which are then mapped to
|
||||
corresponding cgroup tunables.
|
||||
|
||||
Scheduler tuning
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
Parameters from the "cpu" controller are exposed via the ``schedinfo`` command
|
||||
in virsh.
|
||||
|
||||
::
|
||||
|
||||
# virsh schedinfo demo
|
||||
Scheduler : posix
|
||||
cpu_shares : 1024
|
||||
vcpu_period : 100000
|
||||
vcpu_quota : -1
|
||||
emulator_period: 100000
|
||||
emulator_quota : -1
|
||||
|
||||
Block I/O tuning
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
Parameters from the "blkio" controller are exposed via the ``blkiotune`` command
|
||||
in virsh.
|
||||
|
||||
::
|
||||
|
||||
# virsh blkiotune demo
|
||||
weight : 500
|
||||
device_weight :
|
||||
|
||||
Memory tuning
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Parameters from the "memory" controller are exposed via the ``memtune`` command
|
||||
in virsh.
|
||||
|
||||
::
|
||||
|
||||
# virsh memtune demo
|
||||
hard_limit : 580192
|
||||
soft_limit : unlimited
|
||||
swap_hard_limit: unlimited
|
||||
|
||||
Network tuning
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The ``net_cls`` controller is not currently used. Instead, traffic filter policies are set
|
||||
directly against individual virtual network interfaces.
|
||||
|
||||
Legacy cgroups layout
|
||||
---------------------
|
||||
|
||||
Prior to libvirt 1.0.5, the cgroups layout created by libvirt was different from
|
||||
that described above, and did not allow for administrator customization. Libvirt
|
||||
used a fixed, 3-level hierarchy ``libvirt/{qemu,lxc}/$VMNAME`` which was rooted
|
||||
at the point in the hierarchy where libvirtd itself was located. So if libvirtd
|
||||
was placed at ``/system/libvirtd.service`` by systemd, the groups for each
|
||||
virtual machine / container would be located at
|
||||
``/system/libvirtd.service/libvirt/{qemu,lxc}/$VMNAME``. In addition to this,
|
||||
the QEMU driver creates further child groups for each vCPU thread and the
|
||||
emulator thread(s). This led to a hierarchy that looked like:
|
||||
|
||||
::
|
||||
|
||||
$ROOT
|
||||
|
|
||||
+- system
|
||||
|
|
||||
+- libvirtd.service
|
||||
|
|
||||
+- libvirt
|
||||
|
|
||||
+- qemu
|
||||
| |
|
||||
| +- vm1
|
||||
| | |
|
||||
| | +- emulator
|
||||
| | +- vcpu0
|
||||
| | +- vcpu1
|
||||
| |
|
||||
| +- vm2
|
||||
| | |
|
||||
| | +- emulator
|
||||
| | +- vcpu0
|
||||
| | +- vcpu1
|
||||
| |
|
||||
| +- vm3
|
||||
| |
|
||||
| +- emulator
|
||||
| +- vcpu0
|
||||
| +- vcpu1
|
||||
|
|
||||
+- lxc
|
||||
|
|
||||
+- container1
|
||||
|
|
||||
+- container2
|
||||
|
|
||||
+- container3
|
||||
|
||||
Although current releases are much improved, historically the use of deep
|
||||
hierarchies has had a significant negative impact on kernel scalability. The
|
||||
legacy libvirt cgroups layout highlighted these problems, to the detriment of
|
||||
the performance of virtual machines and containers.
|
@@ -1,6 +1,3 @@
|
||||
.. role:: anchor(raw)
|
||||
:format: html
|
||||
|
||||
===================================
|
||||
Contacting the project contributors
|
||||
===================================
|
||||
@@ -17,8 +14,6 @@ issues <securityprocess.html>`__ that should be used instead. So if your issue
|
||||
has security implications, ignore the rest of this page and follow the `security
|
||||
process <securityprocess.html>`__ instead.
|
||||
|
||||
:anchor:`<a id="email"/>`
|
||||
|
||||
Mailing lists
|
||||
-------------
|
||||
|
||||
@@ -78,10 +73,8 @@ discussion. Wherever possible, please generate the patches by using
|
||||
regarding developing libvirt and/or contributing is available on our
|
||||
`Contributor Guidelines <hacking.html>`__ page.
|
||||
|
||||
:anchor:`<a id="irc"/>`
|
||||
|
||||
IRC discussion
|
||||
--------------
|
||||
IRC
|
||||
---
|
||||
|
||||
Some of the libvirt developers may be found on IRC on the `OFTC
|
||||
IRC <https://oftc.net>`__ network. Use the settings:
|
||||
|
@@ -67,7 +67,7 @@ to libvirt. If you have ideas for other contributions feel free to follow them.
|
||||
have, or run into trouble with managing an existing deployment. While some
|
||||
users may be able to contact a software vendor to obtain support, it is
|
||||
common to rely on community help forums such as `libvirt users mailing
|
||||
list <contact.html#email>`__, or sites such as
|
||||
list <contact.html#mailing-lists>`__, or sites such as
|
||||
`stackoverflow. <https://stackoverflow.com/questions/tagged/libvirt>`__
|
||||
People who are familiar with libvirt and have the ability and desire to help other
|
||||
users are encouraged to participate in these help forums.
|
||||
@@ -82,10 +82,10 @@ for communication between contributors:
|
||||
Mailing lists
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
The project has a number of `mailing lists <contact.html#email>`__ for general
|
||||
communication between contributors. In general any design discussions and review
|
||||
of contributions will take place on the mailing lists, so it is important for
|
||||
all contributors to follow the traffic.
|
||||
The project has a number of `mailing lists <contact.html#mailing-lists>`__ for
|
||||
general communication between contributors. In general any design discussions
|
||||
and review of contributions will take place on the mailing lists, so it is
|
||||
important for all contributors to follow the traffic.
|
||||
|
||||
Instant messaging / chat
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
@@ -1,470 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE html>
|
||||
<html xmlns="http://www.w3.org/1999/xhtml">
|
||||
<body>
|
||||
<h1>C# API bindings</h1>
|
||||
|
||||
<ul id="toc"></ul>
|
||||
|
||||
<h2><a id="description">Description</a></h2>
|
||||
|
||||
<p>
|
||||
The C# libvirt bindings are a class library. They use a Microsoft
|
||||
Visual Studio project architecture, and have been tested with .NET on
|
||||
Windows and with Mono on both Linux and Windows.
|
||||
</p>
|
||||
<p>
|
||||
Compiling them produces <b>LibvirtBindings.dll</b>, which can
|
||||
be added as a .NET reference to any .NET project needing access
|
||||
to libvirt.
|
||||
</p>
|
||||
|
||||
<h2><a id="requirements">Requirements</a></h2>
|
||||
|
||||
<p>
|
||||
These bindings depend upon the libvirt libraries being installed.
|
||||
</p>
|
||||
<p>
|
||||
In the .NET case, this is <b>libvirt-0.dll</b>, produced from
|
||||
compiling libvirt for windows.
|
||||
</p>
|
||||
|
||||
<h2><a id="git">GIT source repository</a></h2>
|
||||
<p>
|
||||
The C# bindings source code is maintained in a <a
|
||||
href="https://git-scm.com/">git</a> repository available on
|
||||
<a href="https://gitlab.com/libvirt/libvirt-csharp">gitlab.com</a>:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
git clone https://gitlab.com/libvirt/libvirt-csharp.git
|
||||
</pre>
|
||||
|
||||
<h2><a id="usage">Usage</a></h2>
|
||||
|
||||
<p>
The libvirt C# bindings class library exposes the <b>Libvirt</b>
namespace. This namespace exposes all of the needed types (enum,
struct), plus many classes exposing the libvirt API methods.
</p>
<p>
These classes are grouped into functional areas, with each class
exposing libvirt methods related to that area.
</p>
<p>
For example, the libvirt methods related to connections, such as
<b>virConnectOpenAuth</b> and <b>virConnectNumOfDomains</b>, are in
the <b>Connect</b> class.
<br />
They are accessed as <b>Connect.OpenAuth</b>, and
<b>Connect.NumOfDomains</b> respectively.
</p>
<p>
In the same manner, the other class name mappings are:
</p>
<table class="top_table">
<tr><th>Name of libvirt function</th><th>C# class name</th></tr>
<tr><td>virDomain...</td><td>Domain</td></tr>
<tr><td>virEvent...</td><td>Event</td></tr>
<tr><td>virInterface...</td><td>Interface</td></tr>
<tr><td>virNetwork...</td><td>Network</td></tr>
<tr><td>virNode...</td><td>Node</td></tr>
<tr><td>virSecret...</td><td>Secret</td></tr>
<tr><td>virStoragePool...</td><td>StoragePool</td></tr>
<tr><td>virStorageVolume...</td><td>StorageVolume</td></tr>
<tr><td>virStream...</td><td>Stream</td></tr>
</table>
<p>
There are some additions as well:
</p>
<ul>
<li>
There is a class named <b>Library</b>, exposing the
<b>virGetVersion</b> and <b>virInitialize</b> methods
</li>
<li>
There is a class named <b>Errors</b>, exposing the error
related methods. For example, <b>virSetErrorFunc</b> and
<b>virConnResetLastError</b>.
</li>
</ul>

<h2><a id="authors">Authors</a></h2>

<p>
The C# bindings are the work of Arnaud Champion
<<a href="mailto:arnaud.champion AT devatom.fr">arnaud.champion AT devatom.fr</a>>,
based upon the previous work of Jaromír Červenka.
</p>

<h2><a id="notes">Test Configuration</a></h2>

<p>
Testing is performed using the following configurations:
</p>
<ul>
<li>Windows 7 (64 bits) / .Net 4</li>
<li>Windows 7 (64 bits) / Mono 2.6.7 (compiled in 32 bits)</li>
<li>Ubuntu 10.10 amd64 / Mono 2.6.7 (compiled in 64 bits)</li>
</ul>

<h2><a id="type">Type Coverage</a></h2>

<p>
Coverage of the libvirt types is:
</p>
<table class="top_table">
<tr><th>Type</th><th>Name</th><th>Binding?</th><th>Tested?</th><th>Sample Code?</th><th>Works?</th><th>Tested .Net/Windows Works?</th><th>Tested Mono (32-bit)/Windows Works?</th><th>Tested Mono (64-bit)/Linux Works?</th></tr>
<tr><td>enum</td><td>virCPUCompareResult</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virConnect</td><td>Yes, an IntPtr as the struct is not public</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virConnectAuth</td><td>Yes</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>struct</td><td>virConnectCredential</td><td>Yes</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virConnectCredentialType</td><td>Yes</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virConnectFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomain</td><td>Yes, an IntPtr as the struct is not public</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainBlockInfo</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainBlockStatsInfo</td><td>Yes</td><td>Yes</td><td>virDomainStats</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainCoreDumpFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainCreateFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainDeviceModifyFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainEventDefinedDetailType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>struct</td><td>virDomainEventGraphicsAddress</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainEventGraphicsAddressType</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainEventGraphicsPhase</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainEventGraphicsSubject</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainEventGraphicsSubjectIdentity</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainEventID</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainEventIOErrorAction</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainEventResumedDetailType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainEventStartedDetailType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainEventStoppedDetailType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainEventSuspendedDetailType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainEventType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainEventUndefinedDetailType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>enum</td><td>virDomainEventWatchdogAction</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainInfo</td><td>Yes</td><td>Yes</td><td>virConnectSetErrorFunc, virDomainStats</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>struct</td><td>virDomainInterfaceStatsStruct</td><td>Yes</td><td>Yes</td><td>virDomainStats</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>struct</td><td>virDomainJobInfo</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainJobType</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainMemoryFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainMemoryStatStruct</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainMemoryStatTags</td><td>Yes</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainMigrateFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virDomainSnapshot</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainSnapshotDeleteFlags</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainState</td><td>Yes</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virDomainXMLFlags</td><td>Yes</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virEventHandleType</td><td>Yes</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>struct</td><td>virInterface</td><td>Yes, an IntPtr as the struct is not public</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virInterfaceXMLFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virNWFilter</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virNetwork</td><td>Yes, an IntPtr as the struct is not public</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virNodeDevice</td><td>Yes, an IntPtr as the struct is not public</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virNodeInfo</td><td>Yes</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virSchedParameter</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virSchedParameterType</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virSecret</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virSecretUsageType</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virSecurityLabel</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virSecurityModel</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStoragePoolBuildFlags</td><td>Yes</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStoragePoolDeleteFlags</td><td>Yes</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virStoragePoolInfo</td><td>Yes</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virStoragePool</td><td>Yes, an IntPtr as the struct is not public</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStoragePoolState</td><td>Yes</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virStorageVol</td><td>Yes, an IntPtr as the struct is not public</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStorageVolDeleteFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virStorageVolInfo</td><td>Yes</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStorageVolType</td><td>Yes</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virStream</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStreamEventType</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virStreamFlags</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virVcpuInfo</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>enum</td><td>virVcpuState</td><td>No</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>struct</td><td>virError</td><td>Yes</td><td>Yes</td><td>virConnectSetErrorFunc, virDomainStats</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
</table>

<p></p>

<h2><a id="funccover">Function Coverage</a></h2>

<p>
Coverage of the libvirt functions is:
</p>
<table class="top_table">
<tr><th>Name</th><th>Binding?</th><th>Type?</th><th>Tested?</th><th>Sample Code?</th><th>Working?</th><th>Tested .Net/Windows Works?</th><th>Tested Mono (32-bit)/Windows Works?</th><th>Tested Mono (64-bit)/Linux Works?</th></tr>
<tr><td>virConnectAuthCallback</td><td>Yes</td><td>delegate</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectBaselineCPU</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectClose</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectCompareCPU</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventCallback</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventDeregister</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventDeregisterAny</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventGenericCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventGraphicsCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventIOErrorCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventIOErrorReasonCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventRTCChangeCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventRegister</td><td>Yes</td><td>function</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectDomainEventRegisterAny</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainEventWatchdogCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainXMLFromNative</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectDomainXMLToNative</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectFindStoragePoolSources</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetCapabilities</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetHostname</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetLibVersion</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetMaxVcpus</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetType</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetURI</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectGetVersion</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectIsEncrypted</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectIsSecure</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListDefinedDomains</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectListDefinedInterfaces </td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListDefinedNetworks</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListDefinedStoragePools</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListDomains</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth, virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectListInterfaces</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes, if the host handle the method</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListNWFilters </td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectListNetworks</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListSecrets</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectListStoragePools</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpen</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectNumOfDefinedDomains</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectNumOfDefinedInterfaces</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfDefinedNetworks</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfDefinedStoragePools</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfDomains</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth, virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectNumOfInterfaces</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfNWFilters</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfNetworks </td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfSecrets</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectNumOfStoragePools</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpen</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectOpen</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpen, virEventRegisterImpl, virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectOpenAuth</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnectOpenReadOnly</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virConnectRef</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainAbortJob</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainAttachDevice</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainAttachDeviceFlags</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainBlockPeek</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainBlockStats</td><td>Yes</td><td>function</td><td>Yes</td><td>virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainCoreDump</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainCreate</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainCreateLinux</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainCreateWithFlags</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainCreateXML</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainDefineXML</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainDestroy</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainDetachDevice</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainDetachDeviceFlags</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainFree</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetAutostart</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetBlockInfo</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetConnect</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetID</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetInfo</td><td>Yes</td><td>function</td><td>Yes</td><td>virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainGetJobInfo</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetMaxMemory</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetMaxVcpus</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetName</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth, virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainGetOSType</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetSchedulerParameters</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetSchedulerType</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetSecurityLabel</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetUUID</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetUUIDString</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetVcpus</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainGetXMLDesc</td><td>Yes</td><td>function</td><td>Yes</td><td>virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainHasCurrentSnapshot</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainHasManagedSaveImage</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainInterfaceStats </td><td>No</td><td>function</td><td>Yes</td><td>virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainIsActive</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainIsPersistent</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainLookupByID</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectOpenAuth, virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainLookupByName</td><td>Yes</td><td>function</td><td>Yes</td><td>virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virDomainLookupByUUID</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainLookupByUUIDString</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainManagedSave </td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainManagedSaveRemove</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainMemoryPeek</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainMemoryStats</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainMigrate</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainMigrateSetMaxDowntime</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainMigrateToURI </td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainPinVcpu</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainReboot</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainRef </td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainRestore</td><td>Yes </td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainResume </td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainRevertToSnapshot</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSave</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainSetAutostart</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainSetMaxMemory </td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainSetMemory</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainSetSchedulerParameters</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSetVcpus</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virDomainShutdown</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotCreateXML</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotCurrent</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotDelete</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotFree</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotGetXMLDesc</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotListNames</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotLookupByName</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSnapshotNum</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virDomainSuspend</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainUndefine</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virDomainUpdateDeviceFlags</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virEventAddHandleFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virEventAddTimeoutFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virEventHandleCallback</td><td>Yes</td><td>delegate</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virEventRegisterImpl</td><td>Yes</td><td>function</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virEventRemoveHandleFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virEventRemoveTimeoutFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virEventTimeoutCallback</td><td>Yes</td><td>delegate</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virEventUpdateHandleFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virEventUpdateTimeoutFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virFreeCallback</td><td>Yes</td><td>function</td><td>Yes</td><td>virEventRegisterImpl</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virGetVersion</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virInitialize</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceCreate</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceDefineXML</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceDestroy</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceFree</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceGetConnect</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceGetMACString</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceGetName</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceGetXMLDesc</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceIsActive</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceLookupByMACString</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceLookupByName</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceRef </td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virInterfaceUndefine</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterDefineXML</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterFree</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterGetName</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterGetUUID</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterGetUUIDString</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterGetXMLDesc</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterLookupByName </td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterLookupByUUID</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterLookupByUUIDString</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterRef </td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNWFilterUndefine</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
|
||||
<tr><td>virNetworkCreate</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkCreateXML</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkDefineXML</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkDestroy</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkFree</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetAutostart</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetBridgeName</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetConnect</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetName</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetUUID</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetUUIDString</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkGetXMLDesc</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkIsActive</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkIsPersistent</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkLookupByName</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkLookupByUUID</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkLookupByUUIDString</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkRef</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkSetAutostart</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNetworkUndefine</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceCreateXML</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceDestroy</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceDettach</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceFree</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceGetName</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceGetParent</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceGetXMLDesc</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceListCaps</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceLookupByName</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceNumOfCaps</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceReAttach</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceRef</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeDeviceReset</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeGetCellsFreeMemory</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeGetFreeMemory</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNodeGetInfo</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virNodeGetSecurityModel</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virNodeListDevices</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virNodeNumOfDevices</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virSecretDefineXML</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretFree</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetConnect</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetUUID</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetUUIDString</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetUsageID</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetUsageType</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetValue</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretGetXMLDesc</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretLookupByUUID</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretLookupByUUIDString</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretLookupByUsage</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretRef</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretSetValue</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virSecretUndefine</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolBuild</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolCreate</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolCreateXML</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolDefineXML</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolDelete</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolDestroy</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolFree</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetAutostart</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetConnect</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetInfo</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetName</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetUUID</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetUUIDString</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolGetXMLDesc</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolIsActive</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolIsPersistent</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolListVolumes</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolLookupByName</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolLookupByUUID</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolLookupByUUIDString</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolLookupByVolume</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolNumOfVolumes</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolRef</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolRefresh</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolSetAutostart</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStoragePoolUndefine</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolCreateXML</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolCreateXMLFrom</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolDelete</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolFree</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolGetConnect</td><td>Yes</td><td>function</td><td>No</td><td></td><td>Maybe</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolGetInfo</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolGetKey</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolGetName</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolGetPath</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolGetXMLDesc</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolLookupByKey</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolLookupByName</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolLookupByPath</td><td>Yes</td><td>function</td><td>Yes</td><td></td><td>Yes</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolRef</td><td>Yes</td><td>function</td><td>No</td><td></td><td>No</td><td></td><td></td><td></td></tr>
<tr><td>virStorageVolWipe</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamAbort</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamEventAddCallback</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamEventCallback</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamEventRemoveCallback</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamEventUpdateCallback</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamFinish</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamFree</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamNew</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamRecv</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamRecvAll</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamRef</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamSend</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamSendAll</td><td>No</td><td>function</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamSinkFunc</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virStreamSourceFunc</td><td>No</td><td>delegate</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>virGetLastError</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectSetErrorFunc</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virConnSetErrorFunc</td><td>Yes</td><td>function</td><td>Yes</td><td>virConnectSetErrorFunc</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr><td>virErrorFunc</td><td>Yes</td><td>delegate</td><td>Yes</td><td>virConnectSetErrorFunc, virDomainInfos</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
</table>
</body>
</html>
38
docs/csharp.rst
Normal file
@@ -0,0 +1,38 @@
===============
C# API bindings
===============

Description
-----------

The C# libvirt bindings are a class library. They use a Microsoft Visual Studio
project architecture, and have been tested with Microsoft .NET and with Mono,
on both Linux and Windows.

Compiling them produces **LibvirtBindings.dll**, which can be added as a .NET
reference to any .NET project needing access to libvirt.

Requirements
------------

These bindings depend upon the libvirt libraries being installed.

In the .NET case, this is **libvirt-0.dll**, produced from compiling libvirt
for Windows.

GIT source repository
---------------------

The C# bindings source code is maintained in a ``git`` repository available on
`gitlab.com <https://gitlab.com/libvirt/libvirt-csharp>`__:

::

   git clone https://gitlab.com/libvirt/libvirt-csharp.git

Authors
-------

The C# bindings are the work of Arnaud Champion <`arnaud.champion AT
devatom.fr <mailto:arnaud.champion%20AT%20devatom.fr>`__>, based upon the
previous work of Jaromír Červenka.
@@ -95,6 +95,7 @@
    margin-right: 1em;
}

main,
.document {
    margin-left: auto;
    margin-right: auto;
@@ -104,9 +105,13 @@
    width: 70em;
}

main#index,
#index.document,
#docs.document,
main#hvsupport,
#hvsupport.document,
main#documentation,
#documentation.document,
main#knowledge-base,
#knowledge-base.document
{
    width: inherit;
@@ -397,6 +402,9 @@ h6:hover > a.headerlink {
}

div.panel,
#documentation section,
#documentation .section,
#knowledge-base section,
#knowledge-base .section
{
    width: 24%;
@@ -406,6 +414,9 @@ div.panel,
}

div.panel h2,
#documentation section h2,
#documentation .section h1,
#knowledge-base section h2,
#knowledge-base .section h1 {
    margin-top: 0px;
    padding: 0.5em;
@@ -423,15 +434,12 @@ div.panel h2,
    height: 300px;
}

#knowledge-base.document > h1 {
#documentation > h1,
#knowledge-base > h1 {
    text-align: center;
    padding: 1em;
}

#docs.document h1 {
    visibility: hidden;
}

br.clear {
    clear: both;
    border: 0px;
@@ -485,11 +493,13 @@ br.clear {
}

div.panel dd,
#documentation dd,
#knowledge-base dd {
    font-size: smaller;
}

div.panel a,
#documentation a,
#knowledge-base a {
    text-decoration: none;
}
@@ -497,6 +507,9 @@ div.panel a,
div.panel ul,
div.panel p,
div.panel dl,
#documentation ul,
#documentation p,
#documentation dl,
#knowledge-base ul,
#knowledge-base p,
#knowledge-base dl {
@@ -505,16 +518,19 @@ div.panel dl,
}

div.panel ul,
#documentation ul,
#knowledge-base ul {
    margin-left: 1em;
}

div.panel dt,
#documentation dt,
#knowledge-base dt {
    margin: 0px;
}

div.panel dd,
#documentation dd,
#knowledge-base dd {
    margin: 0px;
    margin-bottom: 1em;
@@ -100,7 +100,7 @@ optionally, one or two TCP sockets:
with full read-write privileges. A connection to this socket gives the
client privileges that are equivalent to having a root shell. Access control
can be enforced either through validation of `x509 certificates
<tlscerts.html>`__, and/or by enabling an `authentication mechanism
<kbase/tlscerts.html>`__, and/or by enabling an `authentication mechanism
<auth.html>`__.

NB, some distros will use ``/run`` instead of ``/var/run``.
@@ -1,94 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>D-Bus API bindings</h1>

<ul id="toc"></ul>

<h2><a id="description">Description</a></h2>

<p>
libvirt-dbus wraps libvirt API to provide a high-level object-oriented
API better suited for dbus-based applications.
</p>

<h2><a id="git">GIT source repository</a></h2>
<p>
The D-Bus bindings source code is maintained in a
<a href="https://git-scm.com/">git</a> repository available on
<a href="https://gitlab.com/libvirt/libvirt-dbus">gitlab.com</a>:
</p>

<pre>
git clone https://gitlab.com/libvirt/libvirt-dbus.git
</pre>

<h2><a id="usage">Usage</a></h2>

<p>
libvirt-dbus exports libvirt API using D-Bus objects with methods and
properties described by interfaces. Currently only local connection
to libvirt is exported and the list of supported drivers depends
on the type of the bus connection (session or system).
</p>

<p>
The name of the libvirt-dbus service is <code>org.libvirt</code>.
libvirt-dbus distributes an interface XML descriptions which can be
usually found at <code>/usr/share/dbus-1/interfaces/</code>.
</p>

<p>
By default unprivileged user has access only to the session D-Bus
connection. In order to allow specific user "foo" to access the system
D-Bus connection you need to create a file
<code>/etc/dbus-1/system.d/org.libvirt.conf</code> that contains:
</p>

<pre>
<?xml version="1.0"?>
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">

<busconfig>

<policy user="foo">
<allow send_destination="org.libvirt"/>
</policy>

</busconfig>
</pre>

<p>
To get a list of supported drivers for the specific bus connection
you can run these commands (not all drivers may be available on
the host):
</p>

<pre>
gdbus introspect --xml --session --dest org.libvirt --object-path /org/libvirt
gdbus introspect --xml --system --dest org.libvirt --object-path /org/libvirt
</pre>

<p>
Every object is introspectable so you can get a list of available
interfaces with methods, signals and properties running this command:
</p>

<pre>
gdbus introspect --xml --system --dest org.libvirt --object-path /org/libvirt/QEMU
</pre>

<p>
To get a list of domains for specific connection driver you can run
this command:
</p>

<pre>
gdbus call --system --dest org.libvirt --object-path /org/libvirt/QEMU \
--method org.libvirt.Connect.ListDomains 0
</pre>

</body>
</html>
75
docs/dbus.rst
Normal file
@@ -0,0 +1,75 @@
==================
D-Bus API bindings
==================

.. contents::

Description
-----------

libvirt-dbus wraps the libvirt API to provide a high-level object-oriented API
better suited for dbus-based applications.

GIT source repository
---------------------

The D-Bus bindings source code is maintained in a `git <https://git-scm.com/>`__
repository available on
`gitlab.com <https://gitlab.com/libvirt/libvirt-dbus>`__:

::

   git clone https://gitlab.com/libvirt/libvirt-dbus.git

Usage
-----

libvirt-dbus exports the libvirt API using D-Bus objects with methods and
properties described by interfaces. Currently only the local connection to
libvirt is exported, and the list of supported drivers depends on the type of
the bus connection (session or system).

The name of the libvirt-dbus service is ``org.libvirt``. libvirt-dbus
distributes interface XML descriptions, which can usually be found at
``/usr/share/dbus-1/interfaces/``.

By default an unprivileged user has access only to the session D-Bus
connection. In order to allow a specific user "foo" to access the system D-Bus
connection, you need to create a file ``/etc/dbus-1/system.d/org.libvirt.conf``
that contains:

::

   <?xml version="1.0"?>
   <!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
     "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">

   <busconfig>

     <policy user="foo">
       <allow send_destination="org.libvirt"/>
     </policy>

   </busconfig>
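Before restarting the message bus it can be worth sanity-checking that the
policy file is well-formed XML and grants what you intended. This is just an
illustrative check with Python's standard library (the user name ``foo`` is the
placeholder from the text above), not part of libvirt-dbus itself:

```python
import xml.etree.ElementTree as ET

# The policy fragment from above (DOCTYPE omitted; ElementTree skips it anyway).
policy_xml = """
<busconfig>
  <policy user="foo">
    <allow send_destination="org.libvirt"/>
  </policy>
</busconfig>
"""

root = ET.fromstring(policy_xml)
policy = root.find("policy")

# Confirm the rule applies to user "foo" and targets the org.libvirt service.
print(policy.get("user"))                            # foo
print(policy.find("allow").get("send_destination"))  # org.libvirt
```

A parse error here means dbus-daemon would also reject the file.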
To get a list of supported drivers for the specific bus connection you can run
these commands (not all drivers may be available on the host):

::

   gdbus introspect --xml --session --dest org.libvirt --object-path /org/libvirt
   gdbus introspect --xml --system --dest org.libvirt --object-path /org/libvirt

Every object is introspectable, so you can get a list of available interfaces
with methods, signals and properties by running this command:

::

   gdbus introspect --xml --system --dest org.libvirt --object-path /org/libvirt/QEMU

To get a list of domains for a specific connection driver you can run this
command:

::

   gdbus call --system --dest org.libvirt --object-path /org/libvirt/QEMU \
     --method org.libvirt.Connect.ListDomains 0
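``gdbus call`` prints its result in GVariant text notation, e.g. an array of
object paths wrapped in a tuple. When scripting around it, a small helper can
pull the domain object paths out of that text. The sample string below is a
hypothetical output (the path names are made up for illustration), not captured
from a real host:

```python
import re

def domain_paths(gdbus_output: str) -> list[str]:
    """Extract libvirt-dbus object paths from gdbus's GVariant text output."""
    return re.findall(r"'(/org/libvirt/[^']+)'", gdbus_output)

# Hypothetical output of:
#   gdbus call --system --dest org.libvirt --object-path /org/libvirt/QEMU \
#     --method org.libvirt.Connect.ListDomains 0
sample = "([objectpath '/org/libvirt/QEMU/domain/demo', '/org/libvirt/QEMU/domain/test'],)"

print(domain_paths(sample))
# ['/org/libvirt/QEMU/domain/demo', '/org/libvirt/QEMU/domain/test']
```

For anything beyond quick scripts, a proper D-Bus client library is preferable
to parsing gdbus's textual output.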
@@ -1,191 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body id="docs">
<h1>Documentation</h1>
<div class="panel">
<h2>Deployment / operation</h2>

<dl>
<dt><a href="apps.html">Applications</a></dt>
<dd>Applications known to use libvirt</dd>

<dt><a href="manpages/index.html">Manual pages</a></dt>
<dd>Manual pages for libvirt tools / daemons</dd>

<dt><a href="windows.html">Windows</a></dt>
<dd>Downloads for Windows</dd>

<dt><a href="macos.html">macOS</a></dt>
<dd>Working with libvirt on macOS</dd>

<dt><a href="migration.html">Migration</a></dt>
<dd>Migrating guests between machines</dd>

<dt><a href="daemons.html">Daemons</a></dt>
<dd>Overview of the daemons provided by libvirt</dd>

<dt><a href="remote.html">Remote access</a></dt>
<dd>Enable remote access over TCP</dd>

<dt><a href="tlscerts.html">TLS certs</a></dt>
<dd>Generate and deploy x509 certificates for TLS</dd>

<dt><a href="auth.html">Authentication</a></dt>
<dd>Configure authentication for the libvirt daemon</dd>

<dt><a href="acl.html">Access control</a></dt>
<dd>Configure access control libvirt APIs with <a href="aclpolkit.html">polkit</a></dd>

<dt><a href="logging.html">Logging</a></dt>
<dd>The library and the daemon logging support</dd>

<dt><a href="auditlog.html">Audit log</a></dt>
<dd>Audit trail logs for host operations</dd>

<dt><a href="firewall.html">Firewall</a></dt>
<dd>Firewall and network filter configuration</dd>

<dt><a href="hooks.html">Hooks</a></dt>
<dd>Hooks for system specific management</dd>

<dt><a href="nss.html">NSS module</a></dt>
<dd>Enable domain host name translation to IP addresses</dd>

<dt><a href="https://wiki.libvirt.org/page/FAQ">FAQ</a></dt>
<dd>Frequently asked questions</dd>
</dl>

</div>

<div class="panel">
<h2>Application development</h2>
<dl>
<dt><a href="html/index.html">API reference</a></dt>
<dd>Reference manual for the C public API, split in
<a href="html/libvirt-libvirt-common.html">common</a>,
<a href="html/libvirt-libvirt-domain.html">domain</a>,
<a href="html/libvirt-libvirt-domain-checkpoint.html">domain checkpoint</a>,
<a href="html/libvirt-libvirt-domain-snapshot.html">domain snapshot</a>,
<a href="html/libvirt-virterror.html">error</a>,
<a href="html/libvirt-libvirt-event.html">event</a>,
<a href="html/libvirt-libvirt-host.html">host</a>,
<a href="html/libvirt-libvirt-interface.html">interface</a>,
<a href="html/libvirt-libvirt-network.html">network</a>,
<a href="html/libvirt-libvirt-nodedev.html">node device</a>,
<a href="html/libvirt-libvirt-nwfilter.html">network filter</a>,
<a href="html/libvirt-libvirt-secret.html">secret</a>,
<a href="html/libvirt-libvirt-storage.html">storage</a>,
<a href="html/libvirt-libvirt-stream.html">stream</a>
and
<a href="html/index-admin.html">admin</a>,
<a href="html/index-qemu.html">QEMU</a>,
<a href="html/index-lxc.html">LXC</a> libs
</dd>

<dt><a href="bindings.html">Language bindings and API modules</a></dt>
<dd>Bindings of the libvirt API for
<a href="csharp.html">c#</a>,
<a href="https://pkg.go.dev/libvirt.org/go/libvirt">go</a>,
<a href="java.html">java</a>,
<a href="https://libvirt.org/ocaml/">ocaml</a>,
<a href="https://search.cpan.org/dist/Sys-Virt/">perl</a>,
<a href="python.html">python</a>,
<a href="php.html">php</a>,
<a href="https://libvirt.org/ruby/">ruby</a>
and integration API modules for
<a href="dbus.html">D-Bus</a></dd>

<dt><a href="format.html">XML schemas</a></dt>
<dd>Description of the XML schemas for
<a href="formatdomain.html">domains</a>,
<a href="formatnetwork.html">networks</a>,
<a href="formatnetworkport.html">network ports</a>,
<a href="formatnwfilter.html">network filtering</a>,
<a href="formatstorage.html">storage</a>,
<a href="formatstorageencryption.html">storage encryption</a>,
<a href="formatcaps.html">capabilities</a>,
<a href="formatdomaincaps.html">domain capabilities</a>,
<a href="formatstoragecaps.html">storage pool capabilities</a>,
<a href="formatnode.html">node devices</a>,
<a href="formatsecret.html">secrets</a>,
<a href="formatsnapshot.html">snapshots</a>,
<a href="formatcheckpoint.html">checkpoints</a>,
<a href="formatbackup.html">backup jobs</a></dd>

<dt><a href="uri.html">URI format</a></dt>
<dd>The URI formats used for connecting to libvirt</dd>

<dt><a href="cgroups.html">CGroups</a></dt>
<dd>Control groups integration</dd>

<dt><a href="drivers.html">Drivers</a></dt>
|
||||
<dd>Hypervisor specific driver information</dd>
|
||||
|
||||
<dt><a href="support.html">Support guarantees</a></dt>
|
||||
<dd>Details of support status for various interfaces</dd>
|
||||
|
||||
<dt><a href="hvsupport.html">Driver support</a></dt>
|
||||
<dd>Matrix of API support per hypervisor per release</dd>
|
||||
|
||||
<dt><a href="kbase/index.html">Knowledge Base</a></dt>
|
||||
<dd>Task oriented guides to key features</dd>
|
||||
</dl>
|
||||
</div>
|
||||
|
||||
<div class="panel">
|
||||
<h2>Project development</h2>
|
||||
<dl>
|
||||
<dt><a href="hacking.html">Contributor guidelines</a></dt>
|
||||
<dd>General hacking guidelines for contributors</dd>
|
||||
|
||||
<dt><a href="styleguide.html">Docs style guide</a></dt>
|
||||
<dd>Style guidelines for reStructuredText docs</dd>
|
||||
|
||||
<dt><a href="strategy.html">Project strategy</a></dt>
|
||||
<dd>Sets a vision for future direction & technical choices</dd>
|
||||
|
||||
<dt><a href="ci.html">CI Testing</a></dt>
|
||||
<dd>Details of the Continuous Integration testing strategy</dd>
|
||||
|
||||
<dt><a href="bugs.html">Bug reports</a></dt>
|
||||
<dd>How and where to report bugs and request features</dd>
|
||||
|
||||
<dt><a href="compiling.html">Compiling</a></dt>
|
||||
<dd>How to compile libvirt</dd>
|
||||
|
||||
<dt><a href="goals.html">Goals</a></dt>
|
||||
<dd>Terminology and goals of libvirt API</dd>
|
||||
|
||||
<dt><a href="api.html">API concepts</a></dt>
|
||||
<dd>The libvirt API concepts</dd>
|
||||
|
||||
<dt><a href="api_extension.html">API extensions</a></dt>
|
||||
<dd>Adding new public libvirt APIs</dd>
|
||||
|
||||
<dt><a href="internals/eventloop.html">Event loop and worker pool</a></dt>
|
||||
<dd>Libvirt's event loop and worker pool mode</dd>
|
||||
|
||||
<dt><a href="internals/command.html">Spawning commands</a></dt>
|
||||
<dd>Spawning commands from libvirt driver code</dd>
|
||||
|
||||
<dt><a href="internals/rpc.html">RPC protocol & APIs</a></dt>
|
||||
<dd>RPC protocol information and API / dispatch guide</dd>
|
||||
|
||||
<dt><a href="internals/locking.html">Lock managers</a></dt>
|
||||
<dd>Use lock managers to protect disk content</dd>
|
||||
|
||||
<dt><a href="testsuites.html">Functional testing</a></dt>
|
||||
<dd>Testing libvirt with <a href="testtck.html">TCK test suite</a> and
|
||||
<a href="testapi.html">Libvirt-test-API</a></dd>
|
||||
|
||||
<dt><a href="newreposetup.html">New repo setup</a></dt>
|
||||
<dd>Procedure for configuring new git repositories for libvirt</dd>
|
||||
</dl>
|
||||
</div>
|
||||
|
||||
<br class="clear"/>
|
||||
|
||||
</body>
|
||||
</html>
|
163
docs/docs.rst
Normal file
@@ -0,0 +1,163 @@
|
||||
=============
Documentation
=============

Deployment / operation
----------------------

`Applications <apps.html>`__
   Applications known to use libvirt

`Manual pages <manpages/index.html>`__
   Manual pages for libvirt tools / daemons

`Windows <windows.html>`__
   Downloads for Windows

`macOS <macos.html>`__
   Working with libvirt on macOS

`Migration <migration.html>`__
   Migrating guests between machines

`Daemons <daemons.html>`__
   Overview of the daemons provided by libvirt

`Remote access <remote.html>`__
   Enable remote access over TCP

`TLS certs <tlscerts.html>`__
   Generate and deploy x509 certificates for TLS

`Authentication <auth.html>`__
   Configure authentication for the libvirt daemon

`Access control <acl.html>`__
   Configure access control for libvirt APIs with `polkit <aclpolkit.html>`__

`Logging <logging.html>`__
   The library and the daemon logging support

`Audit log <auditlog.html>`__
   Audit trail logs for host operations

`Firewall <firewall.html>`__
   Firewall and network filter configuration

`Hooks <hooks.html>`__
   Hooks for system specific management

`NSS module <nss.html>`__
   Enable domain host name translation to IP addresses

`FAQ <https://wiki.libvirt.org/page/FAQ>`__
   Frequently asked questions

Application development
-----------------------

`API reference <html/index.html>`__
   Reference manual for the C public API, split into
   `common <html/libvirt-libvirt-common.html>`__,
   `domain <html/libvirt-libvirt-domain.html>`__,
   `domain checkpoint <html/libvirt-libvirt-domain-checkpoint.html>`__,
   `domain snapshot <html/libvirt-libvirt-domain-snapshot.html>`__,
   `error <html/libvirt-virterror.html>`__,
   `event <html/libvirt-libvirt-event.html>`__,
   `host <html/libvirt-libvirt-host.html>`__,
   `interface <html/libvirt-libvirt-interface.html>`__,
   `network <html/libvirt-libvirt-network.html>`__,
   `node device <html/libvirt-libvirt-nodedev.html>`__,
   `network filter <html/libvirt-libvirt-nwfilter.html>`__,
   `secret <html/libvirt-libvirt-secret.html>`__,
   `storage <html/libvirt-libvirt-storage.html>`__,
   `stream <html/libvirt-libvirt-stream.html>`__ and
   `admin <html/index-admin.html>`__,
   `QEMU <html/index-qemu.html>`__,
   `LXC <html/index-lxc.html>`__ libs

`Language bindings and API modules <bindings.html>`__
   Bindings of the libvirt API for
   `c# <csharp.html>`__,
   `go <https://pkg.go.dev/libvirt.org/go/libvirt>`__,
   `java <java.html>`__,
   `ocaml <https://libvirt.org/ocaml/>`__,
   `perl <https://search.cpan.org/dist/Sys-Virt/>`__,
   `python <python.html>`__,
   `php <php.html>`__,
   `ruby <https://libvirt.org/ruby/>`__
   and integration API modules for
   `D-Bus <dbus.html>`__

`XML schemas <format.html>`__
   Description of the XML schemas for
   `domains <formatdomain.html>`__,
   `networks <formatnetwork.html>`__,
   `network ports <formatnetworkport.html>`__,
   `network filtering <formatnwfilter.html>`__,
   `storage <formatstorage.html>`__,
   `storage encryption <formatstorageencryption.html>`__,
   `capabilities <formatcaps.html>`__,
   `domain capabilities <formatdomaincaps.html>`__,
   `storage pool capabilities <formatstoragecaps.html>`__,
   `node devices <formatnode.html>`__,
   `secrets <formatsecret.html>`__,
   `snapshots <formatsnapshot.html>`__,
   `checkpoints <formatcheckpoint.html>`__,
   `backup jobs <formatbackup.html>`__

`URI format <uri.html>`__
   The URI formats used for connecting to libvirt

`CGroups <cgroups.html>`__
   Control groups integration

`Drivers <drivers.html>`__
   Hypervisor specific driver information

`Support guarantees <support.html>`__
   Details of support status for various interfaces

`Driver support <hvsupport.html>`__
   Matrix of API support per hypervisor per release

`Knowledge Base <kbase/index.html>`__
   Task oriented guides to key features

Project development
-------------------

`Contributor guidelines <hacking.html>`__
   General hacking guidelines for contributors

`Docs style guide <styleguide.html>`__
   Style guidelines for reStructuredText docs

`Project strategy <strategy.html>`__
   Sets a vision for future direction & technical choices

`CI Testing <ci.html>`__
   Details of the Continuous Integration testing strategy

`Bug reports <bugs.html>`__
   How and where to report bugs and request features

`Compiling <compiling.html>`__
   How to compile libvirt

`Goals <goals.html>`__
   Terminology and goals of libvirt API

`API concepts <api.html>`__
   The libvirt API concepts

`API extensions <api_extension.html>`__
   Adding new public libvirt APIs

`Functional testing <testsuites.html>`__
   Testing libvirt with
   `TCK test suite <testtck.html>`__ and
   `Libvirt-test-API <testapi.html>`__

`New repo setup <newreposetup.html>`__
   Procedure for configuring new git repositories for libvirt
|
@@ -2,7 +2,7 @@
 Internal drivers
 ================
 
-- `Hypervisor drivers <#hypervisor-drivers>`__
+- `Hypervisor drivers`_
 - `Storage drivers <storage.html>`__
 - `Node device driver <drvnodedev.html>`__
 - `Secret driver <drvsecret.html>`__
@@ -19,7 +19,7 @@ The hypervisor drivers currently supported by libvirt are:
 
 - `LXC <drvlxc.html>`__ - Linux Containers
 - `OpenVZ <drvopenvz.html>`__
-- `QEMU <drvqemu.html>`__
+- `QEMU/KVM/HVF <drvqemu.html>`__
 - `Test <drvtest.html>`__ - Used for testing
 - `VirtualBox <drvvbox.html>`__
 - `VMware ESX <drvesx.html>`__
|
||||
|
@@ -1,583 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE html>
|
||||
<html xmlns="http://www.w3.org/1999/xhtml">
|
||||
<body>
|
||||
<h1>Bhyve driver</h1>
|
||||
|
||||
<ul id="toc"></ul>
|
||||
|
||||
<p>
|
||||
Bhyve is a FreeBSD hypervisor. It first appeared in FreeBSD 10.0. However, it's
|
||||
recommended to keep tracking FreeBSD 10-STABLE to make sure all new features
|
||||
of bhyve are supported.
|
||||
|
||||
In order to enable bhyve on your FreeBSD host, you'll need to load the <code>vmm</code>
|
||||
kernel module. Additionally, <code>if_tap</code> and <code>if_bridge</code> modules
|
||||
should be loaded for networking support. Also, <span class="since">since 3.2.0</span> the
|
||||
<code>virt-host-validate(1)</code> supports the bhyve host validation and could be
|
||||
used like this:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
$ virt-host-validate bhyve
|
||||
BHYVE: Checking for vmm module : PASS
|
||||
BHYVE: Checking for if_tap module : PASS
|
||||
BHYVE: Checking for if_bridge module : PASS
|
||||
BHYVE: Checking for nmdm module : PASS
|
||||
$
|
||||
</pre>
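<p>One common way to have these modules loaded automatically at boot on
FreeBSD (an illustrative sketch; adapt to your setup) is via
<code>/boot/loader.conf</code>:</p>

<pre>
vmm_load="YES"
nmdm_load="YES"
if_tap_load="YES"
if_bridge_load="YES"
</pre>

<p>Alternatively, <code>kldload vmm</code> (and likewise for the other
modules) loads a module on a running system.</p>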
|
||||
|
||||
<p>
|
||||
Additional information on bhyve can be found at <a href="https://bhyve.org/">bhyve.org</a>.
|
||||
</p>
|
||||
|
||||
<h2><a id="uri">Connections to the Bhyve driver</a></h2>
|
||||
<p>
|
||||
The libvirt bhyve driver is a single-instance privileged driver. Some sample
|
||||
connection URIs are:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
bhyve:///system (local access)
|
||||
bhyve+unix:///system (local access)
|
||||
bhyve+ssh://root@example.com/system (remote access, SSH tunnelled)
|
||||
</pre>
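<p>The <code>driver+transport</code> scheme in these URIs can be split
mechanically; as a quick illustration using only the Python standard library
(not a libvirt API):</p>

```python
from urllib.parse import urlparse

# Split a libvirt-style URI into its driver and transport parts.
uri = urlparse("bhyve+ssh://root@example.com/system")
driver, _, transport = uri.scheme.partition("+")
print(driver, transport or "local", uri.path)  # → bhyve ssh /system
```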
|
||||
|
||||
<h2><a id="exconfig">Example guest domain XML configurations</a></h2>
|
||||
|
||||
<h3>Example config</h3>
|
||||
<p>
|
||||
The bhyve driver in libvirt is at an early stage and under active development,
so it supports only a limited number of the features bhyve provides.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
Note: in older libvirt versions, only a single network device and a single
|
||||
disk device were supported per-domain. However,
|
||||
<span class="since">since 1.2.6</span> the libvirt bhyve driver supports
|
||||
up to 31 PCI devices.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
Note: the Bhyve driver in libvirt will boot whichever device is first. If you
|
||||
want to install from CD, put the CD device first. If not, put the root HDD
|
||||
first.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
Note: Only the SATA bus is supported. Only <code>cdrom</code>- and
|
||||
<code>disk</code>-type disks are supported.
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
<domain type='bhyve'>
|
||||
<name>bhyve</name>
|
||||
<uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
|
||||
<memory>219136</memory>
|
||||
<currentMemory>219136</currentMemory>
|
||||
<vcpu>1</vcpu>
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
</os>
|
||||
<features>
|
||||
<apic/>
|
||||
<acpi/>
|
||||
</features>
|
||||
<clock offset='utc'/>
|
||||
<on_poweroff>destroy</on_poweroff>
|
||||
<on_reboot>restart</on_reboot>
|
||||
<on_crash>destroy</on_crash>
|
||||
<devices>
|
||||
<disk type='file'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/bhyve_freebsd.img'/>
|
||||
<target dev='hda' bus='sata'/>
|
||||
</disk>
|
||||
<disk type='file' device='cdrom'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/cdrom.iso'/>
|
||||
<target dev='hdc' bus='sata'/>
|
||||
<readonly/>
|
||||
</disk>
|
||||
<interface type='bridge'>
|
||||
<model type='virtio'/>
|
||||
<source bridge="virbr0"/>
|
||||
</interface>
|
||||
</devices>
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<p>(The <disk> sections may be swapped in order to install from
|
||||
<em>cdrom.iso</em>.)</p>
|
||||
|
||||
<h3>Example config (Linux guest)</h3>
|
||||
|
||||
<p>
|
||||
Note the addition of <bootloader>.
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
<domain type='bhyve'>
|
||||
<name>linux_guest</name>
|
||||
<uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
|
||||
<memory>131072</memory>
|
||||
<currentMemory>131072</currentMemory>
|
||||
<vcpu>1</vcpu>
|
||||
<bootloader>/usr/local/sbin/grub-bhyve</bootloader>
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
</os>
|
||||
<features>
|
||||
<apic/>
|
||||
<acpi/>
|
||||
</features>
|
||||
<clock offset='utc'/>
|
||||
<on_poweroff>destroy</on_poweroff>
|
||||
<on_reboot>restart</on_reboot>
|
||||
<on_crash>destroy</on_crash>
|
||||
<devices>
|
||||
<disk type='file' device='disk'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/guest_hdd.img'/>
|
||||
<target dev='hda' bus='sata'/>
|
||||
</disk>
|
||||
<disk type='file' device='cdrom'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/cdrom.iso'/>
|
||||
<target dev='hdc' bus='sata'/>
|
||||
<readonly/>
|
||||
</disk>
|
||||
<interface type='bridge'>
|
||||
<model type='virtio'/>
|
||||
<source bridge="virbr0"/>
|
||||
</interface>
|
||||
</devices>
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<h3>Example config (Linux UEFI guest, VNC, tablet)</h3>
|
||||
|
||||
<p>This is an example to boot into Fedora 25 installation:</p>
|
||||
|
||||
<pre>
|
||||
<domain type='bhyve'>
|
||||
<name>fedora_uefi_vnc_tablet</name>
|
||||
<memory unit='G'>4</memory>
|
||||
<vcpu>2</vcpu>
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
<b><loader readonly="yes" type="pflash">/usr/local/share/uefi-firmware/BHYVE_UEFI.fd</loader></b>
|
||||
</os>
|
||||
<features>
|
||||
<apic/>
|
||||
<acpi/>
|
||||
</features>
|
||||
<clock offset='utc'/>
|
||||
<on_poweroff>destroy</on_poweroff>
|
||||
<on_reboot>restart</on_reboot>
|
||||
<on_crash>destroy</on_crash>
|
||||
<devices>
|
||||
<disk type='file' device='cdrom'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/Fedora-Workstation-Live-x86_64-25-1.3.iso'/>
|
||||
<target dev='hdc' bus='sata'/>
|
||||
<readonly/>
|
||||
</disk>
|
||||
<disk type='file' device='disk'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/linux_uefi.img'/>
|
||||
<target dev='hda' bus='sata'/>
|
||||
</disk>
|
||||
<interface type='bridge'>
|
||||
<model type='virtio'/>
|
||||
<source bridge="virbr0"/>
|
||||
</interface>
|
||||
<serial type="nmdm">
|
||||
<source master="/dev/nmdm0A" slave="/dev/nmdm0B"/>
|
||||
</serial>
|
||||
<b><graphics type='vnc' port='5904'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
<controller type='usb' model='nec-xhci'/>
|
||||
<input type='tablet' bus='usb'/></b>
|
||||
</devices>
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<p>Please refer to the <a href="#uefi">UEFI</a> section for a more detailed explanation.</p>
|
||||
|
||||
<h2><a id="usage">Guest usage / management</a></h2>
|
||||
|
||||
<h3><a id="console">Connecting to a guest console</a></h3>
|
||||
|
||||
<p>
|
||||
Guest console connection is supported through the <code>nmdm</code> device. It can be enabled by adding
the following to the domain XML (<span class="since">Since 1.2.4</span>):
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<devices>
|
||||
<serial type="nmdm">
|
||||
<source master="/dev/nmdm0A" slave="/dev/nmdm0B"/>
|
||||
</serial>
|
||||
</devices>
|
||||
...</pre>
|
||||
|
||||
|
||||
<p>Make sure to load the <code>nmdm</code> kernel module if you plan to use that.</p>
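<p>The <code>nmdm</code> naming convention pairs a master and a slave node per
index; a tiny helper (an illustrative sketch, not a libvirt API) makes the
convention explicit:</p>

```python
# nmdm(4) device pairs share an index: /dev/nmdm<N>A is the master end
# (attached to the guest) and /dev/nmdm<N>B the slave end (used by clients).
def nmdm_pair(index: int) -> tuple[str, str]:
    return (f"/dev/nmdm{index}A", f"/dev/nmdm{index}B")

master, slave = nmdm_pair(0)
print(master, slave)  # → /dev/nmdm0A /dev/nmdm0B
```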
|
||||
|
||||
<p>
|
||||
The <code>virsh console</code> command can then be used to connect to the text console
of a guest.</p>
|
||||
|
||||
<p><b>NB:</b> Some versions of bhyve have a bug that prevents guests from booting
|
||||
until the console is opened by a client. This bug was fixed in
|
||||
<a href="https://svnweb.freebsd.org/changeset/base/262884">FreeBSD changeset r262884</a>. If
|
||||
an older version is used, one either has to open a console manually with <code>virsh console</code>
|
||||
to let a guest boot or start a guest using:</p>
|
||||
|
||||
<pre>start --console domname</pre>
|
||||
|
||||
<p><b>NB:</b> A bootloader configured to require user interaction will prevent
|
||||
the domain from starting (and thus <code>virsh console</code> or <code>start
|
||||
--console</code> from functioning) until the user interacts with it manually on
|
||||
the VM host. Because users typically do not have access to the VM host,
|
||||
interactive bootloaders are unsupported by libvirt. <em>However,</em> if you happen to
|
||||
run into this scenario and also happen to have access to the Bhyve host
|
||||
machine, you may select a boot option and allow the domain to finish starting
|
||||
by using an alternative terminal client on the VM host to connect to the
|
||||
domain-configured null modem device. One example (assuming
|
||||
<code>/dev/nmdm0B</code> is configured as the slave end of the domain serial
|
||||
device) is:</p>
|
||||
|
||||
<pre>cu -l /dev/nmdm0B</pre>
|
||||
|
||||
<h3><a id="xmltonative">Converting from domain XML to Bhyve args</a></h3>
|
||||
|
||||
<p>
|
||||
The <code>virsh domxml-to-native</code> command can preview the actual
|
||||
<code>bhyve</code> commands that will be executed for a given domain.
|
||||
It outputs two lines, the first line is a <code>bhyveload</code> command and
|
||||
the second is a <code>bhyve</code> command.
|
||||
</p>
|
||||
|
||||
<p>Please note that the <code>virsh domxml-to-native</code> command doesn't perform any
real actions other than printing the commands; for example, it doesn't try to
find a proper TAP interface and create it, as is done when starting
a domain, and it always uses <code>tap0</code> for the network interface. So
if you're going to run these commands manually, you'll most likely want to
tweak them.</p>
|
||||
|
||||
<pre>
|
||||
# virsh -c "bhyve:///system" domxml-to-native --format bhyve-argv --xml /path/to/bhyve.xml
|
||||
/usr/sbin/bhyveload -m 214 -d /home/user/vm1.img vm1
|
||||
/usr/sbin/bhyve -c 2 -m 214 -A -I -H -P -s 0:0,hostbridge \
|
||||
-s 3:0,virtio-net,tap0,mac=52:54:00:5d:74:e3 -s 2:0,virtio-blk,/home/user/vm1.img \
|
||||
-s 1,lpc -l com1,/dev/nmdm0A vm1
|
||||
</pre>
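<p>For instance, if the host interface is actually <code>tap1</code>, the
printed command can be retargeted before running it manually. A sketch using
only the Python standard library (the command string is the sample output
above, and the replacement interface name is an arbitrary choice):</p>

```python
import shlex

# Sample bhyve command as printed by `virsh domxml-to-native`.
printed = ("/usr/sbin/bhyve -c 2 -m 214 -A -I -H -P -s 0:0,hostbridge "
           "-s 3:0,virtio-net,tap0,mac=52:54:00:5d:74:e3 "
           "-s 2:0,virtio-blk,/home/user/vm1.img -s 1,lpc "
           "-l com1,/dev/nmdm0A vm1")

# Swap the placeholder tap0 for the TAP interface that exists on the host.
args = [a.replace(",tap0,", ",tap1,") for a in shlex.split(printed)]
print(" ".join(args))
```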
|
||||
|
||||
<h3><a id="zfsvolume">Using ZFS volumes</a></h3>
|
||||
|
||||
<p>It's possible to use ZFS volumes as disk devices <span class="since">since 1.2.8</span>.
An example domain XML device entry for that looks like:</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<disk type='volume' device='disk'>
|
||||
<source pool='zfspool' volume='vol1'/>
|
||||
<target dev='vdb' bus='virtio'/>
|
||||
</disk>
|
||||
...</pre>
|
||||
|
||||
<p>Please refer to the <a href="storage.html">Storage documentation</a> for more details on storage
|
||||
management.</p>
|
||||
|
||||
<h3><a id="grubbhyve">Using grub2-bhyve or Alternative Bootloaders</a></h3>
|
||||
|
||||
<p>It's possible to boot non-FreeBSD guests by specifying an explicit
|
||||
bootloader, e.g. <code>grub-bhyve(1)</code>. Arguments to the bootloader may be
|
||||
specified as well. If the bootloader is <code>grub-bhyve</code> and arguments
|
||||
are omitted, libvirt will try and infer boot ordering from user-supplied
|
||||
<boot order='N'> configuration in the domain. Failing that, it will boot
|
||||
the first disk in the domain (either <code>cdrom</code>- or
|
||||
<code>disk</code>-type devices). If the disk type is <code>disk</code>, it will
|
||||
attempt to boot from the first partition in the disk image.</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<bootloader>/usr/local/sbin/grub-bhyve</bootloader>
|
||||
<bootloader_args>...</bootloader_args>
|
||||
...
|
||||
</pre>
|
||||
|
||||
<p>Caveat: <code>bootloader_args</code> does not support any quoting.
|
||||
Filenames, etc, must not have spaces or they will be tokenized incorrectly.</p>
|
||||
|
||||
<h3><a id="uefi">Using UEFI bootrom, VNC, and USB tablet</a></h3>
|
||||
|
||||
<p><span class="since">Since 3.2.0</span>, in addition to <a href="#grubbhyve">grub-bhyve</a>,
non-FreeBSD guests can also be booted using a UEFI boot ROM, provided that both the guest OS and
the installed <code>bhyve(1)</code> version support UEFI. To use that, <code>loader</code>
should be specified in the <code>os</code> section:</p>
|
||||
|
||||
<pre>
|
||||
<domain type='bhyve'>
|
||||
...
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
<loader readonly="yes" type="pflash">/usr/local/share/uefi-firmware/BHYVE_UEFI.fd</loader>
|
||||
</os>
|
||||
...
|
||||
</pre>
|
||||
|
||||
<p>This uses the UEFI firmware provided by
|
||||
the <a href="https://www.freshports.org/sysutils/bhyve-firmware/">sysutils/bhyve-firmware</a>
|
||||
FreeBSD port.</p>
|
||||
|
||||
<p>VNC and the tablet input device could be configured this way:</p>
|
||||
|
||||
<pre>
|
||||
<domain type='bhyve'>
|
||||
<devices>
|
||||
...
|
||||
<graphics type='vnc' port='5904'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
<controller type='usb' model='nec-xhci'/>
|
||||
<input type='tablet' bus='usb'/>
|
||||
</devices>
|
||||
...
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<p>This way, VNC will be accessible on <code>127.0.0.1:5904</code>.</p>
|
||||
|
||||
<p>Please note that the tablet device requires a USB controller
of the <code>nec-xhci</code> model. Currently, only a single controller of this
type and a single tablet are supported per domain.</p>
|
||||
|
||||
<p><span class="since">Since 3.5.0</span>, it's possible to configure how the video device is exposed
|
||||
to the guest using the <code>vgaconf</code> attribute:</p>
|
||||
|
||||
<pre>
|
||||
<domain type='bhyve'>
|
||||
<devices>
|
||||
...
|
||||
<graphics type='vnc' port='5904'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
<video>
|
||||
<driver vgaconf='on'/>
|
||||
<model type='gop' heads='1' primary='yes'/>
|
||||
</video>
|
||||
...
|
||||
</devices>
|
||||
...
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<p>If not specified, bhyve's default mode for <code>vgaconf</code>
|
||||
will be used. Please refer to the
|
||||
<a href="https://www.freebsd.org/cgi/man.cgi?query=bhyve&sektion=8&manpath=FreeBSD+12-current">bhyve(8)</a>
|
||||
manual page and the <a href="https://wiki.freebsd.org/bhyve">bhyve wiki</a> for more details on using
|
||||
the <code>vgaconf</code> option.</p>
|
||||
|
||||
<p><span class="since">Since 3.7.0</span>, it's possible to use <code>autoport</code>
|
||||
to let libvirt allocate the VNC port automatically (instead of explicitly specifying
|
||||
it with the <code>port</code> attribute):</p>
|
||||
|
||||
<pre>
|
||||
<graphics type='vnc' autoport='yes'>
|
||||
</pre>
|
||||
|
||||
<p><span class="since">Since 6.8.0</span>, it's possible to set framebuffer resolution
|
||||
using the <code>resolution</code> sub-element:</p>
|
||||
|
||||
<pre>
|
||||
<video>
|
||||
<model type='gop' heads='1' primary='yes'>
|
||||
<resolution x='800' y='600'/>
|
||||
</model>
|
||||
</video>
|
||||
</pre>
|
||||
|
||||
<p><span class="since">Since 6.8.0</span>, VNC server can be configured to use
|
||||
password based authentication:</p>
|
||||
|
||||
<pre>
|
||||
<graphics type='vnc' port='5904' passwd='foobar'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
</pre>
|
||||
|
||||
<p>Note: VNC password authentication is known to be cryptographically weak.
|
||||
Additionally, the password is passed as a command line argument in clear text.
|
||||
Make sure you understand the risks associated with this feature before using it.</p>
|
||||
|
||||
<h3><a id="clockconfig">Clock configuration</a></h3>
|
||||
|
||||
<p>Originally bhyve supported only localtime for RTC. Support for UTC time was introduced in
|
||||
<a href="https://svnweb.freebsd.org/changeset/base/284894">FreeBSD changeset r284894</a>
|
||||
for <i>10-STABLE</i> and
|
||||
in <a href="https://svnweb.freebsd.org/changeset/base/279225">changeset r279225</a>
|
||||
for <i>-CURRENT</i>. It's possible to use this in libvirt <span class="since">since 1.2.18</span>;
just add the following to the domain XML:</p>
|
||||
|
||||
<pre>
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<clock offset='utc'/>
|
||||
...
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<p>Please note that if you run an older bhyve version that doesn't support UTC time, the
domain will fail to start. As UTC is used as the default when you do not specify clock settings,
you'll need to explicitly specify 'localtime' in this case:</p>
|
||||
|
||||
<pre>
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<clock offset='localtime'/>
|
||||
...
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<h3><a id="e1000">e1000 NIC</a></h3>
|
||||
|
||||
<p>As of <a href="https://svnweb.freebsd.org/changeset/base/302504">FreeBSD changeset r302504</a>
|
||||
bhyve supports Intel e1000 network adapter emulation. It's supported in libvirt
|
||||
<span class="since">since 3.1.0</span> and could be used as follows:</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<interface type='bridge'>
|
||||
<source bridge='virbr0'/>
|
||||
<model type='<b>e1000</b>'/>
|
||||
</interface>
|
||||
...
|
||||
</pre>
|
||||
|
||||
<h3><a id="sound">Sound device</a></h3>
|
||||
|
||||
<p>As of <a href="https://svnweb.freebsd.org/changeset/base/349355">FreeBSD changeset r349355</a>
|
||||
bhyve supports sound device emulation. It's supported in libvirt
|
||||
<span class="since">since 6.7.0</span>.</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<sound model='ich7'>
|
||||
<audio id='1'/>
|
||||
</sound>
|
||||
<audio id='1' type='oss'>
|
||||
<input dev='/dev/dsp0'/>
|
||||
<output dev='/dev/dsp0'/>
|
||||
</audio>
|
||||
...
|
||||
</pre>
|
||||
|
||||
<p>Here, the <code>sound</code> element specifies the sound device as it's exposed
|
||||
to the guest, with <code>ich7</code> being the only supported model now,
|
||||
and the <code>audio</code> element specifies how the guest device is mapped
|
||||
to the host sound device.</p>
|
||||
|
||||
<h3><a id="fs-9p">Virtio-9p filesystem</a></h3>
|
||||
|
||||
<p>As of <a href="https://svnweb.freebsd.org/changeset/base/366413">FreeBSD changeset r366413</a>
|
||||
bhyve supports sharing an arbitrary directory tree between the guest and the host.
|
||||
It's supported in libvirt <span class="since">since 6.9.0</span>.</p>
|
||||
|
||||
<pre>
|
||||
...
|
||||
<filesystem>
|
||||
<source dir='/shared/dir'/>
|
||||
<target dir='shared_dir'/>
|
||||
</filesystem>
|
||||
...
|
||||
</pre>
|
||||
|
||||
<p>This share could be made read only by adding the <code><readonly/></code> sub-element.</p>
|
||||
|
||||
<p>In the Linux guest, this could be mounted using:</p>
|
||||
|
||||
<pre>mount -t 9p shared_dir /mnt/shared_dir</pre>
|
||||
|
||||
<h3><a id="wired">Wiring guest memory</a></h3>
|
||||
|
||||
<p><span class="since">Since 4.4.0</span>, it's possible to specify that guest memory should
|
||||
be wired and cannot be swapped out as follows:</p>
|
||||
<pre>
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<memoryBacking>
|
||||
<locked/>
|
||||
</memoryBacking>
|
||||
...
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<h3><a id="cputopology">CPU topology</a></h3>
|
||||
|
||||
<p><span class="since">Since 4.5.0</span>, it's possible to specify guest CPU topology, if bhyve
|
||||
supports that. Support for specifying guest CPU topology was added to bhyve in
|
||||
<a href="https://svnweb.freebsd.org/changeset/base/332298">FreeBSD changeset r332298</a>
|
||||
for <i>-CURRENT</i>.
|
||||
Example:</p>
|
||||
<pre>
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<cpu>
|
||||
<topology sockets='1' cores='2' threads='1'/>
|
||||
</cpu>
|
||||
...
|
||||
</domain>
|
||||
</pre>
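<p>Note that the topology should multiply out to the domain's vCPU count; a
trivial sanity check of the example above (illustrative only):</p>

```python
# The product sockets * cores * threads should match the <vcpu> value
# (2 in the example topology above).
sockets, cores, threads = 1, 2, 1
vcpu = 2
assert sockets * cores * threads == vcpu
print(sockets * cores * threads)  # → 2
```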
|
||||
|
||||
<h3><a id="msrs">Ignoring unknown MSRs reads and writes</a></h3>
|
||||
|
||||
<p><span class="since">Since 5.1.0</span>, it's possible to make bhyve
|
||||
ignore accesses to unimplemented Model Specific Registers (MSRs).
|
||||
Example:</p>
|
||||
|
||||
<pre>
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<features>
|
||||
...
|
||||
<msrs unknown='ignore'/>
|
||||
...
|
||||
</features>
|
||||
...
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<h3><a id="bhyvecommand">Pass-through of arbitrary bhyve commands</a></h3>
|
||||
|
||||
<p><span class="since">Since 5.1.0</span>, it's possible to pass additional command-line
|
||||
arguments to the bhyve process when starting the domain using the
|
||||
<code><bhyve:commandline></code> element under <code>domain</code>.
|
||||
To supply an argument, use the element <code><bhyve:arg></code> with
|
||||
the attribute <code>value</code> set to additional argument to be added.
|
||||
The arg element may be repeated multiple times. To use this XML addition, it is necessary
|
||||
to issue an XML namespace request (the special <code>xmlns:<i>name</i></code> attribute)
|
||||
that pulls in <code>http://libvirt.org/schemas/domain/bhyve/1.0</code>;
|
||||
typically, the namespace is given the name of <code>bhyve</code>.
|
||||
</p>
|
||||
<p>Example:</p>
|
||||
<pre>
|
||||
<domain type="bhyve" xmlns:bhyve="http://libvirt.org/schemas/domain/bhyve/1.0">
|
||||
...
|
||||
<bhyve:commandline>
|
||||
<bhyve:arg value='-somebhyvearg'/>
|
||||
</bhyve:commandline>
|
||||
</domain>
|
||||
</pre>
|
||||
|
||||
<p>Note that these extensions are for testing and development purposes only.
|
||||
They are <b>unsupported</b>, using them may result in inconsistent state,
|
||||
and upgrading either bhyve or libvirtd may break the behavior of a domain
that was relying on specific command pass-through.</p>
|
||||
|
||||
</body>
|
||||
</html>
|
584
docs/drvbhyve.rst
Normal file
@@ -0,0 +1,584 @@
|
||||
.. role:: since
|
||||
|
||||
============
|
||||
Bhyve driver
|
||||
============
|
||||
|
||||
.. contents::
|
||||
|
||||
Bhyve is a FreeBSD hypervisor. It first appeared in FreeBSD 10.0. However, it's
|
||||
recommended to keep tracking FreeBSD 10-STABLE to make sure all new features of
|
||||
bhyve are supported. In order to enable bhyve on your FreeBSD host, you'll need
|
||||
to load the ``vmm`` kernel module. Additionally, ``if_tap`` and ``if_bridge``
|
||||
modules should be loaded for networking support. Also, :since:`since 3.2.0` the
|
||||
``virt-host-validate(1)`` supports the bhyve host validation and could be used
|
||||
like this:
|
||||
|
||||
::
|
||||
|
||||
$ virt-host-validate bhyve
|
||||
BHYVE: Checking for vmm module : PASS
|
||||
BHYVE: Checking for if_tap module : PASS
|
||||
BHYVE: Checking for if_bridge module : PASS
|
||||
BHYVE: Checking for nmdm module : PASS
|
||||
$
|
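To have these modules loaded automatically at boot, they can be listed in
``/boot/loader.conf``. A minimal sketch (standard FreeBSD loader variables
for the modules named above):

::

   vmm_load="YES"
   nmdm_load="YES"
   if_tap_load="YES"
   if_bridge_load="YES"

Alternatively, ``kldload`` can be used to load them on a running system.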
||||
|
||||
Additional information on bhyve could be obtained on
|
||||
`bhyve.org <https://bhyve.org/>`__.
|
||||
|
||||
Connections to the Bhyve driver
|
||||
-------------------------------
|
||||
|
||||
The libvirt bhyve driver is a single-instance privileged driver. Some sample
|
||||
connection URIs are:
|
||||
|
||||
::
|
||||
|
||||
bhyve:///system (local access)
|
||||
bhyve+unix:///system (local access)
|
||||
bhyve+ssh://root@example.com/system (remote access, SSH tunnelled)
|
||||
|
||||
Example guest domain XML configurations
|
||||
---------------------------------------
|
||||
|
||||
Example config
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The bhyve driver in libvirt is at an early stage and under active development,
so it supports only a limited number of the features bhyve provides.
|
||||
|
||||
Note: in older libvirt versions, only a single network device and a single disk
|
||||
device were supported per-domain. However, :since:`since 1.2.6` the libvirt
|
||||
bhyve driver supports up to 31 PCI devices.
|
||||
|
||||
Note: the Bhyve driver in libvirt will boot whichever device is listed first. If you
|
||||
want to install from CD, put the CD device first. If not, put the root HDD
|
||||
first.
|
||||
|
||||
Note: Only the SATA bus is supported. Only ``cdrom``- and ``disk``-type disks
|
||||
are supported.
|
||||
|
||||
::
|
||||
|
||||
<domain type='bhyve'>
|
||||
<name>bhyve</name>
|
||||
<uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
|
||||
<memory>219136</memory>
|
||||
<currentMemory>219136</currentMemory>
|
||||
<vcpu>1</vcpu>
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
</os>
|
||||
<features>
|
||||
<apic/>
|
||||
<acpi/>
|
||||
</features>
|
||||
<clock offset='utc'/>
|
||||
<on_poweroff>destroy</on_poweroff>
|
||||
<on_reboot>restart</on_reboot>
|
||||
<on_crash>destroy</on_crash>
|
||||
<devices>
|
||||
<disk type='file'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/bhyve_freebsd.img'/>
|
||||
<target dev='hda' bus='sata'/>
|
||||
</disk>
|
||||
<disk type='file' device='cdrom'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/cdrom.iso'/>
|
||||
<target dev='hdc' bus='sata'/>
|
||||
<readonly/>
|
||||
</disk>
|
||||
<interface type='bridge'>
|
||||
<model type='virtio'/>
|
||||
<source bridge="virbr0"/>
|
||||
</interface>
|
||||
</devices>
|
||||
</domain>
|
||||
|
||||
(The <disk> sections may be swapped in order to install from *cdrom.iso*.)
|
||||
|
||||
Example config (Linux guest)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Note the addition of <bootloader>.
|
||||
|
||||
::
|
||||
|
||||
<domain type='bhyve'>
|
||||
<name>linux_guest</name>
|
||||
<uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
|
||||
<memory>131072</memory>
|
||||
<currentMemory>131072</currentMemory>
|
||||
<vcpu>1</vcpu>
|
||||
<bootloader>/usr/local/sbin/grub-bhyve</bootloader>
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
</os>
|
||||
<features>
|
||||
<apic/>
|
||||
<acpi/>
|
||||
</features>
|
||||
<clock offset='utc'/>
|
||||
<on_poweroff>destroy</on_poweroff>
|
||||
<on_reboot>restart</on_reboot>
|
||||
<on_crash>destroy</on_crash>
|
||||
<devices>
|
||||
<disk type='file' device='disk'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/guest_hdd.img'/>
|
||||
<target dev='hda' bus='sata'/>
|
||||
</disk>
|
||||
<disk type='file' device='cdrom'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/cdrom.iso'/>
|
||||
<target dev='hdc' bus='sata'/>
|
||||
<readonly/>
|
||||
</disk>
|
||||
<interface type='bridge'>
|
||||
<model type='virtio'/>
|
||||
<source bridge="virbr0"/>
|
||||
</interface>
|
||||
</devices>
|
||||
</domain>
|
||||
|
||||
Example config (Linux UEFI guest, VNC, tablet)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This is an example to boot into Fedora 25 installation:
|
||||
|
||||
::
|
||||
|
||||
<domain type='bhyve'>
|
||||
<name>fedora_uefi_vnc_tablet</name>
|
||||
<memory unit='G'>4</memory>
|
||||
<vcpu>2</vcpu>
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
<loader readonly="yes" type="pflash">/usr/local/share/uefi-firmware/BHYVE_UEFI.fd</loader>
|
||||
</os>
|
||||
<features>
|
||||
<apic/>
|
||||
<acpi/>
|
||||
</features>
|
||||
<clock offset='utc'/>
|
||||
<on_poweroff>destroy</on_poweroff>
|
||||
<on_reboot>restart</on_reboot>
|
||||
<on_crash>destroy</on_crash>
|
||||
<devices>
|
||||
<disk type='file' device='cdrom'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/Fedora-Workstation-Live-x86_64-25-1.3.iso'/>
|
||||
<target dev='hdc' bus='sata'/>
|
||||
<readonly/>
|
||||
</disk>
|
||||
<disk type='file' device='disk'>
|
||||
<driver name='file' type='raw'/>
|
||||
<source file='/path/to/linux_uefi.img'/>
|
||||
<target dev='hda' bus='sata'/>
|
||||
</disk>
|
||||
<interface type='bridge'>
|
||||
<model type='virtio'/>
|
||||
<source bridge="virbr0"/>
|
||||
</interface>
|
||||
<serial type="nmdm">
|
||||
<source master="/dev/nmdm0A" slave="/dev/nmdm0B"/>
|
||||
</serial>
|
||||
<graphics type='vnc' port='5904'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
<controller type='usb' model='nec-xhci'/>
|
||||
<input type='tablet' bus='usb'/>
|
||||
</devices>
|
||||
</domain>
|
||||
|
||||
Please refer to the `Using UEFI bootrom, VNC, and USB tablet`_ section for a
|
||||
more detailed explanation.
|
||||
|
||||
Guest usage / management
|
||||
------------------------
|
||||
|
||||
Connecting to a guest console
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Guest console connection is supported through the ``nmdm`` device. It could be
|
||||
enabled by adding the following to the domain XML ( :since:`Since 1.2.4` ):
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<devices>
|
||||
<serial type="nmdm">
|
||||
<source master="/dev/nmdm0A" slave="/dev/nmdm0B"/>
|
||||
</serial>
|
||||
</devices>
|
||||
...
|
||||
|
||||
Make sure to load the ``nmdm`` kernel module if you plan to use that.
|
||||
|
||||
Then ``virsh console`` command can be used to connect to the text console of a
|
||||
guest.
|
||||
|
||||
**NB:** Some versions of bhyve have a bug that prevents guests from booting
|
||||
until the console is opened by a client. This bug was fixed in `FreeBSD
|
||||
changeset r262884 <https://svnweb.freebsd.org/changeset/base/262884>`__. If an
|
||||
older version is used, one either has to open a console manually with
|
||||
``virsh console`` to let a guest boot or start a guest using:
|
||||
|
||||
::
|
||||
|
||||
start --console domname
|
||||
|
||||
**NB:** A bootloader configured to require user interaction will prevent the
|
||||
domain from starting (and thus ``virsh console`` or ``start --console`` from
|
||||
functioning) until the user interacts with it manually on the VM host. Because
|
||||
users typically do not have access to the VM host, interactive bootloaders are
|
||||
unsupported by libvirt. *However,* if you happen to run into this scenario and
|
||||
also happen to have access to the Bhyve host machine, you may select a boot
|
||||
option and allow the domain to finish starting by using an alternative terminal
|
||||
client on the VM host to connect to the domain-configured null modem device. One
|
||||
example (assuming ``/dev/nmdm0B`` is configured as the slave end of the domain
|
||||
serial device) is:
|
||||
|
||||
::
|
||||
|
||||
cu -l /dev/nmdm0B
|
||||
|
||||
Converting from domain XML to Bhyve args
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ``virsh domxml-to-native`` command can preview the actual ``bhyve`` commands
|
||||
that will be executed for a given domain. It outputs two lines: the first
|
||||
is a ``bhyveload`` command and the second is a ``bhyve`` command.
|
||||
|
||||
Please note that ``virsh domxml-to-native`` doesn't perform any real actions
other than printing the commands. For example, it doesn't try to find and
create a proper TAP interface as is done when starting a domain, and it always
reports ``tap0`` as the network interface. So if you're going to run these
commands manually, you'll most likely want to tweak them first.
|
||||
|
||||
::
|
||||
|
||||
# virsh -c "bhyve:///system" domxml-to-native --format bhyve-argv --xml /path/to/bhyve.xml
|
||||
/usr/sbin/bhyveload -m 214 -d /home/user/vm1.img vm1
|
||||
/usr/sbin/bhyve -c 2 -m 214 -A -I -H -P -s 0:0,hostbridge \
|
||||
-s 3:0,virtio-net,tap0,mac=52:54:00:5d:74:e3 -s 2:0,virtio-blk,/home/user/vm1.img \
|
||||
-s 1,lpc -l com1,/dev/nmdm0A vm1
|
||||
|
||||
Using ZFS volumes
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
It's possible to use ZFS volumes as disk devices :since:`since 1.2.8` . An
|
||||
example domain XML device entry for this looks like:
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<disk type='volume' device='disk'>
|
||||
<source pool='zfspool' volume='vol1'/>
|
||||
<target dev='vdb' bus='virtio'/>
|
||||
</disk>
|
||||
...
|
||||
|
||||
Please refer to the `Storage documentation <storage.html>`__ for more details on
|
||||
storage management.
|
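The ``zfspool`` pool referenced above needs to be defined first. A minimal
sketch of a ZFS storage pool definition (the ``zroot/zfspool`` dataset name is
an assumption):

::

   <pool type='zfs'>
     <name>zfspool</name>
     <source>
       <name>zroot/zfspool</name>
     </source>
   </pool>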
||||
|
||||
Using grub2-bhyve or Alternative Bootloaders
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
It's possible to boot non-FreeBSD guests by specifying an explicit bootloader,
|
||||
e.g. ``grub-bhyve(1)``. Arguments to the bootloader may be specified as well. If
|
||||
the bootloader is ``grub-bhyve`` and arguments are omitted, libvirt will try and
|
||||
infer boot ordering from user-supplied <boot order='N'> configuration in the
|
||||
domain. Failing that, it will boot the first disk in the domain (either
|
||||
``cdrom``- or ``disk``-type devices). If the disk type is ``disk``, it will
|
||||
attempt to boot from the first partition in the disk image.
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<bootloader>/usr/local/sbin/grub-bhyve</bootloader>
|
||||
<bootloader_args>...</bootloader_args>
|
||||
...
|
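When boot ordering is inferred from user-supplied ``<boot order='N'>``
configuration, that element is placed on the device entries themselves; a
sketch (the device paths are assumptions):

::

   <disk type='file' device='cdrom'>
     <source file='/path/to/cdrom.iso'/>
     <target dev='hdc' bus='sata'/>
     <boot order='1'/>
   </disk>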
||||
|
||||
Caveat: ``bootloader_args`` does not support any quoting. Filenames, etc.,
must not contain spaces or they will be tokenized incorrectly.
|
||||
|
||||
Using UEFI bootrom, VNC, and USB tablet
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:since:`Since 3.2.0` , in addition to
|
||||
`Using grub2-bhyve or Alternative Bootloaders`_, non-FreeBSD
|
||||
guests can also be booted using a UEFI boot ROM, provided both the guest OS and
the installed ``bhyve(1)`` version support UEFI. To use that, ``loader`` should be
|
||||
specified in the ``os`` section:
|
||||
|
||||
::
|
||||
|
||||
<domain type='bhyve'>
|
||||
...
|
||||
<os>
|
||||
<type>hvm</type>
|
||||
<loader readonly="yes" type="pflash">/usr/local/share/uefi-firmware/BHYVE_UEFI.fd</loader>
|
||||
</os>
|
||||
...
|
||||
|
||||
This uses the UEFI firmware provided by the
|
||||
`sysutils/bhyve-firmware <https://www.freshports.org/sysutils/bhyve-firmware/>`__
|
||||
FreeBSD port.
|
||||
|
||||
VNC and the tablet input device could be configured this way:
|
||||
|
||||
::
|
||||
|
||||
<domain type='bhyve'>
|
||||
<devices>
|
||||
...
|
||||
<graphics type='vnc' port='5904'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
<controller type='usb' model='nec-xhci'/>
|
||||
<input type='tablet' bus='usb'/>
|
||||
</devices>
|
||||
...
|
||||
</domain>
|
||||
|
||||
This way, VNC will be accessible on ``127.0.0.1:5904``.
|
||||
|
||||
Please note that the tablet device requires a USB controller of the
|
||||
``nec-xhci`` model. Currently, only a single controller of this type and a
|
||||
single tablet are supported per domain.
|
||||
|
||||
:since:`Since 3.5.0` , it's possible to configure how the video device is
|
||||
exposed to the guest using the ``vgaconf`` attribute:
|
||||
|
||||
::
|
||||
|
||||
<domain type='bhyve'>
|
||||
<devices>
|
||||
...
|
||||
<graphics type='vnc' port='5904'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
<video>
|
||||
<driver vgaconf='on'/>
|
||||
<model type='gop' heads='1' primary='yes'/>
|
||||
</video>
|
||||
...
|
||||
</devices>
|
||||
...
|
||||
</domain>
|
||||
|
||||
If not specified, bhyve's default mode for ``vgaconf`` will be used. Please
|
||||
refer to the
|
||||
`bhyve(8) <https://www.freebsd.org/cgi/man.cgi?query=bhyve&sektion=8&manpath=FreeBSD+12-current>`__
|
||||
manual page and the `bhyve wiki <https://wiki.freebsd.org/bhyve>`__ for more
|
||||
details on using the ``vgaconf`` option.
|
||||
|
||||
:since:`Since 3.7.0` , it's possible to use ``autoport`` to let libvirt allocate
|
||||
VNC port automatically (instead of explicitly specifying it with the ``port``
|
||||
attribute):
|
||||
|
||||
::
|
||||
|
||||
<graphics type='vnc' autoport='yes'>
|
||||
|
||||
:since:`Since 6.8.0` , it's possible to set framebuffer resolution using the
|
||||
``resolution`` sub-element:
|
||||
|
||||
::
|
||||
|
||||
<video>
|
||||
<model type='gop' heads='1' primary='yes'>
|
||||
<resolution x='800' y='600'/>
|
||||
</model>
|
||||
</video>
|
||||
|
||||
:since:`Since 6.8.0` , the VNC server can be configured to use password-based
|
||||
authentication:
|
||||
|
||||
::
|
||||
|
||||
<graphics type='vnc' port='5904' passwd='foobar'>
|
||||
<listen type='address' address='127.0.0.1'/>
|
||||
</graphics>
|
||||
|
||||
Note: VNC password authentication is known to be cryptographically weak.
|
||||
Additionally, the password is passed as a command line argument in clear text.
|
||||
Make sure you understand the risks associated with this feature before using it.
|
||||
|
||||
Clock configuration
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Originally bhyve supported only localtime for RTC. Support for UTC time was
|
||||
introduced in `FreeBSD changeset
|
||||
r284894 <https://svnweb.freebsd.org/changeset/base/284894>`__ for *10-STABLE*
|
||||
and in `changeset r279225 <https://svnweb.freebsd.org/changeset/base/279225>`__
|
||||
for *-CURRENT*. It's possible to use this in libvirt :since:`since 1.2.18` ,
|
||||
just place the following to domain XML:
|
||||
|
||||
::
|
||||
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<clock offset='utc'/>
|
||||
...
|
||||
</domain>
|
||||
|
||||
Please note that with an older bhyve version that doesn't support UTC time,
domains will fail to start. As UTC is used as the default when no clock
settings are specified, you'll need to explicitly specify 'localtime' in this
case:
|
||||
|
||||
::
|
||||
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<clock offset='localtime'/>
|
||||
...
|
||||
</domain>
|
||||
|
||||
e1000 NIC
|
||||
~~~~~~~~~
|
||||
|
||||
As of `FreeBSD changeset
|
||||
r302504 <https://svnweb.freebsd.org/changeset/base/302504>`__ bhyve supports
|
||||
Intel e1000 network adapter emulation. It's supported in libvirt :since:`since
|
||||
3.1.0` and could be used as follows:
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<interface type='bridge'>
|
||||
<source bridge='virbr0'/>
|
||||
<model type='e1000'/>
|
||||
</interface>
|
||||
...
|
||||
|
||||
Sound device
|
||||
~~~~~~~~~~~~
|
||||
|
||||
As of `FreeBSD changeset
|
||||
r349355 <https://svnweb.freebsd.org/changeset/base/349355>`__ bhyve supports
|
||||
sound device emulation. It's supported in libvirt :since:`since 6.7.0` .
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<sound model='ich7'>
|
||||
<audio id='1'/>
|
||||
</sound>
|
||||
<audio id='1' type='oss'>
|
||||
<input dev='/dev/dsp0'/>
|
||||
<output dev='/dev/dsp0'/>
|
||||
</audio>
|
||||
...
|
||||
|
||||
Here, the ``sound`` element specifies the sound device as it's exposed to the
|
||||
guest, with ``ich7`` being the only supported model now, and the ``audio``
|
||||
element specifies how the guest device is mapped to the host sound device.
|
||||
|
||||
Virtio-9p filesystem
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
As of `FreeBSD changeset
|
||||
r366413 <https://svnweb.freebsd.org/changeset/base/366413>`__ bhyve supports
|
||||
sharing an arbitrary directory tree between the guest and the host. It's supported
|
||||
in libvirt :since:`since 6.9.0` .
|
||||
|
||||
::
|
||||
|
||||
...
|
||||
<filesystem>
|
||||
<source dir='/shared/dir'/>
|
||||
<target dir='shared_dir'/>
|
||||
</filesystem>
|
||||
...
|
||||
|
||||
This share could be made read only by adding the ``<readonly/>`` sub-element.
|
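A read-only variant of the same share, as a sketch:

::

   <filesystem>
     <source dir='/shared/dir'/>
     <target dir='shared_dir'/>
     <readonly/>
   </filesystem>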
||||
|
||||
In the Linux guest, this could be mounted using:
|
||||
|
||||
::
|
||||
|
||||
mount -t 9p shared_dir /mnt/shared_dir
|
||||
|
||||
Wiring guest memory
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:since:`Since 4.4.0` , it's possible to specify that guest memory should be
|
||||
wired and cannot be swapped out as follows:
|
||||
|
||||
::
|
||||
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<memoryBacking>
|
||||
<locked/>
|
||||
</memoryBacking>
|
||||
...
|
||||
</domain>
|
||||
|
||||
CPU topology
|
||||
~~~~~~~~~~~~
|
||||
|
||||
:since:`Since 4.5.0` , it's possible to specify guest CPU topology, if bhyve
|
||||
supports that. Support for specifying guest CPU topology was added to bhyve in
|
||||
`FreeBSD changeset r332298 <https://svnweb.freebsd.org/changeset/base/332298>`__
|
||||
for *-CURRENT*. Example:
|
||||
|
||||
::
|
||||
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<cpu>
|
||||
<topology sockets='1' cores='2' threads='1'/>
|
||||
</cpu>
|
||||
...
|
||||
</domain>
|
||||
|
||||
Ignoring unknown MSRs reads and writes
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:since:`Since 5.1.0` , it's possible to make bhyve ignore accesses to
|
||||
unimplemented Model Specific Registers (MSRs). Example:
|
||||
|
||||
::
|
||||
|
||||
<domain type="bhyve">
|
||||
...
|
||||
<features>
|
||||
...
|
||||
<msrs unknown='ignore'/>
|
||||
...
|
||||
</features>
|
||||
...
|
||||
</domain>
|
||||
|
||||
Pass-through of arbitrary bhyve commands
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:since:`Since 5.1.0` , it's possible to pass additional command-line arguments
|
||||
to the bhyve process when starting the domain using the ``<bhyve:commandline>``
|
||||
element under ``domain``. To supply an argument, use the element ``<bhyve:arg>``
|
||||
with the attribute ``value`` set to the additional argument to be added. The arg
|
||||
element may be repeated multiple times. To use this XML addition, it is
|
||||
necessary to issue an XML namespace request (the special ``xmlns:name``
|
||||
attribute) that pulls in ``http://libvirt.org/schemas/domain/bhyve/1.0``;
|
||||
typically, the namespace is given the name of ``bhyve``.
|
||||
|
||||
Example:
|
||||
|
||||
::
|
||||
|
||||
<domain type="bhyve" xmlns:bhyve="http://libvirt.org/schemas/domain/bhyve/1.0">
|
||||
...
|
||||
<bhyve:commandline>
|
||||
<bhyve:arg value='-somebhyvearg'/>
|
||||
</bhyve:commandline>
|
||||
</domain>
|
||||
|
||||
Note that these extensions are for testing and development purposes only. They
|
||||
are **unsupported**, using them may result in inconsistent state, and upgrading
|
||||
either bhyve or libvirtd may break the behavior of a domain that was relying on
specific command pass-through.
|
@@ -1,838 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE html>
|
||||
<html xmlns="http://www.w3.org/1999/xhtml">
|
||||
<body>
|
||||
<h1>VMware ESX hypervisor driver</h1>
|
||||
<ul id="toc"></ul>
|
||||
<p>
|
||||
The libvirt VMware ESX driver can manage VMware ESX/ESXi 3.5/4.x/5.x and
|
||||
VMware GSX 2.0, also called VMware Server 2.0, and possibly later
|
||||
versions. <span class="since">Since 0.8.3</span> the driver can also
|
||||
connect to a VMware vCenter 2.5/4.x/5.x (VPX).
|
||||
</p>
|
||||
|
||||
<h2><a id="project">Project Links</a></h2>
|
||||
|
||||
<ul>
|
||||
<li>
|
||||
The <a href="https://www.vmware.com/">VMware ESX and GSX</a>
|
||||
hypervisors
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
<h2><a id="prereq">Deployment pre-requisites</a></h2>
|
||||
<p>
|
||||
None. Any out-of-the-box installation of VPX/ESX(i)/GSX should work. No
|
||||
preparations are required on the server side; no libvirtd needs to be
|
||||
installed on the ESX server. The driver uses version 2.5 of the remote,
|
||||
SOAP based
|
||||
<a href="https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/">
|
||||
VMware Virtual Infrastructure API</a> (VI API) to communicate with the
|
||||
ESX server, like the VMware Virtual Infrastructure Client (VI client)
|
||||
does. Since version 4.0 this API is called
|
||||
<a href="https://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/">
|
||||
VMware vSphere API</a>.
|
||||
</p>
|
||||
|
||||
<h2><a id="uri">Connections to the VMware ESX driver</a></h2>
|
||||
<p>
|
||||
Some example remote connection URIs for the driver are:
|
||||
</p>
|
||||
<pre>
|
||||
vpx://example-vcenter.com/dc1/srv1 (VPX over HTTPS, select ESX server 'srv1' in datacenter 'dc1')
|
||||
esx://example-esx.com (ESX over HTTPS)
|
||||
gsx://example-gsx.com (GSX over HTTPS)
|
||||
esx://example-esx.com/?transport=http (ESX over HTTP)
|
||||
esx://example-esx.com/?no_verify=1 (ESX over HTTPS, but doesn't verify the server's SSL certificate)
|
||||
</pre>
|
||||
<p>
|
||||
<strong>Note</strong>: In contrast to other drivers, the ESX driver is
|
||||
a client-side-only driver. It connects to the ESX server using HTTP(S).
|
||||
Therefore, the <a href="remote.html">remote transport mechanism</a>
|
||||
provided by the remote driver and libvirtd will not work, and you
|
||||
cannot use URIs like <code>esx+ssh://example.com</code>.
|
||||
</p>
|
||||
|
||||
|
||||
<h3><a id="uriformat">URI Format</a></h3>
|
||||
<p>
|
||||
URIs have this general form (<code>[...]</code> marks an optional part).
|
||||
</p>
|
||||
<pre>
|
||||
type://[username@]hostname[:port]/[[folder/...]datacenter/[folder/...][cluster/]server][?extraparameters]
|
||||
</pre>
|
||||
<p>
|
||||
The <code>type://</code> is either <code>esx://</code> or
|
||||
<code>gsx://</code> or <code>vpx://</code> <span class="since">since 0.8.3</span>.
|
||||
The driver selects the default port depending on the <code>type://</code>.
|
||||
For <code>esx://</code> and <code>vpx://</code> the default HTTPS port
|
||||
is 443, for <code>gsx://</code> it is 8333.
|
||||
If the port parameter is given, it overrides the default port.
|
||||
</p>
|
||||
<p>
|
||||
A <code>vpx://</code> connection is currently restricted to a single
|
||||
ESX server. This might be relaxed in the future. The path part of the
|
||||
URI is used to specify the datacenter and the ESX server in it. If the
|
||||
ESX server is part of a cluster then the cluster has to be specified too.
|
||||
</p>
|
||||
<p>
|
||||
An example: ESX server <code>example-esx.com</code> is managed by
|
||||
vCenter <code>example-vcenter.com</code> and part of cluster
|
||||
<code>cluster1</code>. This cluster is part of datacenter <code>dc1</code>.
|
||||
</p>
|
||||
<pre>
|
||||
vpx://example-vcenter.com/dc1/cluster1/example-esx.com
|
||||
</pre>
|
||||
<p>
|
||||
Datacenters and clusters can be organized in folders, those have to be
|
||||
specified as well. The driver can handle folders
|
||||
<span class="since">since 0.9.7</span>.
|
||||
</p>
|
||||
<pre>
|
||||
vpx://example-vcenter.com/folder1/dc1/folder2/example-esx.com
|
||||
</pre>
|
||||
|
||||
|
||||
<h4><a id="extraparams">Extra parameters</a></h4>
|
||||
<p>
|
||||
Extra parameters can be added to a URI as part of the query string
|
||||
(the part following <code>?</code>). A single parameter is formed by a
|
||||
<code>name=value</code> pair. Multiple parameters are separated by
|
||||
<code>&</code>.
|
||||
</p>
|
||||
<pre>
|
||||
?<span style="color: #E50000">no_verify=1</span>&<span style="color: #00B200">auto_answer=1</span>&<span style="color: #0000E5">proxy=socks://example-proxy.com:23456</span>
|
||||
</pre>
|
||||
<p>
|
||||
The driver understands the extra parameters shown below.
|
||||
</p>
|
||||
<table class="top_table">
|
||||
<tr>
|
||||
<th>Name</th>
|
||||
<th>Values</th>
|
||||
<th>Meaning</th>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<code>transport</code>
|
||||
</td>
|
||||
<td>
|
||||
<code>http</code> or <code>https</code>
|
||||
</td>
|
||||
<td>
|
||||
Overrides the default HTTPS transport. For <code>esx://</code>
|
||||
and <code>vpx://</code> the default HTTP port is 80, for
|
||||
<code>gsx://</code> it is 8222.
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<code>vcenter</code>
|
||||
</td>
|
||||
<td>
|
||||
Hostname of a VMware vCenter or <code>*</code>
|
||||
</td>
|
||||
<td>
|
||||
In order to perform a migration the driver needs to know the
|
||||
VMware vCenter for the ESX server. If set to <code>*</code>,
|
||||
the driver connects to the vCenter known to the ESX server.
|
||||
This parameter is useful when connecting to an ESX server only.
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<code>no_verify</code>
|
||||
</td>
|
||||
<td>
|
||||
<code>0</code> or <code>1</code>
|
||||
</td>
|
||||
<td>
|
||||
If set to 1, this disables libcurl client checks of the server's
|
||||
SSL certificate. The default value is 0. See the
|
||||
<a href="#certificates">Certificates for HTTPS</a> section for
|
||||
details.
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<code>auto_answer</code>
|
||||
</td>
|
||||
<td>
|
||||
<code>0</code> or <code>1</code>
|
||||
</td>
|
||||
<td>
|
||||
If set to 1, the driver answers all
|
||||
<a href="#questions">questions</a> with the default answer.
|
||||
If set to 0, questions are reported as errors. The default
|
||||
value is 0. <span class="since">Since 0.7.5</span>.
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<code>proxy</code>
|
||||
</td>
|
||||
<td>
|
||||
<code>[type://]hostname[:port]</code>
|
||||
</td>
|
||||
<td>
|
||||
Allows specifying a proxy for HTTP and HTTPS communication.
|
||||
<span class="since">Since 0.8.2</span>.
|
||||
The optional <code>type</code> part may be one of:
|
||||
<code>http</code>, <code>socks</code>, <code>socks4</code>,
|
||||
<code>socks4a</code> or <code>socks5</code>. The default is
|
||||
<code>http</code> and <code>socks</code> is synonymous for
|
||||
<code>socks5</code>. The optional <code>port</code> can be used to
|
||||
override the default port 1080.
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
|
||||
|
||||
<h3><a id="auth">Authentication</a></h3>
|
||||
<p>
|
||||
In order to perform any useful operation the driver needs to log into
|
||||
the ESX server. Therefore, only <code>virConnectOpenAuth</code> can be
|
||||
used to connect to an ESX server, <code>virConnectOpen</code> and
|
||||
<code>virConnectOpenReadOnly</code> don't work.
|
||||
To log into an ESX server or vCenter the driver will request
|
||||
credentials using the callback passed to the
|
||||
<code>virConnectOpenAuth</code> function. The driver passes the
|
||||
hostname as challenge parameter to the callback. This enables the
|
||||
callback to distinguish between requests for ESX server and vCenter.
|
||||
</p>
|
||||
<p>
|
||||
<strong>Note</strong>: During the ongoing driver development, testing
|
||||
is done using an unrestricted <code>root</code> account. Problems may
|
||||
occur if you use a restricted account. Detailed testing with restricted
|
||||
accounts has not been done yet.
|
||||
</p>
|
||||
|
||||
|
||||
<h3><a id="certificates">Certificates for HTTPS</a></h3>
|
||||
<p>
|
||||
By default the ESX driver uses HTTPS to communicate with an ESX server.
|
||||
Proper HTTPS communication requires correctly configured SSL
|
||||
certificates. These certificates are different from the ones libvirt
|
||||
uses for <a href="remote.html">secure communication over TLS</a> to a
|
||||
libvirtd on a remote server.
|
||||
</p>
|
||||
<p>
|
||||
By default the driver tries to verify the server's SSL certificate
|
||||
using the CA certificate pool installed on your client computer. With
|
||||
an out-of-the-box installed ESX server this won't work, because a newly
|
||||
installed ESX server uses auto-generated self-signed certificates.
|
||||
Those are signed by a CA certificate that is typically not known to your
|
||||
client computer and libvirt will report an error like this one:
|
||||
</p>
|
||||
<pre>
|
||||
error: internal error curl_easy_perform() returned an error: Peer certificate cannot be authenticated with known CA certificates (60)
|
||||
</pre>
|
||||
<p>
|
||||
There are two ways to solve this problem:
|
||||
</p>
|
||||
<ul>
|
||||
<li>
|
||||
Use the <code>no_verify=1</code> <a href="#extraparams">extra parameter</a>
|
||||
to disable server certificate verification.
|
||||
</li>
|
||||
<li>
|
||||
Generate new SSL certificates signed by a CA known to your client
|
||||
computer and replace the original ones on your ESX server. See the
|
||||
section <i>Replace a Default Certificate with a CA-Signed Certificate</i>
|
||||
in the <a href="https://www.vmware.com/pdf/vsphere4/r40/vsp_40_esx_server_config.pdf">ESX Configuration Guide</a>
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<h3><a id="connproblems">Connection problems</a></h3>
|
||||
<p>
|
||||
There are other causes of connection problems besides the
|
||||
<a href="#certificates">HTTPS certificate</a> related ones.
|
||||
</p>
|
||||
<ul>
|
||||
<li>
|
||||
As stated before the ESX driver doesn't need the
|
||||
<a href="remote.html">remote transport mechanism</a>
|
||||
provided by the remote driver and libvirtd, nor does the ESX driver
|
||||
support it. Therefore, using a URI including a transport in the
|
||||
scheme won't work. Only <a href="#uriformat">URIs as described</a>
|
||||
are supported by the ESX driver. Here's a collection of possible
|
||||
error messages:
|
||||
<pre>
|
||||
$ virsh -c esx+tcp://example.com/
|
||||
error: unable to connect to libvirtd at 'example.com': Connection refused
|
||||
</pre>
|
||||
<pre>
|
||||
$ virsh -c esx+tls://example.com/
|
||||
error: Cannot access CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
|
||||
</pre>
|
||||
<pre>
|
||||
$ virsh -c esx+ssh://example.com/
|
||||
error: cannot recv data: ssh: connect to host example.com port 22: Connection refused
|
||||
</pre>
|
||||
<pre>
|
||||
$ virsh -c esx+ssh://example.com/
|
||||
error: cannot recv data: Resource temporarily unavailable
|
||||
</pre>
|
||||
</li>
|
||||
<li>
|
||||
<span class="since">Since 0.7.0</span> libvirt contains the ESX
|
||||
driver. Earlier versions of libvirt will report a misleading error
|
||||
about missing certificates when you try to connect to an ESX server.
|
||||
<pre>
|
||||
$ virsh -c esx://example.com/
|
||||
error: Cannot access CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
|
||||
</pre>
|
||||
<p>
|
||||
Don't let this error message confuse you. Setting up certificates
|
||||
as described on the <a href="remote.html#Remote_certificates">remote transport mechanism</a> page
|
||||
does not help, as this is not a certificate related problem.
|
||||
</p>
|
||||
<p>
|
||||
To fix this problem you need to update your libvirt to 0.7.0 or newer.
|
||||
You may also see this error when you use a libvirt version that
|
||||
contains the ESX driver but you or your distro disabled the ESX
|
||||
driver during compilation. <span class="since">Since 0.8.3</span>
|
||||
the error message has been improved in this case:
|
||||
</p>
|
||||
<pre>
|
||||
$ virsh -c esx://example.com/
|
||||
error: invalid argument in libvirt was built without the 'esx' driver
|
||||
</pre>
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
|
||||
    <h2><a id="questions">Questions blocking tasks</a></h2>
    <p>
      Some methods of the VI API start tasks, for example
      <code>PowerOnVM_Task()</code>. Such tasks may be blocked by questions
      if the ESX server detects an issue with the domain that requires user
      interaction. The ESX driver cannot prompt the user to answer a
      question; libvirt doesn't have an API for something like this.
    </p>
    <p>
      The VI API provides the <code>AnswerVM()</code> method to
      programmatically answer a question. So the driver has two options
      for handling such a situation: either answer the questions with the
      default answer or report the question as an error and cancel the
      blocked task if possible. The
      <a href="#uriformat"><code>auto_answer</code></a> query parameter
      controls the answering behavior.
    </p>

    <h2><a id="xmlspecial">Specialities in the domain XML config</a></h2>
    <p>
      There are several specialities in the domain XML config for ESX domains.
    </p>

    <h3><a id="restrictions">Restrictions</a></h3>
    <p>
      There are some restrictions on some values of the domain XML config.
      The driver will complain if these restrictions are violated.
    </p>
    <ul>
      <li>
        Memory size has to be a multiple of 4096.
      </li>
      <li>
        Number of virtual CPUs has to be 1 or a multiple of 2.
        <span class="since">Since 4.10.0</span> any number of vCPUs is
        supported.
      </li>
      <li>
        Valid MAC address prefixes are <code>00:0c:29</code> and
        <code>00:50:56</code>. <span class="since">Since 0.7.6</span>
        arbitrary <a href="#macaddresses">MAC addresses</a> are supported.
      </li>
    </ul>
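As an illustration, the two numeric restrictions above (as they applied before libvirt 4.10.0) can be sketched as a small pre-flight check. This is not libvirt code; the function name is made up.

```python
# Illustrative sketch of the documented ESX restrictions; not part of libvirt.

def check_esx_restrictions(memory_kib: int, vcpus: int) -> list:
    """Return a list of violated restrictions for an ESX domain config."""
    problems = []
    # Memory size has to be a multiple of 4096.
    if memory_kib % 4096 != 0:
        problems.append("memory size %d is not a multiple of 4096" % memory_kib)
    # Before libvirt 4.10.0: vCPU count must be 1 or a multiple of 2.
    if vcpus != 1 and vcpus % 2 != 0:
        problems.append("vCPU count %d is neither 1 nor a multiple of 2" % vcpus)
    return problems

print(check_esx_restrictions(1048576, 1))  # valid config -> []
print(check_esx_restrictions(1000, 3))     # two violations
```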
    <h3><a id="datastore">Datastore references</a></h3>
    <p>
      Storage is managed in datastores. VMware uses a special path format to
      reference files in a datastore. Basically, the datastore name is put
      in square brackets in front of the path.
    </p>
    <pre>
[datastore] directory/filename
</pre>
    <p>
      To define a new domain the driver converts the domain XML into a
      VMware VMX file and uploads it to a datastore known to the ESX server.
      Because multiple datastores may be known to an ESX server the driver
      needs to decide to which datastore the VMX file should be uploaded.
      The driver deduces this information from the path of the source of the
      first file-based harddisk listed in the domain XML.
    </p>
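The datastore path format described above is simple to parse; here is a minimal Python sketch (illustrative only, not libvirt's actual parser, which is written in C).

```python
import re

# Parse VMware's "[datastore] directory/filename" path format into its
# datastore name and datastore-relative path components.

def parse_datastore_path(path: str):
    m = re.match(r"^\[([^\]]+)\] (.+)$", path)
    if m is None:
        raise ValueError("not a datastore path: %r" % path)
    return m.group(1), m.group(2)

datastore, relpath = parse_datastore_path("[local-storage] Fedora11/Fedora11.vmdk")
print(datastore)  # local-storage
print(relpath)    # Fedora11/Fedora11.vmdk
```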
    <h3><a id="macaddresses">MAC addresses</a></h3>
    <p>
      VMware has registered two MAC address prefixes for domains:
      <code>00:0c:29</code> and <code>00:50:56</code>. These prefixes are
      split into ranges for different purposes.
    </p>
    <table class="top_table">
      <tr>
        <th>Range</th>
        <th>Purpose</th>
      </tr>
      <tr>
        <td>
          <code>00:0c:29:00:00:00</code> - <code>00:0c:29:ff:ff:ff</code>
        </td>
        <td>
          An ESX server autogenerates MAC addresses from this range if
          the VMX file doesn't contain a MAC address when trying to start
          a domain.
        </td>
      </tr>
      <tr>
        <td>
          <code>00:50:56:00:00:00</code> - <code>00:50:56:3f:ff:ff</code>
        </td>
        <td>
          MAC addresses from this range can be manually assigned by the
          user in the VI client.
        </td>
      </tr>
      <tr>
        <td>
          <code>00:50:56:80:00:00</code> - <code>00:50:56:bf:ff:ff</code>
        </td>
        <td>
          A VI client autogenerates MAC addresses from this range for
          newly defined domains.
        </td>
      </tr>
    </table>
    <p>
      The VMX files generated by the ESX driver always contain a MAC address,
      because libvirt generates a random one if an interface element in the
      domain XML file lacks a MAC address.
      <span class="since">Since 0.7.6</span> the ESX driver sets the prefix
      for generated MAC addresses to <code>00:0c:29</code>. Before 0.7.6
      the <code>00:50:56</code> prefix was used. Sometimes this resulted in
      the generation of out-of-range MAC addresses that were rejected by the
      ESX server.
    </p>
    <p>
      Also <span class="since">since 0.7.6</span> every MAC address outside
      these ranges can be used. For such MAC addresses the ESX server-side
      check is disabled in the VMX file to stop the ESX server from rejecting
      out-of-predefined-range MAC addresses.
    </p>
    <pre>
ethernet0.checkMACAddress = "false"
</pre>
    <p>
      <span class="since">Since 6.6.0</span>, one can force libvirt to keep the
      provided MAC address when it's in the reserved VMware range by adding a
      <code>type="static"</code> attribute to the <code>&lt;mac/&gt;</code> element.
      Note that this attribute is useless if the provided MAC address is outside of
      the reserved VMware ranges.
    </p>
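The ranges in the table above can be summarized in a small classifier. This helper is purely illustrative (it is not part of any libvirt or VMware API), but it captures how the prefix and the fourth octet determine which range a MAC address falls in.

```python
# Classify a MAC address against the VMware ranges documented above.
# Hypothetical helper, not a libvirt API.

def classify_vmware_mac(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split(":")]
    prefix, fourth = octets[:3], octets[3]
    if prefix == [0x00, 0x0C, 0x29]:
        return "ESX-autogenerated range"
    if prefix == [0x00, 0x50, 0x56]:
        if fourth <= 0x3F:
            return "manually assignable range"
        if 0x80 <= fourth <= 0xBF:
            return "VI-client-autogenerated range"
        return "out of predefined VMware sub-ranges"
    # Anything else needs ethernet0.checkMACAddress = "false" in the VMX file.
    return "outside VMware prefixes"

print(classify_vmware_mac("00:50:56:25:48:c7"))  # manually assignable range
print(classify_vmware_mac("00:50:56:91:48:c7"))  # VI-client-autogenerated range
```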
    <h3><a id="hardware">Available hardware</a></h3>
    <p>
      VMware ESX supports different models of SCSI controllers and network
      cards.
    </p>

    <h4>SCSI controller models</h4>
    <dl>
      <dt><code>auto</code></dt>
      <dd>
        This isn't an actual controller model. If specified, the ESX driver
        tries to detect the SCSI controller model referenced in the
        <code>.vmdk</code> file and use it. Autodetection fails when a
        SCSI controller has multiple disks attached and the SCSI controller
        models referenced in the <code>.vmdk</code> files are inconsistent.
        <span class="since">Since 0.8.3</span>
      </dd>
      <dt><code>buslogic</code></dt>
      <dd>
        BusLogic SCSI controller for older guests.
      </dd>
      <dt><code>lsilogic</code></dt>
      <dd>
        LSI Logic SCSI controller for recent guests.
      </dd>
      <dt><code>lsisas1068</code></dt>
      <dd>
        LSI Logic SAS 1068 controller. <span class="since">Since 0.8.0</span>
      </dd>
      <dt><code>vmpvscsi</code></dt>
      <dd>
        Special VMware Paravirtual SCSI controller, requires VMware tools inside
        the guest. See <a href="https://kb.vmware.com/kb/1010398">VMware KB1010398</a>
        for details. <span class="since">Since 0.8.3</span>
      </dd>
    </dl>
    <p>
      Here is a domain XML snippet:
    </p>
    <pre>
...
&lt;disk type='file' device='disk'&gt;
  &lt;source file='[local-storage] Fedora11/Fedora11.vmdk'/&gt;
  &lt;target dev='sda' bus='scsi'/&gt;
  &lt;address type='drive' controller='0' bus='0' unit='0'/&gt;
&lt;/disk&gt;
&lt;controller type='scsi' index='0' model='<strong>lsilogic</strong>'/&gt;
...
</pre>
    <p>
      The controller element is supported <span class="since">since 0.8.2</span>.
      Prior to this <code>&lt;driver name='lsilogic'/&gt;</code> was abused to
      specify the SCSI controller model. This attribute usage is deprecated now.
    </p>
    <pre>
...
&lt;disk type='file' device='disk'&gt;
  &lt;driver name='<strong>lsilogic</strong>'/&gt;
  &lt;source file='[local-storage] Fedora11/Fedora11.vmdk'/&gt;
  &lt;target dev='sda' bus='scsi'/&gt;
&lt;/disk&gt;
...
</pre>

    <h4>Network card models</h4>
    <dl>
      <dt><code>vlance</code></dt>
      <dd>
        AMD PCnet32 network card for older guests.
      </dd>
      <dt><code>vmxnet</code>, <code>vmxnet2</code>, <code>vmxnet3</code></dt>
      <dd>
        Special VMware VMXnet network card, requires VMware tools inside
        the guest. See <a href="https://kb.vmware.com/kb/1001805">VMware KB1001805</a>
        for details.
      </dd>
      <dt><code>e1000</code></dt>
      <dd>
        Intel E1000 network card for recent guests.
      </dd>
    </dl>
    <p>
      Here is a domain XML snippet:
    </p>
    <pre>
...
&lt;interface type='bridge'&gt;
  &lt;mac address='00:50:56:25:48:c7'/&gt;
  &lt;source bridge='VM Network'/&gt;
  &lt;model type='<strong>e1000</strong>'/&gt;
&lt;/interface&gt;
...
</pre>

    <h2><a id="importexport">Import and export of domain XML configs</a></h2>
    <p>
      The ESX driver currently supports a native config format known as
      <code>vmware-vmx</code> to handle VMware VMX configs.
    </p>

    <h3><a id="xmlimport">Converting from VMware VMX config to domain XML config</a></h3>
    <p>
      The <code>virsh domxml-from-native</code> command provides a way to convert an
      existing VMware VMX config into a domain XML config that can then be
      used by libvirt.
    </p>
    <pre>
$ cat &gt; demo.vmx &lt;&lt; EOF
#!/usr/bin/vmware
config.version = "8"
virtualHW.version = "4"
floppy0.present = "false"
nvram = "Fedora11.nvram"
deploymentPlatform = "windows"
virtualHW.productCompatibility = "hosted"
tools.upgrade.policy = "useGlobal"
powerType.powerOff = "default"
powerType.powerOn = "default"
powerType.suspend = "default"
powerType.reset = "default"
displayName = "Fedora11"
extendedConfigFile = "Fedora11.vmxf"
scsi0.present = "true"
scsi0.sharedBus = "none"
scsi0.virtualDev = "lsilogic"
memsize = "1024"
scsi0:0.present = "true"
scsi0:0.fileName = "/vmfs/volumes/498076b2-02796c1a-ef5b-000ae484a6a3/Fedora11/Fedora11.vmdk"
scsi0:0.deviceType = "scsi-hardDisk"
ide0:0.present = "true"
ide0:0.clientDevice = "true"
ide0:0.deviceType = "cdrom-raw"
ide0:0.startConnected = "false"
ethernet0.present = "true"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:91:48:c7"
chipset.onlineStandby = "false"
guestOSAltName = "Red Hat Enterprise Linux 5 (32-Bit)"
guestOS = "rhel5"
uuid.bios = "50 11 5e 16 9b dc 49 d7-f1 71 53 c4 d7 f9 17 10"
snapshot.action = "keep"
sched.cpu.min = "0"
sched.cpu.units = "mhz"
sched.cpu.shares = "normal"
sched.mem.minsize = "0"
sched.mem.shares = "normal"
toolScripts.afterPowerOn = "true"
toolScripts.afterResume = "true"
toolScripts.beforeSuspend = "true"
toolScripts.beforePowerOff = "true"
scsi0:0.redo = ""
tools.syncTime = "false"
uuid.location = "56 4d b5 06 a2 bd fb eb-ae 86 f7 d8 49 27 d0 c4"
sched.cpu.max = "unlimited"
sched.swap.derivedName = "/vmfs/volumes/498076b2-02796c1a-ef5b-000ae484a6a3/Fedora11/Fedora11-7de040d8.vswp"
tools.remindInstall = "TRUE"
EOF

$ virsh -c esx://example.com domxml-from-native vmware-vmx demo.vmx
Enter username for example.com [root]:
Enter root password for example.com:
&lt;domain type='vmware'&gt;
  &lt;name&gt;Fedora11&lt;/name&gt;
  &lt;uuid&gt;50115e16-9bdc-49d7-f171-53c4d7f91710&lt;/uuid&gt;
  &lt;memory&gt;1048576&lt;/memory&gt;
  &lt;currentMemory&gt;1048576&lt;/currentMemory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='i686'&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;clock offset='utc'/&gt;
  &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
  &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
  &lt;on_crash&gt;destroy&lt;/on_crash&gt;
  &lt;devices&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='[local-storage] Fedora11/Fedora11.vmdk'/&gt;
      &lt;target dev='sda' bus='scsi'/&gt;
      &lt;address type='drive' controller='0' bus='0' unit='0'/&gt;
    &lt;/disk&gt;
    &lt;controller type='scsi' index='0' model='lsilogic'/&gt;
    &lt;interface type='bridge'&gt;
      &lt;mac address='00:50:56:91:48:c7'/&gt;
      &lt;source bridge='VM Network'/&gt;
    &lt;/interface&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
</pre>

    <h3><a id="xmlexport">Converting from domain XML config to VMware VMX config</a></h3>
    <p>
      The <code>virsh domxml-to-native</code> command provides a way to convert a
      domain XML config into a VMware VMX config.
    </p>
    <pre>
$ cat &gt; demo.xml &lt;&lt; EOF
&lt;domain type='vmware'&gt;
  &lt;name&gt;Fedora11&lt;/name&gt;
  &lt;uuid&gt;50115e16-9bdc-49d7-f171-53c4d7f91710&lt;/uuid&gt;
  &lt;memory&gt;1048576&lt;/memory&gt;
  &lt;currentMemory&gt;1048576&lt;/currentMemory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='x86_64'&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;devices&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='[local-storage] Fedora11/Fedora11.vmdk'/&gt;
      &lt;target dev='sda' bus='scsi'/&gt;
      &lt;address type='drive' controller='0' bus='0' unit='0'/&gt;
    &lt;/disk&gt;
    &lt;controller type='scsi' index='0' model='lsilogic'/&gt;
    &lt;interface type='bridge'&gt;
      &lt;mac address='00:50:56:25:48:c7'/&gt;
      &lt;source bridge='VM Network'/&gt;
    &lt;/interface&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
EOF

$ virsh -c esx://example.com domxml-to-native vmware-vmx demo.xml
Enter username for example.com [root]:
Enter root password for example.com:
config.version = "8"
virtualHW.version = "4"
guestOS = "other-64"
uuid.bios = "50 11 5e 16 9b dc 49 d7-f1 71 53 c4 d7 f9 17 10"
displayName = "Fedora11"
memsize = "1024"
numvcpus = "1"
scsi0.present = "true"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "true"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "/vmfs/volumes/local-storage/Fedora11/Fedora11.vmdk"
ethernet0.present = "true"
ethernet0.networkName = "VM Network"
ethernet0.connectionType = "bridged"
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:25:48:C7"
</pre>

    <h2><a id="xmlconfig">Example domain XML configs</a></h2>

    <h3>Fedora11 on x86_64</h3>
    <pre>
&lt;domain type='vmware'&gt;
  &lt;name&gt;Fedora11&lt;/name&gt;
  &lt;uuid&gt;50115e16-9bdc-49d7-f171-53c4d7f91710&lt;/uuid&gt;
  &lt;memory&gt;1048576&lt;/memory&gt;
  &lt;currentMemory&gt;1048576&lt;/currentMemory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='x86_64'&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;devices&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='[local-storage] Fedora11/Fedora11.vmdk'/&gt;
      &lt;target dev='sda' bus='scsi'/&gt;
      &lt;address type='drive' controller='0' bus='0' unit='0'/&gt;
    &lt;/disk&gt;
    &lt;controller type='scsi' index='0'/&gt;
    &lt;interface type='bridge'&gt;
      &lt;mac address='00:50:56:25:48:c7'/&gt;
      &lt;source bridge='VM Network'/&gt;
    &lt;/interface&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
</pre>

    <h2><a id="migration">Migration</a></h2>
    <p>
      A migration cannot be initiated on an ESX server directly, a VMware
      vCenter is necessary for this. The <code>vcenter</code> query
      parameter must be set either to the hostname or IP address of the
      vCenter managing the ESX server or to <code>*</code>. Setting it
      to <code>*</code> causes the driver to connect to the vCenter known to
      the ESX server. If the ESX server is not managed by a vCenter an error
      is reported.
    </p>
    <pre>
esx://example.com/?vcenter=example-vcenter.com
</pre>
    <p>
      Here's an example of how to migrate the domain <code>Fedora11</code> from
      ESX server <code>example-src.com</code> to ESX server
      <code>example-dst.com</code> implicitly involving vCenter
      <code>example-vcenter.com</code> using <code>virsh</code>.
    </p>
    <pre>
$ virsh -c esx://example-src.com/?vcenter=* migrate Fedora11 esx://example-dst.com/?vcenter=*
Enter username for example-src.com [root]:
Enter root password for example-src.com:
Enter username for example-vcenter.com [administrator]:
Enter administrator password for example-vcenter.com:
Enter username for example-dst.com [root]:
Enter root password for example-dst.com:
Enter username for example-vcenter.com [administrator]:
Enter administrator password for example-vcenter.com:
</pre>
    <p>
      <span class="since">Since 0.8.3</span> you can directly connect to a vCenter.
      This simplifies migration a bit. Here's the same migration as above but
      using <code>vpx://</code> connections and assuming both ESX servers are in
      datacenter <code>dc1</code> and aren't part of a cluster.
    </p>
    <pre>
$ virsh -c vpx://example-vcenter.com/dc1/example-src.com migrate Fedora11 vpx://example-vcenter.com/dc1/example-dst.com
Enter username for example-vcenter.com [administrator]:
Enter administrator password for example-vcenter.com:
Enter username for example-vcenter.com [administrator]:
Enter administrator password for example-vcenter.com:
</pre>

    <h2><a id="scheduler">Scheduler configuration</a></h2>
    <p>
      The driver exposes the ESX CPU scheduler. The parameters listed below
      are available to control the scheduler.
    </p>
    <dl>
      <dt><code>reservation</code></dt>
      <dd>
        The amount of CPU resource in MHz that is guaranteed to be
        available to the domain. Valid values are 0 and greater.
      </dd>
      <dt><code>limit</code></dt>
      <dd>
        The CPU utilization of the domain will be
        limited to this value in MHz, even if more CPU resources are
        available. If the limit is set to -1, the CPU utilization of the
        domain is unlimited. If the limit is not set to -1, it must be
        greater than or equal to the reservation.
      </dd>
      <dt><code>shares</code></dt>
      <dd>
        Shares are used to determine relative CPU
        allocation between domains. In general, a domain with more shares
        gets proportionally more of the CPU resource. Valid values are 0
        and greater. The special values -1, -2 and -3 represent the
        predefined shares levels <code>low</code>, <code>normal</code> and
        <code>high</code>.
      </dd>
    </dl>
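The validity rules above can be sketched in a few lines of Python. This is purely illustrative (the function names are made up, not part of libvirt's API); it just encodes the documented constraints on the three parameters.

```python
# Illustrative encoding of the documented ESX scheduler parameter rules.

SHARES_LEVELS = {-1: "low", -2: "normal", -3: "high"}

def shares_level(shares: int) -> str:
    """Map a shares value to its meaning; negative specials name a level."""
    if shares >= 0:
        return str(shares)
    if shares in SHARES_LEVELS:
        return SHARES_LEVELS[shares]
    raise ValueError("invalid shares value: %d" % shares)

def check_cpu_allocation(reservation: int, limit: int) -> bool:
    """reservation >= 0; limit is -1 (unlimited) or >= reservation."""
    if reservation < 0:
        return False
    return limit == -1 or limit >= reservation

print(shares_level(-2))                 # normal
print(check_cpu_allocation(500, 1000))  # True
print(check_cpu_allocation(2000, 1000)) # False
```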
    <h2><a id="tools">VMware tools</a></h2>
    <p>
      Some actions require installed VMware tools. If the VMware tools are
      not installed in the guest and one of the actions below is to be
      performed, the ESX server raises an error and the driver reports it.
    </p>
    <ul>
      <li>
        <code>virDomainGetHostname</code>
      </li>
      <li>
        <code>virDomainInterfaceAddresses</code> (only for the
        <code>VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT</code> source)
      </li>
      <li>
        <code>virDomainReboot</code>
      </li>
      <li>
        <code>virDomainShutdown</code>
      </li>
    </ul>

    <h2><a id="links">Links</a></h2>
    <ul>
      <li>
        <a href="https://www.vmware.com/support/developer/vc-sdk/">
          VMware vSphere Web Services SDK Documentation
        </a>
      </li>
      <li>
        <a href="https://www.vmware.com/pdf/esx3_memory.pdf">
          The Role of Memory in VMware ESX Server 3
        </a>
      </li>
      <li>
        <a href="https://www.sanbarrow.com/vmx.html">
          VMware VMX config parameters
        </a>
      </li>
      <li>
        <a href="https://www.vmware.com/pdf/vsp_4_pvscsi_perf.pdf">
          VMware ESX 4.0 PVSCSI Storage Performance
        </a>
      </li>
    </ul>
  </body></html>

docs/drvesx.rst

.. role:: since

============================
VMware ESX hypervisor driver
============================

.. contents::

The libvirt VMware ESX driver can manage VMware ESX/ESXi 3.5/4.x/5.x and VMware
GSX 2.0, also called VMware Server 2.0, and possibly later versions.
:since:`Since 0.8.3` the driver can also connect to a VMware vCenter 2.5/4.x/5.x
(VPX).

Project Links
-------------

- The `VMware ESX and GSX <https://www.vmware.com/>`__ hypervisors

Deployment pre-requisites
-------------------------

None. Any out-of-the-box installation of VPX/ESX(i)/GSX should work. No
preparations are required on the server side, and no libvirtd needs to be
installed on the ESX server. The driver uses version 2.5 of the remote, SOAP
based `VMware Virtual Infrastructure
API <https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/>`__
(VI API) to communicate with the ESX server, like the VMware Virtual
Infrastructure Client (VI client) does. Since version 4.0 this API is called
`VMware vSphere
API <https://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/>`__.

Connections to the VMware ESX driver
------------------------------------

Some example remote connection URIs for the driver are:

::

   vpx://example-vcenter.com/dc1/srv1     (VPX over HTTPS, select ESX server 'srv1' in datacenter 'dc1')
   esx://example-esx.com                  (ESX over HTTPS)
   gsx://example-gsx.com                  (GSX over HTTPS)
   esx://example-esx.com/?transport=http  (ESX over HTTP)
   esx://example-esx.com/?no_verify=1     (ESX over HTTPS, but doesn't verify the server's SSL certificate)

**Note**: In contrast to other drivers, the ESX driver is a client-side-only
driver. It connects to the ESX server using HTTP(S). Therefore, the `remote
transport mechanism <remote.html>`__ provided by the remote driver and libvirtd
will not work, and you cannot use URIs like ``esx+ssh://example.com``.

URI Format
~~~~~~~~~~

URIs have this general form (``[...]`` marks an optional part).

::

   type://[username@]hostname[:port]/[[folder/...]datacenter/[folder/...][cluster/]server][?extraparameters]

The ``type://`` is either ``esx://`` or ``gsx://`` or ``vpx://`` :since:`since
0.8.3` . The driver selects the default port depending on the ``type://``. For
``esx://`` and ``vpx://`` the default HTTPS port is 443, for ``gsx://`` it is
8333. If the port parameter is given, it overrides the default port.
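The default-port rule above can be sketched with Python's standard library (an illustration, not driver code; the real selection happens inside libvirt's C code).

```python
from urllib.parse import urlparse

# Default ports as documented: 443 for esx:// and vpx://, 8333 for gsx://.
# An explicit port in the URI overrides the default.
DEFAULT_PORTS = {"esx": 443, "vpx": 443, "gsx": 8333}

def effective_port(uri: str) -> int:
    parts = urlparse(uri)
    if parts.port is not None:
        return parts.port
    return DEFAULT_PORTS[parts.scheme]

print(effective_port("esx://example-esx.com"))       # 443
print(effective_port("gsx://example-gsx.com:9000"))  # 9000
```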
A ``vpx://`` connection is currently restricted to a single ESX server. This
might be relaxed in the future. The path part of the URI is used to specify the
datacenter and the ESX server in it. If the ESX server is part of a cluster then
the cluster has to be specified too.

An example: ESX server ``example-esx.com`` is managed by vCenter
``example-vcenter.com`` and part of cluster ``cluster1``. This cluster is part
of datacenter ``dc1``.

::

   vpx://example-vcenter.com/dc1/cluster1/example-esx.com

Datacenters and clusters can be organized in folders, which have to be specified
as well. The driver can handle folders :since:`since 0.9.7` .

::

   vpx://example-vcenter.com/folder1/dc1/folder2/example-esx.com

Extra parameters
^^^^^^^^^^^^^^^^

Extra parameters can be added to a URI as part of the query string (the part
following ``?``). A single parameter is formed by a ``name=value`` pair.
Multiple parameters are separated by ``&``.

::

   ?no_verify=1&auto_answer=1&proxy=socks://example-proxy.com:23456

The driver understands the extra parameters shown below.

+-----------------+-----------------------------+-----------------------------+
| Name            | Values                      | Meaning                     |
+=================+=============================+=============================+
| ``transport``   | ``http`` or ``https``       | Overrides the default HTTPS |
|                 |                             | transport. For ``esx://``   |
|                 |                             | and ``vpx://`` the default  |
|                 |                             | HTTP port is 80, for        |
|                 |                             | ``gsx://`` it is 8222.      |
+-----------------+-----------------------------+-----------------------------+
| ``vcenter``     | Hostname of a VMware        | In order to perform a       |
|                 | vCenter or ``*``            | migration the driver needs  |
|                 |                             | to know the VMware vCenter  |
|                 |                             | for the ESX server. If set  |
|                 |                             | to ``*``, the driver        |
|                 |                             | connects to the vCenter     |
|                 |                             | known to the ESX server.    |
|                 |                             | This parameter is useful    |
|                 |                             | when connecting to an ESX   |
|                 |                             | server only.                |
+-----------------+-----------------------------+-----------------------------+
| ``no_verify``   | ``0`` or ``1``              | If set to 1, this disables  |
|                 |                             | libcurl client checks of    |
|                 |                             | the server's SSL            |
|                 |                             | certificate. The default    |
|                 |                             | value is 0. See the         |
|                 |                             | `Certificates for HTTPS`_   |
|                 |                             | section for details.        |
+-----------------+-----------------------------+-----------------------------+
| ``auto_answer`` | ``0`` or ``1``              | If set to 1, the driver     |
|                 |                             | answers all                 |
|                 |                             | `Questions blocking tasks`_ |
|                 |                             | with the default answer. If |
|                 |                             | set to 0, questions are     |
|                 |                             | reported as errors. The     |
|                 |                             | default value is 0.         |
|                 |                             | :since:`Since 0.7.5` .      |
+-----------------+-----------------------------+-----------------------------+
| ``proxy``       | ``[type://]host[:port]``    | Allows specifying a proxy   |
|                 |                             | for HTTP and HTTPS          |
|                 |                             | communication.              |
|                 |                             | :since:`Since 0.8.2` . The  |
|                 |                             | optional ``type`` part may  |
|                 |                             | be one of: ``http``,        |
|                 |                             | ``socks``, ``socks4``,      |
|                 |                             | ``socks4a`` or ``socks5``.  |
|                 |                             | The default is ``http`` and |
|                 |                             | ``socks`` is synonymous for |
|                 |                             | ``socks5``. The optional    |
|                 |                             | ``port`` allows overriding  |
|                 |                             | the default port 1080.      |
+-----------------+-----------------------------+-----------------------------+
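Assembling such a query string can be sketched with Python's standard library. Note that ``urlencode`` percent-encodes the ``://`` inside the proxy value, which is an equivalent, valid form of the same query string; this is an illustration only.

```python
from urllib.parse import urlencode, parse_qs

# Build the extra-parameter query string from the table above. The
# parameter names come from the documentation; the helper is made up.

def build_query(params: dict) -> str:
    return urlencode(params)

query = build_query({"no_verify": 1, "auto_answer": 1,
                     "proxy": "socks://example-proxy.com:23456"})
print("esx://example-esx.com/?" + query)

# Round-trip: parse the parameters back out of the query string.
print(parse_qs(query))
```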
Authentication
~~~~~~~~~~~~~~

In order to perform any useful operation the driver needs to log into the ESX
server. Therefore, only ``virConnectOpenAuth`` can be used to connect to an ESX
server, ``virConnectOpen`` and ``virConnectOpenReadOnly`` don't work. To log
into an ESX server or vCenter the driver will request credentials using the
callback passed to the ``virConnectOpenAuth`` function. The driver passes the
hostname as challenge parameter to the callback. This enables the callback to
distinguish between requests for the ESX server and vCenter.

**Note**: During the ongoing driver development, testing is done using an
unrestricted ``root`` account. Problems may occur if you use a restricted
account. Detailed testing with restricted accounts has not been done yet.

Certificates for HTTPS
~~~~~~~~~~~~~~~~~~~~~~

By default the ESX driver uses HTTPS to communicate with an ESX server. Proper
HTTPS communication requires correctly configured SSL certificates. These
certificates are different from the ones libvirt uses for `secure communication
over TLS <remote.html>`__ to a libvirtd on a remote server.

By default the driver tries to verify the server's SSL certificate using the CA
certificate pool installed on your client computer. With an out-of-the-box
installed ESX server this won't work, because a newly installed ESX server uses
auto-generated self-signed certificates. Those are signed by a CA certificate
that is typically not known to your client computer and libvirt will report an
error like this one:

::

   error: internal error curl_easy_perform() returned an error: Peer certificate cannot be authenticated with known CA certificates (60)

There are two ways to solve this problem:

- Use the ``no_verify=1`` `Extra parameters`_ to disable server
|
||||
certificate verification.
|
||||
- Generate new SSL certificates signed by a CA known to your client computer
|
||||
and replace the original ones on your ESX server. See the section *Replace a
|
||||
Default Certificate with a CA-Signed Certificate* in the `ESX Configuration
|
||||
Guide <https://www.vmware.com/pdf/vsphere4/r40/vsp_40_esx_server_config.pdf>`__
|
||||
|
||||
Connection problems
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
There are also other causes for connection problems than those related to
|
||||
`Certificates for HTTPS`_ .
|
||||
|
||||
- As stated before the ESX driver doesn't need the `remote transport
|
||||
mechanism <remote.html>`__ provided by the remote driver and libvirtd, nor
does the ESX driver support it. Therefore, using a URI including a transport
in the scheme won't work. Only URIs as described in `URI Format`_ are
supported by the ESX driver. Here's a collection of possible error messages:

::

   $ virsh -c esx+tcp://example.com/
   error: unable to connect to libvirtd at 'example.com': Connection refused

::

   $ virsh -c esx+tls://example.com/
   error: Cannot access CA certificate '/etc/pki/CA/cacert.pem': No such file or directory

::

   $ virsh -c esx+ssh://example.com/
   error: cannot recv data: ssh: connect to host example.com port 22: Connection refused

::

   $ virsh -c esx+ssh://example.com/
   error: cannot recv data: Resource temporarily unavailable

- :since:`Since 0.7.0` libvirt contains the ESX driver. Earlier versions of
  libvirt will report a misleading error about missing certificates when you
  try to connect to an ESX server.

  ::

     $ virsh -c esx://example.com/
     error: Cannot access CA certificate '/etc/pki/CA/cacert.pem': No such file or directory

  Don't let this error message confuse you. Setting up certificates as
  described on the `tls certificates <kbase/tlscerts.html>`__ page does not
  help, as this is not a certificate related problem.

  To fix this problem you need to update your libvirt to 0.7.0 or newer. You
  may also see this error when you use a libvirt version that contains the
  ESX driver but you or your distro disabled the ESX driver during
  compilation. :since:`Since 0.8.3` the error message has been improved in
  this case:

  ::

     $ virsh -c esx://example.com/
     error: invalid argument in libvirt was built without the 'esx' driver

Questions blocking tasks
------------------------

Some methods of the VI API start tasks, for example ``PowerOnVM_Task()``. Such
tasks may be blocked by questions if the ESX server detects an issue with the
domain that requires user interaction. The ESX driver cannot prompt the user
to answer a question, because libvirt doesn't have an API for this.

The VI API provides the ``AnswerVM()`` method to programmatically answer a
question. So the driver has two options for handling such a situation: either
answer the question with the default answer, or report the question as an
error and cancel the blocked task if possible. The ``auto_answer`` query
parameter (see `URI Format`_) controls the answering behavior.
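For example, assuming the ``auto_answer`` parameter takes the value ``1`` to enable automatic answering (see `URI Format`_ for the authoritative description of this parameter), a connection URI could look like:

```
esx://example.com/?auto_answer=1
```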
Specialities in the domain XML config
-------------------------------------

There are several specialities in the domain XML config for ESX domains.

Restrictions
~~~~~~~~~~~~

There are some restrictions for some values of the domain XML config. The
driver will complain if these restrictions are violated.

- Memory size has to be a multiple of 4096
- The number of virtual CPUs has to be 1 or a multiple of 2. :since:`Since
  4.10.0` any number of vCPUs is supported.
- Valid MAC address prefixes are ``00:0c:29`` and ``00:50:56``. :since:`Since
  0.7.6` arbitrary `MAC addresses`_ are supported.
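As a rough illustration, the pre-4.10.0 restrictions above could be checked like this. The helper is hypothetical (not part of libvirt) and assumes the memory value is given in KiB, as in the domain XML ``<memory>`` element:

```python
def check_esx_restrictions(memory_kib, vcpus):
    """Return a list of violations of the (pre-4.10.0) ESX driver limits."""
    problems = []
    if memory_kib % 4096 != 0:
        problems.append("memory size must be a multiple of 4096")
    if vcpus != 1 and vcpus % 2 != 0:
        problems.append("vCPU count must be 1 or a multiple of 2")
    return problems

print(check_esx_restrictions(1048576, 2))  # valid config -> []
```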
Datastore references
~~~~~~~~~~~~~~~~~~~~

Storage is managed in datastores. VMware uses a special path format to
reference files in a datastore. Basically, the datastore name is put into
square brackets in front of the path.

::

   [datastore] directory/filename

To define a new domain the driver converts the domain XML into a VMware VMX
file and uploads it to a datastore known to the ESX server. Because multiple
datastores may be known to an ESX server the driver needs to decide to which
datastore the VMX file should be uploaded. The driver deduces this information
from the path of the source of the first file-based harddisk listed in the
domain XML.
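The datastore path format shown above splits mechanically into a datastore name and a relative path. The following sketch (a hypothetical helper, not libvirt code) shows the structure the driver has to deduce:

```python
def parse_datastore_path(path):
    """Split '[datastore] directory/filename' into its components.

    Mirrors the VMware path format described above; raises ValueError
    for paths that don't start with a '[datastore]' reference.
    """
    if not path.startswith("["):
        raise ValueError("missing datastore reference: %s" % path)
    datastore, _, relpath = path[1:].partition("]")
    return datastore, relpath.strip()

print(parse_datastore_path("[local-storage] Fedora11/Fedora11.vmdk"))
# → ('local-storage', 'Fedora11/Fedora11.vmdk')
```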
MAC addresses
~~~~~~~~~~~~~

VMware has registered two MAC address prefixes for domains: ``00:0c:29`` and
``00:50:56``. These prefixes are split into ranges for different purposes.

+--------------------------------------+--------------------------------------+
| Range                                | Purpose                              |
+======================================+======================================+
| ``00:0c:29:00:00:00`` -              | An ESX server autogenerates MAC      |
| ``00:0c:29:ff:ff:ff``                | addresses from this range if the VMX |
|                                      | file doesn't contain a MAC address   |
|                                      | when trying to start a domain.       |
+--------------------------------------+--------------------------------------+
| ``00:50:56:00:00:00`` -              | MAC addresses from this range can be |
| ``00:50:56:3f:ff:ff``                | manually assigned by the user in the |
|                                      | VI client.                           |
+--------------------------------------+--------------------------------------+
| ``00:50:56:80:00:00`` -              | A VI client autogenerates MAC        |
| ``00:50:56:bf:ff:ff``                | addresses from this range for newly  |
|                                      | defined domains.                     |
+--------------------------------------+--------------------------------------+

The VMX files generated by the ESX driver always contain a MAC address,
because libvirt generates a random one if an interface element in the domain
XML file lacks a MAC address. :since:`Since 0.7.6` the ESX driver sets the
prefix for generated MAC addresses to ``00:0c:29``. Before 0.7.6 the
``00:50:56`` prefix was used. Sometimes this resulted in the generation of
out-of-range MAC addresses that were rejected by the ESX server.

Also :since:`since 0.7.6` any MAC address outside these ranges can be used.
For such MAC addresses the ESX server-side check is disabled in the VMX file
to stop the ESX server from rejecting out-of-predefined-range MAC addresses.

::

   ethernet0.checkMACAddress = "false"

:since:`Since 6.6.0`, one can force libvirt to keep the provided MAC address
when it's in the reserved VMware range by adding a ``type="static"`` attribute
to the ``<mac/>`` element. Note that this attribute is useless if the provided
MAC address is outside of the reserved VMware ranges.
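The ranges in the table above can be expressed as a small classifier. This is an illustrative sketch only; the function name and labels are invented, not libvirt API:

```python
def classify_vmware_mac(mac):
    """Classify a MAC address against the VMware ranges listed above."""
    octets = [int(part, 16) for part in mac.split(":")]
    if octets[:3] == [0x00, 0x0C, 0x29]:
        return "ESX autogenerated"
    if octets[:3] == [0x00, 0x50, 0x56]:
        if octets[3] <= 0x3F:
            return "VI client manually assigned"
        if 0x80 <= octets[3] <= 0xBF:
            return "VI client autogenerated"
    # such addresses need ethernet0.checkMACAddress = "false" in the VMX file
    return "outside predefined ranges"

print(classify_vmware_mac("00:50:56:25:48:c7"))  # → VI client manually assigned
```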
Available hardware
~~~~~~~~~~~~~~~~~~

VMware ESX supports different models of SCSI controllers and network cards.

SCSI controller models
^^^^^^^^^^^^^^^^^^^^^^

``auto``
   This isn't an actual controller model. If specified, the ESX driver tries
   to detect the SCSI controller model referenced in the ``.vmdk`` file and
   use it. Autodetection fails when a SCSI controller has multiple disks
   attached and the SCSI controller models referenced in the ``.vmdk`` files
   are inconsistent. :since:`Since 0.8.3`
``buslogic``
   BusLogic SCSI controller for older guests.
``lsilogic``
   LSI Logic SCSI controller for recent guests.
``lsisas1068``
   LSI Logic SAS 1068 controller. :since:`Since 0.8.0`
``vmpvscsi``
   Special VMware Paravirtual SCSI controller, requires VMware tools inside
   the guest. See `VMware KB1010398 <https://kb.vmware.com/kb/1010398>`__ for
   details. :since:`Since 0.8.3`

Here is a domain XML snippet:

::

   ...
   <disk type='file' device='disk'>
     <source file='[local-storage] Fedora11/Fedora11.vmdk'/>
     <target dev='sda' bus='scsi'/>
     <address type='drive' controller='0' bus='0' unit='0'/>
   </disk>
   <controller type='scsi' index='0' model='lsilogic'/>
   ...

The controller element is supported :since:`since 0.8.2`. Prior to this,
``<driver name='lsilogic'/>`` was abused to specify the SCSI controller
model. This attribute usage is now deprecated.

::

   ...
   <disk type='file' device='disk'>
     <driver name='lsilogic'/>
     <source file='[local-storage] Fedora11/Fedora11.vmdk'/>
     <target dev='sda' bus='scsi'/>
   </disk>
   ...
Network card models
^^^^^^^^^^^^^^^^^^^

``vlance``
   AMD PCnet32 network card for older guests.
``vmxnet``, ``vmxnet2``, ``vmxnet3``
   Special VMware VMXnet network card, requires VMware tools inside the
   guest. See `VMware KB1001805 <https://kb.vmware.com/kb/1001805>`__ for
   details.
``e1000``
   Intel E1000 network card for recent guests.

Here is a domain XML snippet:

::

   ...
   <interface type='bridge'>
     <mac address='00:50:56:25:48:c7'/>
     <source bridge='VM Network'/>
     <model type='e1000'/>
   </interface>
   ...
Import and export of domain XML configs
---------------------------------------

The ESX driver currently supports a native config format known as
``vmware-vmx`` to handle VMware VMX configs.

Converting from VMware VMX config to domain XML config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh domxml-from-native`` command provides a way to convert an
existing VMware VMX config into a domain XML config that can then be used by
libvirt.

::

   $ cat > demo.vmx << EOF
   #!/usr/bin/vmware
   config.version = "8"
   virtualHW.version = "4"
   floppy0.present = "false"
   nvram = "Fedora11.nvram"
   deploymentPlatform = "windows"
   virtualHW.productCompatibility = "hosted"
   tools.upgrade.policy = "useGlobal"
   powerType.powerOff = "default"
   powerType.powerOn = "default"
   powerType.suspend = "default"
   powerType.reset = "default"
   displayName = "Fedora11"
   extendedConfigFile = "Fedora11.vmxf"
   scsi0.present = "true"
   scsi0.sharedBus = "none"
   scsi0.virtualDev = "lsilogic"
   memsize = "1024"
   scsi0:0.present = "true"
   scsi0:0.fileName = "/vmfs/volumes/498076b2-02796c1a-ef5b-000ae484a6a3/Fedora11/Fedora11.vmdk"
   scsi0:0.deviceType = "scsi-hardDisk"
   ide0:0.present = "true"
   ide0:0.clientDevice = "true"
   ide0:0.deviceType = "cdrom-raw"
   ide0:0.startConnected = "false"
   ethernet0.present = "true"
   ethernet0.networkName = "VM Network"
   ethernet0.addressType = "vpx"
   ethernet0.generatedAddress = "00:50:56:91:48:c7"
   chipset.onlineStandby = "false"
   guestOSAltName = "Red Hat Enterprise Linux 5 (32-Bit)"
   guestOS = "rhel5"
   uuid.bios = "50 11 5e 16 9b dc 49 d7-f1 71 53 c4 d7 f9 17 10"
   snapshot.action = "keep"
   sched.cpu.min = "0"
   sched.cpu.units = "mhz"
   sched.cpu.shares = "normal"
   sched.mem.minsize = "0"
   sched.mem.shares = "normal"
   toolScripts.afterPowerOn = "true"
   toolScripts.afterResume = "true"
   toolScripts.beforeSuspend = "true"
   toolScripts.beforePowerOff = "true"
   scsi0:0.redo = ""
   tools.syncTime = "false"
   uuid.location = "56 4d b5 06 a2 bd fb eb-ae 86 f7 d8 49 27 d0 c4"
   sched.cpu.max = "unlimited"
   sched.swap.derivedName = "/vmfs/volumes/498076b2-02796c1a-ef5b-000ae484a6a3/Fedora11/Fedora11-7de040d8.vswp"
   tools.remindInstall = "TRUE"
   EOF

   $ virsh -c esx://example.com domxml-from-native vmware-vmx demo.vmx
   Enter username for example.com [root]:
   Enter root password for example.com:
   <domain type='vmware'>
     <name>Fedora11</name>
     <uuid>50115e16-9bdc-49d7-f171-53c4d7f91710</uuid>
     <memory>1048576</memory>
     <currentMemory>1048576</currentMemory>
     <vcpu>1</vcpu>
     <os>
       <type arch='i686'>hvm</type>
     </os>
     <clock offset='utc'/>
     <on_poweroff>destroy</on_poweroff>
     <on_reboot>restart</on_reboot>
     <on_crash>destroy</on_crash>
     <devices>
       <disk type='file' device='disk'>
         <source file='[local-storage] Fedora11/Fedora11.vmdk'/>
         <target dev='sda' bus='scsi'/>
         <address type='drive' controller='0' bus='0' unit='0'/>
       </disk>
       <controller type='scsi' index='0' model='lsilogic'/>
       <interface type='bridge'>
         <mac address='00:50:56:91:48:c7'/>
         <source bridge='VM Network'/>
       </interface>
     </devices>
   </domain>
Converting from domain XML config to VMware VMX config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh domxml-to-native`` command provides a way to convert a domain XML
config into a VMware VMX config.

::

   $ cat > demo.xml << EOF
   <domain type='vmware'>
     <name>Fedora11</name>
     <uuid>50115e16-9bdc-49d7-f171-53c4d7f91710</uuid>
     <memory>1048576</memory>
     <currentMemory>1048576</currentMemory>
     <vcpu>1</vcpu>
     <os>
       <type arch='x86_64'>hvm</type>
     </os>
     <devices>
       <disk type='file' device='disk'>
         <source file='[local-storage] Fedora11/Fedora11.vmdk'/>
         <target dev='sda' bus='scsi'/>
         <address type='drive' controller='0' bus='0' unit='0'/>
       </disk>
       <controller type='scsi' index='0' model='lsilogic'/>
       <interface type='bridge'>
         <mac address='00:50:56:25:48:c7'/>
         <source bridge='VM Network'/>
       </interface>
     </devices>
   </domain>
   EOF

   $ virsh -c esx://example.com domxml-to-native vmware-vmx demo.xml
   Enter username for example.com [root]:
   Enter root password for example.com:
   config.version = "8"
   virtualHW.version = "4"
   guestOS = "other-64"
   uuid.bios = "50 11 5e 16 9b dc 49 d7-f1 71 53 c4 d7 f9 17 10"
   displayName = "Fedora11"
   memsize = "1024"
   numvcpus = "1"
   scsi0.present = "true"
   scsi0.virtualDev = "lsilogic"
   scsi0:0.present = "true"
   scsi0:0.deviceType = "scsi-hardDisk"
   scsi0:0.fileName = "/vmfs/volumes/local-storage/Fedora11/Fedora11.vmdk"
   ethernet0.present = "true"
   ethernet0.networkName = "VM Network"
   ethernet0.connectionType = "bridged"
   ethernet0.addressType = "static"
   ethernet0.address = "00:50:56:25:48:C7"
Example domain XML configs
--------------------------

Fedora11 on x86_64
~~~~~~~~~~~~~~~~~~

::

   <domain type='vmware'>
     <name>Fedora11</name>
     <uuid>50115e16-9bdc-49d7-f171-53c4d7f91710</uuid>
     <memory>1048576</memory>
     <currentMemory>1048576</currentMemory>
     <vcpu>1</vcpu>
     <os>
       <type arch='x86_64'>hvm</type>
     </os>
     <devices>
       <disk type='file' device='disk'>
         <source file='[local-storage] Fedora11/Fedora11.vmdk'/>
         <target dev='sda' bus='scsi'/>
         <address type='drive' controller='0' bus='0' unit='0'/>
       </disk>
       <controller type='scsi' index='0'/>
       <interface type='bridge'>
         <mac address='00:50:56:25:48:c7'/>
         <source bridge='VM Network'/>
       </interface>
     </devices>
   </domain>
Migration
---------

A migration cannot be initiated on an ESX server directly; a VMware vCenter
is necessary for this. The ``vcenter`` query parameter must be set either to
the hostname or IP address of the vCenter managing the ESX server or to
``*``. Setting it to ``*`` causes the driver to connect to the vCenter known
to the ESX server. If the ESX server is not managed by a vCenter an error is
reported.

::

   esx://example.com/?vcenter=example-vcenter.com

Here's an example of how to migrate the domain ``Fedora11`` from ESX server
``example-src.com`` to ESX server ``example-dst.com``, implicitly involving
vCenter ``example-vcenter.com``, using ``virsh``.

::

   $ virsh -c esx://example-src.com/?vcenter=* migrate Fedora11 esx://example-dst.com/?vcenter=*
   Enter username for example-src.com [root]:
   Enter root password for example-src.com:
   Enter username for example-vcenter.com [administrator]:
   Enter administrator password for example-vcenter.com:
   Enter username for example-dst.com [root]:
   Enter root password for example-dst.com:
   Enter username for example-vcenter.com [administrator]:
   Enter administrator password for example-vcenter.com:

:since:`Since 0.8.3` you can directly connect to a vCenter. This simplifies
migration a bit. Here's the same migration as above, but using ``vpx://``
connections and assuming both ESX servers are in datacenter ``dc1`` and
aren't part of a cluster.

::

   $ virsh -c vpx://example-vcenter.com/dc1/example-src.com migrate Fedora11 vpx://example-vcenter.com/dc1/example-dst.com
   Enter username for example-vcenter.com [administrator]:
   Enter administrator password for example-vcenter.com:
   Enter username for example-vcenter.com [administrator]:
   Enter administrator password for example-vcenter.com:
Scheduler configuration
-----------------------

The driver exposes the ESX CPU scheduler. The parameters listed below are
available to control the scheduler.

``reservation``
   The amount of CPU resource in MHz that is guaranteed to be available to
   the domain. Valid values are 0 and greater.
``limit``
   The CPU utilization of the domain will be limited to this value in MHz,
   even if more CPU resources are available. If the limit is set to -1, the
   CPU utilization of the domain is unlimited. If the limit is not set to -1,
   it must be greater than or equal to the reservation.
``shares``
   Shares are used to determine relative CPU allocation between domains. In
   general, a domain with more shares gets proportionally more of the CPU
   resource. Valid values are 0 and greater. The special values -1, -2 and -3
   represent the predefined shares levels ``low``, ``normal`` and ``high``.
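The special ``shares`` values map onto the named levels as described above. The helper below is an illustrative sketch (the function is hypothetical, not part of libvirt):

```python
SHARES_LEVELS = {-1: "low", -2: "normal", -3: "high"}

def shares_level(value):
    """Translate a 'shares' parameter value into a human-readable level."""
    if value >= 0:
        return str(value)          # plain relative share count
    if value in SHARES_LEVELS:
        return SHARES_LEVELS[value]
    raise ValueError("invalid shares value: %d" % value)

print(shares_level(-2))  # → normal
```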
VMware tools
------------

Some actions require installed VMware tools. If the VMware tools are not
installed in the guest and one of the actions below is to be performed, the
ESX server raises an error and the driver reports it.

- ``virDomainGetHostname``
- ``virDomainInterfaceAddresses`` (only for the
  ``VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT`` source)
- ``virDomainReboot``
- ``virDomainShutdown``
Links
-----

- `VMware vSphere Web Services SDK
  Documentation <https://www.vmware.com/support/developer/vc-sdk/>`__
- `The Role of Memory in VMware ESX Server
  3 <https://www.vmware.com/pdf/esx3_memory.pdf>`__
- `VMware VMX config parameters <https://www.sanbarrow.com/vmx.html>`__
- `VMware ESX 4.0 PVSCSI Storage
  Performance <https://www.vmware.com/pdf/vsp_4_pvscsi_perf.pdf>`__
docs/drvhyperv.rst (new file)
===================================
Microsoft Hyper-V hypervisor driver
===================================

.. contents::

The libvirt Microsoft Hyper-V driver can manage Hyper-V 2012 R2 and newer.

Project Links
-------------

- The `Microsoft Hyper-V <https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-on-windows-server>`__
  hypervisor

Connections to the Microsoft Hyper-V driver
-------------------------------------------

Some example remote connection URIs for the driver are:

::

   hyperv://example-hyperv.com                  (over HTTPS)
   hyperv://example-hyperv.com/?transport=http  (over HTTP)

**Note**: In contrast to other drivers, the Hyper-V driver is a
client-side-only driver. It connects to the Hyper-V server using
WS-Management over HTTP(S). Therefore, the `remote transport mechanism
<remote.html>`__ provided by the remote driver and libvirtd will not work,
and you cannot use URIs like ``hyperv+ssh://example.com``.

URI Format
~~~~~~~~~~

URIs have this general form (``[...]`` marks an optional part).

::

   hyperv://[username@]hostname[:port]/[?extraparameters]

The default HTTPS port is 5986. If the port parameter is given, it overrides
the default port.
Extra parameters
^^^^^^^^^^^^^^^^

Extra parameters can be added to a URI as part of the query string (the part
following ``?``). A single parameter is formed by a ``name=value`` pair.
Multiple parameters are separated by ``&``.

::

   ?transport=http

The driver understands the extra parameters shown below.

+---------------+-----------------------+-------------------------------------+
| Name          | Values                | Meaning                             |
+===============+=======================+=====================================+
| ``transport`` | ``http`` or ``https`` | Overrides the default HTTPS         |
|               |                       | transport. The default HTTP port is |
|               |                       | 5985.                               |
+---------------+-----------------------+-------------------------------------+
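Putting the ``transport`` parameter and the default ports together, connection setup amounts to a choice like this. The helper is an illustrative sketch, not the driver's actual code:

```python
DEFAULT_PORTS = {"https": 5986, "http": 5985}

def hyperv_port(transport="https", port=None):
    """Return the effective WS-Management port for a hyperv:// URI.

    An explicit port in the URI overrides the transport's default.
    """
    if port is not None:
        return port
    return DEFAULT_PORTS[transport]

print(hyperv_port())                  # → 5986
print(hyperv_port(transport="http"))  # → 5985
```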
Authentication
~~~~~~~~~~~~~~

In order to perform any useful operation the driver needs to log into the
Hyper-V server. Therefore, only ``virConnectOpenAuth`` can be used to connect
to a Hyper-V server; ``virConnectOpen`` and ``virConnectOpenReadOnly`` don't
work. To log into a Hyper-V server the driver will request credentials using
the callback passed to the ``virConnectOpenAuth`` function. The driver passes
the hostname as challenge parameter to the callback.

**Note**: Currently only ``Basic`` authentication is supported by libvirt.
This method is disabled by default on the Hyper-V server and can be enabled
via the WinRM command-line tool.

::

   winrm set winrm/config/service/auth @{Basic="true"}

To allow ``Basic`` authentication with HTTP transport WinRM needs to allow
unencrypted communication. This can be enabled via the WinRM command-line
tool. However, this is not the recommended communication mode.

::

   winrm set winrm/config/service @{AllowUnencrypted="true"}
Version Numbers
---------------

Since Microsoft's build numbers are almost always over 1000, this driver
needs to pack the value differently compared to the format defined by
``virConnectGetVersion``. To preserve all of the digits, the following format
is used:

::

   major * 100000000 + minor * 1000000 + micro

This results in ``virsh version`` producing unexpected output.

.. list-table::
   :header-rows: 1

   * - Windows Release
     - Kernel Version
     - libvirt Representation

   * - Windows Server 2012 R2
     - 6.3.9600
     - 603.9.600

   * - Windows Server 2016
     - 10.0.14393
     - 1000.14.393

   * - Windows Server 2019
     - 10.0.17763
     - 1000.17.763
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>LXC container driver</h1>

    <ul id="toc"></ul>

    <p>
      The libvirt LXC driver manages "Linux Containers". At their simplest,
      containers can just be thought of as a collection of processes,
      separated from the main host processes via a set of resource namespaces
      and constrained via control groups resource tunables. The libvirt LXC
      driver has no dependency on the LXC userspace tools hosted on
      sourceforge.net. It directly utilizes the relevant kernel features to
      build the container environment. This allows for sharing of many
      libvirt technologies across both the QEMU/KVM and LXC drivers, in
      particular sVirt for mandatory access control, auditing of operations,
      integration with control groups and many other features.
    </p>
    <h2><a id="cgroups">Control groups Requirements</a></h2>

    <p>
      In order to control the resource usage of processes inside containers,
      the libvirt LXC driver requires that certain cgroups controllers are
      mounted on the host OS. The minimum required controllers are 'cpuacct',
      'memory' and 'devices', while recommended extra controllers are 'cpu',
      'freezer' and 'blkio'. Libvirt will not mount the cgroups filesystem
      itself, leaving this up to the init system to take care of. Systemd
      will do the right thing in this respect, while for other init systems
      the <code>cgconfig</code> init service will be required. For further
      information, consult the general libvirt
      <a href="cgroups.html">cgroups documentation</a>.
    </p>
    <h2><a id="namespaces">Namespace requirements</a></h2>

    <p>
      In order to separate processes inside a container from those in the
      primary "host" OS environment, the libvirt LXC driver requires that
      certain kernel namespaces are compiled in. Libvirt currently requires
      the 'mount', 'ipc', 'pid', and 'uts' namespaces to be available. If
      separate network interfaces are desired, then the 'net' namespace is
      required. If the guest configuration declares a
      <a href="formatdomain.html#elementsOSContainer">UID or GID mapping</a>,
      the 'user' namespace will be enabled to apply these. <strong>A suitably
      configured UID/GID mapping is a pre-requisite to making containers
      secure, in the absence of sVirt confinement.</strong>
    </p>
    <h2><a id="init">Default container setup</a></h2>

    <h3><a id="cliargs">Command line arguments</a></h3>

    <p>
      When the container "init" process is started, it will typically not be
      given any command line arguments (e.g. the equivalent of the bootloader
      args visible in <code>/proc/cmdline</code>). If any arguments are
      desired, then they must be explicitly set in the container XML
      configuration via one or more <code>initarg</code> elements. For
      example, to run <code>systemd --unit emergency.service</code> the
      following XML would be used:
    </p>

    <pre>
      <os>
        <type arch='x86_64'>exe</type>
        <init>/bin/systemd</init>
        <initarg>--unit</initarg>
        <initarg>emergency.service</initarg>
      </os>
    </pre>
<h3><a id="envvars">Environment variables</a></h3>
|
||||
|
||||
<p>
|
||||
When the container "init" process is started, it will be given several useful
|
||||
environment variables. The following standard environment variables are mandated
|
||||
by <a href="https://www.freedesktop.org/wiki/Software/systemd/ContainerInterface">systemd container interface</a>
|
||||
to be provided by all container technologies on Linux.
|
||||
</p>
|
||||
|
||||
<dl>
|
||||
<dt><code>container</code></dt>
|
||||
<dd>The fixed string <code>libvirt-lxc</code> to identify libvirt as the creator</dd>
|
||||
<dt><code>container_uuid</code></dt>
|
||||
<dd>The UUID assigned to the container by libvirt</dd>
|
||||
<dt><code>PATH</code></dt>
|
||||
<dd>The fixed string <code>/bin:/usr/bin</code></dd>
|
||||
<dt><code>TERM</code></dt>
|
||||
<dd>The fixed string <code>linux</code></dd>
|
||||
<dt><code>HOME</code></dt>
|
||||
<dd>The fixed string <code>/</code></dd>
|
||||
</dl>
|
||||
|
||||
<p>
|
||||
In addition to the standard variables, the following libvirt specific
|
||||
environment variables are also provided
|
||||
</p>
|
||||
|
||||
<dl>
|
||||
<dt><code>LIBVIRT_LXC_NAME</code></dt>
|
||||
<dd>The name assigned to the container by libvirt</dd>
|
||||
<dt><code>LIBVIRT_LXC_UUID</code></dt>
|
||||
<dd>The UUID assigned to the container by libvirt</dd>
|
||||
<dt><code>LIBVIRT_LXC_CMDLINE</code></dt>
|
||||
<dd>The unparsed command line arguments specified in the container configuration.
|
||||
Use of this is discouraged, in favour of passing arguments directly to the
|
||||
container init process via the <code>initarg</code> config element.</dd>
|
||||
</dl>
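An init script inside the container can use these variables directly; a minimal sketch (the "demo" fallback string is illustrative, not something libvirt provides):

```shell
#!/bin/sh
# Minimal sketch: detect whether we are running under libvirt LXC by
# checking the standard "container" variable, then report the
# libvirt-assigned name. The "demo" fallback is illustrative only.
if [ "$container" = "libvirt-lxc" ]; then
    echo "libvirt LXC container: ${LIBVIRT_LXC_NAME:-demo}"
else
    echo "not running under libvirt LXC"
fi
```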
<h3><a id="fsmounts">Filesystem mounts</a></h3>

<p>
In the absence of any explicit configuration, the container will
inherit the host OS filesystem mounts. A number of mount points will
be made read-only, or re-mounted with new instances to provide
container specific data. The following special mounts are set up
by libvirt:
</p>

<ul>
<li><code>/dev</code> a new "tmpfs" pre-populated with authorized device nodes</li>
<li><code>/dev/pts</code> a new private "devpts" instance for console devices</li>
<li><code>/sys</code> the host "sysfs" instance remounted read-only</li>
<li><code>/proc</code> a new instance of the "proc" filesystem</li>
<li><code>/proc/sys</code> the host "/proc/sys" bind-mounted read-only</li>
<li><code>/sys/fs/selinux</code> the host "selinux" instance remounted read-only</li>
<li><code>/sys/fs/cgroup/NNNN</code> the host cgroups controllers bind-mounted to
only expose the sub-tree associated with the container</li>
<li><code>/proc/meminfo</code> a FUSE-backed file reflecting the memory limits of the container</li>
</ul>
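From inside a running container the resulting mount table can be inspected via <code>/proc/mounts</code>; a sketch, assuming a Linux proc filesystem:

```shell
# List the mount points libvirt typically replaces, as seen from inside
# the container (fields printed: mount point and filesystem type).
awk '$2 == "/dev" || $2 == "/dev/pts" || $2 == "/proc" || $2 == "/sys" {print $2, $3}' /proc/mounts
```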
<h3><a id="devnodes">Device nodes</a></h3>

<p>
The container init process will be started with the <code>CAP_MKNOD</code>
capability removed and blocked from re-acquiring it. As such it will
not be able to create any device nodes in <code>/dev</code> or anywhere
else in its filesystems. Libvirt itself will take care of pre-populating
the <code>/dev</code> filesystem with any devices that the container
is authorized to use. The devices currently made available
to all containers are:
</p>

<ul>
<li><code>/dev/zero</code></li>
<li><code>/dev/null</code></li>
<li><code>/dev/full</code></li>
<li><code>/dev/random</code></li>
<li><code>/dev/urandom</code></li>
<li><code>/dev/stdin</code> symlinked to <code>/proc/self/fd/0</code></li>
<li><code>/dev/stdout</code> symlinked to <code>/proc/self/fd/1</code></li>
<li><code>/dev/stderr</code> symlinked to <code>/proc/self/fd/2</code></li>
<li><code>/dev/fd</code> symlinked to <code>/proc/self/fd</code></li>
<li><code>/dev/ptmx</code> symlinked to <code>/dev/pts/ptmx</code></li>
<li><code>/dev/console</code> symlinked to <code>/dev/pts/0</code></li>
</ul>

<p>
In addition, for every console defined in the guest configuration,
a symlink will be created from <code>/dev/ttyN</code> to
the corresponding <code>/dev/pts/M</code> pseudo TTY device. The
first console will be <code>/dev/tty1</code>, with further consoles
numbered incrementally from there.
</p>

<p>
Since <code>/dev/ttyN</code> and <code>/dev/console</code> are linked
to pseudo TTY devices, the TTY of a login session inside the container
is a pts device. The PAM module <code>pam_securetty</code> may therefore
prevent the root user from logging in to the container. To allow root
logins, add the relevant pts devices to the container's
<code>/etc/securetty</code> file.
</p>
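The extra entries can be appended with a short shell loop. This sketch writes to a scratch file named <code>demo-securetty</code>; inside the container the target would be <code>/etc/securetty</code>:

```shell
# Append the pts devices used by libvirt consoles to the securetty list,
# skipping entries that are already present. "demo-securetty" is a
# scratch file standing in for the container's /etc/securetty.
securetty=demo-securetty
for dev in pts/0 pts/1 pts/2; do
    grep -qx "$dev" "$securetty" 2>/dev/null || echo "$dev" >> "$securetty"
done
```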
<p>
Further block or character devices will be made available to containers
depending on their configuration.
</p>

<h2><a id="security">Security considerations</a></h2>

<p>
The libvirt LXC driver is fairly flexible in how it can be configured,
and as such does not enforce a requirement for strict security
separation between a container and the host. This allows it to be used
in scenarios where only resource control capabilities are important,
and resource sharing is desired. Applications wishing to ensure secure
isolation between a container and the host must ensure that they are
writing a suitable configuration.
</p>

<h3><a id="securenetworking">Network isolation</a></h3>

<p>
If the guest configuration does not list any network interfaces,
the <code>network</code> namespace will not be activated, and thus
the container will see all the host's network interfaces. This will
allow apps in the container to bind to/connect from TCP/UDP addresses
and ports from the host OS. It also allows applications to access
UNIX domain sockets associated with the host OS which are in the
abstract namespace. If access to UNIX domain sockets in the abstract
namespace is not wanted, then applications should set the
<code><privnet/></code> flag in the
<code><features>....</features></code> element.
</p>

<h3><a id="securefs">Filesystem isolation</a></h3>

<p>
If the guest configuration does not list any filesystems, then
the container will be set up with a root filesystem that matches
the host's root filesystem. As noted earlier, only a few locations
such as <code>/dev</code>, <code>/proc</code> and <code>/sys</code>
will be altered. This means that, in the absence of restrictions
from sVirt, a process running as user/group N:M inside the container
will be able to access almost exactly the same files as a process
running as user/group N:M in the host.
</p>

<p>
There are multiple options for restricting this. It is possible to
simply map the existing root filesystem through to the container in
read-only mode. Alternatively a completely separate root filesystem
can be configured for the guest. In both cases, further sub-mounts
can be applied to customize the content that is made visible. Note
that in the absence of sVirt controls, it is still possible for the
root user in a container to unmount any sub-mounts applied. The user
namespace feature can also be used to restrict access to files based
on the UID/GID mappings.
</p>

<p>
Sharing the host filesystem tree also allows applications to access
UNIX domain sockets associated with the host OS which are in the
filesystem namespace. It should be noted that a number of init
systems, including at least <code>systemd</code> and <code>upstart</code>,
have UNIX domain sockets which are used to control their operation.
Thus, if the directory/filesystem holding their UNIX domain socket is
exposed to the container, it will be possible for a user in the container
to invoke operations on the init service in the same way it could if
outside the container. This also applies to other applications in the
host which use UNIX domain sockets in the filesystem, such as DBus,
libvirtd, and many more. If this is not desired, then applications
should either specify the UID/GID mapping in the configuration to
enable user namespaces and thus block access to the UNIX domain socket
based on permissions, or should ensure the relevant directories have
a bind mount to hide them. This is particularly important for the
<code>/run</code> or <code>/var/run</code> directories.
</p>

<h3><a id="secureusers">User and group isolation</a></h3>

<p>
If the guest configuration does not list any ID mapping, then the
user and group IDs used inside the container will match those used
outside the container. In addition, the capabilities associated with
a process in the container will confer the same privileges they would
for a process in the host. This has obvious implications for security,
since a root user inside the container will be able to access any
file owned by root that is visible to the container, and perform more
or less any privileged kernel operation. In the absence of additional
protection from sVirt, this means that the root user inside a container
is effectively as powerful as the root user in the host. There is no
security isolation of the root user.
</p>

<p>
The ID mapping facility was introduced to allow for stricter control
over the privileges of users inside the container. It allows apps to
define rules such as "user ID 0 in the container maps to user ID 1000
in the host". In addition the privileges associated with capabilities
are somewhat reduced so that they cannot be used to escape from the
container environment. A full description of user namespaces is outside
the scope of this document, however LWN has
<a href="https://lwn.net/Articles/532593/">a good write-up on the topic</a>.
From the libvirt point of view, the key thing to remember is that defining
an ID mapping for users and groups in the container XML configuration
causes libvirt to activate the user namespace feature.
</p>
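The mapping described above ("user ID 0 in the container maps to user ID 1000 in the host") is expressed with the <idmap> element in the domain XML; a sketch covering the first 10 IDs (the count value is an arbitrary illustration):

```xml
<idmap>
  <uid start='0' target='1000' count='10'/>
  <gid start='0' target='1000' count='10'/>
</idmap>
```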
<h2><a id="configFiles">Location of configuration files</a></h2>

<p>
The LXC driver comes with sane default values. However, during its
initialization it reads a configuration file which allows the system
administrator to override some of those defaults. The file is located
at <code>/etc/libvirt/lxc.conf</code>.
</p>
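A sketch of the kind of overrides the file accepts; the option names below come from a stock <code>lxc.conf</code>, but the shipped file's comments are the authoritative reference:

```
# /etc/libvirt/lxc.conf (excerpt)

# Route container log output through libvirtd's own logging
log_with_libvirtd = 1

# Override the security driver used for container confinement
security_driver = "selinux"
```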
<h2><a id="activation">Systemd Socket Activation Integration</a></h2>

<p>
The libvirt LXC driver provides the ability to pass across pre-opened file
descriptors when starting LXC guests. This allows libvirt LXC to support
systemd's <a href="http://0pointer.de/blog/projects/socket-activated-containers.html">socket
activation capability</a>, where an incoming client connection
in the host OS will trigger the startup of a container, which runs another
copy of systemd which gets passed the server socket, and then activates the
actual service handler in the container.
</p>

<p>
Let us assume that you already have an LXC guest created, running
a systemd instance as PID 1 inside the container, which has an
SSHD service configured. The goal is to automatically activate
the container when the first SSH connection is made. The first
step is to create a couple of unit files for the host OS systemd
instance. The <code>/etc/systemd/system/mycontainer.service</code>
unit file specifies how systemd will start the libvirt LXC container:
</p>

<pre>
[Unit]
Description=My little container

[Service]
ExecStart=/usr/bin/virsh -c lxc:///system start --pass-fds 3 mycontainer
ExecStop=/usr/bin/virsh -c lxc:///system destroy mycontainer
Type=oneshot
RemainAfterExit=yes
KillMode=none
</pre>

<p>
The <code>--pass-fds 3</code> argument specifies that file
descriptor number 3, which <code>virsh</code> inherits from systemd,
is to be passed into the container. Since <code>virsh</code> will
exit immediately after starting the container, the <code>RemainAfterExit</code>
and <code>KillMode</code> settings must be altered from their defaults.
</p>

<p>
Next, the <code>/etc/systemd/system/mycontainer.socket</code> unit
file is created to get the host systemd to listen on port 23 for
TCP connections. When this unit file is activated by the first
incoming connection, it will cause the <code>mycontainer.service</code>
unit to be activated with the FD corresponding to the listening TCP
socket passed in as FD 3.
</p>

<pre>
[Unit]
Description=The SSH socket of my little container

[Socket]
ListenStream=23
</pre>

<p>
Port 23 was picked here so that the container doesn't conflict
with the host's SSH, which is on the normal port 22. That's it
in terms of host-side configuration.
</p>

<p>
Inside the container, the <code>/etc/systemd/system/sshd.socket</code>
unit file must be created:
</p>

<pre>
[Unit]
Description=SSH Socket for Per-Connection Servers

[Socket]
ListenStream=23
Accept=yes
</pre>

<p>
The <code>ListenStream</code> value listed in this unit file must
match the value used in the host file. When systemd in the container
receives the pre-opened FD from libvirt during container startup, it
looks at the <code>ListenStream</code> values to figure out which
FD to give to which service. The actual service to start is defined
by a correspondingly named <code>/etc/systemd/system/sshd@.service</code>:
</p>

<pre>
[Unit]
Description=SSH Per-Connection Server for %I

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
</pre>

<p>
Finally, make sure this SSH service is set to start on boot of the container,
by running the following command inside the container:
</p>

<pre>
# mkdir -p /etc/systemd/system/sockets.target.wants/
# ln -s /etc/systemd/system/sshd.socket /etc/systemd/system/sockets.target.wants/
</pre>

<p>
This example shows how to activate the container based on an incoming
SSH connection. If the container was also configured to have an httpd
service, it may be desirable to activate it upon either an httpd or an
sshd connection attempt. In this case, the <code>mycontainer.socket</code>
file in the host would simply list multiple socket ports. Inside the
container a separate <code>xxxxx.socket</code> file would need to be
created for each service, with a corresponding <code>ListenStream</code>
value set.
</p>
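For instance, a host-side socket unit covering both services might look like this (port 8080 for the httpd service is an arbitrary choice for the sketch):

```
[Unit]
Description=The SSH and HTTP sockets of my little container

[Socket]
ListenStream=23
ListenStream=8080
```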
<!--
<h2>Container configuration</h2>

<h3>Init process</h3>

<h3>Console devices</h3>

<h3>Filesystem devices</h3>

<h3>Disk devices</h3>

<h3>Block devices</h3>

<h3>USB devices</h3>

<h3>Character devices</h3>

<h3>Network devices</h3>
-->

<h2>Container security</h2>

<h3>sVirt SELinux</h3>

<p>
In the absence of the "user" namespace being used, containers cannot
be considered secure against exploits of the host OS. The sVirt SELinux
driver provides a way to secure containers even when the "user" namespace
is not used. The cost is that writing a policy to allow execution of
an arbitrary OS is not practical. The SELinux sVirt policy is typically
tailored to work with a simpler application confinement use case,
as provided by the "libvirt-sandbox" project.
</p>

<h3>Auditing</h3>

<p>
The LXC driver is integrated with libvirt's auditing subsystem, which
causes audit messages to be logged whenever there is an operation
performed against a container which has an impact on host resources.
For example, start/stop and device hotplug operations will all log audit
messages providing details about what action occurred and any resources
associated with it. There are the following three types of audit messages:
</p>

<ul>
<li><code>VIRT_MACHINE_ID</code> - details of the SELinux process and
image security labels assigned to the container.</li>
<li><code>VIRT_CONTROL</code> - details of an action / operation
performed against a container. There are the following types of
operation:
<ul>
<li><code>op=start</code> - a container has been started. Provides
the machine name, uuid and PID of the <code>libvirt_lxc</code>
controller process</li>
<li><code>op=init</code> - the init PID of the container has been
started. Provides the machine name, uuid and PID of the
<code>libvirt_lxc</code> controller process and PID of the
init process (in the host PID namespace)</li>
<li><code>op=stop</code> - a container has been stopped. Provides
the machine name and uuid</li>
</ul>
</li>
<li><code>VIRT_RESOURCE</code> - details of a host resource
associated with a container action.</li>
</ul>

<h3>Device access</h3>

<p>
All containers are launched with the CAP_MKNOD capability cleared
and removed from the bounding set. Libvirt will ensure that the
/dev filesystem is pre-populated with all devices that a container
is allowed to use. In addition, the cgroup "device" controller is
configured to block read/write/mknod from all devices except those
that a container is authorized to use.
</p>
<h2><a id="exconfig">Example configurations</a></h2>

<h3>Example config version 1</h3>

<pre>
<domain type='lxc'>
  <name>vm1</name>
  <memory>500000</memory>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <vcpu>1</vcpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty' />
  </devices>
</domain>
</pre>

<p>
In the <emulator> element, be sure you specify the correct path
to libvirt_lxc, if it does not live in /usr/libexec on your system.
</p>

<p>
The next example assumes there is a private root filesystem
(perhaps hand-crafted using busybox, or installed from media,
debootstrap, whatever) under /opt/vm-1-root:
</p>

<pre>
<domain type='lxc'>
  <name>vm1</name>
  <memory>32768</memory>
  <os>
    <type>exe</type>
    <init>/init</init>
  </os>
  <vcpu>1</vcpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount'>
      <source dir='/opt/vm-1-root'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty' />
  </devices>
</domain>
</pre>
<h2><a id="capabilities">Altering the available capabilities</a></h2>

<p>
By default the libvirt LXC driver drops some capabilities, among which CAP_MKNOD.
However <span class="since">since 1.2.6</span> libvirt can be told to keep or
drop some capabilities using a domain configuration like the following:
</p>
<pre>
...
<features>
  <capabilities policy='default'>
    <mknod state='on'/>
    <sys_chroot state='off'/>
  </capabilities>
</features>
...
</pre>
<p>
The capabilities children elements are named after the capabilities as defined in
<code>man 7 capabilities</code>. An <code>off</code> state tells libvirt to drop the
capability, while an <code>on</code> state will force the capability to be kept
even though it would be dropped by default.
</p>
<p>
The <code>policy</code> attribute can be one of <code>default</code>, <code>allow</code>
or <code>deny</code>. It defines the default rule for capabilities: either keep the
default behavior, that is dropping a few selected capabilities, or keep all capabilities,
or drop all capabilities. The advantage of <code>allow</code> and <code>deny</code> is that
they guarantee that all capabilities will be kept (or removed) even if new ones are added
later.
</p>
<p>
The following example drops all capabilities but CAP_MKNOD:
</p>
<pre>
...
<features>
  <capabilities policy='deny'>
    <mknod state='on'/>
  </capabilities>
</features>
...
</pre>
<p>
Note that allowing capabilities that are normally dropped by default can seriously
affect the security of the container and the host.
</p>
<h2><a id="share">Inherit namespaces</a></h2>

<p>
Libvirt allows you to inherit namespaces from an existing container or
process, just as the LXC tools or Docker allow sharing of the network
namespace. The elements below can be used to share the required
namespaces; to share only one namespace, simply omit the others. The
<code>netns</code> option is specific to <code>sharenet</code>: it can
be used when an existing network namespace should be used rather than
creating a new one for the container. In that case the
<code>privnet</code> option will be ignored.
</p>
<pre>
<domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'>
...
<lxc:namespace>
  <lxc:sharenet type='netns' value='red'/>
  <lxc:shareuts type='name' value='container1'/>
  <lxc:shareipc type='pid' value='12345'/>
</lxc:namespace>
</domain>
</pre>

<p>
The use of namespace passthrough requires libvirt >= 1.2.19.
</p>
<h2><a id="usage">Container usage / management</a></h2>

<p>
As with any libvirt virtualization driver, LXC containers can be
managed via a wide variety of libvirt based tools. At the lowest
level the <code>virsh</code> command can be used to perform many
tasks, by passing the <code>-c lxc:///system</code> argument. As an
alternative to repeating the URI with every command, the <code>LIBVIRT_DEFAULT_URI</code>
environment variable can be set to <code>lxc:///system</code>. The
examples that follow outline some common operations with virsh
and LXC. For further details about usage of virsh, consult its
manual page.
</p>
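For example:

```shell
# Make lxc:///system the default connection, so -c can be omitted.
export LIBVIRT_DEFAULT_URI=lxc:///system
# Now "virsh list" behaves like "virsh -c lxc:///system list".
```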
<h3><a id="usageSave">Defining (saving) container configuration</a></h3>

<p>
The <code>virsh define</code> command takes an XML configuration
document and loads it into libvirt, saving the configuration on disk:
</p>

<pre>
# virsh -c lxc:///system define myguest.xml
</pre>

<h3><a id="usageView">Viewing container configuration</a></h3>

<p>
The <code>virsh dumpxml</code> command can be used to view the
current XML configuration of a container. By default the XML
output reflects the current state of the container. If the
container is running, it is possible to explicitly request the
persistent configuration, instead of the current live configuration,
using the <code>--inactive</code> flag:
</p>

<pre>
# virsh -c lxc:///system dumpxml myguest
</pre>

<h3><a id="usageStart">Starting containers</a></h3>

<p>
The <code>virsh start</code> command can be used to start a
container from a previously defined persistent configuration:
</p>

<pre>
# virsh -c lxc:///system start myguest
</pre>

<p>
It is also possible to start so-called "transient" containers,
which do not require a persistent configuration to be saved
by libvirt, using the <code>virsh create</code> command.
</p>

<pre>
# virsh -c lxc:///system create myguest.xml
</pre>

<h3><a id="usageStop">Stopping containers</a></h3>

<p>
The <code>virsh shutdown</code> command can be used
to request a graceful shutdown of the container. By default
this command will first attempt to send a message to the
init process via the <code>/dev/initctl</code> device node.
If no such device node exists, then it will send SIGTERM
to PID 1 inside the container.
</p>

<pre>
# virsh -c lxc:///system shutdown myguest
</pre>

<p>
If the container does not respond to the graceful shutdown
request, it can be forcibly stopped using the <code>virsh destroy</code>
command:
</p>

<pre>
# virsh -c lxc:///system destroy myguest
</pre>

<h3><a id="usageReboot">Rebooting a container</a></h3>

<p>
The <code>virsh reboot</code> command can be used
to request a graceful reboot of the container. By default
this command will first attempt to send a message to the
init process via the <code>/dev/initctl</code> device node.
If no such device node exists, then it will send SIGHUP
to PID 1 inside the container.
</p>

<pre>
# virsh -c lxc:///system reboot myguest
</pre>

<h3><a id="usageDelete">Undefining (deleting) a container configuration</a></h3>

<p>
The <code>virsh undefine</code> command can be used to delete the
persistent configuration of a container. If the guest is currently
running, this will turn it into a "transient" guest.
</p>

<pre>
# virsh -c lxc:///system undefine myguest
</pre>

<h3><a id="usageConnect">Connecting to a container console</a></h3>

<p>
The <code>virsh console</code> command can be used to connect
to the text console associated with a container.
</p>

<pre>
# virsh -c lxc:///system console myguest
</pre>

<p>
If the container has been configured with multiple console devices,
then the <code>--devname</code> argument can be used to choose the
console to connect to. In LXC, multiple consoles will be named
'console0', 'console1', 'console2', etc.
</p>

<pre>
# virsh -c lxc:///system console myguest --devname console1
</pre>

<h3><a id="usageEnter">Running commands in a container</a></h3>

<p>
The <code>virsh lxc-enter-namespace</code> command can be used
to enter the namespaces and security context of a container
and then execute an arbitrary command.
</p>

<pre>
# virsh -c lxc:///system lxc-enter-namespace myguest -- /bin/ls -al /dev
</pre>

<h3><a id="usageTop">Monitoring container utilization</a></h3>

<p>
The <code>virt-top</code> command can be used to monitor the
activity and resource utilization of all containers on a
host:
</p>

<pre>
# virt-top -c lxc:///system
</pre>

<h3><a id="usageConvert">Converting LXC container configuration</a></h3>

<p>
The <code>virsh domxml-from-native</code> command can be used to convert
most of the LXC container configuration into a domain XML fragment:
</p>

<pre>
# virsh -c lxc:///system domxml-from-native lxc-tools /var/lib/lxc/myguest/config
</pre>

<p>
This conversion has some limitations due to the fact that the
domxml-from-native command output has to be independent of the host. Here
are a few things to take care of before converting:
</p>

<ul>
<li>
Replace the fstab file referenced by <tt>lxc.mount</tt> with the corresponding
lxc.mount.entry lines.
</li>
<li>
Replace all relative sizes of tmpfs mount entries with absolute sizes. Also
make sure that tmpfs entries all have a size option (default is 50%).
</li>
<li>
Define <tt>lxc.cgroup.memory.limit_in_bytes</tt> to properly limit the memory
available to the container. The conversion will use 64MiB as the default.
</li>
</ul>
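As a sketch, a fragment of an lxc-tools config prepared along those lines might look like this (the paths and sizes are illustrative):

```
# Explicit mount entry instead of a referenced fstab file,
# with an absolute tmpfs size rather than a relative one
lxc.mount.entry = tmpfs /run tmpfs size=64m 0 0

# Explicit memory limit (256 MiB) so the conversion does not
# fall back to the 64MiB default
lxc.cgroup.memory.limit_in_bytes = 268435456
```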
</body>
</html>

670 docs/drvlxc.rst Normal file
@@ -0,0 +1,670 @@
.. role:: since
|
||||
|
||||
====================
|
||||
LXC container driver
|
||||
====================
|
||||
|
||||
.. contents::
|
||||
|
||||
The libvirt LXC driver manages "Linux Containers". At their simplest, containers
|
||||
can just be thought of as a collection of processes, separated from the main
|
||||
host processes via a set of resource namespaces and constrained via control
|
||||
groups resource tunables. The libvirt LXC driver has no dependency on the LXC
|
||||
userspace tools hosted on sourceforge.net. It directly utilizes the relevant
|
||||
kernel features to build the container environment. This allows for sharing of
|
||||
many libvirt technologies across both the QEMU/KVM and LXC drivers. In
|
||||
particular sVirt for mandatory access control, auditing of operations,
|
||||
integration with control groups and many other features.
|
||||
|
||||
Control groups Requirements
|
||||
---------------------------
|
||||
|
||||
In order to control the resource usage of processes inside containers, the
|
||||
libvirt LXC driver requires that certain cgroups controllers are mounted on the
|
||||
host OS. The minimum required controllers are 'cpuacct', 'memory' and 'devices',
|
||||
while recommended extra controllers are 'cpu', 'freezer' and 'blkio'. Libvirt
|
||||
will not mount the cgroups filesystem itself, leaving this up to the init system
|
||||
to take care of. Systemd will do the right thing in this respect, while for
|
||||
other init systems the ``cgconfig`` init service will be required. For further
|
||||
information, consult the general libvirt `cgroups
|
||||
documentation <cgroups.html>`__.
|
||||
|
||||
Namespace requirements
|
||||
----------------------
|
||||
|
||||
In order to separate processes inside a container from those in the primary
|
||||
"host" OS environment, the libvirt LXC driver requires that certain kernel
|
||||
namespaces are compiled in. Libvirt currently requires the 'mount', 'ipc',
|
||||
'pid', and 'uts' namespaces to be available. If separate network interfaces are
|
||||
desired, then the 'net' namespace is required. If the guest configuration
|
||||
declares a `UID or GID mapping <formatdomain.html#elementsOSContainer>`__, the
|
||||
'user' namespace will be enabled to apply these. **A suitably configured UID/GID
|
||||
mapping is a pre-requisite to making containers secure, in the absence of sVirt
|
||||
confinement.**

Default container setup
-----------------------

Command line arguments
~~~~~~~~~~~~~~~~~~~~~~

When the container "init" process is started, it will typically not be given any
command line arguments (e.g. the equivalent of the bootloader args visible in
``/proc/cmdline``). If any arguments are desired, they must be explicitly set in
the container XML configuration via one or more ``initarg`` elements. For
example, to run ``systemd --unit emergency.service`` would use the following XML

::

   <os>
     <type arch='x86_64'>exe</type>
     <init>/bin/systemd</init>
     <initarg>--unit</initarg>
     <initarg>emergency.service</initarg>
   </os>

Environment variables
~~~~~~~~~~~~~~~~~~~~~

When the container "init" process is started, it will be given several useful
environment variables. The following standard environment variables are mandated
by the `systemd container
interface <https://www.freedesktop.org/wiki/Software/systemd/ContainerInterface>`__
to be provided by all container technologies on Linux.

``container``
   The fixed string ``libvirt-lxc`` to identify libvirt as the creator
``container_uuid``
   The UUID assigned to the container by libvirt
``PATH``
   The fixed string ``/bin:/usr/bin``
``TERM``
   The fixed string ``linux``
``HOME``
   The fixed string ``/``

In addition to the standard variables, the following libvirt-specific
environment variables are also provided

``LIBVIRT_LXC_NAME``
   The name assigned to the container by libvirt
``LIBVIRT_LXC_UUID``
   The UUID assigned to the container by libvirt
``LIBVIRT_LXC_CMDLINE``
   The unparsed command line arguments specified in the container configuration.
   Use of this is discouraged, in favour of passing arguments directly to the
   container init process via the ``initarg`` config element.
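
An init script inside the container can use these variables, for instance, to
detect whether it is running under libvirt LXC at all; a minimal illustrative
sketch:

::

   #!/bin/sh
   if [ "$container" = "libvirt-lxc" ]; then
       echo "started by libvirt LXC as $LIBVIRT_LXC_NAME ($LIBVIRT_LXC_UUID)"
   fi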

Filesystem mounts
~~~~~~~~~~~~~~~~~

In the absence of any explicit configuration, the container will inherit the
host OS filesystem mounts. A number of mount points will be made read-only, or
re-mounted with new instances to provide container-specific data. The following
special mounts are set up by libvirt

- ``/dev`` a new "tmpfs" pre-populated with authorized device nodes
- ``/dev/pts`` a new private "devpts" instance for console devices
- ``/sys`` the host "sysfs" instance remounted read-only
- ``/proc`` a new instance of the "proc" filesystem
- ``/proc/sys`` the host "/proc/sys" bind-mounted read-only
- ``/sys/fs/selinux`` the host "selinux" instance remounted read-only
- ``/sys/fs/cgroup/NNNN`` the host cgroups controllers bind-mounted to only
  expose the sub-tree associated with the container
- ``/proc/meminfo`` a FUSE-backed file reflecting the memory limits of the
  container

Device nodes
~~~~~~~~~~~~

The container init process will be started with the ``CAP_MKNOD`` capability
removed and blocked from re-acquiring it. As such it will not be able to create
any device nodes in ``/dev`` or anywhere else in its filesystems. Libvirt itself
will take care of pre-populating the ``/dev`` filesystem with any devices that
the container is authorized to use. The current devices that will be made
available to all containers are

- ``/dev/zero``
- ``/dev/null``
- ``/dev/full``
- ``/dev/random``
- ``/dev/urandom``
- ``/dev/stdin`` symlinked to ``/proc/self/fd/0``
- ``/dev/stdout`` symlinked to ``/proc/self/fd/1``
- ``/dev/stderr`` symlinked to ``/proc/self/fd/2``
- ``/dev/fd`` symlinked to ``/proc/self/fd``
- ``/dev/ptmx`` symlinked to ``/dev/pts/ptmx``
- ``/dev/console`` symlinked to ``/dev/pts/0``

In addition, for every console defined in the guest configuration, a symlink
will be created from ``/dev/ttyN`` to the corresponding ``/dev/pts/M`` pseudo
TTY device. The first console will be ``/dev/tty1``, with further consoles
numbered incrementally from there.

Since ``/dev/ttyN`` and ``/dev/console`` are symlinked to pts devices, the TTY
device seen by a login program is a pts device. The ``pam_securetty`` module may
therefore prevent the root user from logging in to the container. To allow root
logins, add the relevant pts device to the container's ``/etc/securetty`` file.
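
For example, to permit root logins on the first console, run the following
inside the container:

::

   # echo pts/0 >> /etc/securetty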

Further block or character devices will be made available to containers
depending on their configuration.

Security considerations
-----------------------

The libvirt LXC driver is fairly flexible in how it can be configured, and as
such does not enforce a requirement for strict security separation between a
container and the host. This allows it to be used in scenarios where only
resource control capabilities are important, and resource sharing is desired.
Applications wishing to ensure secure isolation between a container and the host
must ensure that they are writing a suitable configuration.

Network isolation
~~~~~~~~~~~~~~~~~

If the guest configuration does not list any network interfaces, the ``network``
namespace will not be activated, and thus the container will see all the host's
network interfaces. This will allow apps in the container to bind to/connect
from TCP/UDP addresses and ports on the host OS. It also allows applications to
access UNIX domain sockets associated with the host OS which are in the abstract
namespace. If access to UNIX domain sockets in the abstract namespace is not
wanted, then applications should set the ``<privnet/>`` flag in the
``<features>....</features>`` element.
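
A minimal fragment showing the flag in place (the rest of the domain
configuration is omitted):

::

   <features>
     <privnet/>
   </features>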

Filesystem isolation
~~~~~~~~~~~~~~~~~~~~

If the guest configuration does not list any filesystems, then the container
will be set up with a root filesystem that matches the host's root filesystem.
As noted earlier, only a few locations such as ``/dev``, ``/proc`` and ``/sys``
will be altered. This means that, in the absence of restrictions from sVirt, a
process running as user/group N:M inside the container will be able to access
almost exactly the same files as a process running as user/group N:M in the
host.

There are multiple options for restricting this. It is possible to simply map
the existing root filesystem through to the container in read-only mode.
Alternatively a completely separate root filesystem can be configured for the
guest. In both cases, further sub-mounts can be applied to customize the content
that is made visible. Note that in the absence of sVirt controls, it is still
possible for the root user in a container to unmount any sub-mounts applied. The
user namespace feature can also be used to restrict access to files based on the
UID/GID mappings.
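
A read-only passthrough of the host root filesystem can be configured with a
fragment along these lines (see the domain format documentation for the full
``<filesystem>`` syntax):

::

   <filesystem type='mount'>
     <source dir='/'/>
     <target dir='/'/>
     <readonly/>
   </filesystem>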

Sharing the host filesystem tree also allows applications to access UNIX domain
sockets associated with the host OS which are in the filesystem namespace. It
should be noted that a number of init systems, including at least ``systemd``
and ``upstart``, have UNIX domain sockets which are used to control their
operation. Thus, if the directory/filesystem holding their UNIX domain socket is
exposed to the container, it will be possible for a user in the container to
invoke operations on the init service in the same way it could if outside the
container. This also applies to other applications in the host which use UNIX
domain sockets in the filesystem, such as DBus, Libvirtd, and many more. If this
is not desired, then applications should either specify the UID/GID mapping in
the configuration to enable user namespaces and thus block access to the UNIX
domain socket based on permissions, or should ensure the relevant directories
have a bind mount to hide them. This is particularly important for the ``/run``
or ``/var/run`` directories.

User and group isolation
~~~~~~~~~~~~~~~~~~~~~~~~

If the guest configuration does not list any ID mapping, then the user and group
IDs used inside the container will match those used outside the container. In
addition, the capabilities associated with a process in the container will
confer the same privileges they would for a process in the host. This has
obvious implications for security, since a root user inside the container will
be able to access any file owned by root that is visible to the container, and
perform more or less any privileged kernel operation. In the absence of
additional protection from sVirt, this means that the root user inside a
container is effectively as powerful as the root user in the host. There is no
security isolation of the root user.

The ID mapping facility was introduced to allow for stricter control over the
privileges of users inside the container. It allows apps to define rules such as
"user ID 0 in the container maps to user ID 1000 in the host". In addition the
privileges associated with capabilities are somewhat reduced so that they cannot
be used to escape from the container environment. A full description of user
namespaces is outside the scope of this document, however LWN has `a good
write-up on the topic <https://lwn.net/Articles/532593/>`__. From the libvirt
point of view, the key thing to remember is that defining an ID mapping for
users and groups in the container XML configuration causes libvirt to activate
the user namespace feature.
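
For example, a mapping of container user/group IDs 0..9999 onto host IDs
100000..109999 (the target range here is illustrative) is expressed with the
``<idmap>`` element in the domain XML:

::

   <idmap>
     <uid start='0' target='100000' count='10000'/>
     <gid start='0' target='100000' count='10000'/>
   </idmap>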

Location of configuration files
-------------------------------

The LXC driver comes with sane default values. However, during its
initialization it reads a configuration file which allows the system
administrator to override some of those defaults. The file is located at
``/etc/libvirt/lxc.conf``
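
As an illustration, a couple of settings that can typically be tuned there
(shown commented out at their defaults; the comments in the shipped
``lxc.conf`` are the authoritative reference):

::

   # Flag controlling whether libvirt_lxc logs through libvirtd
   #log_with_libvirtd = 0

   # Security driver used for confining containers, e.g. "selinux"
   #security_driver = "selinux"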

Systemd Socket Activation Integration
-------------------------------------

The libvirt LXC driver provides the ability to pass across pre-opened file
descriptors when starting LXC guests. This allows for libvirt LXC to support
systemd's `socket activation
capability <https://0pointer.de/blog/projects/socket-activated-containers.html>`__,
where an incoming client connection in the host OS will trigger the startup of a
container, which runs another copy of systemd which gets passed the server
socket, and then activates the actual service handler in the container.

Let us assume that you already have an LXC guest created, running a systemd
instance as PID 1 inside the container, which has an SSHD service configured.
The goal is to automatically activate the container when the first SSH
connection is made. The first step is to create a couple of unit files for the
host OS systemd instance. The ``/etc/systemd/system/mycontainer.service`` unit
file specifies how systemd will start the libvirt LXC container

::

   [Unit]
   Description=My little container

   [Service]
   ExecStart=/usr/bin/virsh -c lxc:///system start --pass-fds 3 mycontainer
   ExecStop=/usr/bin/virsh -c lxc:///system destroy mycontainer
   Type=oneshot
   RemainAfterExit=yes
   KillMode=none

The ``--pass-fds 3`` argument specifies that the file descriptor number 3 that
``virsh`` inherits from systemd is to be passed into the container. Since
``virsh`` will exit immediately after starting the container, the
``RemainAfterExit`` and ``KillMode`` settings must be altered from their
defaults.

Next, the ``/etc/systemd/system/mycontainer.socket`` unit file is created to get
the host systemd to listen on port 23 for TCP connections. When this unit file
is activated by the first incoming connection, it will cause the
``mycontainer.service`` unit to be activated with the FD corresponding to the
listening TCP socket passed in as FD 3.

::

   [Unit]
   Description=The SSH socket of my little container

   [Socket]
   ListenStream=23

Port 23 was picked here so that the container doesn't conflict with the host's
SSH which is on the normal port 22. That's it in terms of host side
configuration.
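
With the unit names assumed above, the socket unit then needs to be loaded and
started on the host (on older systemd releases without ``--now``, run
``systemctl enable`` and ``systemctl start`` separately):

::

   # systemctl daemon-reload
   # systemctl enable --now mycontainer.socket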

Inside the container, the ``/etc/systemd/system/sshd.socket`` unit file must be
created

::

   [Unit]
   Description=SSH Socket for Per-Connection Servers

   [Socket]
   ListenStream=23
   Accept=yes

The ``ListenStream`` value listed in this unit file must match the value used in
the host file. When systemd in the container receives the pre-opened FD from
libvirt during container startup, it looks at the ``ListenStream`` values to
figure out which FD to give to which service. The actual service to start is
defined by a correspondingly named ``/etc/systemd/system/sshd@.service`` unit
file

::

   [Unit]
   Description=SSH Per-Connection Server for %I

   [Service]
   ExecStart=-/usr/sbin/sshd -i
   StandardInput=socket

Finally, make sure this SSH service is set to start on boot of the container, by
running the following commands inside the container:

::

   # mkdir -p /etc/systemd/system/sockets.target.wants/
   # ln -s /etc/systemd/system/sshd.socket /etc/systemd/system/sockets.target.wants/

This example shows how to activate the container based on an incoming SSH
connection. If the container was also configured to have an httpd service, it
may be desirable to activate it upon either an httpd or an sshd connection
attempt. In this case, the ``mycontainer.socket`` file in the host would simply
list multiple socket ports. Inside the container a separate ``xxxxx.socket``
file would need to be created for each service, with a corresponding
``ListenStream`` value set.

Container security
------------------

sVirt SELinux
~~~~~~~~~~~~~

In the absence of the "user" namespace being used, containers cannot be
considered secure against exploits of the host OS. The sVirt SELinux driver
provides a way to secure containers even when the "user" namespace is not used.
The cost is that writing a policy to allow execution of an arbitrary OS is not
practical. The SELinux sVirt policy is typically tailored to work with a simpler
application confinement use case, as provided by the "libvirt-sandbox" project.

Auditing
~~~~~~~~

The LXC driver is integrated with libvirt's auditing subsystem, which causes
audit messages to be logged whenever there is an operation performed against a
container which has an impact on host resources. For example, start/stop and
device hotplug will all log audit messages providing details about what action
occurred and any resources associated with it. There are the following 3 types
of audit messages

- ``VIRT_MACHINE_ID`` - details of the SELinux process and image security
  labels assigned to the container.
- ``VIRT_CONTROL`` - details of an action / operation performed against a
  container. There are the following types of operation

  - ``op=start`` - a container has been started. Provides the machine name,
    uuid and PID of the ``libvirt_lxc`` controller process
  - ``op=init`` - the init PID of the container has been started. Provides the
    machine name, uuid and PID of the ``libvirt_lxc`` controller process and
    PID of the init process (in the host PID namespace)
  - ``op=stop`` - a container has been stopped. Provides the machine name and
    uuid

- ``VIRT_RESOURCE`` - details of a host resource associated with a container
  action.

Device access
~~~~~~~~~~~~~

All containers are launched with the CAP_MKNOD capability cleared and removed
from the bounding set. Libvirt will ensure that the /dev filesystem is
pre-populated with all devices that a container is allowed to use. In addition,
the cgroup "device" controller is configured to block read/write/mknod from all
devices except those that a container is authorized to use.

Example configurations
----------------------

Example config version 1
~~~~~~~~~~~~~~~~~~~~~~~~

::

   <domain type='lxc'>
     <name>vm1</name>
     <memory>500000</memory>
     <os>
       <type>exe</type>
       <init>/bin/sh</init>
     </os>
     <vcpu>1</vcpu>
     <clock offset='utc'/>
     <on_poweroff>destroy</on_poweroff>
     <on_reboot>restart</on_reboot>
     <on_crash>destroy</on_crash>
     <devices>
       <emulator>/usr/libexec/libvirt_lxc</emulator>
       <interface type='network'>
         <source network='default'/>
       </interface>
       <console type='pty' />
     </devices>
   </domain>

In the <emulator> element, be sure you specify the correct path to libvirt_lxc,
if it does not live in /usr/libexec on your system.

The next example assumes there is a private root filesystem (perhaps
hand-crafted using busybox, or installed from media, debootstrap, whatever)
under /opt/vm-1-root:

::

   <domain type='lxc'>
     <name>vm1</name>
     <memory>32768</memory>
     <os>
       <type>exe</type>
       <init>/init</init>
     </os>
     <vcpu>1</vcpu>
     <clock offset='utc'/>
     <on_poweroff>destroy</on_poweroff>
     <on_reboot>restart</on_reboot>
     <on_crash>destroy</on_crash>
     <devices>
       <emulator>/usr/libexec/libvirt_lxc</emulator>
       <filesystem type='mount'>
         <source dir='/opt/vm-1-root'/>
         <target dir='/'/>
       </filesystem>
       <interface type='network'>
         <source network='default'/>
       </interface>
       <console type='pty' />
     </devices>
   </domain>

Altering the available capabilities
-----------------------------------

By default the libvirt LXC driver drops some capabilities, among which is
CAP_MKNOD. However :since:`since 1.2.6` libvirt can be told to keep or drop some
capabilities using a domain configuration like the following:

::

   ...
   <features>
     <capabilities policy='default'>
       <mknod state='on'/>
       <sys_chroot state='off'/>
     </capabilities>
   </features>
   ...

The capabilities children elements are named after the capabilities as defined
in ``man 7 capabilities``. An ``off`` state tells libvirt to drop the
capability, while an ``on`` state will force the capability to be kept even
though it is dropped by default.

The ``policy`` attribute can be one of ``default``, ``allow`` or ``deny``. It
defines the default rules for capabilities: either keep the default behavior,
that is dropping a few selected capabilities, or keep all capabilities, or drop
all capabilities. The advantage of ``allow`` and ``deny`` is that they guarantee
that all capabilities will be kept (or removed) even if new ones are added
later.

The following example drops all capabilities but CAP_MKNOD:

::

   ...
   <features>
     <capabilities policy='deny'>
       <mknod state='on'/>
     </capabilities>
   </features>
   ...

Note that allowing capabilities that are normally dropped by default can
seriously affect the security of the container and the host.

Inherit namespaces
------------------

Libvirt allows you to inherit namespaces from another container or process, just
as the lxc tools or docker provide for sharing the network namespace. The
following can be used to share the required namespaces; if only one namespace is
to be shared, the others can simply be omitted. The ``netns`` option is specific
to ``sharenet``. It can be used when an existing network namespace should be
used, rather than creating a new network namespace for the container. In this
case the ``privnet`` option will be ignored.

::

   <domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'>
     ...
     <lxc:namespace>
       <lxc:sharenet type='netns' value='red'/>
       <lxc:shareuts type='name' value='container1'/>
       <lxc:shareipc type='pid' value='12345'/>
     </lxc:namespace>
   </domain>

The use of namespace passthrough requires libvirt >= 1.2.19

Container usage / management
----------------------------

As with any libvirt virtualization driver, LXC containers can be managed via a
wide variety of libvirt based tools. At the lowest level the ``virsh`` command
can be used to perform many tasks, by passing the ``-c lxc:///system`` argument.
As an alternative to repeating the URI with every command, the
``LIBVIRT_DEFAULT_URI`` environment variable can be set to ``lxc:///system``.
The examples that follow outline some common operations with virsh and LXC. For
further details about usage of virsh consult its manual page.

Defining (saving) container configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh define`` command takes an XML configuration document and loads it
into libvirt, saving the configuration on disk

::

   # virsh -c lxc:///system define myguest.xml

Viewing container configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh dumpxml`` command can be used to view the current XML configuration
of a container. By default the XML output reflects the current state of the
container. If the container is running, it is possible to explicitly request the
persistent configuration, instead of the current live configuration, using the
``--inactive`` flag

::

   # virsh -c lxc:///system dumpxml myguest

Starting containers
~~~~~~~~~~~~~~~~~~~

The ``virsh start`` command can be used to start a container from a previously
defined persistent configuration

::

   # virsh -c lxc:///system start myguest

It is also possible to start so-called "transient" containers, which do not
require a persistent configuration to be saved by libvirt, using the
``virsh create`` command.

::

   # virsh -c lxc:///system create myguest.xml

Stopping containers
~~~~~~~~~~~~~~~~~~~

The ``virsh shutdown`` command can be used to request a graceful shutdown of the
container. By default this command will first attempt to send a message to the
init process via the ``/dev/initctl`` device node. If no such device node
exists, then it will send SIGTERM to PID 1 inside the container.

::

   # virsh -c lxc:///system shutdown myguest

If the container does not respond to the graceful shutdown request, it can be
forcibly stopped using the ``virsh destroy`` command

::

   # virsh -c lxc:///system destroy myguest

Rebooting a container
~~~~~~~~~~~~~~~~~~~~~

The ``virsh reboot`` command can be used to request a graceful reboot of the
container. By default this command will first attempt to send a message to the
init process via the ``/dev/initctl`` device node. If no such device node
exists, then it will send SIGHUP to PID 1 inside the container.

::

   # virsh -c lxc:///system reboot myguest

Undefining (deleting) a container configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh undefine`` command can be used to delete the persistent
configuration of a container. If the guest is currently running, this will turn
it into a "transient" guest.

::

   # virsh -c lxc:///system undefine myguest

Connecting to a container console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh console`` command can be used to connect to the text console
associated with a container.

::

   # virsh -c lxc:///system console myguest

If the container has been configured with multiple console devices, then the
``--devname`` argument can be used to choose the console to connect to. In LXC,
multiple consoles will be named 'console0', 'console1', 'console2', etc.

::

   # virsh -c lxc:///system console myguest --devname console1

Running commands in a container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh lxc-enter-namespace`` command can be used to enter the namespaces
and security context of a container and then execute an arbitrary command.

::

   # virsh -c lxc:///system lxc-enter-namespace myguest -- /bin/ls -al /dev

Monitoring container utilization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virt-top`` command can be used to monitor the activity and resource
utilization of all containers on a host

::

   # virt-top -c lxc:///system

Converting LXC container configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh domxml-from-native`` command can be used to convert most of the LXC
container configuration into a domain XML fragment

::

   # virsh -c lxc:///system domxml-from-native lxc-tools /var/lib/lxc/myguest/config

This conversion has some limitations due to the fact that the domxml-from-native
command output has to be independent of the host. Here are a few things to take
care of before converting:

- Replace the fstab file referenced by lxc.mount with the corresponding
  lxc.mount.entry lines.
- Replace all relative sizes of tmpfs mount entries with absolute sizes. Also
  make sure that tmpfs entries all have a size option (the default is 50%).
- Define lxc.cgroup.memory.limit_in_bytes to properly limit the memory
  available to the container. The conversion will use 64MiB as the default.
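
For instance, native configuration lines along these lines (the sizes here are
illustrative) would satisfy the last two points before running the conversion:

::

   lxc.mount.entry = tmpfs tmp tmpfs size=128M 0 0
   lxc.cgroup.memory.limit_in_bytes = 268435456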
|
@@ -1,383 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>Host device management</h1>

    <p>
      Libvirt provides management of both physical and virtual host devices
      (historically also referred to as node devices) like USB, PCI, SCSI, and
      network devices. This also includes various virtualization capabilities
      which the aforementioned devices provide for utilization, for example
      SR-IOV, NPIV, MDEV, DRM, etc.
    </p>

    <p>
      The node device driver provides means to list and show details about host
      devices (<code>virsh nodedev-list</code>, <code>virsh nodedev-info</code>,
      and <code>virsh nodedev-dumpxml</code>), which are generic and can be used
      with all devices. It also provides the means to manage virtual devices.
      Persistently-defined virtual devices are only supported for mediated
      devices, while transient devices are supported by both mediated devices
      and NPIV (<a href="https://wiki.libvirt.org/page/NPIV_in_libvirt">more
      info about NPIV</a>).
    </p>
    <p>
      Persistent virtual devices are managed with
      <code>virsh nodedev-define</code> and <code>virsh nodedev-undefine</code>.
      Persistent devices can be configured to start manually or automatically
      using <code>virsh nodedev-autostart</code>. Inactive devices can be made
      active with <code>virsh nodedev-start</code>.
    </p>
    <p>
      Transient virtual devices are started and stopped with the commands
      <code>virsh nodedev-create</code> and <code>virsh nodedev-destroy</code>.
    </p>
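    <p>
      The listing commands mentioned above can be combined to inspect a
      particular device, e.g. (the device name shown here is illustrative):
    </p>
    <pre>
# virsh nodedev-list --cap pci
# virsh nodedev-dumpxml pci_0000_00_17_0</pre>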
    <p>
      Devices on the host system are arranged in a tree-like hierarchy, with
      the root node being called <code>computer</code>. The node device driver
      supports the udev backend (the HAL backend was removed in
      <code>6.8.0</code>).
    </p>

    <p>
      Details of the XML format of a host device can be found <a
      href="formatnode.html">here</a>. Of particular interest is the
      <code>capability</code> element, which describes features supported by
      the device. Some specific device types are addressed in more detail
      below.
    </p>
    <h2>Basic structure of a node device</h2>
    <pre>
&lt;device&gt;
  &lt;name&gt;pci_0000_00_17_0&lt;/name&gt;
  &lt;path&gt;/sys/devices/pci0000:00/0000:00:17.0&lt;/path&gt;
  &lt;parent&gt;computer&lt;/parent&gt;
  &lt;driver&gt;
    &lt;name&gt;ahci&lt;/name&gt;
  &lt;/driver&gt;
  &lt;capability type='pci'&gt;
    ...
  &lt;/capability&gt;
&lt;/device&gt;</pre>

    <ul id="toc"/>

    <h2><a id="PCI">PCI host devices</a></h2>
    <dl>
      <dt><code>capability</code></dt>
      <dd>
        When used as a top level element, the supported values for the
        <code>type</code> attribute are <code>pci</code> and
        <code>phys_function</code> (see <a href="#SRIOVCap">SR-IOV below</a>).
      </dd>
    </dl>
    <pre>
&lt;device&gt;
  &lt;name&gt;pci_0000_04_00_1&lt;/name&gt;
  &lt;path&gt;/sys/devices/pci0000:00/0000:00:06.0/0000:04:00.1&lt;/path&gt;
  &lt;parent&gt;pci_0000_00_06_0&lt;/parent&gt;
  &lt;driver&gt;
    &lt;name&gt;igb&lt;/name&gt;
  &lt;/driver&gt;
  &lt;capability type='pci'&gt;
    &lt;domain&gt;0&lt;/domain&gt;
    &lt;bus&gt;4&lt;/bus&gt;
    &lt;slot&gt;0&lt;/slot&gt;
    &lt;function&gt;1&lt;/function&gt;
    &lt;product id='0x10c9'&gt;82576 Gigabit Network Connection&lt;/product&gt;
    &lt;vendor id='0x8086'&gt;Intel Corporation&lt;/vendor&gt;
    &lt;iommuGroup number='15'&gt;
      &lt;address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/&gt;
    &lt;/iommuGroup&gt;
    &lt;numa node='0'/&gt;
    &lt;pci-express&gt;
      &lt;link validity='cap' port='1' speed='2.5' width='2'/&gt;
      &lt;link validity='sta' speed='2.5' width='2'/&gt;
    &lt;/pci-express&gt;
  &lt;/capability&gt;
&lt;/device&gt;</pre>

    <p>
      The XML format for a PCI device stays the same for any further
      capabilities it supports; a single nested <code>&lt;capability&gt;</code>
      element will be included for each capability the device supports.
    </p>

    <h3><a id="SRIOVCap">SR-IOV capability</a></h3>
    <p>
      Single root input/output virtualization (SR-IOV) allows sharing of the
      PCIe resources by multiple virtual environments. That is achieved by
      slicing up a single full-featured physical resource called a physical
      function (PF) into multiple devices called virtual functions (VFs) that
      share their configuration with the underlying PF. Despite the SR-IOV
      specification, the number of VFs that can be created on a PF varies among
      manufacturers.
    </p>

    <p>
      If the NIC <a href="#PCI">above</a> were also SR-IOV capable, it would
      also include a nested
      <code>&lt;capability&gt;</code> element enumerating all virtual
      functions available on the physical device (physical port) like in the
      example below.
    </p>
|
||||
|
||||
<pre>
|
||||
<capability type='pci'>
|
||||
...
|
||||
<capability type='virt_functions' maxCount='7'>
|
||||
<address domain='0x0000' bus='0x04' slot='0x10' function='0x1'/>
|
||||
<address domain='0x0000' bus='0x04' slot='0x10' function='0x3'/>
|
||||
<address domain='0x0000' bus='0x04' slot='0x10' function='0x5'/>
|
||||
<address domain='0x0000' bus='0x04' slot='0x10' function='0x7'/>
|
||||
<address domain='0x0000' bus='0x04' slot='0x11' function='0x1'/>
|
||||
<address domain='0x0000' bus='0x04' slot='0x11' function='0x3'/>
|
||||
<address domain='0x0000' bus='0x04' slot='0x11' function='0x5'/>
|
||||
</capability>
|
||||
...
|
||||
</capability></pre>
|
||||
<p>
|
||||
A SR-IOV child device on the other hand, would then report its top level
|
||||
capability type as a <code>phys_function</code> instead:
|
||||
</p>
|
||||
|
||||
<pre>
|
||||
<device>
|
||||
...
|
||||
<capability type='phys_function'>
|
||||
<address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
|
||||
</capability>
|
||||
...
|
||||
</device></pre>
|
||||
|
||||
<h3><a id="MDEVCap">MDEV capability</a></h3>
<p>
  A device capable of creating mediated devices will include a nested
  capability <code>mdev_types</code> which enumerates all supported mdev
  types on the physical device, along with the type attributes available
  through sysfs. A detailed description of the XML format for the
  <code>mdev_types</code> capability can be found
  <a href="formatnode.html#MDEVTypesCap">here</a>.
</p>
<p>
  The following example shows how we might represent an NVIDIA GPU device
  that supports mediated devices. See below for <a href="#MDEV">more
  information about mediated devices</a>.
</p>

<pre>
<device>
  ...
  <driver>
    <name>nvidia</name>
  </driver>
  <capability type='pci'>
    ...
    <capability type='mdev_types'>
      <type id='nvidia-11'>
        <name>GRID M60-0B</name>
        <deviceAPI>vfio-pci</deviceAPI>
        <availableInstances>16</availableInstances>
      </type>
      <!-- Here would come the rest of the available mdev types -->
    </capability>
    ...
  </capability>
</device></pre>

<h3><a id="VPDCap">VPD capability</a></h3>
<p>
  A device that exposes a PCI/PCIe VPD capability will include a nested
  capability <code>vpd</code> which presents data stored in the Vital Product
  Data (VPD). VPD provides a device name and a number of other standard-defined
  read-only fields (change level, manufacture id, part number, serial number)
  and vendor-specific read-only fields. Additionally, if a device supports it,
  read-write fields (asset tag, vendor-specific fields or system fields) may
  also be present. The VPD capability is optional for PCI/PCIe devices and the
  set of exposed fields may vary depending on the device. The XML format follows
  the binary format described in "I.3. VPD Definitions" in PCI Local Bus (2.2+)
  and the identical format in PCIe 4.0+. At the time of writing, support for
  exposing this capability is only present on Linux-based systems (kernel
  version v2.6.26 is the first one to expose VPD via sysfs, which libvirt
  relies on). Reading the VPD contents requires root privileges, therefore
  <code>virsh nodedev-dumpxml</code> must be executed as root for these fields
  to be included.
  A description of the XML format for the <code>vpd</code> capability can
  be found <a href="formatnode.html#VPDCap">here</a>.
</p>
<p>
  The following example shows a VPD representation for a device that exposes
  the VPD capability with read-only and read-write fields. Among other things,
  the VPD of this particular device includes a unique board serial number.
</p>
<pre>
<device>
  <name>pci_0000_42_00_0</name>
  <capability type='pci'>
    <class>0x020000</class>
    <domain>0</domain>
    <bus>66</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0xa2d6'>MT42822 BlueField-2 integrated ConnectX-6 Dx network controller</product>
    <vendor id='0x15b3'>Mellanox Technologies</vendor>
    <capability type='virt_functions' maxCount='16'/>
    <capability type='vpd'>
      <name>BlueField-2 DPU 25GbE Dual-Port SFP56, Crypto Enabled, 16GB on-board DDR, 1GbE OOB management, Tall Bracket</name>
      <fields access='readonly'>
        <change_level>B1</change_level>
        <manufacture_id>foobar</manufacture_id>
        <part_number>MBF2H332A-AEEOT</part_number>
        <serial_number>MT2113X00000</serial_number>
        <vendor_field index='0'>PCIeGen4 x8</vendor_field>
        <vendor_field index='2'>MBF2H332A-AEEOT</vendor_field>
        <vendor_field index='3'>3c53d07eec484d8aab34dabd24fe575aa</vendor_field>
        <vendor_field index='A'>MLX:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0:MODL=BF2H332A</vendor_field>
      </fields>
      <fields access='readwrite'>
        <asset_tag>fooasset</asset_tag>
        <vendor_field index='0'>vendorfield0</vendor_field>
        <vendor_field index='2'>vendorfield2</vendor_field>
        <vendor_field index='A'>vendorfieldA</vendor_field>
        <system_field index='B'>systemfieldB</system_field>
        <system_field index='0'>systemfield0</system_field>
      </fields>
    </capability>
    <iommuGroup number='65'>
      <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
    </iommuGroup>
    <numa node='0'/>
    <pci-express>
      <link validity='cap' port='0' speed='16' width='8'/>
      <link validity='sta' speed='8' width='8'/>
    </pci-express>
  </capability>
</device>
</pre>

<h2><a id="MDEV">Mediated devices (MDEVs)</a></h2>
<p>
  Mediated devices (<span class="since">Since 3.2.0</span>) are software
  devices defining resource allocation on the backing physical device, which
  in turn allows the parent physical device's resources to be divided into
  several mediated devices, thus sharing the physical device's performance
  among multiple guests. Unlike SR-IOV, however, where a PCIe device appears
  as multiple separate PCIe devices on the host's PCI bus, mediated devices
  only appear on the mdev virtual bus. Therefore, no detach/reattach
  procedure from/to the host driver is involved even though
  mediated devices are used in a direct device assignment manner. A
  detailed description of the XML format for the <code>mdev</code>
  capability can be found <a href="formatnode.html#mdev">here</a>.
</p>

<h3>Example of a mediated device</h3>
<pre>
<device>
  <name>mdev_4b20d080_1b54_4048_85b3_a6a62d165c01</name>
  <path>/sys/devices/pci0000:00/0000:00:02.0/4b20d080-1b54-4048-85b3-a6a62d165c01</path>
  <parent>pci_0000_06_00_0</parent>
  <driver>
    <name>vfio_mdev</name>
  </driver>
  <capability type='mdev'>
    <type id='nvidia-11'/>
    <uuid>4b20d080-1b54-4048-85b3-a6a62d165c01</uuid>
    <iommuGroup number='12'/>
  </capability>
</device></pre>

<p>
  Support for the mediated device framework in libvirt's node device driver
  covers the following features:
</p>

<ul>
  <li>
    list available mediated devices on the host
    (<span class="since">Since 3.4.0</span>)
  </li>
  <li>
    display device details
    (<span class="since">Since 3.4.0</span>)
  </li>
  <li>
    create transient mediated devices
    (<span class="since">Since 6.5.0</span>)
  </li>
  <li>
    define persistent mediated devices
    (<span class="since">Since 7.3.0</span>)
  </li>
</ul>

<p>
  Because mediated devices are instantiated from vendor-specific templates,
  simply called 'types', information describing these types is contained
  within the parent device's capabilities (see the example in <a
  href="#PCI">PCI host devices</a>). To list all devices capable of
  creating mediated devices, the following command can be used.
</p>
<pre>$ virsh nodedev-list --cap mdev_types</pre>

<p>
  To see the supported mediated device types on a specific physical device
  use the following:
</p>

<pre>$ virsh nodedev-dumpxml <device></pre>

<p>
  Before creating a mediated device, unbind the device from the respective
  device driver, e.g. the subchannel I/O driver for a CCW device. Then bind
  the device to the respective VFIO driver. For a CCW device, also unbind the
  corresponding subchannel of the CCW device from the subchannel I/O driver
  and then bind the subchannel (instead of the CCW device) to the vfio_ccw
  driver. The example below shows the unbinding and binding steps for a CCW
  device.
</p>

<pre>
device="0.0.1234"
subchannel="0.0.0123"
echo $device > /sys/bus/ccw/devices/$device/driver/unbind
echo $subchannel > /sys/bus/css/devices/$subchannel/driver/unbind
echo $subchannel > /sys/bus/css/drivers/vfio_ccw/bind
</pre>

<p>
  To instantiate a transient mediated device, create an XML file representing
  the device. See above for information about the mediated device XML format.
</p>

<pre>$ virsh nodedev-create <xml-file>
Node device '<device-name>' created from '<xml-file>'</pre>

<p>
  If you would like to persistently define the device so that it will be
  maintained across host reboots, use <code>virsh nodedev-define</code>
  instead of <code>nodedev-create</code>:
</p>

<pre>$ virsh nodedev-define <xml-file>
Node device '<device-name>' defined from '<xml-file>'</pre>

<p>
  To start an instance of this device definition, use the following command:
</p>

<pre>$ virsh nodedev-start <device-name></pre>
<p>
  Active mediated device instances can be stopped using <code>virsh
  nodedev-destroy</code>, and persistent device definitions can be removed
  using <code>virsh nodedev-undefine</code>.
</p>

<p>
  If a mediated device is defined persistently, it can also be set to be
  automatically started whenever the host reboots or when the parent device
  becomes available. To autostart a mediated device, use the
  following command:
</p>

<pre>$ virsh nodedev-autostart <device-name></pre>
</body>
</html>
348
docs/drvnodedev.rst
Normal file
@@ -0,0 +1,348 @@
.. role:: since

======================
Host device management
======================

.. contents::

Libvirt provides management of both physical and virtual host devices
(historically also referred to as node devices) like USB, PCI, SCSI, and network
devices. This also includes various virtualization capabilities which the
aforementioned devices provide for utilization, for example SR-IOV, NPIV, MDEV,
DRM, etc.

The node device driver provides the means to list and show details about host
devices (``virsh nodedev-list``, ``virsh nodedev-info``, and
``virsh nodedev-dumpxml``), which are generic and can be used with all devices.
It also provides the means to manage virtual devices. Persistently-defined
virtual devices are only supported for mediated devices, while transient devices
are supported by both mediated devices and NPIV (`more info about
NPIV <https://wiki.libvirt.org/page/NPIV_in_libvirt>`__).

Persistent virtual devices are managed with ``virsh nodedev-define`` and
``virsh nodedev-undefine``. Persistent devices can be configured to start
manually or automatically using ``virsh nodedev-autostart``. Inactive devices
can be made active with ``virsh nodedev-start``.

Transient virtual devices are started and stopped with the commands
``virsh nodedev-create`` and ``virsh nodedev-destroy``.

Devices on the host system are arranged in a tree-like hierarchy, with the root
node being called ``computer``. The node device driver supports the udev backend
(the HAL backend was removed in ``6.8.0``).

Details of the XML format of a host device can be found
`here <formatnode.html>`__. Of particular interest is the ``capability``
element, which describes features supported by the device. Some specific device
types are addressed in more detail below.

Basic structure of a node device
--------------------------------

::

   <device>
     <name>pci_0000_00_17_0</name>
     <path>/sys/devices/pci0000:00/0000:00:17.0</path>
     <parent>computer</parent>
     <driver>
       <name>ahci</name>
     </driver>
     <capability type='pci'>
       ...
     </capability>
   </device>

PCI host devices
----------------

``capability``
   When used as a top-level element, the supported values for the ``type``
   attribute are ``pci`` and ``phys_function`` (see `SR-IOV capability`_
   below).

::

   <device>
     <name>pci_0000_04_00_1</name>
     <path>/sys/devices/pci0000:00/0000:00:06.0/0000:04:00.1</path>
     <parent>pci_0000_00_06_0</parent>
     <driver>
       <name>igb</name>
     </driver>
     <capability type='pci'>
       <domain>0</domain>
       <bus>4</bus>
       <slot>0</slot>
       <function>1</function>
       <product id='0x10c9'>82576 Gigabit Network Connection</product>
       <vendor id='0x8086'>Intel Corporation</vendor>
       <iommuGroup number='15'>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
       </iommuGroup>
       <numa node='0'/>
       <pci-express>
         <link validity='cap' port='1' speed='2.5' width='2'/>
         <link validity='sta' speed='2.5' width='2'/>
       </pci-express>
     </capability>
   </device>

The XML format for a PCI device stays the same for any further capabilities it
supports; a single nested ``<capability>`` element will be included for each
capability the device supports.

SR-IOV capability
~~~~~~~~~~~~~~~~~

Single root input/output virtualization (SR-IOV) allows sharing of the PCIe
resources by multiple virtual environments. That is achieved by slicing up a
single full-featured physical resource called a physical function (PF) into
multiple devices called virtual functions (VFs), which share their
configuration with the underlying PF. Although SR-IOV itself is standardized,
the number of VFs that can be created on a PF varies among manufacturers.

If the NIC above in `PCI host devices`_ were also SR-IOV capable, it would
also include a nested ``<capability>`` element enumerating all virtual
functions available on the physical device (physical port) like in the example
below.

::

   <capability type='pci'>
     ...
     <capability type='virt_functions' maxCount='7'>
       <address domain='0x0000' bus='0x04' slot='0x10' function='0x1'/>
       <address domain='0x0000' bus='0x04' slot='0x10' function='0x3'/>
       <address domain='0x0000' bus='0x04' slot='0x10' function='0x5'/>
       <address domain='0x0000' bus='0x04' slot='0x10' function='0x7'/>
       <address domain='0x0000' bus='0x04' slot='0x11' function='0x1'/>
       <address domain='0x0000' bus='0x04' slot='0x11' function='0x3'/>
       <address domain='0x0000' bus='0x04' slot='0x11' function='0x5'/>
     </capability>
     ...
   </capability>
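Each ``<address>`` element above corresponds to a node device of its own, whose
name (as seen in ``<name>`` elements such as ``pci_0000_04_00_1``) encodes the
same domain/bus/slot/function values. A minimal sketch of that mapping (an
illustrative helper, not part of libvirt's API):

```python
def nodedev_name(domain, bus, slot, function):
    """Build a PCI nodedev name (e.g. pci_0000_04_10_1) from the attribute
    strings of an <address> element as shown in the example above."""
    return 'pci_%04x_%02x_%02x_%x' % (
        int(domain, 16), int(bus, 16), int(slot, 16), int(function, 16))

# First VF address from the example: domain 0x0000, bus 0x04, slot 0x10,
# function 0x1.
print(nodedev_name('0x0000', '0x04', '0x10', '0x1'))  # pci_0000_04_10_1
```

The resulting name can then be passed to commands such as
``virsh nodedev-dumpxml``.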

An SR-IOV child device, on the other hand, would report its top-level
capability type as a ``phys_function`` instead:

::

   <device>
     ...
     <capability type='phys_function'>
       <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
     </capability>
     ...
   </device>

MDEV capability
~~~~~~~~~~~~~~~

A device capable of creating mediated devices will include a nested capability
``mdev_types`` which enumerates all supported mdev types on the physical device,
along with the type attributes available through sysfs. A detailed description
of the XML format for the ``mdev_types`` capability can be found
`here <formatnode.html#mdev-types-capability>`__.

The following example shows how we might represent an NVIDIA GPU device that
supports mediated devices. See below for more info on
`Mediated devices (MDEVs)`_.

::

   <device>
     ...
     <driver>
       <name>nvidia</name>
     </driver>
     <capability type='pci'>
       ...
       <capability type='mdev_types'>
         <type id='nvidia-11'>
           <name>GRID M60-0B</name>
           <deviceAPI>vfio-pci</deviceAPI>
           <availableInstances>16</availableInstances>
         </type>
         <!-- Here would come the rest of the available mdev types -->
       </capability>
       ...
     </capability>
   </device>
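For scripting, the mdev types in such a dump can be pulled out with any XML
parser; a minimal sketch using only the Python standard library (the XML string
is an abbreviated form of the example above):

```python
import xml.etree.ElementTree as ET

# Abbreviated form of the nodedev XML dump shown above (illustrative only).
xml_dump = """
<device>
  <capability type='pci'>
    <capability type='mdev_types'>
      <type id='nvidia-11'>
        <name>GRID M60-0B</name>
        <deviceAPI>vfio-pci</deviceAPI>
        <availableInstances>16</availableInstances>
      </type>
    </capability>
  </capability>
</device>
"""

def mdev_types(dump):
    """Return {type id: available instances} from a nodedev XML dump."""
    root = ET.fromstring(dump)
    return {
        t.get('id'): int(t.findtext('availableInstances'))
        for t in root.iter('type')
        if t.get('id') is not None
    }

print(mdev_types(xml_dump))  # {'nvidia-11': 16}
```

In practice the input string would come from ``virsh nodedev-dumpxml`` or the
equivalent libvirt API call.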

VPD capability
~~~~~~~~~~~~~~

A device that exposes a PCI/PCIe VPD capability will include a nested capability
``vpd`` which presents data stored in the Vital Product Data (VPD). VPD provides
a device name and a number of other standard-defined read-only fields (change
level, manufacture id, part number, serial number) and vendor-specific read-only
fields. Additionally, if a device supports it, read-write fields (asset tag,
vendor-specific fields or system fields) may also be present. The VPD capability
is optional for PCI/PCIe devices and the set of exposed fields may vary
depending on the device. The XML format follows the binary format described in
"I.3. VPD Definitions" in PCI Local Bus (2.2+) and the identical format in PCIe
4.0+. At the time of writing, support for exposing this capability is only
present on Linux-based systems (kernel version v2.6.26 is the first one to
expose VPD via sysfs, which libvirt relies on). Reading the VPD contents
requires root privileges, therefore ``virsh nodedev-dumpxml`` must be executed
as root for these fields to be included. A description of the XML format for
the ``vpd`` capability can be found `here <formatnode.html#vpd-capability>`__.

The following example shows a VPD representation for a device that exposes the
VPD capability with read-only and read-write fields. Among other things, the VPD
of this particular device includes a unique board serial number.

::

   <device>
     <name>pci_0000_42_00_0</name>
     <capability type='pci'>
       <class>0x020000</class>
       <domain>0</domain>
       <bus>66</bus>
       <slot>0</slot>
       <function>0</function>
       <product id='0xa2d6'>MT42822 BlueField-2 integrated ConnectX-6 Dx network controller</product>
       <vendor id='0x15b3'>Mellanox Technologies</vendor>
       <capability type='virt_functions' maxCount='16'/>
       <capability type='vpd'>
         <name>BlueField-2 DPU 25GbE Dual-Port SFP56, Crypto Enabled, 16GB on-board DDR, 1GbE OOB management, Tall Bracket</name>
         <fields access='readonly'>
           <change_level>B1</change_level>
           <manufacture_id>foobar</manufacture_id>
           <part_number>MBF2H332A-AEEOT</part_number>
           <serial_number>MT2113X00000</serial_number>
           <vendor_field index='0'>PCIeGen4 x8</vendor_field>
           <vendor_field index='2'>MBF2H332A-AEEOT</vendor_field>
           <vendor_field index='3'>3c53d07eec484d8aab34dabd24fe575aa</vendor_field>
           <vendor_field index='A'>MLX:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0:MODL=BF2H332A</vendor_field>
         </fields>
         <fields access='readwrite'>
           <asset_tag>fooasset</asset_tag>
           <vendor_field index='0'>vendorfield0</vendor_field>
           <vendor_field index='2'>vendorfield2</vendor_field>
           <vendor_field index='A'>vendorfieldA</vendor_field>
           <system_field index='B'>systemfieldB</system_field>
           <system_field index='0'>systemfield0</system_field>
         </fields>
       </capability>
       <iommuGroup number='65'>
         <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
       </iommuGroup>
       <numa node='0'/>
       <pci-express>
         <link validity='cap' port='0' speed='16' width='8'/>
         <link validity='sta' speed='8' width='8'/>
       </pci-express>
     </capability>
   </device>

Mediated devices (MDEVs)
------------------------

Mediated devices (:since:`Since 3.2.0`) are software devices defining resource
allocation on the backing physical device, which in turn allows the parent
physical device's resources to be divided into several mediated devices, thus
sharing the physical device's performance among multiple guests. Unlike SR-IOV,
however, where a PCIe device appears as multiple separate PCIe devices on the
host's PCI bus, mediated devices only appear on the mdev virtual bus. Therefore,
no detach/reattach procedure from/to the host driver is involved even though
mediated devices are used in a direct device assignment manner. A detailed
description of the XML format for the ``mdev`` capability can be found
`here <formatnode.html#mdev>`__.

Example of a mediated device
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

   <device>
     <name>mdev_4b20d080_1b54_4048_85b3_a6a62d165c01</name>
     <path>/sys/devices/pci0000:00/0000:00:02.0/4b20d080-1b54-4048-85b3-a6a62d165c01</path>
     <parent>pci_0000_06_00_0</parent>
     <driver>
       <name>vfio_mdev</name>
     </driver>
     <capability type='mdev'>
       <type id='nvidia-11'/>
       <uuid>4b20d080-1b54-4048-85b3-a6a62d165c01</uuid>
       <iommuGroup number='12'/>
     </capability>
   </device>

Support for the mediated device framework in libvirt's node device driver
covers the following features:

- list available mediated devices on the host (:since:`Since 3.4.0`)
- display device details (:since:`Since 3.4.0`)
- create transient mediated devices (:since:`Since 6.5.0`)
- define persistent mediated devices (:since:`Since 7.3.0`)

Because mediated devices are instantiated from vendor-specific templates, simply
called 'types', information describing these types is contained within the
parent device's capabilities (see the example in `PCI host devices`_).
To list all devices capable of creating mediated devices, the following command
can be used.

::

   $ virsh nodedev-list --cap mdev_types

To see the supported mediated device types on a specific physical device use the
following:

::

   $ virsh nodedev-dumpxml <device>

Before creating a mediated device, unbind the device from the respective device
driver, e.g. the subchannel I/O driver for a CCW device. Then bind the device to
the respective VFIO driver. For a CCW device, also unbind the corresponding
subchannel of the CCW device from the subchannel I/O driver and then bind the
subchannel (instead of the CCW device) to the vfio_ccw driver. The example below
shows the unbinding and binding steps for a CCW device.

::

   device="0.0.1234"
   subchannel="0.0.0123"
   echo $device > /sys/bus/ccw/devices/$device/driver/unbind
   echo $subchannel > /sys/bus/css/devices/$subchannel/driver/unbind
   echo $subchannel > /sys/bus/css/drivers/vfio_ccw/bind

To instantiate a transient mediated device, create an XML file representing the
device. See above for information about the mediated device XML format.
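
A minimal definition for the NVIDIA example above might look like the following
sketch (the parent name and type id here are illustrative; a ``<uuid>`` element
may also be specified to request a fixed UUID):

::

   <device>
     <parent>pci_0000_06_00_0</parent>
     <capability type='mdev'>
       <type id='nvidia-11'/>
     </capability>
   </device>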

::

   $ virsh nodedev-create <xml-file>
   Node device '<device-name>' created from '<xml-file>'

If you would like to persistently define the device so that it will be
maintained across host reboots, use ``virsh nodedev-define`` instead of
``nodedev-create``:

::

   $ virsh nodedev-define <xml-file>
   Node device '<device-name>' defined from '<xml-file>'

To start an instance of this device definition, use the following command:

::

   $ virsh nodedev-start <device-name>

Active mediated device instances can be stopped using
``virsh nodedev-destroy``, and persistent device definitions can be
removed using ``virsh nodedev-undefine``.

If a mediated device is defined persistently, it can also be set to be
automatically started whenever the host reboots or when the parent device
becomes available. In order to autostart a mediated device, use the following
command:

::

   $ virsh nodedev-autostart <device-name>
@@ -1,123 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>OpenVZ container driver</h1>

<ul id="toc"></ul>

<p>
  The OpenVZ driver for libvirt allows use and management of container
  based virtualization on a Linux host OS. Prior to using the OpenVZ
  driver, the OpenVZ enabled kernel must be installed & booted, and the
  OpenVZ userspace tools installed. The libvirt driver has been tested
  with OpenVZ 3.0.22, but other 3.0.x versions should also work without
  undue trouble.
</p>

<h2><a id="project">Project Links</a></h2>

<ul>
  <li>
    The <a href="https://openvz.org/">OpenVZ</a> Linux container
    system
  </li>
</ul>

<h2><a id="connections">Connections to OpenVZ driver</a></h2>

<p>
  The libvirt OpenVZ driver is a single-instance privileged driver,
  with a driver name of 'openvz'. Some example connection URIs for
  the libvirt driver are:
</p>

<pre>
openvz:///system                     (local access)
openvz+unix:///system                (local access)
openvz://example.com/system          (remote access, TLS/x509)
openvz+tcp://example.com/system      (remote access, SASL/Kerberos)
openvz+ssh://root@example.com/system (remote access, SSH tunnelled)
</pre>

<h2><a id="notes">Notes on bridged networking</a></h2>

<p>
  Bridged networking enables a guest domain (i.e. container) to have its
  network interface connected directly to the host's physical LAN. Before
  this can be used there are a couple of configuration prerequisites for
  the host OS.
</p>

<h3><a id="host">Host network devices</a></h3>

<p>
  One or more of the physical devices must be attached to a bridge. The
  process for this varies according to the operating system in use, so
  for up-to-date notes consult the <a href="https://wiki.libvirt.org">Wiki</a>
  or your operating system's networking documentation. The basic idea is
  that the host OS should end up with a bridge device "br0" containing a
  physical device "eth0", or a bonding device "bond0".
</p>

<h3><a id="tools">OpenVZ tools configuration</a></h3>

<p>
  OpenVZ releases later than 3.0.23 ship with a standard network device
  setup script that is able to set up bridging, named
  <code>/usr/sbin/vznetaddbr</code>. For releases prior to 3.0.23, this
  script must be created manually by the host OS administrator. The
  simplest way is to just download the latest version of this script
  from a newer OpenVZ release, or the upstream source repository. Then
  a generic configuration file <code>/etc/vz/vznet.conf</code>
  must be created containing
</p>

<pre>
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
</pre>

<p>
  The host OS is now ready to allow bridging of guest containers, which
  will work whether the container is started with libvirt, or OpenVZ
  tools.
</p>


<h2><a id="example">Example guest domain XML configuration</a></h2>

<p>
  The current libvirt OpenVZ driver has a restriction that the
  domain names must match the OpenVZ container VEID, which by
  convention starts at 100 and is incremented from there. The
  choice of OS template to use inside the container is determined
  by the <code>filesystem</code> tag, and the template source name
  matches the templates known to OpenVZ tools.
</p>

<pre>
<domain type='openvz' id='104'>
  <name>104</name>
  <uuid>86c12009-e591-a159-6e9f-91d18b85ef78</uuid>
  <vcpu>3</vcpu>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <filesystem type='template'>
      <source name='fedora-9-i386-minimal'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='00:18:51:5b:ea:bf'/>
      <source bridge='br0'/>
      <target dev='veth101.0'/>
    </interface>
  </devices>
</domain>
</pre>

</body>
</html>
97
docs/drvopenvz.rst
Normal file
@@ -0,0 +1,97 @@
|
||||
=======================
|
||||
OpenVZ container driver
|
||||
=======================
|
||||
|
||||
.. contents::
|
||||
|
||||
The OpenVZ driver for libvirt allows use and management of container based
|
||||
virtualization on a Linux host OS. Prior to using the OpenVZ driver, the OpenVZ
enabled kernel must be installed & booted, and the OpenVZ userspace tools
installed. The libvirt driver has been tested with OpenVZ 3.0.22, but other
3.0.x versions should also work without undue trouble.

Project Links
-------------

- The `OpenVZ <https://openvz.org/>`__ Linux container system

Connections to OpenVZ driver
----------------------------

The libvirt OpenVZ driver is a single-instance privileged driver, with a driver
name of 'openvz'. Some example connection URIs for the libvirt driver are:

::

   openvz:///system                     (local access)
   openvz+unix:///system                (local access)
   openvz://example.com/system          (remote access, TLS/x509)
   openvz+tcp://example.com/system      (remote access, SASL/Kerberos)
   openvz+ssh://root@example.com/system (remote access, SSH tunnelled)

Notes on bridged networking
---------------------------

Bridged networking enables a guest domain (i.e. container) to have its network
interface connected directly to the host's physical LAN. Before this can be
used there are a couple of configuration pre-requisites for the host OS.

Host network devices
~~~~~~~~~~~~~~~~~~~~

One or more of the physical devices must be attached to a bridge. The process
for this varies according to the operating system in use, so for up to date
notes consult the `Wiki <https://wiki.libvirt.org>`__ or your operating
system's networking documentation. The basic idea is that the host OS should
end up with a bridge device "br0" containing a physical device "eth0", or a
bonding device "bond0".

OpenVZ tools configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenVZ releases later than 3.0.23 ship with a standard network device setup
script that is able to set up bridging, named ``/usr/sbin/vznetaddbr``. For
releases prior to 3.0.23, this script must be created manually by the host OS
administrator. The simplest way is to just download the latest version of the
script from a newer OpenVZ release, or the upstream source repository. Then a
generic configuration file ``/etc/vz/vznet.conf`` must be created containing:

::

   #!/bin/bash
   EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"

The host OS is now ready to allow bridging of guest containers, which will
work whether the container is started with libvirt or the OpenVZ tools.
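The two pre-requisites above (a host bridge plus the ``vznet.conf`` stub) can be sanity-checked with a few lines of Python. This helper is purely illustrative and not part of libvirt or the OpenVZ tools; the default paths are simply the ones mentioned above.

```python
import os

def openvz_bridge_ready(sysfs_net="/sys/class/net",
                        conf="/etc/vz/vznet.conf",
                        bridge="br0"):
    """Best-effort check that the host matches the setup described above."""
    # A Linux bridge device exposes a 'bridge' subdirectory in sysfs.
    has_bridge = os.path.isdir(os.path.join(sysfs_net, bridge, "bridge"))
    # vznet.conf must point the OpenVZ tools at the bridge helper script.
    has_conf = False
    if os.path.isfile(conf):
        with open(conf) as f:
            has_conf = 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' in f.read()
    return has_bridge and has_conf
```

A return value of ``True`` only means the pieces are in place; it does not guarantee that traffic will actually flow over the bridge.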

Example guest domain XML configuration
--------------------------------------

The current libvirt OpenVZ driver has a restriction that the domain name must
match the OpenVZ container VEID, which by convention starts at 100 and is
incremented from there. The choice of OS template to use inside the container
is determined by the ``filesystem`` tag, and the template source name matches
the templates known to the OpenVZ tools.

::

   <domain type='openvz' id='104'>
     <name>104</name>
     <uuid>86c12009-e591-a159-6e9f-91d18b85ef78</uuid>
     <vcpu>3</vcpu>
     <os>
       <type>exe</type>
       <init>/sbin/init</init>
     </os>
     <devices>
       <filesystem type='template'>
         <source name='fedora-9-i386-minimal'/>
         <target dir='/'/>
       </filesystem>
       <interface type='bridge'>
         <mac address='00:18:51:5b:ea:bf'/>
         <source bridge='br0'/>
         <target dev='veth101.0'/>
       </interface>
     </devices>
   </domain>
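The naming restriction described above can be illustrated with a short Python snippet that pulls the driver-relevant fields out of such a domain document. The helper name is made up for illustration; libvirt performs this validation itself when the domain is defined.

```python
import xml.etree.ElementTree as ET

OPENVZ_XML = """
<domain type='openvz' id='104'>
  <name>104</name>
  <vcpu>3</vcpu>
  <os><type>exe</type><init>/sbin/init</init></os>
  <devices>
    <filesystem type='template'>
      <source name='fedora-9-i386-minimal'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
"""

def summarize_openvz_domain(xml_text):
    """Extract the fields the OpenVZ driver cares about from domain XML."""
    root = ET.fromstring(xml_text)
    # The driver requires the domain name to be the numeric container VEID.
    veid = int(root.findtext("name"))
    template = root.find("./devices/filesystem/source").get("name")
    bridge = root.find("./devices/interface/source").get("bridge")
    return {"veid": veid, "template": template, "bridge": bridge}

info = summarize_openvz_domain(OPENVZ_XML)
print(info)  # {'veid': 104, 'template': 'fedora-9-i386-minimal', 'bridge': 'br0'}
```

A domain whose ``<name>`` is not an integer would fail the ``int()`` conversion here, mirroring the driver's rejection of non-VEID names.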

QEMU/KVM/HVF hypervisor driver
==============================

The libvirt KVM/QEMU driver can manage any QEMU emulator from version 3.1.0 or
later.

It supports multiple QEMU accelerators: software emulation, as well as
hardware-assisted virtualization with KVM on Linux and with
Hypervisor.framework (HVF) on macOS.

Project Links
-------------

- The `KVM <https://www.linux-kvm.org/>`__ Linux hypervisor
- The `QEMU <https://wiki.qemu.org/Index.html>`__ emulator
- `Hypervisor.framework <https://developer.apple.com/documentation/hypervisor>`__ reference

Deployment pre-requisites
-------------------------

- **KVM hypervisor**: The driver will probe ``/usr/bin`` for the presence of
  ``qemu-kvm`` and the ``/dev/kvm`` device node. If both are found, then KVM
  fully virtualized, hardware accelerated guests will be available.
- **Hypervisor.framework (HVF)**: The driver will probe ``sysctl`` for the
  presence of ``Hypervisor.framework``. If it is found it will be possible to
  create hardware accelerated guests.

Connections to QEMU driver
--------------------------

Converting from domain XML to QEMU args
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``virsh domxml-to-native`` command provides a way to convert a guest
description using libvirt Domain XML into the set of QEMU args that would be
used by libvirt to start the qemu process.

Note that currently the command line formatted by libvirt is no longer suited
for manually running qemu, as the configuration expects various resources and
open file descriptors passed to the process which are usually prepared by
libvirtd, as well as certain features being configured via the monitor.

The qemu arguments as returned by ``virsh domxml-to-native`` thus are not
trivially usable outside of libvirt.

Pass-through of arbitrary qemu commands
---------------------------------------

=============================
Secret information management
=============================

The secrets driver in libvirt provides a simple interface for storing and
retrieving secret information.

Connections to SECRET driver
----------------------------

The libvirt SECRET driver is a multi-instance driver, providing a single
system wide privileged driver (the "system" instance), and per-user
unprivileged drivers (the "session" instance). A connection to the secret
driver is automatically available when opening a connection to one of the
stateful primary hypervisor drivers. It is none the less also possible to
explicitly open just the secret driver, using the URI protocol "secret". Some
example connection URIs for the driver are:

::

   secret:///session                    (local access to per-user instance)
   secret+unix:///session               (local access to per-user instance)

   secret:///system                     (local access to system instance)
   secret+unix:///system                (local access to system instance)
   secret://example.com/system          (remote access, TLS/x509)
   secret+tcp://example.com/system      (remote access, SASL/Kerberos)
   secret+ssh://root@example.com/system (remote access, SSH tunnelled)

Embedded driver
~~~~~~~~~~~~~~~

Since 6.1.0 the secret driver has experimental support for operating in an
embedded mode. In this scenario, rather than connecting to the libvirtd
daemon, the secret driver runs directly in the client application process. To
open the driver in embedded mode the application uses the new URI path and
specifies a virtual root directory under which the driver will create content.

::

   secret:///embed?root=/some/dir

Under the specified root directory the following locations will be used:

::

   /some/dir
     |
     +- etc
     |   |
     |   +- secrets
     |
     +- run
         |
         +- secrets

The application is responsible for recursively purging the contents of this
directory tree once it no longer requires a connection, though the tree can
also be left intact for reuse when opening a future connection.

The range of functionality is intended to be on a par with that seen when
using the traditional system or session libvirt connections to QEMU. Normal
practice would be to open the secret driver in embedded mode any time one of
the other drivers is opened in embedded mode so that the two drivers can
interact in-process.
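The embedded-mode layout described above can be sketched in a few lines of Python. This is illustrative only: the ``prepare_embed_root`` helper is not a libvirt API, and the embedded driver creates these paths itself on demand.

```python
import os
import tempfile

def prepare_embed_root(root):
    """Create the directory layout the embedded secret driver uses."""
    for sub in ("etc/secrets", "run/secrets"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    # The embedded driver is then selected with a URI of this form.
    return "secret:///embed?root=" + root

root = tempfile.mkdtemp()
uri = prepare_embed_root(root)
print(uri)
```

Pre-creating the tree like this is mainly useful when an application wants to seed or inspect the directory before opening the embedded connection.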