Mirror of git://git.proxmox.com/git/pve-docs.git

use/define more/better block IDs

Author: Dietmar Maurer
Date:   2016-10-13 08:40:48 +02:00
commit 80c0adcbc3
parent 4b98565835

14 changed files with 57 additions and 14 deletions
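For context: in the {pve} AsciiDoc sources, a block ID is an anchor of the form `[[some_id]]` placed directly above a section title. Other parts of the documentation link to it with the `xref:` macro or the inline `<<...>>` form, so the anchor and all of its references must match exactly. A minimal sketch (section title illustrative), using the `chapter_storage` ID this commit introduces:

----
[[chapter_storage]]
Storage
=======

For details see xref:chapter_storage[chapter storage],
or equivalently <<chapter_storage,the storage chapter>>.
----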

View File

@@ -1,3 +1,4 @@
[[datacenter_configuration_file]]
ifdef::manvolnum[]
datacenter.cfg(5)
=================
@@ -19,7 +20,6 @@ SYNOPSIS
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Datacenter Configuration
========================

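All of the man-page sources touched by this commit share one layout: the same file renders as a man page when the `manvolnum` attribute is set, and as a book chapter when it is not. Text outside the conditionals appears in every build, so putting the anchor on the very first line, before `ifdef::manvolnum[]`, keeps it resolvable in both outputs. The pattern, abridged from the file above:

----
[[datacenter_configuration_file]]
ifdef::manvolnum[]
datacenter.cfg(5)
=================
// NAME, SYNOPSIS, DESCRIPTION for the man page
endif::manvolnum[]
ifndef::manvolnum[]
Datacenter Configuration
========================
endif::manvolnum[]
// shared content follows
----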
View File

@@ -1,3 +1,4 @@
[[getting_help]]
Getting Help
------------
include::attributes.txt[]

View File

@@ -1,4 +1,4 @@
[[chapter-ha-manager]]
[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
@@ -124,6 +124,7 @@ Requirements
* optional hardware fencing devices
[[ha_manager_resources]]
Resources
---------
@@ -313,6 +314,7 @@ the update process can be too long which, in the worst case, may result in
a watchdog reset.
[[ha_manager_fencing]]
Fencing
-------
@@ -382,6 +384,7 @@ That minimizes the possibility of an overload, which else could cause an
unresponsive node and as a result a chain reaction of node failures in the
cluster.
[[ha_manager_groups]]
Groups
------

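Beyond the chapter-level ID, the HA manager source gains one ID per major section (`ha_manager_resources`, `ha_manager_fencing`, `ha_manager_groups`), so other chapters can deep-link to a single subsection instead of the whole chapter. A hypothetical reference:

----
See the xref:ha_manager_fencing[fencing] section of the
xref:chapter_ha_manager[HA manager] chapter.
----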
View File

@@ -1,3 +1,4 @@
[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
@@ -103,7 +104,7 @@ will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.
[[pct_configuration]]
Configuration
-------------
@@ -164,6 +165,7 @@ or
Those settings are directly passed to the LXC low-level tools.
[[pct_snapshots]]
Snapshots
~~~~~~~~~
@@ -260,12 +262,14 @@ NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.
[[pct_options]]
Options
~~~~~~~
include::pct.conf.5-opts.adoc[]
[[pct_container_images]]
Container Images
----------------
@@ -332,6 +336,7 @@ example you can delete that image later with:
pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
[[pct_container_storage]]
Container Storage
-----------------
@@ -489,6 +494,7 @@ ACLs allow you to set more detailed file ownership than the traditional user/
group/others model.
[[pct_container_network]]
Container Network
-----------------

View File

@@ -1,3 +1,4 @@
[[chapter_pve_firewall]]
ifdef::manvolnum[]
pve-firewall(8)
===============
@@ -19,7 +20,6 @@ include::pve-firewall.8-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Firewall
==============
@@ -82,6 +82,7 @@ comments. Sections starts with a header line containing the section
name enclosed in `[` and `]`.
[[pve_firewall_cluster_wide_setup]]
Cluster Wide Setup
~~~~~~~~~~~~~~~~~~
@@ -144,6 +145,7 @@ To simplify that task, you can instead create an IPSet called
firewall rules to access the GUI from remote.
[[pve_firewall_host_specific_configuration]]
Host Specific Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -247,6 +249,7 @@ OUT ACCEPT # accept all outgoing packages
----
[[pve_firewall_security_groups]]
Security Groups
---------------
@@ -357,7 +360,7 @@ Traffic from these IPs is dropped by every host's and VM's firewall.
----
[[ipfilter-section]]
[[pve_firewall_ipfilter_section]]
Standard IP set `ipfilter-net*`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -486,7 +489,7 @@ As for the link local addresses required for NDP, there's also an ``IP Filter''
(`ipfilter: 1`) option which can be enabled which has the same effect as adding
an `ipfilter-net*` ipset for each of the VM's network interfaces containing the
corresponding link local addresses. (See the
<<ipfilter-section,Standard IP set `ipfilter-net*`>> section for details.)
<<pve_firewall_ipfilter_section,Standard IP set `ipfilter-net*`>> section for details.)
Ports used by {pve}
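The last hunk above shows the inline reference form: `<<target,display text>>` behaves like the `xref:target[display text]` macro, and both end up as broken links if the target anchor disappears, which is why the reference is updated in the same hunk that renames the `ipfilter-section` ID. The two equivalent spellings, with the link text from this hunk:

----
<<pve_firewall_ipfilter_section,Standard IP set `ipfilter-net*`>>
xref:pve_firewall_ipfilter_section[Standard IP set `ipfilter-net*`]
----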

View File

@@ -1,3 +1,4 @@
[[sysadmin_network_configuration]]
Network Configuration
---------------------
include::attributes.txt[]

View File

@@ -1,3 +1,4 @@
[[sysadmin_package_repositories]]
Package Repositories
--------------------
include::attributes.txt[]

View File

@@ -12,7 +12,7 @@ supports clustering, this means that multiple {pve} installations
can be centrally managed thanks to the included cluster functionality.
{pve} can use local storage (DAS), SAN, NAS and also distributed
storage (Ceph RBD). For details see xref:chapter-storage[chapter storage].
storage (Ceph RBD). For details see xref:chapter_storage[chapter storage].
Minimum Requirements, for Evaluation

View File

@@ -1,3 +1,4 @@
[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
@@ -17,7 +18,6 @@ include::pveceph.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
pveceph - Manage CEPH Services on Proxmox VE Nodes
==================================================

View File

@@ -1,4 +1,4 @@
[[chapter-storage]]
[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========

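This anchor rename pairs with the xref update two files above: `[[chapter-storage]]` and every `xref:chapter-storage[...]` reference move from hyphen to underscore style in one commit, since a dangling xref leaves a broken link in the rendered output. Schematically:

----
// before
[[chapter-storage]]
xref:chapter-storage[chapter storage]

// after
[[chapter_storage]]
xref:chapter_storage[chapter storage]
----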
View File

@@ -1,3 +1,4 @@
[[chapter_user_management]]
ifdef::manvolnum[]
pveum(1)
========
@@ -19,7 +20,6 @@ include::pveum.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
User Management
===============
@@ -39,12 +39,13 @@ By using the role based user- and permission management for all
objects (VMs, storages, nodes, etc.) granular access can be defined.
[[pveum_users]]
Users
-----
{pve} stores user attributes in `/etc/pve/user.cfg`.
Passwords are not stored here, users are instead associated with
<<authentication-realms,authentication realms>> described below.
<<pveum_authentication_realms,authentication realms>> described below.
Therefore a user is internally often identified by its name and
realm in the form `<userid>@<realm>`.
@@ -69,6 +70,7 @@ still be changed and system mails will be sent to the email address
assigned to this user.
[[pveum_groups]]
Groups
~~~~~~
@@ -78,7 +80,7 @@ to groups instead of using individual users. That way you will get a
much shorter access control list which is easier to handle.
[[authentication-realms]]
[[pveum_authentication_realms]]
Authentication Realms
---------------------
@@ -187,6 +189,7 @@ https://developers.yubico.com/Software_Projects/YubiKey_OTP/YubiCloud_Validation
host your own verification server].
[[pveum_permission_management]]
Permission Management
---------------------
@@ -202,6 +205,7 @@ role)', with the role containing a set of allowed actions, and the path
representing the target of these actions.
[[pveum_roles]]
Roles
~~~~~
@@ -325,6 +329,7 @@ by default). We use the following inheritance rules:
* Permissions replace the ones inherited from an upper level.
[[pveum_pools]]
Pools
~~~~~

qm.adoc
View File

@@ -1,3 +1,4 @@
[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
@@ -18,7 +19,6 @@ include::qm.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
@@ -92,15 +92,19 @@ measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]
[[qm_virtual_machines_settings]]
Virtual Machines settings
-------------------------
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as it
could incur a performance slowdown, or putting your data at risk.
[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~
General settings of a VM include
* the *Node* : the physical server on which the VM will run
@@ -109,16 +113,20 @@ General settings of a VM include
* *Resource Pool*: a logical group of VMs
[[qm_os_settings]]
OS Settings
~~~~~~~~~~~
When creating a VM, setting the proper Operating System(OS) allows {pve} to
optimize some low level parameters. For instance Windows OS expect the BIOS
clock to use the local time, while Unix based OS expect the BIOS clock to have
the UTC time.
[[qm_hard_disk]]
Hard Disk
~~~~~~~~~
Qemu can emulate a number of storage controllers:
* the *IDE* controller, has a design which goes back to the 1984 PC/AT disk
@@ -186,8 +194,11 @@ With this enabled, Qemu uses one thread per disk, instead of one thread for all,
so it should increase performance when using multiple disks.
Note that backups do not currently work with *IO Thread* enabled.
[[qm_cpu]]
CPU
~~~
A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
@@ -242,8 +253,11 @@ option is also required in {pve} to allow hotplugging of cores and RAM to a VM.
If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.
[[qm_memory]]
Memory
~~~~~~
For each VM you have the option to set a fixed size memory or asking
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
@@ -284,8 +298,11 @@ systems.
When allocating RAMs to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.
[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~
Each VM can have many _Network interface controllers_ (NIC), of four different
types:
@@ -344,8 +361,10 @@ traffic increases. We recommend to set this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.
USB Passthrough
~~~~~~~~~~~~~~~
There are two different types of USB passthrough devices:
* Host USB passtrough
@@ -378,6 +397,8 @@ if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can passthrough a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).
[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~
@@ -448,6 +469,7 @@ All configuration files consists of lines in the form
Configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
[[qm_options]]
Options
~~~~~~~

View File

@@ -1,3 +1,4 @@
[[chapter_system_administration]]
Host System Administration
==========================
include::attributes.txt[]

View File

@@ -1,3 +1,4 @@
[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
@@ -19,7 +20,6 @@ include::vzdump.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================