DISCONNECTED - This node could not be connected through the network and is currently not participating in the cluster. If there is a public IP address associated with this node it should have been taken over by a different node. No services are running on this node.
</para>
<para>
DISABLED - This node has been administratively disabled. This node is still functional and participates in the CTDB cluster but its IP addresses have been taken over by a different node and no services are currently being hosted.
</para>
<para>
UNHEALTHY - A service provided by this node is malfunctioning and should be investigated. The CTDB daemon itself is operational and participates in the cluster. Its public IP address has been taken over by a different node and no services are currently being hosted. All unhealthy nodes should be investigated and require an administrative action to rectify.
</para>
<para>
BANNED - This node failed too many recovery attempts and has been banned from participating in the cluster for a period of RecoveryBanPeriod seconds. Any public IP address has been taken over by other nodes. This node does not provide any services. All banned nodes should be investigated and require an administrative action to rectify. This node does not participate in the CTDB cluster but can still be communicated with, i.e. ctdb commands can be sent to it.
</para>
<para>
The list of Virtual Node Numbers. This is a list of all nodes that actively participate in the cluster and that share the workload of hosting the Clustered TDB database records.
</para>
<para>
This is the current recovery mode of the cluster. There are two possible modes:
</para>
<para>
NORMAL - The cluster is fully operational.
</para>
<para>
RECOVERY - The cluster databases have all been frozen, pausing all services while the cluster awaits a recovery process to complete. A recovery process should finish within seconds. If a cluster is stuck in the RECOVERY state this would indicate a cluster malfunction which needs to be investigated.
</para>
<para>
This is the cluster node that is currently designated as the recovery master. This node is responsible for monitoring the consistency of the cluster and for performing the actual recovery process when required.
</para>
<para>
This command shows the pnn of the node which is currently the recmaster.
</para>
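<para>
Example: ctdb recmaster
</para>
<para>
Example output (illustrative; the pnn printed depends on which node is currently the recovery master):
</para>
<screen format="linespecific">
0
</screen>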
</refsect2>
<refsect2><title>uptime</title>
<para>
This command shows the uptime for the ctdb daemon, when the last recovery completed and how long the last recovery took. If the "duration" is shown as a negative number, this indicates that there is a recovery in progress and that it started that many seconds ago.
</para>
<para>
Example: ctdb uptime
</para>
<para>Example output:</para>
<screen format="linespecific">
Current time of node : Tue Mar 24 18:27:54 2009
Ctdbd start time : (000 00:00:05) Tue Mar 24 18:27:49 2009
Time of last recovery : (000 00:00:05) Tue Mar 24 18:27:49 2009
Duration of last recovery : 0.000000 seconds
</screen>
</refsect2>
<refsect2><title>listnodes</title>
<para>
This command lists the ip addresses of all the nodes in the cluster.
</para>
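<para>
Example: ctdb listnodes
</para>
<para>
Example output (illustrative; the addresses shown are whatever is configured in the nodes file):
</para>
<screen format="linespecific">
10.0.0.71
10.0.0.72
10.0.0.73
10.0.0.74
</screen>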
</refsect2>
<refsect2><title>ip</title>
<para>
This command will display the list of public addresses that are provided by the cluster and which physical node is currently serving each ip. By default this command will ONLY show those public addresses that are known to the node itself. To see the full list of all public ips across the cluster you must use "ctdb ip -n all".
</para>
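<para>
Example (the exact output format can vary between ctdb versions, so only the invocations are shown):
</para>
<screen format="linespecific">
# public addresses known to this node only
ctdb ip
# all public addresses across the whole cluster
ctdb ip -n all
</screen>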
</refsect2>
<refsect2><title>scriptstatus</title>
<para>
This command displays which scripts were run in the previous monitoring cycle and the result of each script. If a script failed with an error, causing the node to become unhealthy, the output from that script is also shown.
</para>
<para>
Example: ctdb scriptstatus
</para>
<para>
Example output:
</para>
<screen format="linespecific">
7 scripts were executed last monitoring cycle
00.ctdb Status:OK Duration:0.056 Tue Mar 24 18:56:57 2009
10.interface Status:OK Duration:0.077 Tue Mar 24 18:56:57 2009
11.natgw Status:OK Duration:0.039 Tue Mar 24 18:56:57 2009
20.multipathd Status:OK Duration:0.038 Tue Mar 24 18:56:57 2009
40.vsftpd Status:OK Duration:0.045 Tue Mar 24 18:56:57 2009
41.httpd Status:OK Duration:0.039 Tue Mar 24 18:56:57 2009
50.samba Status:ERROR Duration:0.082 Tue Mar 24 18:56:57 2009
OUTPUT:ERROR: Samba tcp port 445 is not responding
</screen>
</refsect2>
<refsect2><title>getreclock</title>
<para>
This command is used to show the filename of the reclock file that is used.
</para>
<para>
Example output:
</para>
<screen format="linespecific">
Reclock file:/gpfs/.ctdb/shared
</screen>
</refsect2>
<refsect2><title>setreclock [filename]</title>
<para>
This command is used to modify, or clear, the file that is used as the reclock file at runtime. When this command is used, the reclock file checks are disabled. To re-enable the checks the administrator needs to activate the "VerifyRecoveryLock" tunable using "ctdb setvar".
</para>
<para>
If run with no parameter this will remove the reclock file completely. If run with a parameter the parameter specifies the new filename to use for the recovery lock.
</para>
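<para>
A short illustrative sequence (the reclock path is just an example, and "ctdb setvar" is used as described above to re-enable the checks):
</para>
<screen format="linespecific">
# point the recovery lock at a new file
ctdb setreclock /gpfs/.ctdb/shared
# remove the reclock file completely
ctdb setreclock
# re-enable the reclock file checks afterwards
ctdb setvar VerifyRecoveryLock 1
</screen>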
<para>
This command only affects the runtime settings of a ctdb node and will be lost when ctdb is restarted. For persistent changes to the reclock file setting you must edit /etc/sysconfig/ctdb.
</para>
</refsect2>
<refsect2><title>addip <public_ip/mask> <iface></title>
<para>
This command is used to add a new public ip to a node during runtime.
This allows public addresses to be added to a cluster without having
to restart the ctdb daemons.
</para>
<para>
Note that this only updates the runtime instance of ctdb. Any changes will be lost next time ctdb is restarted and the public addresses file is re-read.
If you want this change to be permanent you must also update the public addresses file manually.
</para>
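<para>
Example (illustrative; assumes the address is given as ip/mask followed by the interface that should host it):
</para>
<screen format="linespecific">
ctdb addip 10.0.0.20/24 eth0
</screen>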
</refsect2>
<refsect2><title>delip <public_ip></title>
<para>
This command is used to remove a public ip from a node during runtime.
If this public ip is currently hosted by the node it is being removed from, the ip will first be failed over to another node, if possible, before it is removed.
</para>
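<para>
Example (illustrative address):
</para>
<screen format="linespecific">
ctdb delip 10.0.0.20
</screen>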
<para>
Note that this only updates the runtime instance of ctdb. Any changes will be lost next time ctdb is restarted and the public addresses file is re-read.
If you want this change to be permanent you must also update the public addresses file manually.
</para>
</refsect2>
<refsect2><title>ipreallocate</title>
<para>
This command will force the recovery master to perform a full ip reallocation process and redistribute all ip addresses. This is useful to "reset" the allocations back to their default state if they have been changed using the "moveip" command. While a "recover" will also perform this reallocation, a recovery is much more heavyweight since it will also rebuild all the databases.
</para>
</refsect2>
<refsect2><title>reloadnodes</title>
<para>
1, To remove a node from an existing cluster, first ensure with 'ctdb status' that
all nodes, except the node to be deleted, are up and running and that they are all healthy.
Do not try to remove a node from a cluster unless the cluster is completely healthy!
</para>
<para>
2, Shut down and power off the node to be removed.
</para>
<para>
3, On all other nodes, edit the /etc/ctdb/nodes file and comment out the node to be removed. Do not delete the line for that node, just comment it out by adding a '#' at the beginning of the line.
</para>
<para>
4, Run 'ctdb reloadnodes' to force all nodes to reload the nodes file.
</para>
<para>
5, Use 'ctdb status' on all nodes and verify that the deleted node no longer shows up in the list. The commands involved are summarised in the sketch below.
</para>
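<para>
Illustrative command sequence for the procedure above (the nodes file itself is edited with a text editor):
</para>
<screen format="linespecific">
ctdb status
# comment out the removed node in /etc/ctdb/nodes on every remaining node
ctdb reloadnodes
ctdb status
</screen>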
</refsect2>
<refsect2><title>repack [max_freelist]</title>
<para>
Over time, when records are created and deleted in a TDB, the TDB list of free space will become fragmented. This can lead to a slowdown in accessing TDB records.
This command is used to defragment a TDB database and prune the freelist.
</para>
<para>
If [max_freelist] is specified, then a database will only be repacked if it has more than this number of entries in the freelist.
</para>
<para>
During repacking of the database, the entire TDB database will be locked to prevent writes. If samba tries to write to a record in the database during a repack operation, samba will block until the repacking has completed.
</para>
<para>
This command can be disruptive and can cause samba to block for the duration of the repack operation. In general, a repack operation will take less than one second to complete.
</para>
<para>
A repack operation will only defragment the local TDB copy of the CTDB database. You need to run this command on all of the nodes to repack a CTDB database completely.
</para>
<para>
Example: ctdb repack 1000
</para>
<para>
By default, this operation is issued from the 00.ctdb event script every 5 minutes.
</para>
</refsect2>
<refsect2><title>vacuum [max_records]</title>
<para>
Over time CTDB databases will fill up with empty deleted records, which will lead to a progressive slowdown of CTDB database access.
This command is used to prune all databases and delete all empty records from the cluster.
</para>
<para>
By default, vacuum will delete all empty records from all databases.
</para>
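<para>
Example (the optional argument corresponds to [max_records] in the synopsis above; 1000 is just an illustrative value):
</para>
<screen format="linespecific">
ctdb vacuum
ctdb vacuum 1000
</screen>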
</refsect2>
<refsect2><title>backupdb <dbname> <file></title>
<para>
This command can be used to copy the entire content of a database out to a file. This file can later be read back into ctdb using the restoredb command.
This is mainly useful for backing up persistent databases such as secrets.tdb and similar.
</para>
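<para>
Example (illustrative; assumes backupdb takes the database name followed by the output file):
</para>
<screen format="linespecific">
ctdb backupdb secrets.tdb /tmp/secrets.tdb.backup
</screen>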
</refsect2>
<refsect2><title>restoredb <file></title>
<para>
This command restores a persistent database that was previously backed up using backupdb.
</para>
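<para>
Example (illustrative; the file is one previously created with backupdb):
</para>
<screen format="linespecific">
ctdb restoredb /tmp/secrets.tdb.backup
</screen>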
</refsect2>
<refsect2><title>process-exists <pid></title>
<para>
This command checks if a specific process exists on the CTDB host. This is mainly used by Samba to check if remote instances of samba are still running or not.
</para>
</refsect2>
<refsect2><title>getdbmap</title>
<para>
This command lists all clustered TDB databases that the CTDB daemon has attached to. Some databases are flagged as PERSISTENT, which means that the database stores data persistently and the data will remain across reboots. One example of such a database is secrets.tdb where information about how the cluster was joined to the domain is stored.
</para>
<para>
Most databases are not persistent and only store the state information that the currently running samba daemons need. These databases are always wiped when ctdb/samba starts and when a node is rebooted.
</para>
</refsect2>
<refsect2><title>catdb <dbname></title>
<para>
This command will dump a clustered TDB database to the screen. This is a debugging command.
</para>
</refsect2>
<refsect2><title>getmonmode</title>
<para>
This command returns the monitoring mode of a node. The monitoring mode is either ACTIVE or DISABLED. Normally a node will continuously monitor that all other expected nodes are in fact connected and that they respond to commands.
</para>
<para>
ACTIVE - This is the normal mode. The node is actively monitoring all other nodes, both that the transport is connected and also that the node responds to commands. If a node becomes unavailable, it will be marked as DISCONNECTED and a recovery is initiated to restore the cluster.
</para>
<para>
DISABLED - This node is not monitoring that other nodes are available. In this mode a node failure will not be detected and no recovery will be performed. This mode is useful when, for debugging purposes, one wants to attach GDB to a ctdb process but wants to prevent the rest of the cluster from marking this node as DISCONNECTED and performing a recovery.
</para>
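<para>
Example (prints whether monitoring is ACTIVE or DISABLED on this node; the exact output format may vary):
</para>
<screen format="linespecific">
ctdb getmonmode
</screen>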
</refsect2>
<refsect2><title>setmonmode <0|1></title>
<para>
This command can be used to explicitly disable/enable monitoring mode on a node. The main purpose is if one wants to attach GDB to a running ctdb daemon but wants to prevent the other nodes from marking it as DISCONNECTED and issuing a recovery. To do this, set monitoring mode to 0 on all nodes before attaching with GDB. Remember to set monitoring mode back to 1 afterwards.
</para>
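<para>
A typical sequence when attaching GDB, as described above (run on all nodes):
</para>
<screen format="linespecific">
# disable monitoring before attaching the debugger
ctdb setmonmode 0
# ... debug ...
# re-enable monitoring when done
ctdb setmonmode 1
</screen>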
</refsect2>
<refsect2><title>attach <dbname></title>
<para>
This is a debugging command. This command will make the CTDB daemon create a new CTDB database and attach to it.
</para>
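<para>
Example (the database name is illustrative):
</para>
<screen format="linespecific">
ctdb attach test.tdb
</screen>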
</refsect2>
<refsect2><title>ban <bantime|0></title>
<para>
Administratively ban a node for bantime seconds. A bantime of 0 means that the node should be permanently banned.
</para>
<para>
A banned node does not participate in the cluster and does not host any records for the clustered TDB. Its ip address has been taken over by another node and no services are hosted.
</para>
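<para>
Example (ban the node for 60 seconds, or permanently by passing a bantime of 0):
</para>
<screen format="linespecific">
ctdb ban 60
ctdb ban 0
</screen>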
<para>
Nodes are automatically banned if they are the cause of too many