The controls only modify the runtime setting of which public addresses a node
can serve and do not modify /etc/ctdb/public_addresses.
To make the change permanent you also need to edit /etc/ctdb/public_addresses
manually.
After ip addresses have been added/deleted you need to invoke a recovery
for the ip addresses to be redistributed.
(This used to be ctdb commit f8294d103fdd8a720d0b0c337d3973c7fdf76b5c)
when this tunable is set, ip addresses will only be failed over when a node
fails, and only those ip addresses held by the failed node will be reallocated
in the cluster.
When a node becomes active again, this will not lead to any failback of ip
addresses.
This can reduce the number of "ip address movements" in the cluster, since we
don't automatically fail an ip address back, but it can also lead to an
unbalanced cluster, since we no longer attempt to spread the ip addresses out
evenly across the active nodes.
This tunable can NOT be active at the same time as DeterministicIPs is used.
(This used to be ctdb commit d3b8a461b15bc584fa1785eb5922de6d49d8f6c4)
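Since this tunable and DeterministicIPs pull the allocation logic in opposite
directions, the combination has to be rejected. A minimal guard sketch; the
struct and field names are assumptions, not the actual ctdb tunables layout:

    #include <stdbool.h>
    #include <stdio.h>

    /* hypothetical flags mirroring the two tunables discussed above */
    struct tunables {
        bool no_failback;          /* "fail over only, never fail back" */
        bool deterministic_ips;    /* DeterministicIPs */
    };

    static int validate_tunables(const struct tunables *t)
    {
        if (t->no_failback && t->deterministic_ips) {
            fprintf(stderr,
                "this tunable cannot be active together with DeterministicIPs\n");
            return -1;
        }
        return 0;
    }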
a node that has been allocated to serve an ip actually CAN serve that ip
(if we use differing public_addresses files on each node)
(This used to be ctdb commit fdaf7cb2d7682507fbf4c6c2b833b327c93fac08)
flag.
change the calling of the recovered/takeip/releaseip event scripts to use
these enable/disable functions instead of stopping/starting monitoring.
when we disable monitoring we want all events to still be running,
in particular the events to monitor for dead nodes; we only want to
suppress running the monitor event scripts
(This used to be ctdb commit a006dcc4f75aba950dd701ad7d1a84e89df285e8)
control, instead call ctdb_start/stop_monitoring()
ctdb_stop_monitoring(): don't allocate a new monitoring context; leave it
NULL. Also set the monitoring_mode in this function so that
ctdb_stop/start_monitoring() and ->monitoring_mode are kept in sync.
Add a debug message to log that we have stopped monitoring.
ctdb_start_monitoring(): check whether monitoring is already active and
make the function idempotent.
Create the monitoring context when monitoring is started.
Update ->monitoring_mode once the monitoring has been started.
Add a debug message to log that we have started monitoring.
When we temporarily stop monitoring while running an event script,
restart monitoring after the event script wrapper returns instead of in
the event script callback.
Let monitoring_mode start out as DISABLED and let it be enabled once we
call ctdb_start_monitoring().
don't check for MONITORING_DISABLED in check_fore_dead_nodes(). If
monitoring is disabled, this event handler will not be called.
(This used to be ctdb commit 3a93ae8bdcffb1adbd6243844f3058fc742f76aa)
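Taken together, the points above give start/stop functions of roughly the
following shape. This is only a sketch with illustrative type and field
names, not the actual ctdb code:

    #include <talloc.h>

    enum monitor_mode { MONITORING_DISABLED, MONITORING_ACTIVE };

    struct ctdb_ctx {
        TALLOC_CTX *monitor_context;
        enum monitor_mode monitoring_mode;
    };

    void ctdb_start_monitoring_sketch(struct ctdb_ctx *ctdb)
    {
        if (ctdb->monitoring_mode == MONITORING_ACTIVE) {
            return;                        /* already active: idempotent */
        }
        /* create the monitoring context only when monitoring starts */
        ctdb->monitor_context = talloc_new(ctdb);
        /* ... schedule the periodic monitor event off monitor_context ... */
        ctdb->monitoring_mode = MONITORING_ACTIVE;
    }

    void ctdb_stop_monitoring_sketch(struct ctdb_ctx *ctdb)
    {
        /* freeing the context cancels pending monitor events;
         * leave it NULL rather than allocating a fresh one */
        TALLOC_FREE(ctdb->monitor_context);
        ctdb->monitoring_mode = MONITORING_DISABLED;
    }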
check in the recovery daemon that all nodes are using the same
/etc/ctdb/nodes file.
Also add some more missing checks that the pnn used is a valid pnn
before using it to dereference the ctdb->nodes array
This is useful since it allows us to add more physical nodes to an
existing cluster without having to bring down the entire cluster.
The procedure to add an additional node to an existing cluster would then be:
1, on all nodes set CheckNodesFile=0 using 'ctdb setvar'
2, on all nodes add CTDB_SET_CheckNodesFile=0 to /etc/sysconfig/ctdb
For each node, one at a time:
3, use 'ctdb disable' to stop the hosted services
4, service ctdb stop
5, service ctdb start
Once all nodes have been restarted:
6, on all nodes remove CTDB_SET_CheckNodesFile=0 from
/etc/sysconfig/ctdb
7, on all nodes set CheckNodesFile=1 using 'ctdb setvar'
8, configure and start up the new node
During this procedure, only one node at a time was brought
down/restarted, and only for a short period.
(This used to be ctdb commit 462501a32143e943ce350bd904a47c0955414a51)
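The pnn validation mentioned above amounts to a bounds check before touching
ctdb->nodes; a sketch with illustrative names:

    #include <stdbool.h>

    struct node_entry;                  /* per-node state, opaque here */

    struct ctdb_nodes_view {
        struct node_entry **nodes;      /* indexed by pnn */
        unsigned int num_nodes;
    };

    /* validate a pnn before it is used to dereference the nodes array */
    static bool pnn_is_valid(const struct ctdb_nodes_view *c, int pnn)
    {
        return pnn >= 0 && (unsigned int)pnn < c->num_nodes;
    }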
Make the allocation of public addresses to nodes deterministic.
Activate it by adding CTDB_SET_DeterministicIPs=1 in /etc/sysconfig/ctdb
When this is set, the first entry in /etc/ctdb/public_addresses will
always be hosted by node 0, when that node is available, the second
entry by node1 and so on.
This tunable allows the allocation of addresses to become very
unbalanced and is only for debugging/testing use.
Beware: this feature requires that /etc/ctdb/public_addresses is
identical on all the nodes in the cluster.
(This used to be ctdb commit f0ca221f235731542090d8a6c86f2b7cd2ce2f96)
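A sketch of the resulting mapping: entry i of the public_addresses file is
homed on node i (modulo the node count). The fallback to the next available
node is an assumption for illustration; only the home mapping is described
above.

    #include <stdbool.h>

    /* entry addr_index is homed on node (addr_index % num_nodes);
     * if that node is unavailable, fall back to the next one */
    static int deterministic_node(int addr_index, int num_nodes,
                                  const bool *node_available)
    {
        int i;
        for (i = 0; i < num_nodes; i++) {
            int pnn = (addr_index + i) % num_nodes;
            if (node_available[pnn]) {
                return pnn;
            }
        }
        return -1;      /* no node available */
    }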
used in single public ip address mode.
when using this argument, --public-interface must also be used.
add a vnn structure to the ctdb context to describe the single public ip
address
update the killtcp control in the daemon: if a socketpair that is to
be killed does not match a normal public address, check whether the
destination address matches the single public ip address and, if so, use
that vnn structure from the ctdb context.
this allows killtcp to also kill connections to the single public ip,
not only those to normal public addresses.
(This used to be ctdb commit 5661ba17b91f62821dec1c76056c78b99752a90b)
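A sketch of that fallback matching, with hypothetical type and field names
(the real code consults the vnn structure held in the ctdb context):

    #include <stddef.h>
    #include <netinet/in.h>

    struct ctdb_vnn;                    /* per-address state, opaque here */

    struct single_ip_ctx {
        struct ctdb_vnn *single_ip_vnn; /* set in single public ip mode */
        struct in_addr single_ip;
    };

    /* if dst is not a normal public address (normal_match == NULL),
     * try the single public ip's vnn instead */
    static struct ctdb_vnn *killtcp_vnn(struct single_ip_ctx *ctx,
                                        struct ctdb_vnn *normal_match,
                                        const struct sockaddr_in *dst)
    {
        if (normal_match != NULL) {
            return normal_match;
        }
        if (ctx->single_ip_vnn != NULL &&
            dst->sin_addr.s_addr == ctx->single_ip.s_addr) {
            return ctx->single_ip_vnn;
        }
        return NULL;    /* not an address we serve: reject the killtcp */
    }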
a bool that specifies whether the ip was held by a loopback adaptor or
not
the name of the interface where the ip was held
when we release an ip address from an interface, move the ip address
over to the loopback interface
when we release an ip address after we have moved it onto loopback,
use 60.nfs to kill off the server side (the local part) of the tcp
connection so that the tcp connections don't survive a
failover/failback.
61.nfstickle: since we kill the tcp connections when we release an ip
address, we no longer need to restart the nfs service in 61.nfstickle.
update ctdb_takeover to use the new signature for ctdb_sys_have_ip
when we add a tcp connection to kill in ctdb_killtcp_add_connection(),
check if either the source or destination address matches a known public
address.
(This used to be ctdb commit f9fd2a4719c50f6b8e01d0a1b3a74b76b52ecaf3)
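A sketch of that matching rule in ctdb_killtcp_add_connection(); the lookup
helper here is hypothetical:

    #include <stdbool.h>
    #include <netinet/in.h>

    struct ctdb_context_s;              /* daemon state, opaque here */

    /* hypothetical lookup against the configured public addresses */
    bool is_public_address(struct ctdb_context_s *ctdb,
                           const struct sockaddr_in *addr);

    /* a connection qualifies for killing if either endpoint matches
     * a known public address */
    static bool connection_matches(struct ctdb_context_s *ctdb,
                                   const struct sockaddr_in *src,
                                   const struct sockaddr_in *dst)
    {
        return is_public_address(ctdb, src) ||
               is_public_address(ctdb, dst);
    }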
allow different nodes to use different public_addresses files
so that we can partition the cluster into different subsets of nodes
which each serve a different subset of the public addresses
(This used to be ctdb commit 889e0fe69e4c88c6166282b12843b8d9727552d6)
every time we release an ip.
this context is used to hold all resources needed when sending out
gratuitous arps and tcp tickles during ip takeover.
we hang it off the vnn structure that manages that particular ip address
instead, so that we can have multiple ones going in parallel.
this bug (or the same bug in a different shape) has probably been in ctdb
for a very long time but is likely to be hard to trigger
(This used to be ctdb commit c58db1cadaba253b2659573673b28c235ef7db76)
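The shape of the fix, as a sketch with illustrative names: each vnn owns its
own talloc context for takeover resources, so freeing one address's context
cannot touch another address's in-flight arps and tickles:

    #include <talloc.h>

    struct ctdb_vnn_s {
        /* resources for this address's takeover (arps, tickles) */
        TALLOC_CTX *takeover_ctx;
    };

    static int start_takeover(struct ctdb_vnn_s *vnn)
    {
        /* freed when this vnn's takeover finishes, independently of
         * takeovers running in parallel on other vnns */
        vnn->takeover_ctx = talloc_new(vnn);
        return (vnn->takeover_ctx == NULL) ? -1 : 0;
    }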
Support multiple public addresses spread across multiple interfaces on
each node.
this is a massive patch since we have previously made the assumption that
we only have one public address per node.
get rid of the public_interface argument. the public addresses file
now explicitly lists which interface the address belongs to
(This used to be ctdb commit 462ebbc791e906a6b874c862defea43235597ca8)
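An example of the resulting public_addresses file format, one
"address/mask interface" pair per line (the addresses and interface names
here are only illustrative):

    10.1.1.101/24 eth0
    10.1.2.101/24 eth1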
public address remain at that node until either the node becomes
unhealthy or the original/primary node for that address becomes healthy
again.
Otherwise what will happen is:
1, if we ban a node, the banning code immediately does a
takeover_run() and reassigns the public address to a different node in
the cluster.
2, a few seconds later (at most) the recovery daemon will detect that
the number of nodes has shrunk and will initiate a recovery.
During the recovery the public address would again be assigned to a
node, this time a different node.
(This used to be ctdb commit 30a6b7a648e22873d8ce6289a3d6dc42c4b9e3b3)
specific script /etc/ctdb/events.d/00.ctdb
get rid of CTDB_EVENTS_SCRIPT and --event-script
(This used to be ctdb commit 81ccfaf838e5772d4a58eb6a70224b7b39aba9f3)
instead of from /etc/ctdb/events, so that we can get better debugging
output in the logs when something fails in the scripts
(This used to be ctdb commit 4ed96b768aea1611e8002f7095d3c4d12ccf77a3)
tcp connection in the tree that stores the tcp connections to kill by
sending an RST
add a define that specifies the keylength instead of hardcoding it as 4
(This used to be ctdb commit 6a8322cbae10f2c78b2e286c75aeb25ece12ea7f)
we store in the tree and use a node destructor so that when the data is
talloc_free()d we also remove the node from the tree.
(This used to be ctdb commit b8dabd1811ebd85ee031563e95085f720a2fa04d)
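A sketch of that destructor pattern, using talloc_set_destructor(); the tree
helper and struct layout are hypothetical:

    #include <talloc.h>

    struct rb_tree;                     /* tree internals, opaque here */
    struct rb_node;

    /* hypothetical removal helper for the tree used by killtcp */
    void rb_tree_remove(struct rb_tree *tree, struct rb_node *node);

    /* the payload stored in the tree keeps a back-pointer to its node */
    struct killtcp_con {
        struct rb_tree *tree;
        struct rb_node *node;
    };

    /* runs when the data is talloc_free()d; removing the node here
     * keeps the tree from ever pointing at freed memory */
    static int killtcp_con_destructor(struct killtcp_con *con)
    {
        rb_tree_remove(con->tree, con->node);
        return 0;                       /* allow the free to proceed */
    }

    /* after inserting a connection into the tree:
     *     talloc_set_destructor(con, killtcp_con_destructor);
     */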
description (src + dst sockaddr_in) in a linked list.
every time we receive a captured packet from the network we had to walk
this list in linear time to see if the packet matched a connection we
wanted to RST, which wouldn't scale very well.
replace the linked list with a red-black tree that is indexed by
src address, src port, dst address, dst port
to make checking whether the packet belongs to a connection we want to
RST very fast and scalable
the reason we need to capture packets when we want to kill a TCP
connection is that we must wait for an ACK coming back from the
remote host, so that we can learn which sequence number to use in the
RST.
Most tcp stacks today will ignore any and all RST segments unless the
sequence number lies exactly on the right edge of the window, to make
spoofing RSTs a little bit more difficult.
(This used to be ctdb commit ced18caea8582af042287beb6333dd1f8ba3344d)
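A sketch of the key construction: four 32-bit words, one per field, so the
same connection always produces an identical key for the tree lookup (the
function name is illustrative):

    #include <stdint.h>
    #include <netinet/in.h>

    /* build the tree key from the connection 4-tuple; ports stay in
     * network byte order, which is fine as long as every lookup
     * builds the key the same way */
    static void killtcp_key(uint32_t key[4],
                            const struct sockaddr_in *src,
                            const struct sockaddr_in *dst)
    {
        key[0] = src->sin_addr.s_addr;
        key[1] = (uint32_t)src->sin_port;
        key[2] = dst->sin_addr.s_addr;
        key[3] = (uint32_t)dst->sin_port;
    }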
update addr to the source address so the printout in the log matches
the client that attached to samba
(This used to be ctdb commit 72098b71c79469c86769ca82bbd484c81902d27c)
by a talloc_steal()
use the pointer returned by talloc_steal() as the value to assign
(This used to be ctdb commit 5c6375ad3bbecfa725ec3b1477f259e5a8191866)
tickles) just talloc_steal the entire tcp_array into the arp
structure instead of copying each of the entries into a linked list
and then releasing the tcparray.
(This used to be ctdb commit 468e237740cf37a65872ef700bbb1284ede8352a)
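The talloc_steal() idiom both of the commits above rely on, as a small
sketch (the struct names are illustrative): talloc_steal() reparents the
memory and returns the pointer, so the return value is what gets assigned.

    #include <talloc.h>

    struct ctdb_tcp_array;              /* the tickle list, opaque here */

    struct takeover_arp {
        struct ctdb_tcp_array *tcparray;
    };

    /* reparent the whole array under arp in one step, with no
     * per-entry copying and no separate release of the old copy */
    static void adopt_tickles(struct takeover_arp *arp,
                              struct ctdb_tcp_array *tcparray)
    {
        arp->tcparray = talloc_steal(arp, tcparray);
    }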
tickled all connections
otherwise the other nodes will still remember this list until the next
time we have a connection/client closing.
(This used to be ctdb commit cb8e5d4bbee2f14f498735489f673ff3679dfd9d)
there is an array for each node/public address that contains tcp tickles
we send a TCP_ADD as a broadcast to all nodes when a client is added
if tcp tickles are removed, they are only removed immediately from the
local node.
once every 20 seconds a node will push/broadcast out the tickle list for
all public addresses it manages. this will remove any deleted tickles
from the remote nodes
(This used to be ctdb commit e3c432a915222e1392d91835bc7a73a96ab61ac9)
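A sketch of the periodic-push side of this protocol (the interval comes
from the text above; the names are illustrative): additions are broadcast
immediately, while deletions only take effect locally and propagate to the
other nodes via the next full push.

    #include <time.h>
    #include <stdbool.h>

    #define TICKLE_PUSH_INTERVAL 20     /* seconds */

    /* decide whether it is time to push/broadcast the full tickle
     * list for a public address this node manages */
    static bool should_push_tickles(time_t last_push, time_t now)
    {
        return (now - last_push) >= TICKLE_PUSH_INTERVAL;
    }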
ip/node
once we have started sending all tickles for a specific ip, delete the
entire array so that the tickles don't remain forever in the ctdb
server.
add a control to send the full list of every tickle that is registered
for a particular public ip/node
(This used to be ctdb commit d0eee33e44d3f8e26debbec21d41e2cbdbb520e6)
back to 0 if it is to prevent an infinite loop.
this could happen if in the future we add a mechanism to add/remove
nodes to a cluster at runtime
(This used to be ctdb commit 217e80a468713fec86ccb0608460e3401046bb98)
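A sketch of the wrap-around, which keeps a round-robin walk over the node
list terminating even if the node count changes at runtime (the names are
illustrative):

    /* advance a node cursor, wrapping back to 0 at the end of the list */
    static unsigned int next_node(unsigned int cur, unsigned int num_nodes)
    {
        cur++;
        if (cur >= num_nodes) {
            cur = 0;
        }
        return cur;
    }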