quotad and ganesha.nfsd print many log messages such as:
[rpc-clnt.c:1739:rpc_clnt_submit ] 0-<VOLUME_NAME>-quota: error returned while attempting to connect to host: (null), port 0
Change-Id: Ic0c815400619e4a87a772a51b19822920228c1ef
Updates: bz#1596787
Signed-off-by: Kinglong Mee <mijinlong@open-fs.com>
When a reconfigure happens, string values from one dictionary
are set directly in another dictionary. This can leave the second
dictionary pointing to invalid memory once the first dictionary is
freed. So use dict_set_dynstr_with_alloc() instead of dict_set_str().
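A minimal sketch of the pattern, assuming the usual GlusterFS dict API
(the key name here is illustrative):

    int ret = 0;
    char *value = NULL;

    /* Unsafe: dict_set_str() stores the pointer owned by old_dict,
     * which dangles once old_dict is freed. */
    ret = dict_get_str(old_dict, "some-key", &value);
    if (!ret)
        ret = dict_set_str(new_dict, "some-key", value);

    /* Safe: dict_set_dynstr_with_alloc() stores its own copy, freed
     * by the dict on its last unref. */
    if (!ret)
        ret = dict_set_dynstr_with_alloc(new_dict, "some-key", value);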
updates: bz#1650403
Change-Id: Id53236467521cfdeb07e7178d87ba6cf88d17003
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Problem:
https://review.gluster.org/#/c/glusterfs/+/21762/ migrated
rebalance commands from the op-sm framework to the mgmt v3
framework. In a heterogeneous cluster, if rebalance commands follow
the op-sm framework, localhost's information is not displayed in
the output of "gluster v rebalance <volname> status".
Cause:
Previously, without https://review.gluster.org/#/c/glusterfs/+/21762/,
rebalance commands followed the op-sm framework. In
glusterd_volume_rebalance_use_rsp_dict(), the current_index variable
keeps track of the number of peers in the trusted storage pool.
In op-sm, glusterd_volume_rebalance_use_rsp_dict() is called
only for the peers, so the current index should start from 2,
treating localhost as node 1.
With the above patch, rebalance commands follow the mgmt v3
framework. In mgmt v3, glusterd_volume_rebalance_use_rsp_dict()
is called for all nodes: for localhost it is called from the
brick-op function, and for peers it is called from the brick-op
callback function. So the current index value should start from 1.
https://review.gluster.org/#/c/glusterfs/+/21762/ changed the
value of the current index to 1. Because of this, in a heterogeneous
cluster, localhost's information is overwritten by one of the peers'
information, and rebalance status does not display localhost's
information in the output.
Solution: assign a value to the current index based on an op-version
check, as sketched below.
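A hedged sketch of the shape of the fix; the exact op-version macro
used for the check is an assumption here:

    /* In glusterd_volume_rebalance_use_rsp_dict(): pick the starting
     * index based on the cluster's op-version. */
    if (conf->op_version >= GD_OP_VERSION_6_0) {
        /* mgmt v3: called for all nodes, localhost included. */
        current_index = 1;
    } else {
        /* op-sm: called only for peers; localhost is node 1. */
        current_index = 2;
    }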
Change-Id: I2dfba1f007e908cf160acc4a4a5d8ef672572e4d
fixes: bz#1663243
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
When we run the profile info command, it should display statistics
for all the bricks of the volume. To display information for bricks
hosted on peers, we need to aggregate the responses from the
peers.
For the profile info command, all the statistics are added into
the dictionary in the brick-op phase. To aggregate the information
from peers, we need to call glusterd_syncop_aggr_rsp_dict() in the
brick-op callback function.
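A small sketch of that aggregation step as it might look in the
brick-op callback (error handling trimmed):

    int ret = 0;

    /* Merge this peer's response dict into the accumulated dict. */
    ret = glusterd_syncop_aggr_rsp_dict(op, aggr_dict, rsp_dict);
    if (ret)
        gf_log(THIS->name, GF_LOG_ERROR,
               "failed to aggregate the peer's response dict");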
fixes: bz#1663223
Change-Id: I5f5890c3d01974747f829128ab74be6071f4aa30
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Problem:
glusterd acquires a cleanup mutex lock before it starts the
cleanup process, so that any other thread which tries to acquire
a lock on any resource will be blocked on the cleanup mutex lock.
We don't want any thread to acquire any resource once the cleanup
has started, because other threads might try to acquire locks on
resources which were already freed by the thread going through the
cleanup phase.
Previously we released the cleanup mutex lock before the process
exit. Because of this, before the process could exit, some other
thread blocked on the cleanup mutex lock would acquire it and try
to acquire resources which were already freed as part of cleanup.
This led glusterd to crash.
Solution: We should exit the process without releasing the
cleanup mutex lock.
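A minimal sketch of the idea (names are illustrative):

    pthread_mutex_lock(&ctx->cleanup_lock);
    {
        /* Tear down resources; other threads block on cleanup_lock. */
        glusterfs_ctx_cleanup(ctx);
    }
    /* Deliberately no unlock: exit while still holding the lock so a
     * blocked thread can never wake up and touch freed resources. */
    exit(0);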
Change-Id: Ibae1c62260f141019017f7a547519a5d38dc2bb6
fixes: bz#1654270
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
This is a follow-up patch to a comment received on
https://review.gluster.org/#/c/glusterfs/+/21882
We need not hold the fs->mutex lock to log an error message.
Change-Id: I29d2ea2e6cfecc3dd94982bd48f4bc9f11cc3aac
fixes: bz#1660577
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This patch fixes coverity issues in api/src/glfs-fops.c.
CID: 1389247, 1389296, 1389369, 1389392.
All coverity defects are of type Mixing enum types (MIXED_ENUMS).
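For illustration, the class of defect Coverity reports as MIXED_ENUMS
(the enum types here are made up):

    typedef enum { FOP_READ, FOP_WRITE } fop_t;
    typedef enum { RET_OK, RET_ERR } ret_t;

    ret_t r = FOP_READ;   /* value of one enum type assigned to a
                           * variable of another: flagged as MIXED_ENUMS */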
updates: bz#789278
Change-Id: I007bb317ed5f0b8ddaf94a93b3a4d02b1e74cb8d
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Defect: CID 1398469 - Calling strncpy with a maximum size argument
of 4096 bytes on a destination array "key" of size 4096 bytes might
leave the destination string unterminated.
Fix: Use snprintf instead of strncpy.
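The pattern of the fix, sketched (the buffer matches the defect
report; the source string is illustrative):

    char key[4096];

    /* strncpy() does not NUL-terminate key if src fills all 4096
     * bytes, leaving the string unterminated. */
    strncpy(key, src, sizeof(key));

    /* snprintf() always NUL-terminates, truncating if necessary. */
    snprintf(key, sizeof(key), "%s", src);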
updates: bz#789278
Change-Id: I4fdcd0cbf3af8b2ded94603d92d1ceb4112284c4
Signed-off-by: Harpreet Kaur <hlalwani@redhat.com>
They were not used at all, just taking space.
I've also marked all those that are not really common but are used
in just one place - they should probably move there (in follow-up
patches).
As a test, I've removed the unused private enums from the stripe
xlator and moved one that was in the common list, but only used in
the stripe code, to be a private enum.
Compile-tested only!
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I1158dc1d259f1fd3f69904336c46c9d83cea799f
This added hint helps capture ASan logs for the daemon processes;
with this, one can start using ASan for regression tests.
updates: bz#1633930
Change-Id: I3b39892d45d29ae514dad8ab10f65703c02003f1
Signed-off-by: Amar Tumballi <amarts@redhat.com>
For implementing copy_file_range fop, AFR needs to perform two inodelks in the
same transaction. This patch brings in the necessary structure to make it
easier to do so.
Entry-locks in AFR were already taking multiple entry-locks on
different inodes with the respective basenames. This patch extends
that logic to inodelks using the same lockee_t structure. This led
to the removal of quite a lot of duplicate code in afr-lk-common.c,
as both kinds of locks do the same things except for the 'winding'
part.
updates: #536
Change-Id: Ibfce7e3f260bb27b18645152ec680c33866fe0ae
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Defect: Code can never be reached because the condition
queue_index > 1024 cannot be true.
CID: 1398471 Logically dead code
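For illustration, the shape of such a defect (not the actual code):

    uint32_t queue_index = hash % 1024;   /* always within [0, 1023] */

    if (queue_index > 1024) {
        /* Logically dead: this branch can never execute. */
        return -1;
    }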
updates: bz#789278
Change-Id: I367cda7e734f6d774900a58d8664cffcab69126f
Signed-off-by: Sheetal Pamecha <sheetal.pamecha08@gmail.com>
Squash some ugly warnings, and make the code a little bit simpler by
removing some unneeded goto jumps.
On Ubuntu 16.04 the following warnings were reported by Amudhan:
CC barrier.lo
barrier.c: In function ‘notify’:
barrier.c:499:33: warning: switch condition has boolean value [-Wswitch-bool]
switch (past) {
^
barrier.c: In function ‘reconfigure’:
barrier.c:565:25: warning: switch condition has boolean value [-Wswitch-bool]
switch (past) {
^
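A plausible shape of the fix (sketch only): replace the switch on the
boolean with an if/else.

    /* before: switch (past) { case _gf_true: ... case _gf_false: ... } */
    if (past) {
        /* barrier enabled */
    } else {
        /* barrier disabled */
    }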
Change-Id: Ifb6b75058dff8c789b729c76530a1358d391f4d1
Updates: bz#1193929
Reported-by: Amudhan P <amudhan83@gmail.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Rearrange the dht_lookup_cbk code to make
it easier to understand.
Also correct a message in dht_linkfile_create_lookup_cbk.
Change-Id: Id41db9ef901732f0410f1c007807362c630218ff
fixes: bz#1590385
Signed-off-by: N Balachandran <nbalacha@redhat.com>
A given epoll thread can handle only one incoming (POLLIN) request.
And until the socket is rearmed for listening, it is guaranteed that
there won't be any new incoming requests. As a result, the priv->in_lock
which guards the socket proto state machine seems redundant.
This patch removes priv->in_lock.
Change-Id: I26b6ddd852aba8c10385833b85ffd2e53e46cb8c
updates: bz#1467614
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
In some cases (e.g., when there are multiple
RPC_CLNT_CONNECT notifications), multiple threads may fetch the
volfile and try to update it in the 'fs' object simultaneously.
Hence, protect access to those variables under the fs->mutex lock.
Change-Id: Idaee9548560db32d83f4c04ebb1f375fee7864a9
fixes: bz#1660577
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This test "tests/bugs/core/bug-1432542-mpx-restart-crash.t" case creates 20 2x3
volumes after enabling brick_mux.At the time of creating last volume brick is getting
OOM because brick consumption has increased from previous consumption due to these patches
https://review.gluster.org/#/c/glusterfs/+/19997/,
https://review.gluster.org/#/c/glusterfs/+/20362/
To avoid OOM reduce NUM_VOLS to 15 so that brick consumption has reduced
Change-Id: Ib98b47a3db6b990ff22c7e57396d51e7fef5c7e8
fixes: bz#1661214
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
Adaptive mutexes are used to protect critical/shared data items
that are held for short periods. They provide a balance between
spin locks and traditional mutexes. We observed some improvement
after using an adaptive mutex in rpcsvc_program_register.
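A sketch of how an adaptive mutex is typically set up with glibc (the
mutex being initialized is illustrative):

    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* PTHREAD_MUTEX_ADAPTIVE_NP spins briefly before sleeping: a
     * middle ground between a spinlock and a traditional mutex
     * (glibc-specific). */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
    pthread_mutex_init(&prog->lock, &attr);
    pthread_mutexattr_destroy(&attr);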
Change-Id: I7905744b32516ac4e4ca3c83c2e8e5e306093add
fixes: bz#1660701
* we shouldn't be using 'local' after DHT_STACK_UNWIND() as it frees
the content of local. Add a 'goto out' or similar logic to handle
the situation (see the sketch after this list).
* fix a possible oversight of calling unref(dict) instead of
unref(xdata).
* make coverity happy by re-ordering an unref in meta-defaults.
* gfid-access: re-order dictionary allocation so we don't have to
do an extra unref.
* other obvious errors reported.
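A sketch of the first item, the use-after-unwind pattern (the
callback shape is illustrative):

    dht_local_t *local = frame->local;

    DHT_STACK_UNWIND(lookup, frame, op_ret, op_errno, inode, stbuf,
                     xattr, postparent);
    /* WRONG from here on: the unwind destroys frame->local, so any
     * use of 'local' now reads freed memory. Finish with 'local'
     * (or 'goto out') before unwinding. */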
updates: bz#789278
Change-Id: If05961ee946b0c4868df19861d7e4a927a2a2489
Signed-off-by: Amar Tumballi <amarts@redhat.com>
The default value of shard-block-size was changed from 4MB
to 64MB some time back. The script "fallocate"s a 6MB file
and expects it to have 1 shard under .shard: with a 4MB
shard-block-size, the first 4MB of the file lives in the base file
and the remaining 2MB becomes one shard under .shard. With the
default value now at 64MB, file "file1" won't have any shards under
.shard, and the stat on the 1st shard's path fails with ENOENT.
Changed the script to explicitly set shard-block-size to 4MB.
Change-Id: I7f1785922287d16d74c95fa57cbbe12e6e66e4f7
fixes: bz#1656264
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
With brick mux, the number of threads increases as the number of
bricks increases. As an initiative to reduce the number of
threads in the brick mux scenario, replace the janitor thread with
the synctask infra.
Now close() and closedir() are handled by a separate janitor
task which is linked with glusterfs_ctx.
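A sketch of the synctask-based approach; synctask_new() is the real
infra entry point, while the janitor function names are assumptions:

    int ret = 0;

    /* Run the janitor work as a synctask instead of a dedicated
     * thread; janitor_fn would do the close()/closedir() work. */
    ret = synctask_new(ctx->env, janitor_fn, janitor_done_cbk, NULL, ctx);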
Updates #475
Change-Id: I0c4aaf728125ab7264442fde59f3d08542785f73
Signed-off-by: Poornima G <pgurusid@redhat.com>
Used snprintf instead of sprintf, and if the source string is
bigger than the destination then logged a warning message.
clang warning: ‘%s’ directive writing up to 1024 bytes into a region
of size 108.
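The shape of the change, sketched (sizes taken from the warning; the
log call is simplified):

    char dest[108];
    int n;

    n = snprintf(dest, sizeof(dest), "%s", src);
    if (n >= (int)sizeof(dest)) {
        /* The source was bigger than the destination: warn about the
         * truncation instead of overflowing as sprintf() would. */
        gf_log(THIS->name, GF_LOG_WARNING, "string truncated");
    }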
updates: bz#1622665
Change-Id: Ia5e7c53d35d8178dd2c75708698599fe8bded5de
Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
Problem: When trying to convert a plain distribute volume to
replica-3 or arbiter type, it fails with an ENOTCONN error, as the
lookup on the root will fail because there is no quorum.
Fix: Allow lookup on root if it is coming from the ADD_REPLICA_MOUNT
which is used while adding bricks to a volume. It will try to set the
pending xattrs for the newly added bricks to allow the heal to happen
in the right direction and avoid data loss scenarios.
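A hedged sketch of the quorum exception; the pid constant name is
inferred from this message, and the surrounding logic is assumed:

    /* Allow the lookup on root without quorum when it comes from
     * the special mount used while adding bricks. */
    if (frame->root->pid == GF_CLIENT_PID_ADD_REPLICA_MOUNT &&
        __is_root_gfid(loc->gfid)) {
        /* proceed with the lookup instead of failing with ENOTCONN */
    }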
Note: This fix solves the problem of type conversion only in the
case where the volume was mounted at least once. The conversion of
never-mounted volumes will still fail, since the dht selfheal that
tries to set the directory layout will fail, as it runs with the
PID GF_CLIENT_PID_NO_ROOT_SQUASH set in frame->root.
Change-Id: Ic511939981dad118cc946754341318b164954b3b
fixes: bz#1655854
Signed-off-by: karthik-us <ksubrahm@redhat.com>
The current implementation of iobuf_pool has two problems:
- prealloc of 12.5MB memory; this limits the scale factor of the
gluster processes due to RAM requirements
- lock contention, as the current implementation has one global
iobuf_pool lock. Credits for debugging and addressing this go to
Krutika Dhananjay <kdhananj@redhat.com>. Issue: #410
Hence, change the iobuf implementation to use a per-thread mem pool.
This may theoretically appear to cause a perf dip, as there is no
preallocation. But the per-thread mem pool will not have a
significant perf impact, as the last allocated memory is kept alive
for subsequent allocs for some time. The worst case is when the
iobufs requested are of random sizes each time; the best case is
when we get iobuf requests of the same size. In the perf tests,
this patch did not seem to cause any perf decrease.
Note that, with this patch, the rdma performance is going to degrade
drastically. One of the previous patchsets had fixes to avoid
degrading rdma perf, but rdma is not supported and also not
tested [1]. Hence the decision was to not keep code in rdma that is
untested and unsupported.
[1] https://lists.gluster.org/pipermail/gluster-users.old/2018-July/034400.html
Updates: #325
Change-Id: Ic2ef3bd498f9250dea25f25ba0c01fde19584b27
Signed-off-by: Poornima G <pgurusid@redhat.com>
Currently io-cache invalidates pages falling in the range of a
write. Instead, it can update those pages with the same data so
that reads can make use of the cache.
credits: Xavi Hernandez <xhernandez@redhat.com>
Change-Id: I932bd3da97ddfd464187f3009b1013eb334f00a7
Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
updates: bz#1659869
With read-after-open set to yes by default, if open-behind sees
any reads, it will do an open on the backend (and hence a
flush/release later). This means that with the current order of
quick-read and open-behind, open-behind sees all reads and hence
also does opens, bringing down performance for small-file reads.
Since for small files reads are absorbed by quick-read, if
quick-read is made a parent of open-behind, ob doesn't witness any
reads. For read-only workloads, this means ob doesn't do any opens
(even with read-after-open yes and use-anonymous-fd no).
Change-Id: I138a42b006d104cff43ee6f07829e39c36f6f234
Signed-off-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
Fixes: bz#1659327
Added '-Wvla' and saw this - gcc doesn't like variable-length
arrays. There are plenty of others in the EC code, but this one
seems OK to remove: there is no use for the array members (I hope -
that is from reading the code).
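For illustration, the construct '-Wvla' flags (not the actual EC
code):

    void
    example(int count)
    {
        int values[count];   /* variable-length array: flagged by
                              * -Wvla; use a fixed size or a heap
                              * allocation instead */
        (void)values;
    }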
Compile-tested only!
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Change-Id: I350f4520e52b86c8bbcd60eea1b27ef99cd119aa
As glusterd scans through all the ports in its defined range, from
RHEL 7.3 onwards any port beyond 60999 isn't within the ephemeral
port range, and the following AVC denial message is seen:
type=AVC msg=audit(1471946614.154:109): avc: denied { name_bind } for
pid=2302 comm="glusterd" src=61000 scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket
The fix is to define the max port as 60999 in the glusterd.vol
file. The port range can still be tweaked through a reconfigure of
this configuration file.
Fixes: bz#1659857
Change-Id: I60fd4a421d8509b8dca4ca13b73999ae33965f72
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Current rebalance commands use the op state machine (op-sm)
framework. Port them to use the mgmt_v3 framework.
Change-Id: I6faf4a6335c2e2f3d54bbde79908a7749e4613e7
fixes: bz#1655827
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Currently the mem-pool implementation provides an API to get an
object from the mem pool based on the struct type. This is to
retain API compatibility with the old implementation of mem pool.
Internally, the mem pool structure has a mapping from struct types
to size-based pools.
In this patch, we add new APIs to fetch memory from the mem pool
given a size.
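A hedged sketch of the two styles; the size-based function name
below is an assumption for illustration, not necessarily the
patch's final API:

    /* Existing, type-based API. */
    struct mem_pool *pool = mem_pool_new(data_t, 512);
    data_t *d = mem_get(pool);
    mem_put(d);

    /* New, size-based fetch (name assumed). */
    void *buf = mem_get_from_size(64);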
Change-Id: Ib220ee45ebd134a7be8f6482db5a592dbb7b9211
Updates: #325
Signed-off-by: Poornima G <pgurusid@redhat.com>
Problem: When a separate ssh key is given for a non-root user,
setting the slave volume read-only option results in failure.
Solution: Check for the extra param in case a separate key is given
for the non-root user, and take action accordingly.
Change-Id: Iafe9a2aa6b86cde1dcd7d63771048a6ae33c2cde
fixes: bz#1659971
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
To handle the race condition between a fop or a function accessing
priv->path and a reconfigure changing priv->path (because the entry
point directory changed), the private structure's path is guarded
by a lock.
updates: bz#1650403
Change-Id: I61c539da06d68d38eafcf2155699c7702f31323e
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
In a previous patch (https://review.gluster.org/20769) we added the
key length to be passed to dict_* funcs, to remove the need to
strlen() it. This patch moves some xlators to use it, as sketched
below.
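A small sketch of the keyed-length style (assuming the *_strn dict
variants and the SLEN() macro for literal keys):

    char *value = NULL;

    /* The key length of a literal is computed at compile time via
     * SLEN() instead of strlen() at run time inside the dict code. */
    ret = dict_get_strn(dict, "volume-name", SLEN("volume-name"), &value);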
- In some cases, moved the strlen() of the key outside of locks,
which is usually a good thing. Please verify it's safe to do so.
- In some cases, created a prefix for the keys, replacing something
like "%d-%d" with a "%s" in snprintf(). Not sure it adds value, but
it improves readability.
Please review carefully.
Compile-tested only!
Change-Id: I04f2a1eb2ecfc3283d849d150d10d088ae7aa7f1
updates: bz#1193929
Signed-off-by: Yaniv Kaul <ykaul@redhat.com>
Traceback:
Direct leak of 765 byte(s) in 9 object(s) allocated from:
#0 0x7ffb9cad2c48 in malloc (/lib64/libasan.so.5+0xeec48)
#1 0x7ffb9c5f8949 in __gf_malloc ./libglusterfs/src/mem-pool.c:136
#2 0x7ffb9c5f91bb in gf_vasprintf ./libglusterfs/src/mem-pool.c:236
#3 0x7ffb9c5f938a in gf_asprintf ./libglusterfs/src/mem-pool.c:256
#4 0x7ffb826714ab in afr_get_heal_info ./xlators/cluster/afr/src/afr-common.c:6204
#5 0x7ffb825765e5 in afr_handle_heal_xattrs ./xlators/cluster/afr/src/afr-inode-read.c:1481
#6 0x7ffb825765e5 in afr_getxattr ./xlators/cluster/afr/src/afr-inode-read.c:1571
#7 0x7ffb9c635af7 in syncop_getxattr ./libglusterfs/src/syncop.c:1680
#8 0x406c78 in glfsh_process_entries ./heal/src/glfs-heal.c:810
#9 0x408555 in glfsh_crawl_directory ./heal/src/glfs-heal.c:898
#10 0x408cc0 in glfsh_print_pending_heals_type ./heal/src/glfs-heal.c:970
#11 0x408fc5 in glfsh_print_pending_heals ./heal/src/glfs-heal.c:1012
#12 0x409546 in glfsh_gather_heal_info ./heal/src/glfs-heal.c:1154
#13 0x403e96 in main ./heal/src/glfs-heal.c:1745
#14 0x7ffb99bc411a in __libc_start_main ../csu/libc-start.c:308
The dictionary is referenced by the caller to print the status.
So set the value as a dynstr; the last unref of the dictionary will
free it.
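A sketch of the fix's pattern (the key name is illustrative):

    char *status = NULL;

    ret = gf_asprintf(&status, "Number of entries: %d", num_entries);
    if (ret < 0)
        goto out;

    /* The dict takes ownership: the last unref of the dict frees
     * 'status' instead of it leaking. */
    ret = dict_set_dynstr(dict, "heal-info-status", status);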
updates: bz#1633930
Change-Id: Ib5a7cb891e6f7d90560859aaf6239e52ff5477d0
Signed-off-by: Kotresh HR <khiremat@redhat.com>