498 Commits

Author SHA1 Message Date
Kaushal M
9167857667 cli: More checks in rebalance status output
Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
BUG: 1031887
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6337
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-12-03 15:19:25 -08:00
Aravinda VK
0e5837a449 cli: xml: Rebalance status(xml) was empty when a glusterd is down
When a glusterd is down in the cluster, rebalance/remove-brick status
--xml fails to get the status and returns null.

This patch skips collecting status from the glusterd that is down, and
collects status from all the other nodes that are up.
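
Example (volume and brick names are illustrative):

gluster volume rebalance <VOLNAME> status --xml
gluster volume remove-brick <VOLNAME> <BRICK> status --xml

With this patch, a node whose glusterd is down is simply left out of the
collected status instead of the whole document coming back empty.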

Change-Id: I6df0feef41b5cc817cc8d7820ee2acac95176a98
BUG: 1036564
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/6391
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-12-02 21:47:47 -08:00
Krutika Dhananjay
182bad8bfd cli, glusterd: More quota fixes ...
... which may be grouped under the following categories:

1. Fix incorrect cli exit status for 'quota list' cmd
2. Print appropriate error message on quota parse errors in cli

        Authored by: Anuradha Talur <atalur@redhat.com>

3. glusterd: Improve quota validation during stage-op
4. Fix peer probe issues resulting from quota conf checksum mismatches
5. Enhancements to CLI output in the event of quota command failures

        Authored by: Kaushal Madappa <kmadappa@redhat.com>

7. Move aux mount location from /tmp to /var/run/gluster

        Authored by: Krishnan Parthasarathi <kparthas@redhat.com>

8. Fix performance issues in quota limit-usage

        Authored by: Krutika Dhananjay <kdhananj@redhat.com>

Note: Some functions that were used in an earlier version of quota,
      but aren't called anymore, have been removed.

Change-Id: I9d874f839ae5fdcfbe6d4f2d727eac091f27ac57
BUG: 969461
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6366
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-11-30 10:15:05 -08:00
Raghavendra G
0d5cd92f51 cli/glusterd: Changes to quota command
Quota feature re-work.

Following are the cli commands that are new/re-worked:
======================================================

volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>| default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool]
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]

glusterd changes:
=================
* Quota limits are now set as extended attributes by glusterd from
  the aux mount created by the cli.
* The gfids of the directories on which quota limits are set
  for a given volume are stored in the
  /var/lib/glusterd/vols/<volname>/quota.conf file in binary format,
  and their cksum and version are stored in
  /var/lib/glusterd/vols/<volname>/quota.cksum.
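
Example (illustrative workflow; volume name and path are hypothetical):

gluster volume quota testvol enable
gluster volume quota testvol limit-usage /dir1 10GB
gluster volume quota testvol list /dir1
gluster volume quota testvol disable

Behind the scenes, glusterd applies the limit as an extended attribute via
the aux mount and records the directory's gfid in quota.conf as described
above.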

Original-author: Krutika Dhananjay <kdhananj@redhat.com>
Original-author: Krishnan Parthasarathi <kparthas@redhat.com>

BUG: 969461
Change-Id: If32bba36c67f9c2a30417af9c6389045b2b7c13b
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-on: http://review.gluster.org/6003
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-11-26 10:25:27 -08:00
Kaushal M
bc9f0bb5ce cli: List only nodes which have rebalance started in rebalance status
Listing the nodes on which rebalance hasn't been started is just giving
out extraneous information.

Also, refactor the rebalance status printing code into a single function
and use it for both rebalance and remove-brick status.

BUG: 1031887
Change-Id: I47bd561347dfd6ef76c52a1587916d6a71eac369
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/6300
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-11-20 11:30:25 -08:00
M. Mohan Kumar
56b82b5294 Fix xml compilation error
Compiling GlusterFS without the xml package results in the following build error

cli-rpc-ops.o: In function `gf_cli_status_cbk':
/home/mohan/Work/glusterfs/cli/src/cli-rpc-ops.c:6430: undefined
reference to `cli_xml_output_vol_status_tasks_detail'

Change-Id: I49b3c46ac3340c40e372bef4690cedb41df20e8a
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/6295
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-11-19 09:59:28 -08:00
Bala.FA
432cecfbff cli: add peerid to volume status xml output
This patch adds a <peerid> tag to bricks and nfs/shd-like services in the
volume status xml output.
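
Rough sketch of the new tag (values and the surrounding element names are
assumptions, shown only to indicate placement):

<node>
  <hostname>localhost</hostname>
  <peerid>883626f8-4d29-4d02-8c5d-c9f48c5b2445</peerid>
  ...
</node>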

BUG: 955548
Change-Id: I9aaa9266e4d56f632235eaeef565e92d757c0694
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6162
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
2013-11-14 23:31:32 -08:00
M. Mohan Kumar
48c40e1a42 bd: posix/multi-brick support to BD xlator
Current BD xlator (block backend) has a few limitations such as
* Creation of directories not supported
* Supports only single brick
* Does not use extended attributes (and client gfid) like posix xlator
* Creation of special files (symbolic links, device nodes etc) not
  supported

The basic limitation of not allowing directory creation blocks
oVirt/VDSM from consuming the BD xlator as part of a Gluster domain, since
VDSM creates multi-level directories when GlusterFS is used as the storage
backend for storing VM images.

To overcome these limitations, a new BD xlator with the following
improvements is suggested.

* New hybrid BD xlator that handles both regular files and block device
  files
* The volume will have both POSIX and BD bricks. Regular files are
  created on POSIX bricks, block devices are created on the BD brick (VG)
* BD xlator leverages the existing POSIX xlator for most POSIX calls and
  hence sits above the POSIX xlator
* Block device file is differentiated from regular file by an extended
  attribute
* The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
  posix file to Logical Volume (LV).
* When a client sends a request to set BD_XATTR on a posix file, a new
  LV is created and mapped to the posix file. So every block device will
  have a representative file in the POSIX brick with 'user.glusterfs.bd'
  (BD_XATTR) set.
* Hereafter, all operations on this file result in LV-related
  operations.

For example opening a file that has BD_XATTR set results in opening
the LV block device, reading results in reading the corresponding LV
block device.

When the BD xlator gets a request to set BD_XATTR via a setxattr call, it
creates an LV, and information about this LV is placed in the xattr of the
posix file. The xattr "user.glusterfs.bd" is used to identify that a posix
file is mapped to a BD.

Usage:
Server side:
[root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
It creates a distributed gluster volume 'bdvol' with Volume Group vg1
using posix brick /storage/vg1_info in host1 and Volume Group vg2 using
/storage/vg2_info in host2.

[root@host1 ~]# gluster volume start bdvol

Client side:
[root@node ~]# mount -t glusterfs host1:/bdvol /media
[root@node ~]# touch /media/posix
This creates a regular posix file 'posix' in either the host1:/vg1 or host2:/vg2 brick
[root@node ~]# mkdir /media/image
[root@node ~]# touch /media/image/lv1
This also creates a regular posix file 'lv1' in either the host1:/vg1 or
host2:/vg2 brick
[root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
[root@node ~]#
The above setxattr results in creating a new LV in the corresponding
brick's VG and sets 'user.glusterfs.bd' to the value 'lv:<default-extent-size>'
[root@node ~]# truncate -s5G /media/image/lv1
This resizes LV 'lv1' to 5G

New BD xlator code is placed in xlators/storage/bd directory.

Also add the volume-uuid to the VG so that the same VG can't be used for
other bricks/volumes. After deleting a gluster volume, one has to manually
remove the associated tag using vgchange <vg-name> --deltag
<trusted.glusterfs.volume-id:<volume-id>>

Changes from previous version V5:
* Removed support for delayed deleting of LVs

Changes from previous version V4:
* Consolidated the patches
* Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR.

Changes from previous version V3:
* Added support in FUSE to support full/linked clone
* Added support to merge snapshots and provide information about origin
* bd_map xlator removed
* iatt structure used in inode_ctx. iatt is cached and updated during
fsync/flush
* aio support
* Type and capabilities of volume are exported through getxattr

Changes from version 2:
* Used inode_context for caching BD size and to check if loc/fd is BD or
  not.
* Added GlusterFS server offloaded copy and snapshot through setfattr
  FOP. As part of this libgfapi is modified.
* BD xlator supports stripe
* During unlinking, if an LV file is already opened, it is added to a delete
  list and bd_del_thread tries to delete from this list when the last
  reference to that file is closed.

Changes from previous version:
* gfid is used as name of LV
* ? is used to specify VG name for creating BD volume in volume
  create, add-brick. gluster volume create volname host:/path?vg
* open-behind issue is fixed
* A replicate brick can be added dynamically and LVs from source brick
  are replicated to destination brick
* A distribute brick can be added dynamically and rebalance operation
  distributes existing LVs/files to the new brick
* Thin provisioning support added.
* bd_map xlator support retained
* setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
  setfattr -n user.glusterfs.bd -v "thin" creates thin LV
* Capability and backend information added to gluster volume info (and
  --xml) so that management tools can exploit BD xlator.
* tracing support for bd xlator added

TODO:
* Add support to display snapshots for a given LV
* Display posix filename for list-origin instead of gfid

Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/4809
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-11-13 11:38:42 -08:00
M. Mohan Kumar
15a8ecd9b3 bd_map: Remove bd_map xlator
Remove bd_map xlator and CLI related changes.

Change-Id: If7086205df1907127c1a1fa4ba603f1c48421d09
BUG: 1028672
Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Reviewed-on: http://review.gluster.org/5747
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-11-13 11:38:28 -08:00
Vijaykumar M
27935ee84c cli: Set the o/p width of hostname to 8 characters
Change-Id: I91dcb19ba4d31c17e6041155c0e59af457b87f1b
BUG: 1028871
Signed-off-by: Vijaykumar M <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/6245
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-11-11 10:23:02 -08:00
Dawit Alemu
cbb47056ab cli: write 'volume rebalance' error message in xml format when
--xml is specified

When 'volume rebalance' encounters an error, the cli prints the
error message in plain text regardless of whether --xml is
specified. This throws off client applications that expect xml
output (as mentioned in bz1026143).

Now, if the --xml flag is supplied, the cli prints 'volume
rebalance' error messages in xml format.
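
Hedged sketch of the intent (the exact error document is not reproduced
here; tag names follow the <cliOutput> convention used elsewhere in this
log, and the opRet value is an assumption):

gluster volume rebalance <VOLNAME> start --xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>...</opErrno>
  <opErrstr>error message describing the failure</opErrstr>
</cliOutput>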

Change-Id: I16c6a7a4cdd2819eb73422ab849125986dc299a6
BUG: 1026143
Signed-off-by: Dawit Alemu <dalemu@redhat.com>
Reviewed-on: http://review.gluster.org/6242
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-11-10 23:50:31 -08:00
Ravishankar N
11e28d3aed glusterd: modify remove-brick CLI message.
In the current context, "replica_cnt" is used just to know whether the
specific key exists or not by calling "dict_get_int32", which we can
replace with "dict_get ()". The log message is also changed, since it is
more appropriate to say "migration of data" rather than "rebalance".

This patch refactors commit 51c6fa7a354826744de98 against BZ 961669

reviewed on : http://review.gluster.org/5566

Change-Id: I48eae206a28d4083975e64407ed8fe4539f9c24b
BUG: 1027270
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Original patch: Susant Palai <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/6001
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: susant palai <spalai@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-11-07 23:54:26 -08:00
Anuradha
fc86b3a22a glusterd : Improved quota volume reset command
The quota volume reset command without the "force" option has been
fixed and no longer fails. It resets unprotected fields but not
the protected ones.

Also, an appropriate message is provided to the user
for the following cases:
1. Only unprotected fields are reset; the "force" option
should be used to reset protected fields.
2. Both protected and unprotected fields are reset.
3. No field was reset; the "force" option is required.

Test case for the same also added.
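
Minimal usage sketch, assuming the standard volume reset syntax (quota
options are among the protected fields referred to above):

gluster volume reset <VOLNAME>          (resets only unprotected fields)
gluster volume reset <VOLNAME> force    (resets protected fields as well)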

Change-Id: I24e8f1be87b79ccd81bf6f933e00608b861c7a16
BUG: 1022905
Signed-off-by: Anuradha <atalur@redhat.com>
Reviewed-on: http://review.gluster.org/6135
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-10-28 11:39:21 -07:00
Kaushal M
fc637b14cf cli,glusterd: Changes to cli-glusterd communication
Glusterd changes:
With this patch, glusterd creates a socket file at
DATADIR/run/glusterd.socket, and listens on it for cli requests. It
listens for 2 rpc programs on the socket file,
- The glusterd cli rpc program, for all cli commands
- A reduced glusterd handshake program, just for the 'system:: getspec'
  command

The location of the socket file can be changed with the glusterd option
'glusterd-sockfile'.

To retain compatibility with the '--remote-host' cli option, glusterd
also listens for the cli requests on port 24007. But, for the sake of
security, it listens using a reduced cli rpc program on the port. The
reduced rpc program only contains read-only procs used for 'volume
(info|list|status)', 'peer status' and 'system:: getwd' cli commands.

CLI changes:
The gluster cli now uses the glusterd socket file for communicating with
glusterd by default. A new option '--gluster-sock' has been added to
allow specifying the sockfile used to connect. Using the '--remote-host'
option will make cli connect to the given host & port.
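
Sketch of the two connection modes described above (socket path, hostname
and the option=value spelling are assumptions):

gluster volume info                                  (local socket file by default)
gluster --gluster-sock=/var/run/glusterd.socket volume info
gluster --remote-host=server1 volume status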

Tests changes:
cluster.rc has been modified to make use of socket files and use
different log files for each glusterd.
Some of the tests using cluster.rc have been fixed.

Change-Id: Iaf24bc22f42f8014a5fa300ce37c7fc9b1b92b53
BUG: 980754
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5280
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-10-17 11:26:45 -07:00
Venkatesh Somyajulu
75caba6371 cluster/afr: [Feature] Command implementation to get heal-count
Currently, to know the number of files to be healed, the user either
has to go to the backend and check the number of entries present in
the indices/xattrop directory, or run the
"gluster volume heal vol-name info" command. But if a volume consists
of a large number of bricks, going to each backend and counting the
entries is a time-consuming task, and if the number of entries in the
indices/xattrop directory is very huge, the info command will also
consume a lot of time.

So, as a feature, a new command is implemented.

Command 1: gluster volume heal vn statistics heal-count
This command will get the number of entries present in
every brick of a volume. The output displays only entries
count.

Command 2: gluster volume heal vn statistics heal-count
           replica 192.168.122.1:/home/user/brickname

           Here, if we are concerned with just one replica,
providing any one of the bricks of that replica will get
the number of entries to be healed for that replica only.

Example:
Replicate volume with replica count 2.

Backend status:
--------------
[root@dhcp-0-17 xattrop]# ls -lia | wc -l
1918

NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
entries, so the actual number of entries to be healed is
1916.

[root@dhcp-0-17 xattrop]# pwd
/home/user/2ty/.glusterfs/indices/xattrop

Command output:
--------------
Gathering count of entries to be healed on volume volume3 has been successful

Brick 192.168.122.1:/home/user/22iu
Status: Brick is Not connected
Entries count is not available

Brick 192.168.122.1:/home/user/2ty
Number of entries: 1916

Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
BUG: 1015990
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/6044
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-10-14 14:41:54 -07:00
Venkatesh Somyajulu
047882750e cluster/afr : Implementation of command "gluster volume heal vn statistics"
"gluster volume heal volumename statistics" command gives the summary
of the afr crawl done based on the entries present in the xattrop
directory. Whenever afr crawls are attempted, the beginning time of
crawl, end time of crawl, no of files healed, heal-failed count and
number of files in split brain are shown along with the type of the
crawl. If crawl is already in progress then it will give the number
of files healed, heal failed count and number of files in split-brain
from the beginning of the crawl and instead of telling the end time of
the crawl, "CRAWL IN PROGRESS" message will be shown.

Output format:
command: "gluster volume heal volume-name statistics"
Output:
Gathering afr crawl statistics crawl statistics on volume volume-name
has been successful
------------------------------------------------

Crawl statistics for brick no 0
Hostname of brick 192.168.122.248

Starting time of crawl: Wed Jul 10 15:52:38 2013

Ending time of crawl: Wed Jul 10 15:52:38 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:38 2013

Ending time of crawl: Wed Jul 10 15:52:38 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

------------------------------------------------

Crawl statistics for brick no 1
Hostname of brick 192.168.122.1

Starting time of crawl: Wed Jul 10 15:52:42 2013

Ending time of crawl: Wed Jul 10 15:52:42 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

Starting time of crawl: Wed Jul 10 15:52:42 2013

Ending time of crawl: Wed Jul 10 15:52:42 2013

Type of crawl: INDEX
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0

--------------------------------------------------

Change-Id: I10bf9d10b005741db9973fb1352e0dd59ed99aa9
BUG: 949400
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4790
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-10-14 14:41:44 -07:00
Krutika Dhananjay
e51ca3c1c9 cli,glusterd: Implement 'volume status tasks'
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.
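
Example invocation of the new sub-command (the --xml form is an
assumption, matching the xml examples elsewhere in this log):

gluster volume status all tasks
gluster volume status <VOLNAME> tasks --xml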

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
2013-10-08 23:13:16 -07:00
Bala.FA
d9db4a8ff3 cli: add node uuid in rebalance and remove brick status xml output
This patch adds node uuid in rebalance/remove-brick status xml output.
Output XML will look like

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
 ==>> <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
BUG: 1012296
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/6005
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-10-03 22:25:59 -07:00
Aravinda VK
7dba6f9b55 cli: runtime in xml output of rebalance/remove-brick status
"runtime in secs" is available in the CLI output of
rebalance status and remove-brick status, but not available
in xml output when --xml is passed.

The runtime in the aggregate section will be the max of all nodes' runtimes.

Example output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <runtime>1.00</runtime>
      <status>3</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <runtime>1.00</runtime>
      <status>3</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

BUG: 1012773
Change-Id: I8deaba08922a53cd2d3b411e097a7b3cf591b127
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/5997
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-10-03 21:12:38 -07:00
Aravinda VK
59b4e379e6 cli: skipped tag in xml output of rebalance/remove-brick status
Skipped files count is available in CLI output of rebalance status
and remove-brick status, but not available in xml output.

Example output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRebalance>
    <op>3</op>
    <nodeCount>1</nodeCount>
    <node>
      <nodeName>localhost</nodeName>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>0</status>
      <statusStr>completed</statusStr>
    </node>
    <aggregate>
      <files>0</files>
      <size>0</size>
      <lookups>0</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>0</status>
      <statusStr>completed</statusStr>
    </aggregate>
  </volRebalance>
</cliOutput>

BUG: 1012772
Change-Id: I05191293403e66e0d681f0cd0422aa3c78a2d91d
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/6000
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-10-03 21:12:21 -07:00
Vijay Bellur
8484717992 logging: Remove multiple definitions of DEFAULT_LOG_FILE_DIRECTORY
Change-Id: I8d670a228d3c1282aa7d70b151f166d04abc40e5
BUG: 764890
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/5909
Reviewed-by: Anand Avati <avati@redhat.com>
Tested-by: Anand Avati <avati@redhat.com>
2013-09-24 12:00:35 -07:00
Avra Sengupta
4152ef34ec glusterd/cli: Status detail cli parse check and vol geo status crash fix
Change-Id: I1841864273fc4242de15fbfcf76fd5de40269f28
BUG: 1006249
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5889
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-09-21 23:13:57 -07:00
Ravishankar N
c550ae6952 cli/glusterd: improve rebalance fix-layout status reporting
Problem:
Currently the CLI rebalance status command output does not indicate the
'type' of rebalance, i.e. whether a full rebalance or only a fix-layout
was carried out.

Fix: After the rebalance status of all peers is received by the
originator glusterd, alter it to reflect the type of rebalance
before passing it on to the CLI process.
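
Illustrative sequence (volume name is a placeholder) showing the two kinds
of rebalance whose status output is now distinguished:

gluster volume rebalance <VOLNAME> fix-layout start
gluster volume rebalance <VOLNAME> start
gluster volume rebalance <VOLNAME> status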

Change-Id: I1940ffda0d36e25e5b33c84a0ea210394cc9e1d3
BUG: 1004744
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5826
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-09-19 09:22:36 -07:00
Timothy Asir
91e4b7aa13 cli: add aggregate status for rebalance and remove-brick status xml output
Add aggregate status information in the <aggregate> section of the gluster
volume 'rebalance status' and 'remove-brick status' cli xml output.

The aggregate status is determined based on the most critical level,
and the aggregate status will be 'Complete' only when all
individual statuses are 'Complete'.

Change-Id: Ie805b9dd52fd82fd277c3da9ee91cc8b6dea8212
BUG: 1006813
Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
Reviewed-on: http://review.gluster.org/4950
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
2013-09-18 09:00:59 -07:00
Kaushal M
91cd0eae2c cli,glusterd: Task parameters in xml output
This patch introduces task parameters for the asynchronous tasks shown in
volume status. The parameters are only given for xml output. The
parameters shown currently are,
- source and destination bricks for replace-brick tasks
  ......
        <tasks>
          <task>
            <type>Replace brick</type>
            <id>3d1a1005-9d2e-4ae0-bd62-577bc1d333a3</id>
            <status>1</status>
            <params>
              <srcBrick>archm:/export/test4</srcBrick>
              <dstBrick>archm:/export/test-replace1</dstBrick>
            </params>
          </task>
        </tasks>
  ......
- list of bricks being removed for remove-brick tasks
  ......
        <tasks>
          <task>
            <type>Remove brick</type>
            <id>901c20ca-0da2-41de-8669-5f0caca6b846</id>
            <status>1</status>
            <params>
              <brick>archm:/export/test2</brick>
              <brick>archm:/export/test3</brick>
            </params>
          </task>
        </tasks>
  ......

The changes for non-xml output will be done in a subsequent patch.

Change-Id: I322afe2f83ed8adeddb99f7962c25911204dc204
BUG: 916577
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5771
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
2013-09-13 03:38:04 -07:00
Aravinda VK
cd7951aa31 cli: Add statusStr xml tag to task list and rebalance/remove brick status
New xml tag statusStr added to following gluster cli commands
gluster volume status all --xml (For Task status)
gluster volume rebalance <VOLNAME> status --xml
gluster volume remove-brick <VOLNAME> <BRICK1..> status --xml

Example(volume status all):
<task>
    <type>Rebalance</type>
    <id>82d8d122-8738-4144-8507-d93fc98b61df</id>
    <status>3</status>
    <statusStr>completed</statusStr>
</task>

Example(volume rebalance <VOL> status)
<node>
    <nodeName>localhost</nodeName>
    <files>0</files>
    <size>0</size>
    <lookups>0</lookups>
    <failures>0</failures>
    <status>3</status>
    <statusStr>completed</statusStr>
</node>

Also modified the task status to show a string instead of a number
in gluster volume status all

Example:
Status of volume: gv1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick sumne.sumne:/gfs/b1                               49154   Y       15489
Brick sumne.sumne:/gfs/b2                               49155   Y       15493
NFS Server on localhost                                 N/A     N       15913

           Task                                      ID         Status
           ----                                      --         ------
      Rebalance    82d8d122-8738-4144-8507-d93fc98b61df      completed

BUG: 1003521
Change-Id: Ib283016af4c18132fb13fb33d44075782d77823c
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/5739
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-09-12 10:57:22 -07:00
Kaushal M
7d9bc0d214 cli: Fix 'status all' xml output when volumes are not started
CLI now outputs a single XML document for 'status all', containing only
those volumes which are started.

BUG: 1004218
Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5773
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-09-11 09:28:45 -07:00
Venky Shankar
fe98c2902f glusterd/cli: Geo-Replication "status detail" cmd
Provides detailed status info in the following format

                    MASTER <master-vol>  SLAVE <slave-vol>

NODE   HEALTH   UPTIME  FILES SYNCD  FILES PENDING  BYTES PENDING  DELETES PENDING
-----------------------------------------------------------------------------------

This patch introduces the "status detail" command to show crawl-related
information in the CLI. These values are "pulled" from gsyncd when
"status detail" is executed.

Change-Id: I1fdaf7180eacce054a864d34971dc160bd7301e1
BUG: 990420
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5590
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-09-04 19:50:49 -07:00
Venky Shankar
c7cc5252a3 glusterd: Saving geo-rep session details in a more specific path
Now saving the session details in
/var/lib/glusterd/geo-replication/<mastervol>_<slaveip>_<slavevol>
repo to distinguish between two master-slave sessions where the
slave name is the same across two different clusters.

Change-Id: I57c93f55cc9bd4fe2bffe579028aaf5e4335b223
BUG: 991501
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reviewed-on: http://review.gluster.org/5488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-09-04 19:27:19 -07:00
Timothy Asir
c12fccc473 cli: Add server uuid into volume brick info xml
Add the server uuid as an attribute to the existing brick details in the
volume info cli xml output.
Currently, when a node has more than one ip, the oVirt-engine fails
to map the corresponding server using the ip alone.
If we get the host uuid along with the brick details in the volume info
command, it will be easy for ovirt-engine to find the server and
thereby avoid confusion in identifying the server.

Change-Id: I3c9c9acea80e10e0b2977477759d9af045e48959
BUG: 955588
Signed-off-by: Timothy Asir <tjeyasin@redhat.com>
Reviewed-on: http://review.gluster.org/4875
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-08-18 05:24:10 -07:00
Bala.FA
4e63eafaed log: set ident to openlog
On the syslog side, a log message is identified by its properties like
programname, pid, etc.  Brick/mount processes need to be identified
uniquely as they are different processes of glusterfsd/glusterfs.  On the
rsyslog side, separating logs by programname/app-name with pid works, but
it is a bit hard to identify in the long run which process is for which
brick/mount.

This patch fixes this by setting an identity string at openlog(), which
sets the programname/app-name similar to the old-style log file names
prefixed by gluster, glusterd, glusterfs or glusterfsd
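
With the ident set this way, syslog-side filtering by program name becomes
straightforward; a hedged example (journald shown, log file path varies by
distribution):

journalctl -t glusterfsd
grep 'glusterd\[' /var/log/messages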

Change-Id: Ia05068943fa67ae1663aaded1444cf84ea648db8
BUG: 928648
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: http://review.gluster.org/5541
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-08-13 07:07:43 -07:00
Kaushal M
572f5f0a85 cli,glusterd: Fix when tasks are shown in 'volume status'
Asynchronous tasks are shown in 'volume status' only for a normal volume
status request for either all volumes or a single volume.

Change-Id: I9d47101511776a179d213598782ca0bbdf32b8c2
BUG: 888752
Signed-off-by: Kaushal M <kaushal@redhat.com>
Reviewed-on: http://review.gluster.org/5308
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-08-03 05:51:16 -07:00
shishir gowda
e306698b00 cluster/dht: Treat migration failures due to space constraints as skipped
Currently rebalance/remove-brick ops display the migration failed count even
for files which failed due to space issues (not enough space for the file, or
migration leading to cluster imbalance)

These will now be counted as skipped, and rebalance/remove-brick status
will display the additional counter

Change-Id: I674904d380b5f8300e9ca9e6af557c3d30d6cff4
BUG: 989846
Signed-off-by: shishir gowda <sgowda@redhat.com>
Reviewed-on: http://review.gluster.org/5399
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-07-30 23:55:59 -07:00
Avra Sengupta
355ff31dff glusterd: Fix create force issues where it returned true every time.
Now geo-rep create force will return true if a node is down, and log an
appropriate message. It will also return true with an appropriate log
message if the slave verification fails.

However, it will not return true if the config file is deleted or corrupted,
since then the state_file's path cannot be obtained. It will also fail if the
slave url is invalid. If the push-pem option is given and
/var/lib/glusterd/geo-replication/common_secret.pem.pub is not present, then
the create force command will also fail.
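
For context, the commands being hardened here are (per the command list
elsewhere in this log; master/slave names are placeholders):

gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create push-pem force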

Change-Id: Ie7532a0884ddf9c3008bd30832d171d5b53b540e
BUG: 988314
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5405
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-07-29 04:42:40 -07:00
Avra Sengupta
c6b8143b9f cli: Display error messages if virt file has been deleted or is invalid.
"gluster volume set <VOLNAME> group virt" will display an error message
if the virt file is deleted or invalid.

Change-Id: Icb202b6a445597fcd9a3dcef8001891f2601a115
BUG: 916127
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/4586
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-07-27 21:27:15 -07:00
Vijay Bellur
d16cdf141d cli, glusterd: Cleanup logging of bd op commands.
This patch prevents messages of the form "bd op: %s : SUCCESS"
from being logged in .cmd_log_history.

Change-Id: Iebeb7e26d409bf99b9c8df0a5c1c5a5d30d78a61
BUG: 823081
Signed-off-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/4871
Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-by: M. Mohan Kumar <mohan@in.ibm.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
2013-07-27 03:48:05 -07:00
Avra Sengupta
5757ed2727 glusterd/cli changes for distributed geo-rep
Commands:
gluster system:: execute gsec_create
gluster volume geo-rep <master> <slave-url> create [push-pem] [force]
gluster volume geo-rep <master> <slave-url> start [force]
gluster volume geo-rep <master> <slave-url> stop [force]
gluster volume geo-rep <master> <slave-url> delete
gluster volume geo-rep <master> <slave-url> config
gluster volume geo-rep <master> <slave-url> status

The geo-replication is distributed. The session will be created, and
gsyncd will be spawned on all relevant nodes, instead of only one
node.

geo-rep: Collecting status detail related data

Added persistent store for saving information about
TotalFilesSynced, TotalSyncTime, TotalBytesSynced

Changes in the status information in socket:
Existing(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;

New(Ex):
FilesSynced=2;BytesSynced=2507;Uptime=00:26:01;SyncTime=0.69978;
TotalSyncTime=2.890044;TotalFilesSynced=6;TotalBytesSynced=143640;

Persistent details stored in
/var/lib/glusterd/geo-replication/${mastervol}/${eSlave}-detail.status

Change-Id: I1db7fc13ffca2e415c05200b0109b1254067f111
BUG: 847839
Original Author: Avra Sengupta <asengupt@redhat.com>
Original Author: Venky Shankar <vshankar@redhat.com>
Original Author: Aravinda VK <avishwan@redhat.com>
Original Author: Amar Tumballi <amarts@redhat.com>
Original Author: Csaba Henk <csaba@redhat.com>
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5132
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
2013-07-26 13:19:18 -07:00
susant
e45e0037f6 cli: remove-brick process output leads to ambiguity
The output of remove-brick status as "Not started" leads to
ambiguity. We should not show the status of the server nodes
which do not participate in the remove-brick process.
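
The command whose output is being trimmed (bricks are placeholders, syntax
as listed elsewhere in this log):

gluster volume remove-brick <VOLNAME> <BRICK1..> status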

Change-Id: I85fea40deb15f3e2dd5487d881f48c9aff7221de
BUG: 986896
Signed-off-by: susant <spalai@redhat.com>
Reviewed-on: http://review.gluster.org/5383
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-07-24 11:05:04 -07:00
Venkatesh Somyajulu
59dde88921 cli: gluster volume heal commands are more elaborative
1. "gluster volume heal volume-name"
output :Launching heal operation to perform index self heal on volume volume-name has been successful

2. "gluster volume heal volume-name full"
Output :Launching heal operation to perform full self heal on volume volume-name has been successful

3. "gluster volume heal volume-name info"
Output :Gathering list of entries to be healed on volume volume-name has been successful 

4. "gluster volume heal volume-name info healed"
Output :Gathering list of healed entries on volume volume-name has been successful 

5. "gluster volume heal volume-name info split-brain"
Output :Gathering list of split brain entries on volume volume-name has been successful 

6. "gluster volume heal volume-name info heal-failed"    
Output :Gathering list of heal failed entries on volume volume-name has been successful 

Change-Id: I74c90e8129d23d513ddb7879358a9d21c94a5c0d
BUG: 978936
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/5286
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-07-24 05:18:05 -07:00
Venkatesh Somyajulu
50cfc2d8e8 cli: Increased timeout for gluster volume heal commands
Problem:
If the number of files is very large, then gluster volume heal
volumename info commands take a long time, so the timeout of 2
minutes seems to be insufficient.

Fix:
Increased timeout to 10 minutes

Change-Id: I5f847163e01c4afbb587b726833ad80183f1a928
BUG: 986945
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/5372
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-07-23 02:52:42 -07:00
Ravishankar N
07833f13d4 cli: check for null in is_server_debug_xlator()
Command: gluster volume set <volname> diagnostics.client-log-level trace
Expected output:
"volume set: failed: option log-level trace: 'trace' is not valid
(possible options are DEBUG, WARNING, ERROR, INFO, CRITICAL, NONE,
TRACE.)"
Current output: gluster cli receives a segmentation fault
Fix: check for NULL before calling strstr

Change-Id: If4c7a85a635849a388cf122543e12349c109643c
BUG: 982174
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/5298
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-07-12 04:14:12 -07:00
Venkatesh Somyajulu
b3cc221844 cli: Fix remove brick cli out for wrong volume name
Problem:  The gluster volume remove-brick command was not printing an error
          when the specified volume name was wrong.

Fix:      The fix prints an error message to indicate that the provided volume
          name is invalid. Although the patch for bug 961669
          http://review.gluster.org/#/c/4975/ does print cli output now,
          xml is still unable to use the response values

Change-Id: I2ee1df86c1e756fb8e93b4d6bbdd102b4f368f87
BUG: 961307
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/4972
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-07-04 05:25:31 -07:00
Venkatesh Somyajulu
291344f9c8 cli: Fix letter case in volume heal output
Change-Id: I25d13444c2cbff9b26642e91677ad1e09e77aa1e
BUG: 978936
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/5259
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-07-03 22:29:52 -07:00
Krutika Dhananjay
78a2e27ec7 glusterd: Log peer op status at the appropriate time
Change-Id: Ia8e1af082078f2f791708ba4faa4992bf291dd6e
BUG: 961339
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: http://review.gluster.org/5023
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-06-18 22:15:12 -07:00
Krishnan Parthasarathi
214dccb317 glusterd: Add a cmd for getting uuid of local node
Usage: gluster system:: uuid get

This is needed since we generate the uuid of a node in a lazy manner; i.e., we
generate a uuid for the node only on the first volume or peer operation,
when the node needs an external identity.  With this command, we can
force[1] the uuid generation, without a volume or peer operation being performed.

[1]: Querying for uuid (or uuid get), forces uuid to come into
existence.

Change-Id: I62c8b6754117756aa4d773dd48af4ddeb1a1d878
BUG: 971661
Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-on: http://review.gluster.org/5175
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
2013-06-10 12:20:05 -07:00
Venkatesh Somyajulu
68a97ece53 cli: Remove unused port info from peer status.
Problem: "gluster peer status" on some nodes gives port info and fails to give
on other. But it is a hard coded value.

Fix: Removing the port info from command

Change-Id: I919f0349f252e658bfc13e60bb8e171da32eaf25
BUG: 964026
Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
Reviewed-on: http://review.gluster.org/5027
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-06-05 05:28:17 -07:00
Jeff Darcy
1afbd1e899 cli: set min-op-version and max-op-version for getspec
Change-Id: I2185df5d6b560d9367ae404c91812048e1655180
BUG: 969193
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/5119
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-05-30 23:46:38 -07:00
Ravishankar N
0d415f7f8c glusterd: remove-brick: prevent removal from a replicate volume.
Prevent the removal of brick(s) from a plain replicate volume and
display the error message at the CLI.

Change-Id: I8e182404564147329d8cd364b7c7931d19f14570
BUG: 961669
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/4975
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
2013-05-13 01:33:17 -07:00
Santosh Kumar Pradhan
fef1270fc2 gluster/CLI: crash upon executing "peer status" command
Problem:
While doing "gluster peer status", cli_cmd_peer_status_cbk() creates
the frame and passes it as an arg to gf_cli_list_friends(), which sets
frame->local to the GF_CLI_LIST_PEERS flag (value: 0x1). It expects
gf_cli_list_friends_cbk() [invoked through cli_cmd_submit()] to
reset frame->local to NULL. But if cli_cmd_submit() fails somewhere
before gf_cli_list_friends_cbk() gets invoked, then the
flag value remains in frame->local and causes a SEGV while
destroying the stack i.e. [CLI_STACK_DESTROY => cli_local_wipe()].

Fix:
In gf_cli_list_friends(), if cli_cmd_submit() fails, then
reset the value of frame->local to NULL.

Change-Id: Ied19f07eaf67e3bd42c75cdc2ff3729b0789e632
BUG: 961691
Signed-off-by: Santosh Kumar Pradhan <spradhan@redhat.com>
Reviewed-on: http://review.gluster.org/4976
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-05-10 05:38:53 -07:00
Ravishankar N
fc8aa43d46 cli: Avoid storing empty lines in command history
When the console manager is run in the interactive mode, it also saves
empty lines (i.e. the Enter key is pressed without running a command) in
its command history. Avoid this by processing the line only if
readline() returns a non-empty string. Makes it easier to navigate the
history using arrow keys.

	modified:   cli/src/cli-rl.c

Change-Id: I0fcce394474589bb345b7c9ef39d25849dc0c2af
BUG: 957139
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/4894
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
2013-04-28 22:26:52 -07:00