Bug 1214772 - gluster xml empty output volume status detail
Summary: gluster xml empty output volume status detail
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.6.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-23 13:46 UTC by jAHu
Modified: 2015-06-09 12:23 UTC
4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-06-09 12:23:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments:

Description jAHu 2015-04-23 13:46:18 UTC
Description of problem:
- The XML output of the volume status detail command is empty.
- The XML output of the plain status command (without detail) is OK.
- The non-XML output of the status detail command is OK.
- The underlying filesystem is btrfs; in another case, also on btrfs, the status detail output is OK (shown below).
- This command is run when geo-replication starts; because its output is empty, geo-replication will not start.

Version-Release number of selected component (if applicable):
3.6.2 and 3.6.3beta2
(I have also tried upgrading to the current 3.6 beta with its bugfixes.)


How reproducible:
gluster --xml volume status volname detail
gluster --xml --remote-host=host volume status volname detail
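For completeness, a minimal, hypothetical helper (not part of gluster or gsyncd) that runs the same command and treats an empty response as a failure; it assumes the gluster binary is in PATH and that a volume named "volname" exists:

import subprocess
import xml.etree.ElementTree as ET

def volume_status_detail(volume, remote_host=None):
    # Build the same CLI invocation as in the reproduction steps above.
    cmd = ["gluster", "--xml"]
    if remote_host:
        cmd.append("--remote-host=" + remote_host)
    cmd += ["volume", "status", volume, "detail"]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    out = proc.stdout.strip()
    if not out:
        # This is the reported bug: no <cliOutput> document at all.
        raise RuntimeError("empty XML response, exit code %d" % proc.returncode)
    return ET.fromstring(out)

doc = volume_status_detail("volname")
print("opRet =", doc.findtext("opRet"))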


Actual results:
empty response

Expected results:
xml response with volume status detail information

Additional info:

# gluster --xml volume status volname
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>volname</volName>
        <nodeCount>1</nodeCount>
        <node>
          <hostname>host</hostname>
          <path>/mnt/...</path>
          <peerid>....</peerid>
          <status>1</status>
          <port>49158</port>
          <pid>14099</pid>
        </node>
        <tasks/>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

# gluster --xml volume status volname detail
#
# gluster  volume status volname detail
Status of volume: volname
------------------------------------------------------------------------------
Brick                : Brick host:/mnt/...
Port                 : 49158               
Online               : Y                   
Pid                  : 14099               
File System          : N/A                 
Device               : N/A                 
Mount Options        : N/A                 
Inode Size           : N/A                 
Disk Space Free      : 2.0TB               
Total Disk Space     : 2.6TB               
Inode Count          : N/A                 
Free Inodes          : N/A   

# tail cli.log
[2015-04-23 13:39:05.928307] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0
[2015-04-23 13:39:05.928333] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0
[2015-04-23 13:39:05.928346] D [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
[2015-04-23 13:39:05.928390] D [cli-xml-output.c:322:cli_xml_output_vol_status_common] 0-cli: Returning 0
[2015-04-23 13:39:05.928412] D [cli-xml-output.c:429:cli_xml_output_vol_status_detail] 0-cli: Returning -2
[2015-04-23 13:39:05.928422] D [cli-xml-output.c:1756:cli_xml_output_vol_status] 0-cli: Returning -2
[2015-04-23 13:39:05.928433] E [cli-rpc-ops.c:6742:gf_cli_status_cbk] 0-cli: Error outputting to xml
[2015-04-23 13:39:05.928471] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning -2
[2015-04-23 13:39:05.928492] D [cli-rpc-ops.c:6912:gf_cli_status_volume] 0-cli: Returning: -2
[2015-04-23 13:39:05.928508] D [cli-cmd-volume.c:1930:cli_cmd_volume_status_cbk] 0-cli: frame->local is not NULL (0x366700009c0)
[2015-04-23 13:39:05.928530] I [input.c:36:cli_batch] 0-: Exiting with: -2


- geo-replication start error, caused by running this command:

[2015-04-21 14:58:23.372023] I [monitor(monitor):141:set_state] Monitor: new state: Initializing...
[2015-04-21 14:58:23.582990] E [resource(monitor):221:errlog] Popen: command "gluster --xml --remote-host=host volume status volumename detail" returned with 2
[2015-04-21 14:58:23.583717] I [syncdutils(monitor):214:finalize] <top>: exiting.

Comment 1 Atin Mukherjee 2015-04-23 14:52:25 UTC
Could you attach the glusterd log files for all the nodes? As per the cli log, it definitely looks like volume status detail --xml failed for some reason; the glusterd log files will contain the actual reason for the failure.

Comment 2 jAHu 2015-04-24 10:46:59 UTC
Yes, sorry, I forgot.
There are no error log entries for the given time/request, only info-level messages.
There is only one node (peer); it will be the geo-replication slave.

*** log (info):
[2015-04-23 13:39:05.927582] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume volname


*** glusterd -N --debug, version 3.6.3beta2

[2015-04-24 10:31:05.142692] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume volname
[2015-04-24 10:31:05.142746] D [glusterd-op-sm.c:170:glusterd_generate_txn_id] 0-: Transaction_id = 00000000-0000-0000-0000-000000000000
[2015-04-24 10:31:05.142775] D [glusterd-op-sm.c:268:glusterd_set_txn_opinfo] 0-: Successfully set opinfo for transaction ID : 00000000-0000-0000-0000-000000000000
[2015-04-24 10:31:05.142792] D [glusterd-op-sm.c:275:glusterd_set_txn_opinfo] 0-: Returning 0
[2015-04-24 10:31:05.142810] D [glusterd-syncop.c:1584:gd_sync_task_begin] 0-management: Transaction ID : 00000000-0000-0000-0000-000000000000
[2015-04-24 10:31:05.142839] D [glusterd-utils.c:156:glusterd_lock] 0-management: Cluster lock held by 57c7bfb9-.....
[2015-04-24 10:31:05.142863] D [glusterd-utils.c:1589:glusterd_volinfo_find] 0-management: Volume volname found
[2015-04-24 10:31:05.142891] D [glusterd-utils.c:1596:glusterd_volinfo_find] 0-management: Returning 0
[2015-04-24 10:31:05.142929] D [glusterd-utils.c:1589:glusterd_volinfo_find] 0-management: Volume volname found
[2015-04-24 10:31:05.142945] D [glusterd-utils.c:1596:glusterd_volinfo_find] 0-management: Returning 0
[2015-04-24 10:31:05.142965] D [glusterd-op-sm.c:1371:glusterd_op_stage_status_volume] 0-management: Returning: 0
[2015-04-24 10:31:05.142981] D [glusterd-op-sm.c:4885:glusterd_op_stage_validate] 0-management: OP = 18. Returning 0
[2015-04-24 10:31:05.143001] D [glusterd-op-sm.c:6182:glusterd_op_bricks_select] 0-management: Returning 0
[2015-04-24 10:31:05.143017] D [glusterd-syncop.c:1529:gd_brick_op_phase] 0-management: Sent op req to 0 bricks
[2015-04-24 10:31:05.143047] D [glusterd-utils.c:1589:glusterd_volinfo_find] 0-management: Volume volname found
[2015-04-24 10:31:05.143063] D [glusterd-utils.c:1596:glusterd_volinfo_find] 0-management: Returning 0
[2015-04-24 10:31:05.151830] D [glusterd-utils.c:7637:glusterd_add_brick_detail_to_dict] 0-management: Error adding brick detail to dict: Permission denied
[2015-04-24 10:31:05.151856] D [glusterd-op-sm.c:2821:glusterd_op_status_volume] 0-management: Returning 0
[2015-04-24 10:31:05.151871] D [glusterd-op-sm.c:5004:glusterd_op_commit_perform] 0-management: Returning 0
[2015-04-24 10:31:05.151930] D [glusterd-op-sm.c:3910:glusterd_op_modify_op_ctx] 0-management: op_ctx modification not required for status operation being performed
[2015-04-24 10:31:05.151951] D [glusterd-op-sm.c:215:glusterd_get_txn_opinfo] 0-: Successfully got opinfo for transaction ID : 00000000-0000-0000-0000-000000000000
[2015-04-24 10:31:05.151962] D [glusterd-op-sm.c:219:glusterd_get_txn_opinfo] 0-: Returning 0
[2015-04-24 10:31:05.151976] D [glusterd-op-sm.c:311:glusterd_clear_txn_opinfo] 0-: Successfully cleared opinfo for transaction ID : 00000000-0000-0000-0000-000000000000
[2015-04-24 10:31:05.151986] D [glusterd-op-sm.c:315:glusterd_clear_txn_opinfo] 0-: Returning 0
[2015-04-24 10:31:05.152077] D [glusterd-rpc-ops.c:196:glusterd_op_send_cli_response] 0-management: Returning 0
[2015-04-24 10:31:05.153452] D [socket.c:590:__socket_rwv] 0-socket.management: EOF on socket
[2015-04-24 10:31:05.153489] D [socket.c:2353:socket_event_handler] 0-transport: disconnecting now


***
Looks like a problem with getting the extra info (such as the inode count) from the filesystem (permission denied):
File System          : N/A
Device               : N/A
Mount Options        : N/A
Inode Size           : N/A
Disk Space Free      : 2.0TB
Total Disk Space     : 2.6TB
Inode Count          : N/A
Free Inodes          : N/A

But those fields should be reported as N/A in the XML as well, and the output should not be empty.
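For reference, a consumer of this XML could tolerate missing per-brick detail fields by defaulting them to "N/A"; a minimal sketch (hypothetical, not the actual gsyncd code), assuming a <cliOutput> document with the structure shown in this report:

import xml.etree.ElementTree as ET

def brick_details(xml_text):
    # Walk every <node> element and default any missing detail field to "N/A",
    # mirroring what the plain (non-XML) CLI output prints.
    root = ET.fromstring(xml_text)
    for node in root.iter("node"):
        yield {
            "path": node.findtext("path", default="N/A"),
            "fsName": node.findtext("fsName", default="N/A"),
            "device": node.findtext("device", default="N/A"),
            "mntOptions": node.findtext("mntOptions", default="N/A"),
            "blockSize": node.findtext("blockSize", default="N/A"),
            "sizeTotal": node.findtext("sizeTotal", default="N/A"),
            "sizeFree": node.findtext("sizeFree", default="N/A"),
        }

A caller such as the geo-replication monitor could then log the N/A fields instead of treating the whole response as fatal.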


*** 
In the other, working btrfs case, the XML output is:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>volume_with_working_xml_output</volName>
        <nodeCount>1</nodeCount>
        <node>
          <hostname>shostname</hostname>
          <path>/backup/gfs-georeplication/....</path>
          <peerid>cf426f5c-0....</peerid>
          <status>1</status>
          <port>49155</port>
          <pid>3689</pid>
          <sizeTotal>214748364800</sizeTotal>
          <sizeFree>31830433792</sizeFree>
          <device>/dev/mapper/vgS51-bak_gfs_...</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,noatime,nosuid,nodev,noexec,compress=lzo,space_cache</mntOptions>
          <fsName>btrfs</fsName>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Comment 3 Atin Mukherjee 2015-04-24 11:16:40 UTC
The only strange thing I found here is the following log:

[2015-04-24 10:31:05.151830] D [glusterd-utils.c:7637:glusterd_add_brick_detail_to_dict] 0-management: Error adding brick detail to dict: Permission denied

Did you use a different user while creating the volume?

Comment 4 jAHu 2015-04-24 15:08:54 UTC
No, everything was done under the root user.

Comment 5 jAHu 2015-06-09 12:23:56 UTC
The XML output of volume status detail works fine with the 3.7.1 release in my case.

# gluster --xml volume status volname
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>volname</volName>
        <nodeCount>1</nodeCount>
        <node>
          <hostname>hostname</hostname>
          <path>/mnt/....</path>
          <peerid>.....</peerid>
          <status>1</status>
          <port>49158</port>
          <ports>
            <tcp>49158</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>19737</pid>
          <sizeTotal>2871989895168</sizeTotal>
          <sizeFree>2206573936640</sizeFree>
          <blockSize>4096</blockSize>
        </node>
        <tasks/>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
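As a quick sanity check, the byte counts in this XML match the human-readable sizes printed by the plain CLI output earlier in the report (2.6TB total, 2.0TB free); a small sketch of the conversion, assuming the CLI rounds 1024-based terabytes:

# Convert the sizeTotal/sizeFree byte counts above into the figures
# that the plain "status detail" output prints.
TIB = 1024 ** 4

size_total = 2871989895168   # <sizeTotal> from the XML above
size_free = 2206573936640    # <sizeFree> from the XML above

print("Total Disk Space : %.1fTB" % (size_total / TIB))  # -> 2.6TB
print("Disk Space Free  : %.1fTB" % (size_free / TIB))   # -> 2.0TB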

