Bug 1020870 - glusterfs: "No Volume Present" even though volume is present
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware/OS: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Sub Component: glusterd
Depends On:
Blocks:
 
Reported: 2013-10-18 08:09 EDT by Saurabh
Modified: 2016-01-19 01:14 EST
CC List: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:12:07 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
sosreport of node (9.45 MB, application/x-xz)
2013-10-18 08:14 EDT, Saurabh

Description Saurabh 2013-10-18 08:09:50 EDT
Description of problem:

The problem is that the gluster volume info command reports "No volumes present" even though a volume exists.


I recently upgraded my system from glusterfs 3.4.0.34rhs to glusterfs 3.4.0.35rhs.

A volume already existed, with quota enabled, and I/O was running over an NFS mount.
Before updating the RPMs I stopped the volume.

After updating the RPMs with yum update, I checked the status of the system and it reported the status properly.
I/O over NFS also resumed after I started the volume.
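
For reference, the upgrade sequence was essentially the following (a rough sketch; the wildcard package spec and the volume name are from my setup):

# stop the volume before touching the RPMs
gluster volume stop dist-rep
# update the gluster packages on every node
yum update "glusterfs*"
# bring the volume back and confirm bricks and the NFS server are up
gluster volume start dist-rep
gluster volume status dist-rep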

However, while running some gluster-related commands, gluster volume info failed to provide information about the single existing volume.

Before the gluster volume info failure, setting the quota-deem-statfs option also failed:

[root@quota2 ~]# gluster volume set dist-rep quota-deem-statfs on
volume set: failed: Another transaction is in progress. Please try again after sometime.
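
To see which node might still be holding the cluster-wide lock behind this error, the only check I can think of is the glusterd log on each peer around that time (a rough sketch; the grep pattern just pulls error-level lines and may need adjusting):

# on each node, look at recent error-level glusterd messages
grep ' E \[' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
# and confirm glusterd itself is still running
service glusterd status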

The "Actual results" section below lists the sequence of commands and their outputs.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.35rhs

How reproducible:
Seen once so far.

Actual results:
[root@quota2 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.42.186
Uuid: ee116b10-466c-45b4-8552-77a6ce289179
State: Peer in Cluster (Connected)

Hostname: 10.70.43.18
Uuid: 83bd0eae-0b2c-4735-a208-b03029c8c1a8
State: Peer in Cluster (Connected)

Hostname: 10.70.43.22
Uuid: 6ca57f52-a0f2-417a-9d55-82c5fc390003
State: Peer in Cluster (Connected)
[root@quota2 ~]# 
[root@quota2 ~]# 
[root@quota2 ~]# gluster volume info
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 1e06795e-7032-479d-9d48-026b832cede3
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r1
Brick2: 10.70.43.181:/rhs/brick1/d1r2
Brick3: 10.70.43.18:/rhs/brick1/d2r1
Brick4: 10.70.43.22:/rhs/brick1/d2r2
Brick5: 10.70.42.186:/rhs/brick1/d3r1
Brick6: 10.70.43.181:/rhs/brick1/d3r2
Brick7: 10.70.43.18:/rhs/brick1/d4r1
Brick8: 10.70.43.22:/rhs/brick1/d4r2
Brick9: 10.70.42.186:/rhs/brick1/d5r1
Brick10: 10.70.43.181:/rhs/brick1/d5r2
Brick11: 10.70.43.18:/rhs/brick1/d6r1
Brick12: 10.70.43.22:/rhs/brick1/d6r2
Options Reconfigured:
features.quota: on
[root@quota2 ~]# 
[root@quota2 ~]# 
[root@quota2 ~]# gluster volume set quota dist-rep quota-deem-statfs on
Usage: volume set <VOLNAME> <KEY> <VALUE>
[root@quota2 ~]# gluster volume set dist-rep quota-deem-statfs on
volume set: failed: Another transaction is in progress. Please try again after sometime.
[root@quota2 ~]# less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
[root@quota2 ~]# 
[root@quota2 ~]# 
[root@quota2 ~]# gluster volume info
No volumes present


Expected results:

gluster volume info should consistently list the existing volume (dist-rep). Instead, glusterd seems to have gone into some inconsistent state and could not report the correct information.

Additional info:


Before executing all the commands mentioned in the "Actual results" section,
I ran gluster volume quota $volname list and it successfully listed 2679 directories.
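
In case it helps with triage, this is roughly what I would check next to tell a damaged on-disk volume store apart from stale in-memory state in glusterd (a sketch only; the paths assume the usual /var/lib/glusterd layout and my volume name):

# the volume definition should still be present in glusterd's on-disk store
ls /var/lib/glusterd/vols/
cat /var/lib/glusterd/vols/dist-rep/info
# if the store looks intact, restarting glusterd should reload it
service glusterd restart
gluster volume info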
Comment 1 Saurabh 2013-10-18 08:14:53 EDT
Created attachment 813762 [details]
sosreport of node
Comment 3 Vivek Agarwal 2015-12-03 12:12:07 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release which you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
