Bug 1109613 - gluster volume create fails with ambiguous error
Summary: gluster volume create fails with ambiguous error
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.4.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-06-16 00:20 UTC by Alexey Zilber
Modified: 2015-05-26 12:33 UTC
CC: 6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-03-31 04:46:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Alexey Zilber 2014-06-16 00:20:30 UTC
Description of problem:
# gluster volume create devroot replica 2 transport tcp sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot
volume create: devroot: failed

# setfattr -x trusted.glusterfs.volume-id /data/brick1/devroot

# gluster volume create devroot rep 2 transport tcp sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot
volume create: devroot: failed
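
Stripping the volume-id xattr from a single brick, as above, is usually not enough: stale metadata has to be cleared from every brick on every node before the create is retried. A hedged sketch of that cleanup (`clean_brick` is a hypothetical helper; the xattr names and the `.glusterfs` directory are standard GlusterFS brick metadata; the path is the one from this report):

```shell
# Hypothetical helper: wipe stale GlusterFS metadata from a brick directory
# so that `gluster volume create` will accept it again. Run it on every
# brick of the failed create, on every node.
clean_brick() {
  brick=$1
  # strip the stale volume-id and gfid xattrs, if present
  setfattr -x trusted.glusterfs.volume-id "$brick" 2>/dev/null || true
  setfattr -x trusted.gfid "$brick" 2>/dev/null || true
  # drop the internal metadata tree left behind by a previous attempt
  rm -rf "$brick/.glusterfs"
}

clean_brick /data/brick1/devroot
```

If the create still fails after cleaning both bricks, the cause lies elsewhere (peer state, SELinux, firewall), which is what the glusterd logs requested in the comments would show.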

cli log:
---

[2014-06-14 05:42:03.519769] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"

[2014-06-14 05:42:03.523525] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
[2014-06-14 05:42:03.523580] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread

[2014-06-14 05:42:03.600482] I [cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli: Replicate cluster type found. Checking brick order.

[2014-06-14 05:42:03.600844] I [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order okay
[2014-06-14 05:42:03.668257] I [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to create volume

[2014-06-14 05:42:03.668365] I [input.c:36:cli_batch] 0-: Exiting with: -1
---


.cmd_log_history shows:

[2014-06-14 05:42:03.668051]  : volume create devroot replica 2 transport tcp sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot : FAILED :


Debug log of glusterd:
[2014-06-14 15:13:54.203460] I [glusterd-rpc-ops.c:542:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 514b76e8-7537-4660-9beb-6a2c061bb43b

[2014-06-14 15:16:30.468986] D [glusterd-volume-ops.c:69:__glusterd_handle_create_volume] 0-management: Received create volume req

[2014-06-14 15:16:30.469082] D [glusterd-utils.c:620:glusterd_check_volume_exists] 0-management: Volume devroot does not exist.stat failed with errno : 2 on path: /var/lib/glusterd/vols/devroot

[2014-06-14 15:16:30.469233] D [glusterd-utils.c:340:glusterd_lock] 0-management: Cluster lock held by 95ee24ce-5d6a-4e2d-9641-764adb5ac79a

[2014-06-14 15:16:30.470704] D [glusterd-utils.c:620:glusterd_check_volume_exists] 0-management: Volume devroot does not exist.stat failed with errno : 2 on path: /var/lib/glusterd/vols/devroot


Version-Release number of selected component (if applicable):
CentOS release 5.10 (Final) on x64
glusterfs-3.4.3-3.el5
glusterfs-geo-replication-3.4.3-3.el5
glusterfs-libs-3.4.3-3.el5
glusterfs-fuse-3.4.3-3.el5
glusterfs-server-3.4.3-3.el5
glusterfs-api-3.4.3-3.el5
glusterfs-cli-3.4.3-3.el5

How reproducible:
Clean installs of CentOS 5.10 (fully updated, EPEL repo, Gluster repo) on SoftLayer VMs (4 GB RAM, 4 cores, 2nd drive = 300 GB).

Steps to Reproduce:
1. Follow quick install guide.
2. mkdir -p /data/brick1
3. mkfs.xfs -L devroot -i size=512 /dev/<some volume>
4. Add it to fstab, mount the volume at /data/brick1.
5. mkdir /data/brick1/devroot
6. Do the same on the second host; peer probe from both sides.
7. Attempt to create volume.


Actual results:
No volume is created.
No bricks are added.
Only the extended attribute is set on /data/brick1/devroot:
# getfattr -dR -e hex -m . /data/brick1
getfattr: Removing leading '/' from absolute path names
# file: data/brick1/devroot
trusted.glusterfs.volume-id=0x7735ee8fd2a64b21b278b607f0c6b4bf
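
The hex value above is the volume UUID glusterd assigned before failing. To compare it against the other node's brick by eye, it can be rendered in canonical UUID form (a small formatting helper, not part of the report):

```shell
# Convert getfattr's hex volume-id into canonical UUID form so it can be
# compared across bricks; a mismatch between nodes would indicate stale
# state from an earlier create attempt.
hex=0x7735ee8fd2a64b21b278b607f0c6b4bf   # value from this report
echo "$hex" | sed -E 's/^0x//; s/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/'
# prints 7735ee8f-d2a6-4b21-b278-b607f0c6b4bf
```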

Expected results:
Volume created.

Additional info:
Virtual instances on SoftLayer.

Comment 1 Atin Mukherjee 2014-06-16 04:08:15 UTC
Can you please provide the complete glusterd logs, not only the DEBUG lines? The DEBUG excerpts alone are not enough to identify the issue.

Comment 2 Lalatendu Mohanty 2014-09-02 16:42:56 UTC
Are you still facing this issue? It looks like a setup/configuration problem. Can you please provide the glusterd logs requested in comment #1, the SELinux status (i.e. "sestatus" output), and the iptables rules from each node, so that it would be easier for us to debug it further?
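
For reference, the requested information could be gathered on each node with something like the following (a sketch, not an official tool; the output directory name is made up, and the glusterd log path assumes the default 3.4 layout):

```shell
# Collect the diagnostics requested in comments 1 and 2 into one tarball.
cd "$(mktemp -d)"
out=diag-$(hostname)
mkdir -p "$out"
sestatus          > "$out/sestatus.txt"  2>&1 || true   # SELinux status
iptables -L -n -v > "$out/iptables.txt"  2>&1 || true   # firewall rules
# glusterd log; path assumed from the default /var/log/glusterfs layout
cp /var/log/glusterfs/etc-glusterfs-glusterd.vol.log "$out/" 2>/dev/null || true
tar czf "$out.tar.gz" "$out"
```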

Comment 3 Atin Mukherjee 2015-03-31 04:46:11 UTC
Closing this bug as there has been no response from the reporter for quite a long time.

Comment 4 Niels de Vos 2015-05-26 12:33:25 UTC
This bug has been CLOSED, and there has been no response to the requested NEEDINFO for more than 4 weeks. The NEEDINFO flag is now being cleared to keep our Bugzilla housekeeping in order.

