Bug 855643

Summary: [rhsc] unable to create nested directory (gluster uses mkdir instead of mkdir -p)
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterfs
Reporter: Haim <hateya>
Assignee: krishnan parthasarathi <kparthas>
QA Contact: Shruti Sampat <ssampat>
Status: CLOSED ERRATA
Severity: high
Priority: medium
Version: unspecified
Hardware: x86_64
OS: Linux
Type: Bug
Doc Type: Bug Fix
Fixed In Version: glusterfs-3.4.0qa5-1
Last Closed: 2013-09-23 22:33:21 UTC
CC: amarts, hateya, iheim, jkt, mmahoney, nsathyan, pprakash, rhs-bugs, vbellur, yeylon

Description Haim 2012-09-09 14:16:28 UTC
Description of problem:

- create a new volume
- add a new brick
  * the brick path should be /hateya/vol1 (neither directory exists)
  * hit create

error:


[2012-09-09 08:15:07.213822] I [glusterd-volume-ops.c:83:glusterd_handle_create_volume] 0-glusterd: Received create volume req
[2012-09-09 08:15:07.214970] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by f22b234d-238f-49be-97fa-d1d1c6ccdd4c
[2012-09-09 08:15:07.215002] I [glusterd-handler.c:458:glusterd_op_txn_begin] 0-management: Acquired local lock
[2012-09-09 08:15:07.215468] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:15:07.216474] W [glusterd-utils.c:4911:mkdir_if_missing] 0-: Failed to create the directory /hateya/vol1
[2012-09-09 08:15:07.216507] E [glusterd-op-sm.c:1999:glusterd_op_ac_send_stage_op] 0-: Staging failed
[2012-09-09 08:15:07.216537] I [glusterd-op-sm.c:2039:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 0 peers
[2012-09-09 08:15:07.216837] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:15:07.216882] I [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local lock

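For context: glusterd's mkdir_if_missing() issues a single mkdir(2) call, and mkdir(2) fails with ENOENT whenever a parent component of the path is missing. A minimal standalone C illustration of that failure mode, assuming neither /hateya nor /hateya/vol1 exists:

/* Minimal illustration (not glusterd code): a single mkdir(2) call
 * cannot create nested directories. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
        if (mkdir("/hateya/vol1", 0755) == -1)
                /* Prints "No such file or directory" (ENOENT), because
                 * the parent directory /hateya does not exist yet. */
                fprintf(stderr, "mkdir(/hateya/vol1): %s\n",
                        strerror(errno));
        return 0;
}
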
Creating the volume with an existing parent directory (/hateya/) does succeed:
[2012-09-09 08:16:10.432981] I [glusterd-handler.c:497:glusterd_handle_cluster_lock] 0-glusterd: Received LOCK from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:10.433086] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:10.433172] I [glusterd-handler.c:1315:glusterd_op_lock_send_resp] 0-glusterd: Responded, ret: 0
[2012-09-09 08:16:10.433532] I [glusterd-handler.c:542:glusterd_req_ctx_create] 0-glusterd: Received op from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:10.548148] I [glusterd-handler.c:1417:glusterd_op_stage_send_resp] 0-glusterd: Responded to stage, ret: 0
[2012-09-09 08:16:10.551845] I [glusterd-handler.c:542:glusterd_req_ctx_create] 0-glusterd: Received op from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:10.590411] I [glusterd-handler.c:1458:glusterd_op_commit_send_resp] 0-glusterd: Responded to commit, ret: 0
[2012-09-09 08:16:10.590815] I [glusterd-handler.c:1359:glusterd_handle_cluster_unlock] 0-glusterd: Received UNLOCK from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:10.590861] I [glusterd-handler.c:1335:glusterd_op_unlock_send_resp] 0-glusterd: Responded to unlock, ret: 0
[2012-09-09 08:16:10.990671] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by f22b234d-238f-49be-97fa-d1d1c6ccdd4c
[2012-09-09 08:16:10.990704] I [glusterd-handler.c:458:glusterd_op_txn_begin] 0-management: Acquired local lock
[2012-09-09 08:16:10.991242] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:10.996113] I [glusterd-utils.c:814:glusterd_volume_brickinfo_get] 0-management: Found brick
[2012-09-09 08:16:10.996580] I [glusterd-utils.c:814:glusterd_volume_brickinfo_get] 0-management: Found brick
[2012-09-09 08:16:10.998717] I [glusterd-op-sm.c:2039:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 1 peers
[2012-09-09 08:16:11.008021] I [glusterd-rpc-ops.c:880:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.011301] I [glusterd-op-sm.c:2384:glusterd_op_ac_send_commit_op] 0-management: Sent op req to 1 peers
[2012-09-09 08:16:11.014760] I [glusterd-rpc-ops.c:1316:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.014795] I [glusterd-op-sm.c:2254:glusterd_op_modify_op_ctx] 0-management: op_ctx modification not required
[2012-09-09 08:16:11.015566] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.015613] I [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local lock
[2012-09-09 08:16:11.169996] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by f22b234d-238f-49be-97fa-d1d1c6ccdd4c
[2012-09-09 08:16:11.170031] I [glusterd-handler.c:458:glusterd_op_txn_begin] 0-management: Acquired local lock
[2012-09-09 08:16:11.170478] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.175328] I [glusterd-utils.c:814:glusterd_volume_brickinfo_get] 0-management: Found brick
[2012-09-09 08:16:11.175713] I [glusterd-utils.c:814:glusterd_volume_brickinfo_get] 0-management: Found brick
[2012-09-09 08:16:11.177818] I [glusterd-op-sm.c:2039:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 1 peers
[2012-09-09 08:16:11.186930] I [glusterd-rpc-ops.c:880:glusterd3_1_stage_op_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.189542] I [glusterd-op-sm.c:2384:glusterd_op_ac_send_commit_op] 0-management: Sent op req to 1 peers
[2012-09-09 08:16:11.192781] I [glusterd-rpc-ops.c:1316:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.192819] I [glusterd-op-sm.c:2254:glusterd_op_modify_op_ctx] 0-management: op_ctx modification not required
[2012-09-09 08:16:11.193810] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 347297f8-31bd-41fd-9d12-08540490f68d
[2012-09-09 08:16:11.193842] I [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local lock

Comment 1 Haim 2012-09-09 14:17:32 UTC
glusterfs-fuse-3.3.0-22.el6rhs.x86_64
vdsm-gluster-4.9.6-14.el6rhs.noarch
gluster-swift-plugin-1.0-5.noarch
gluster-swift-container-1.4.8-4.el6.noarch
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-3.3.0-22.el6rhs.x86_64
glusterfs-server-3.3.0-22.el6rhs.x86_64
gluster-swift-proxy-1.4.8-4.el6.noarch
gluster-swift-account-1.4.8-4.el6.noarch
glusterfs-rdma-3.3.0-22.el6rhs.x86_64
gluster-swift-doc-1.4.8-4.el6.noarch
gluster-swift-1.4.8-4.el6.noarch
gluster-swift-object-1.4.8-4.el6.noarch
glusterfs-geo-replication-3.3.0-22.el6rhs.x86_64

Comment 3 Shireesh 2012-09-10 07:34:06 UTC
This is the glusterfs behavior. Hence changing the component to glusterfs.

Comment 4 Amar Tumballi 2012-09-18 05:25:50 UTC
Not creating every directory in the path was intentional: we didn't want a spelling mistake in the brick path to silently end up creating the brick at an unintended location.

The only way I can think of to allow this is by providing a 'force' option to create.

Comment 5 krishnan parthasarathi 2012-10-09 17:40:03 UTC
http://review.gluster.com/3378 replaces the use of mkdir_if_missing with 'mkdir -p'-like functionality in glusterd, on the upstream master branch. It could be backported to rhs-2.0 based on priority.
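
For illustration, a minimal sketch of the 'mkdir -p'-like approach: walk the path, create each missing component in turn, and tolerate EEXIST. The helper name and error handling below are hypothetical; this is not the code from the review above.

#include <errno.h>
#include <limits.h>
#include <string.h>
#include <sys/stat.h>

/* Hypothetical helper: create every missing component of 'path',
 * like "mkdir -p". Returns 0 on success, -1 on failure. */
static int
mkdir_p_sketch (const char *path, mode_t mode)
{
        char  dup[PATH_MAX] = {0};
        char *p             = NULL;

        strncpy (dup, path, sizeof (dup) - 1);

        /* Truncate at each '/' so parents are created before children. */
        for (p = dup + 1; *p; p++) {
                if (*p != '/')
                        continue;
                *p = '\0';
                if (mkdir (dup, mode) == -1 && errno != EEXIST)
                        return -1;
                *p = '/';
        }
        /* Finally, create the leaf directory itself. */
        if (mkdir (dup, mode) == -1 && errno != EEXIST)
                return -1;
        return 0;
}

A caller would then use mkdir_p_sketch (brickpath, 0755) where the single mkdir call failed before.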

Comment 6 Shruti Sampat 2013-01-02 12:42:31 UTC
Verified in glusterfs-3.4.0qa5-1.

Comment 7 Scott Haines 2013-09-23 22:33:21 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html