Bug 828039
Summary: | ping_pong fails on fuse/nfs mount when new bricks are added to distribute volume
---|---
Product: | [Community] GlusterFS
Reporter: | Shwetha Panduranga <shwetha.h.panduranga>
Component: | replicate
Assignee: | Pranith Kumar K <pkarampu>
Status: | CLOSED DEFERRED
QA Contact: |
Severity: | high
Docs Contact: |
Priority: | unspecified
Version: | 3.3-beta
CC: | bugs, gluster-bugs
Target Milestone: | ---
Target Release: | ---
Hardware: | Unspecified
OS: | Unspecified
Whiteboard: |
Fixed In Version: |
Doc Type: | Bug Fix
Doc Text: |
Story Points: | ---
Clone Of: |
: | 991445 (view as bug list)
Environment: |
Last Closed: | 2014-12-14 19:40:28 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | DP
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | ---
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: |
Bug Blocks: | 991445
Attachments: | fuse mount log (attachment 589046)
replicate fails the lk fop if the errno is not ENOTCONN. This is because it does not have lock healing capability yet.

The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug. If there has been no update before 9 December 2014, this bug will be closed automatically.
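As context for the first comment above (replicate failing the lk fop because it cannot yet heal locks): one way to see which brick processes actually hold the posix lock, and whether a newly added replica brick has any record of it, is a brick statedump. This is a hedged sketch, not taken from the report; the volume name dstore comes from the report, while the dump directory is build/configuration dependent and assumed here to be /var/run/gluster.

# Ask glusterd to dump the runtime state of every brick process of the volume
gluster volume statedump dstore

# On each brick server, look for posix lock entries in the generated dumps;
# adjust the directory if your build writes statedumps elsewhere
grep -A2 posixlk /var/run/gluster/*.dump.*

The pre-existing bricks should list the lock as ACTIVE, while a freshly added replica brick, onto which file1 has not been self-healed (see the getfattr output in the description below), has no lock state at all, which is consistent with the failures seen on the mounts.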
Created attachment 589046 [details]
fuse mount log

Description of problem:
-----------------------
When new bricks are added to a plain distribute volume (8 bricks) to change the volume type to distribute-replicate (8x2), ping_pong running on the fuse/nfs mounts fails.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
3.3.0qa45

How reproducible:
------------------
Often

Steps to Reproduce:
---------------------
1. Create a plain distribute volume (8 bricks).
2. Create fuse and nfs mounts.
3. Execute "/root/ping_pong_dir/vanilla_ping_pong/ping_pong ./file1 10" on both the fuse and nfs mounts.
4. Add 8 more bricks to the volume with replica count 2 (a hedged command sketch of these steps is appended at the end of this report).

Actual results:
----------------
Fuse mount output:-

mount -t glusterfs 192.168.2.35:/dstore /mnt/gfsc2

[06/04/12 - 15:44:32 root@APP-CLIENT1 gfsc2]# /root/ping_pong_dir/vanilla_ping_pong/ping_pong ./file1 10
unlock at 2 failed! - No such file or directory
lock at 4 failed! - No such file or directory
unlock at 3 failed! - No such file or directory
lock at 5 failed! - No such file or directory
unlock at 4 failed! - No such file or directory
lock at 6 failed! - No such file or directory
unlock at 5 failed! - No such file or directory
lock at 7 failed! - No such file or directory
unlock at 6 failed! - No such file or directory
lock at 8 failed! - No such file or directory
unlock at 7 failed! - No such file or directory
lock at 9 failed! - No such file or directory
unlock at 8 failed! - No such file or directory
lock at 0 failed! - No such file or directory
unlock at 9 failed! - No such file or directory
lock at 1 failed! - No such file or directory
unlock at 0 failed! - No such file or directory
lock at 2 failed! - No such file or directory

NFS mount output:-

mount -t nfs -o vers=3,noac 192.168.2.35:/dstore /mnt/nfsc2

[06/04/12 - 15:44:41 root@APP-CLIENT1 nfsc2]# /root/ping_pong_dir/vanilla_ping_pong/ping_pong ./file1 10
unlock at 1 failed! - No locks available
unlock at 2 failed! - No locks available
unlock at 3 failed! - No locks available
unlock at 4 failed! - No locks available
unlock at 5 failed! - No locks available
unlock at 6 failed! - No locks available
unlock at 7 failed! - No locks available
unlock at 8 failed! - No locks available
unlock at 9 failed! - No locks available
unlock at 0 failed! - No locks available

Expected results:
-------------------
Locks and unlocks on regions of the file should not fail.

Additional info:
---------------
The file is not self-healed onto the new brick.

Xattrs of the file on the old brick:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[06/04/12 - 16:41:18 root@APP-SERVER1 ~]# ls -l /export_sdc/dir2
total 0
-rw------- 2 root root 11 Jun 4 16:38 file1

[06/04/12 - 16:41:19 root@APP-SERVER1 ~]# getfattr -d -m . -e hex /export_sdc/dir2
getfattr: Removing leading '/' from absolute path names
# file: export_sdc/dir2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009ffffffbbffffff9
trusted.glusterfs.volume-id=0xc95888e0446244bea36c3bd5d8707346

[06/04/12 - 16:41:23 root@APP-SERVER1 ~]# getfattr -d -m . -e hex /export_sdc/dir2/file1
getfattr: Removing leading '/' from absolute path names
# file: export_sdc/dir2/file1
trusted.gfid=0x497888f6e6f14eeeaa86a77847c176bd

Xattrs of the file on the new brick:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[06/04/12 - 16:36:37 root@APP-SERVER2 ~]# getfattr -d -m . -e hex /export_sdc/dir2
getfattr: Removing leading '/' from absolute path names
# file: export_sdc/dir2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009ffffffbbffffff9
trusted.glusterfs.volume-id=0xc95888e0446244bea36c3bd5d8707346

[06/04/12 - 16:41:35 root@APP-SERVER2 ~]# getfattr -d -m . -e hex /export_sdc/dir2/file1
getfattr: /export_sdc/dir2/file1: No such file or directory
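Command sketch of the reproduction steps (hedged):
--------------------------------------------------
A minimal sketch of the steps listed under "Steps to Reproduce". The volume name (dstore), the client mount commands and the ping_pong invocation are taken from the report; the server names and brick paths are assumptions for illustration only.

# 1. Create and start a plain distribute volume with 8 bricks
#    (brick hosts and paths are illustrative, not from the report)
gluster volume create dstore \
    APP-SERVER1:/export_sdc/brick{1..4} \
    APP-SERVER2:/export_sdc/brick{1..4}
gluster volume start dstore

# 2. Mount the volume over fuse and nfs on the client
mount -t glusterfs 192.168.2.35:/dstore /mnt/gfsc2
mount -t nfs -o vers=3,noac 192.168.2.35:/dstore /mnt/nfsc2

# 3. Run ping_pong against the same file from both mounts, in parallel
(cd /mnt/gfsc2 && /root/ping_pong_dir/vanilla_ping_pong/ping_pong ./file1 10) &
(cd /mnt/nfsc2 && /root/ping_pong_dir/vanilla_ping_pong/ping_pong ./file1 10) &

# 4. While ping_pong is still running, add 8 more bricks and raise the replica count to 2
gluster volume add-brick dstore replica 2 \
    APP-SERVER1:/export_sdc/brick{5..8} \
    APP-SERVER2:/export_sdc/brick{5..8}

After step 4 the lock/unlock failures shown under "Actual results" start appearing on both mounts.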