Bug 960843
Summary: | nfs: rm-rf * does not remove the data | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Saurabh <saujain>
Component: | distribute | Assignee: | Raghavendra G <rgowdapp>
Status: | CLOSED DUPLICATE | QA Contact: | Saurabh <saujain>
Severity: | urgent | Docs Contact: |
Priority: | high | |
Version: | 2.1 | CC: | mzywusko, nsathyan, rhs-bugs, rwheeler, spalai, vagarwal, vbellur
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.4.0.10rhs | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | |
Clones: | 966852 (view as bug list) | Environment: |
Last Closed: | 2015-11-27 10:38:01 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 966852 | |
Description by Saurabh, 2013-05-08 06:33:03 UTC
Can we try the 'eager-lock disable' option for this? Also check the NFS server's health. (A volume-set sketch for this follows the SELinux output below.)

One possible error path:

1. dht_access_cbk sees an ENOENT error from the subvolume.
2. dht_migration_complete_check is called to check whether the file has been migrated.
3. fd is NULL, as this is a path-based op.
4. But in this case the inode is also NULL.
5. dht_migration_complete_check fails because both fd and inode are NULL.
6. dht_access2 returns with an EUCLEAN error.

The inode might be NULL because a parallel rm from another client could have succeeded, hence the ENOENT errors in step 1. Will continue the investigation; a code sketch of this path also follows below.

```
[2013-05-07 22:08:47.159766] W [client-rpc-fops.c:1369:client3_3_access_cbk] 0-dist-rep-client-0: remote operation failed: No such file or directory
[2013-05-07 22:08:47.160948] W [client-rpc-fops.c:1369:client3_3_access_cbk] 0-dist-rep-client-1: remote operation failed: No such file or directory
[2013-05-07 22:08:47.167239] E [dht-helper.c:1065:dht_inode_ctx_get] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_discover_complete+0x421) [0x7f8efb933721] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_layout_set+0x4e) [0x7f8efb91603e] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_inode_ctx_layout_get+0x1b) [0x7f8efb924cfb]))) 0-dist-rep-dht: invalid argument: inode
[2013-05-07 22:08:47.167919] E [dht-helper.c:1065:dht_inode_ctx_get] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_discover_complete+0x421) [0x7f8efb933721] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_layout_set+0x63) [0x7f8efb916053] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_inode_ctx_layout_set+0x34) [0x7f8efb916544]))) 0-dist-rep-dht: invalid argument: inode
[2013-05-07 22:08:47.167987] E [dht-helper.c:1084:dht_inode_ctx_set] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_discover_complete+0x421) [0x7f8efb933721] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_layout_set+0x63) [0x7f8efb916053] (-->/usr/lib64/glusterfs/3.4.0.4rhs/xlator/cluster/distribute.so(dht_inode_ctx_layout_set+0x52) [0x7f8efb916562]))) 0-dist-rep-dht: invalid argument: inode
[2013-05-07 22:08:47.168052] W [nfs3.c:1522:nfs3svc_access_cbk] 0-nfs: c2a64999: /6825/54 => -1 (Structure needs cleaning)
```

It looks like SELinux was also enabled:

```
cat /tmp/rhsqe-repo.lab.eng.blr.redhat.com/sosreports/960834/bigbend1-2013050804431367968426/etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

cat /tmp/rhsqe-repo.lab.eng.blr.redhat.com/sosreports/960834/bigbend3-2013050804431367968439/etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
```
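For reference, a minimal sketch of the suggested eager-lock change, assuming the option name `cluster.eager-lock` and the volume name dist-rep taken from the status output later in this bug; verify the exact option name with `gluster volume set help` on the installed version:

```
# Disable eager locking on the volume (gluster boolean options accept on/off).
gluster volume set dist-rep cluster.eager-lock off

# The change should then appear under "Options Reconfigured".
gluster volume info dist-rep
```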
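To make the six-step path above concrete, here is a simplified, hypothetical C sketch of the failure: the function names mirror dht_migration_complete_check and dht_access2 from the analysis, but the signatures and bodies are illustrative only, not the actual GlusterFS source.

```c
/* Hypothetical sketch of the failure path in steps 1-6 above. The names
 * mirror the DHT translator functions named in the analysis, but the
 * signatures and bodies are simplified illustrations, not GlusterFS code. */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct fd;     /* fd-based ops carry an open fd          */
struct inode;  /* path-based ops normally carry an inode */

/* Steps 2-5: the migration check needs an fd or an inode to re-resolve
 * the file on the destination subvolume. A path-based access() that
 * raced with a parallel rm from another client can have neither. */
static int
dht_migration_complete_check (struct fd *fd, struct inode *inode)
{
        if (fd == NULL && inode == NULL)
                return -EINVAL; /* nothing left to re-resolve */
        /* ... re-lookup the file on the migrated-to subvolume ... */
        return 0;
}

/* Step 6: the retry path converts that failure into EUCLEAN, which the
 * NFS layer reports as "Structure needs cleaning". */
static int
dht_access2 (struct fd *fd, struct inode *inode)
{
        if (dht_migration_complete_check (fd, inode) < 0)
                return -EUCLEAN;
        return 0;
}

/* One possible guard suggested by the analysis (an assumption, not the
 * shipped fix): if both fd and inode are gone, keep the subvolume's
 * original ENOENT, since a parallel rm having already removed the file
 * is the expected cause. */
static int
dht_access2_guarded (struct fd *fd, struct inode *inode, int op_errno)
{
        if (fd == NULL && inode == NULL)
                return -op_errno;
        return dht_access2 (fd, inode);
}

int
main (void)
{
        /* Simulate the race: path-based op, inode already removed. */
        printf ("unguarded: %d (EUCLEAN)\n", dht_access2 (NULL, NULL));
        printf ("guarded:   %d (ENOENT kept)\n",
                dht_access2_guarded (NULL, NULL, ENOENT));
        return 0;
}
```

The point of the sketch is the guard: when both fd and inode are NULL for a path-based op that raced with a parallel rm, propagating the subvolume's original ENOENT avoids the misleading EUCLEAN ("Structure needs cleaning") seen at nfs3svc_access_cbk.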
It is still not fixed:

```
[root@bigbend1 ~]# gluster volume status
Status of volume: dist-rep
Gluster process                                           Port   Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.115:/rhs/brick1/d1r1                       49152  Y       2209
Brick 10.70.37.164:/rhs/brick1/d1r2                       49152  Y       2205
Brick 10.70.37.55:/rhs/brick1/d2r1                        49152  Y       2198
Brick 10.70.37.168:/rhs/brick1/d2r2                       49152  Y       2196
Brick 10.70.37.115:/rhs/brick1/d3r1                       49153  Y       2218
Brick 10.70.37.164:/rhs/brick1/d3r2                       49153  Y       2214
Brick 10.70.37.55:/rhs/brick1/d4r1                        49153  Y       2207
Brick 10.70.37.168:/rhs/brick1/d4r2                       49153  Y       2205
Brick 10.70.37.115:/rhs/brick1/d5r1                       49154  Y       2227
Brick 10.70.37.164:/rhs/brick1/d5r2                       49154  Y       2223
Brick 10.70.37.55:/rhs/brick1/d6r1                        49154  Y       2216
Brick 10.70.37.168:/rhs/brick1/d6r2                       49154  Y       2214
NFS Server on localhost                                   2049   Y       2237
Self-heal Daemon on localhost                             N/A    Y       2243
NFS Server on c612bd05-bf73-445c-a206-45bea2b7d2bc        2049   Y       2226
Self-heal Daemon on c612bd05-bf73-445c-a206-45bea2b7d2bc  N/A    Y       2233
NFS Server on 474c7f95-5c0a-4142-b075-338b2612af37        2049   Y       2233
Self-heal Daemon on 474c7f95-5c0a-4142-b075-338b2612af37  N/A    Y       2240
NFS Server on 3d039042-43c3-4ad2-93a0-3e74d75ab666        2049   Y       2224
Self-heal Daemon on 3d039042-43c3-4ad2-93a0-3e74d75ab666  N/A    Y       2231

There are no active volume tasks
```

The two df snapshots below, taken about two minutes apart, show identical usage on 10.70.37.115:/dist-rep (5091392 KB used), consistent with the removed data not being freed:

```
[root@rhsauto020 nfs-regression]# date
Fri May 17 09:26:54 IST 2013
[root@rhsauto020 nfs-regression]# df
Filesystem                         1K-blocks    Used  Available Use% Mounted on
/dev/mapper/vg_rhsauto020-lv_root   51606140 2016516   46968184   5% /
tmpfs                                1961320       0    1961320   0% /dev/shm
/dev/vda1                             495844   37542     432702   8% /boot
/dev/mapper/vg_rhsauto020-lv_home  614742048  202088  583312876   1% /home
10.70.34.114:/opt                   51606528 6159872   42825216  13% /opt
10.70.37.115:/dist-rep             213780480 5091392  208689088   3% /mnt/nfs-regression.1368703141
[root@rhsauto020 nfs-regression]# date
Fri May 17 09:29:07 IST 2013
[root@rhsauto020 nfs-regression]# df
Filesystem                         1K-blocks    Used  Available Use% Mounted on
/dev/mapper/vg_rhsauto020-lv_root   51606140 2016516   46968184   5% /
tmpfs                                1961320       0    1961320   0% /dev/shm
/dev/vda1                             495844   37542     432702   8% /boot
/dev/mapper/vg_rhsauto020-lv_home  614742048  202088  583312876   1% /home
10.70.34.114:/opt                   51606528 6159872   42825216  13% /opt
10.70.37.115:/dist-rep             213780480 5091392  208689088   3% /mnt/nfs-regression.1368703141
```

Rajesh, it might be the same as the one worked on by Pranith in BZ 965987. Just check with him or test his patch. :)

Assigning to Raghavendra G, based on https://bugzilla.redhat.com/show_bug.cgi?id=960843#c9

Dev ack to 3.0 RHS BZs.

*** This bug has been marked as a duplicate of bug 1115367 ***