Bug 1360674
| Summary: | [RGW:NFS]:- Every subdirectory access from the mount will have separate mount points on the client side | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | shylesh <shmohan> |
| Component: | RGW | Assignee: | Matt Benjamin (redhat) <mbenjamin> |
| Status: | CLOSED ERRATA | QA Contact: | Ramakrishnan Periyasamy <rperiyas> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 2.0 | CC: | cbodley, ceph-eng-bugs, ffilz, hnallurv, hyelloji, icolle, kbader, kdreyer, mbenjamin, owasserm, sweil, uboppana |
| Target Milestone: | rc | | |
| Target Release: | 2.2 | | |
| Hardware: | x86_64 | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-10.2.5-18.el7cp Ubuntu: ceph_10.2.5-11redhat1xenial | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-14 15:44:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
**Description** (shylesh, 2016-07-27 10:05:41 UTC)
Hi Matt,

The behaviour is still the same as mentioned in the description. While accessing a directory in the NFS mount path, a corresponding mount point gets created. Mount points are created only for directories directly inside the NFS mount. For example, if the NFS mount is /mnt/nfs/, accessing any directory inside this path creates a corresponding mount point:

```
[ubuntu@magna104 nfs]$ ls
aaaaaaa  bucket-from-nfs  dir1  dir2  dir3  dir4
```

However, if you access any deeper subdirectory ("cd dir1/doc/"), no additional mount points are created.

Steps followed:

1. Create an NFS mount.
2. Check the `mount` command output; only the NFS mount itself is listed.
3. Create some directories inside the NFS mount path.
4. cd into each directory.
5. Check the `mount` command output again; a mount point has been created for each directory accessed inside the NFS mount path.

Regards,
Ramakrishnan

---

What may be happening is how the fsid attribute is reported by FSAL_RGW. The Linux NFS client will create a new mount and superblock for each new fsid it encounters. This is part of the automatic mounting of exports in the PseudoFS, but it has also been applied to NFS v3.

---

Ok, the Ganesha version looks good. I'm trying to remember whether `stat -f` shows the fsid that came over the wire or an internal one... A wire trace is definitely best for this sort of thing.

---

Created attachment 1217995 [details]
tcpdump logs while accessing directories in NFS mount
Attaching tcpdump logs.
Command used to collect the logs: `sudo tcpdump -i any tcp | tee tcpdump.txt`
While collecting the logs, the following operations were performed:
1. cd /mnt/nfs/
2. cd /mnt/nfs/dir1/
3. cd /mnt/nfs/dir2/
4. cd /mnt/nfs/dir4/
Between each command, the `mount` command was run to verify mount point creation.
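The `stat -f` question raised above can be explored with a quick local sketch. This is not from the report; it assumes GNU coreutils `stat` (where `%i` in filesystem mode prints the fsid) and uses illustrative temporary paths rather than the actual NFS mount. On a healthy filesystem, a mount point and a directory below it report the same fsid; in this bug, each bucket directory came back with a distinct fsid, so the Linux NFS client instantiated a separate superblock (and thus a separate mount) per directory.

```shell
# Sketch (assumption: GNU coreutils stat; local paths stand in for the NFS mount).
dir=$(mktemp -d)        # stand-in for the NFS mount, e.g. /mnt/nfs
mkdir "$dir/dir1"       # stand-in for a bucket directory

# In filesystem mode (-f), %i prints the filesystem ID in hex.
fsid_root=$(stat -f -c %i "$dir")
fsid_sub=$(stat -f -c %i "$dir/dir1")
echo "root fsid: $fsid_root  subdir fsid: $fsid_sub"

rm -r "$dir"
```

On a single local filesystem the two values match; a per-directory mismatch is the signature of the behavior described in this bug.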
Fix pushed to http://tracker.ceph.com/issues/17850

---

Has https://github.com/ceph/ceph/pull/12045 passed Teuthology and is it ready to go downstream?

---

(In reply to Ken Dreyer (Red Hat) from comment #27)
> Has https://github.com/ceph/ceph/pull/12045 passed Teuthology and is it
> ready to go downstream?

We don't have Teuthology automation yet, but I think it has passed downstream QE, hasn't it? After inspection, it can go downstream, if it applies there.

Matt

---

The issue is still seen in the latest builds: nfs-ganesha-2.4.2-1.el7cp.x86_64. I am trying to delete the empty directories from the mount point.

```
[root@magna048 hell]# mount
/dev/sda1 on / type ext4 (rw,relatime,seclabel,data=ordered)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=3274684k,mode=700)
magna020.ceph.redhat.com:/ on /hell type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)
[root@magna048 hell]# ls
folder1  folder2  folder3
[root@magna048 hell]# rm -rf *
rm: cannot remove ‘folder1’: Device or resource busy
rm: cannot remove ‘folder2’: Device or resource busy
rm: cannot remove ‘folder3’: Device or resource busy
[root@magna048 hell]# mount
/dev/sda1 on / type ext4 (rw,relatime,seclabel,data=ordered)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=3274684k,mode=700)
magna020.ceph.redhat.com:/ on /hell type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)
magna020.ceph.redhat.com:/folder1 on /hell/folder1 type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)
magna020.ceph.redhat.com:/folder2 on /hell/folder2 type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)
magna020.ceph.redhat.com:/folder3 on /hell/folder3 type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)
[root@magna020 ~]# rpm -qa | grep ganesha
nfs-ganesha-rgw-2.4.2-1.el7cp.x86_64
nfs-ganesha-2.4.2-1.el7cp.x86_64
[root@magna020 ~]# rpm -qa | grep radosgw
ceph-radosgw-10.2.5-12.el7cp.x86_64
```

---

Hi,

Sorry, it turns out the upstream backport PR had not been merged. Done.

Matt

---

Moving this bug to verified state.
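As an aside on the `rm: cannot remove ... Device or resource busy` failures above: each folder is itself an active nfs4 submount, so it must be unmounted (e.g. `sudo umount /hell/folder1`) before it can be removed through the parent mount. The per-directory submounts can be picked out of `mount` output with a small filter; this sketch runs against an abridged sample of the output from the report rather than a live system.

```shell
# Sketch: list per-directory nfs4 submounts below /hell from `mount` output.
# Sample text is abridged from the report (options shortened for readability).
mount_output='magna020.ceph.redhat.com:/ on /hell type nfs4 (rw,relatime)
magna020.ceph.redhat.com:/folder1 on /hell/folder1 type nfs4 (rw,relatime)
magna020.ceph.redhat.com:/folder2 on /hell/folder2 type nfs4 (rw,relatime)
magna020.ceph.redhat.com:/folder3 on /hell/folder3 type nfs4 (rw,relatime)'

# mount output fields: $1=source $2="on" $3=mountpoint $4="type" $5=fstype
# Keep only nfs4 mounts strictly below /hell (the parent mount itself is excluded).
submounts=$(printf '%s\n' "$mount_output" \
    | awk '$5 == "nfs4" && $3 ~ /^\/hell\//{print $3}')
echo "$submounts"
```

On a live client, the same filter applied to real `mount` output yields the list of directories that need `umount` before `rm` can succeed.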
Verified in ceph "10.2.5-18.el7cp (6d6e431ef9e773beaa8ddc28b6c552d6f813b36a)"
NFS-Ganesha: nfs-ganesha-rgw-2.4.2-1.el7cp.x86_64

```
[ubuntu@host03 ~]$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=3274684k,mode=700,uid=1000,gid=1118)
host03.abc.com:/ on /mnt/nfs type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=x.x.x.x,local_lock=none,addr=x.x.x.x)
[ubuntu@host03 ~]$ cd /mnt/nfs/nfs_mount/
[ubuntu@host03 nfs_mount]$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=3274684k,mode=700,uid=1000,gid=1118)
host03.abc.com:/ on /mnt/nfs type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=x.x.x.x,local_lock=none,addr=x.x.x.x)
[ubuntu@host03 nfs_mount]$ cd ..
[ubuntu@host03 nfs]$ cd nfsuser_s3_dir/
[ubuntu@host03 nfsuser_s3_dir]$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=3274684k,mode=700,uid=1000,gid=1118)
host3.abc.com:/ on /mnt/nfs type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=x.x.x.x,local_lock=none,addr=x.x.x.x)
```

No additional mount points are created when accessing directories inside the mount.

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0514.html