Bug 1388210
| Summary: | [10.2.3-10.el7cp - kcephfs/rhel7.3] file: ceph.file.layout.pool_namespace: No such attribute | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasu Kulkarni <vakulkar> |
| Component: | Documentation | Assignee: | Bara Ancincova <bancinco> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.1 | CC: | ceph-eng-bugs, hnallurv, john.spray, kdreyer |
| Target Milestone: | rc | | |
| Target Release: | 2.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | Cause: The CephFS kernel client in RHEL 7.3 does not support the pool_namespace layout setting. Consequence: Files written from FUSE clients with a pool namespace set may not be accessible from RHEL 7.3 kernel clients; attempts to read or set the ceph.file.layout.pool_namespace extended attribute fail with "No such attribute" on those clients (see the example after the comments below). Workaround (if any): Result: | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-21 23:50:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Comments:

Yes, if the downstream documentation mentions this field, then it should warn about this. Bara: I've added notes in the doc field, could you pick this up? I'd say just the limitations section should be fine.

Looks good, thanks!
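To make the known issue concrete, here is a minimal sketch of the behavior difference between the two client types. The mount points and the namespace name (`myns`) are hypothetical placeholders; the extended attribute name and the "No such attribute" failure come from the log in the description below, and the exact symptom on the kernel client may vary (attribute not visible, or file data unreadable).

```bash
# Hypothetical mount points, for illustration only.
FUSE_MNT=/mnt/cephfs-fuse      # ceph-fuse client: supports pool_namespace
KERN_MNT=/mnt/cephfs-kernel    # RHEL 7.3 kernel client: does not

# On the FUSE client, place a new (empty) file's objects in a RADOS namespace.
touch "$FUSE_MNT/nsfile"
setfattr -n ceph.file.layout.pool_namespace -v myns "$FUSE_MNT/nsfile"
getfattr -n ceph.file.layout.pool_namespace "$FUSE_MNT/nsfile"

# On the RHEL 7.3 kernel client, the same attribute is not recognized, so the
# equivalent query fails (compare the "No such attribute" error in the log
# below), and the file's data may be inaccessible from that client as well.
getfattr -n ceph.file.layout.pool_namespace "$KERN_MNT/nsfile"
```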
Description of problem:

layout_vxattrs.sh fails during the pool namespace test on kernel 3.10.0-514.el7.x86_64:

    2016-10-21T20:20:35.235 INFO:tasks.workunit.client.0.pluto008.stderr:++ getfattr -n ceph.dir.layout.pool ./../.. --only-values
    2016-10-21T20:20:35.235 INFO:tasks.workunit.client.0.pluto008.stderr:+ datapool=cephfs_data
    2016-10-21T20:20:35.236 INFO:tasks.workunit.client.0.pluto008.stderr:+ break
    2016-10-21T20:20:35.236 INFO:tasks.workunit.client.0.pluto008.stderr:+ rm -f file file2
    2016-10-21T20:20:35.237 INFO:tasks.workunit.client.0.pluto008.stderr:+ touch file file2
    2016-10-21T20:20:35.238 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
    2016-10-21T20:20:35.238 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q object_size=
    2016-10-21T20:20:35.239 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
    2016-10-21T20:20:35.240 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
    2016-10-21T20:20:35.240 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q stripe_count=
    2016-10-21T20:20:35.241 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
    2016-10-21T20:20:35.242 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q stripe_unit=
    2016-10-21T20:20:35.242 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q pool=
    2016-10-21T20:20:35.243 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
    2016-10-21T20:20:35.244 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout.pool file
    2016-10-21T20:20:35.244 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout.pool_namespace file
    2016-10-21T20:20:35.245 INFO:tasks.workunit.client.0.pluto008.stderr:file: ceph.file.layout.pool_namespace: No such attribute
    2016-10-21T20:20:35.246 INFO:tasks.workunit.client.0.pluto008.stdout:.
    2016-10-21T20:20:35.247 INFO:tasks.workunit.client.0.pluto008.stdout:./..
    2016-10-21T20:20:35.247 INFO:tasks.workunit.client.0.pluto008.stdout:./../..
    2016-10-21T20:20:35.248 INFO:tasks.workunit.client.0.pluto008.stdout:# file: file
    2016-10-21T20:20:35.248 INFO:tasks.workunit.client.0.pluto008.stdout:ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"
    2016-10-21T20:20:35.249 INFO:tasks.workunit.client.0.pluto008.stdout:
    2016-10-21T20:20:35.250 INFO:tasks.workunit.client.0.pluto008.stdout:# file: file
    2016-10-21T20:20:35.250 INFO:tasks.workunit.client.0.pluto008.stdout:ceph.file.layout.pool="cephfs_data"
    2016-10-21T20:20:35.251 INFO:tasks.workunit.client.0.pluto008.stdout:
    2016-10-21T20:20:35.252 INFO:tasks.workunit:Stopping ['fs/misc'] on client.0...
    2016-10-21T20:20:35.252 INFO:teuthology.orchestra.run.pluto008:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone.client.0'
    2016-10-21T20:20:35.306 ERROR:teuthology.parallel:Exception in parallel execution
    Traceback (most recent call last):
      File "/home/teuthworker/src/teuthology_rh22/teuthology/parallel.py", line 83, in __exit__
        for result in self:
      File "/home/teuthworker/src/teuthology_rh22/teuthology/parallel.py", line 101, in next
        resurrect_traceback(result)
      File "/home/teuthworker/src/teuthology_rh22/teuthology/parallel.py", line 19, in capture_traceback
        return func(*args, **kwargs)
      File "/home/teuthworker/src/ceph-qa-suite_rh22/tasks/workunit.py", line 404, in _run_tests
        label="workunit test {workunit}".format(workunit=workunit)
      File "/home/teuthworker/src/teuthology_rh22/teuthology/orchestra/remote.py", line 194, in run
        r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
      File "/home/teuthworker/src/teuthology_rh22/teuthology/orchestra/run.py", line 402, in run
        r.wait()
      File "/home/teuthworker/src/teuthology_rh22/teuthology/orchestra/run.py", line 166, in wait
        label=self.label)

If pool namespaces are not supported with the latest 7.3 kernel, we probably want to document that?
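For reference, the point where the workunit stops reduces to roughly the following bash sketch. This is a simplified illustration, not the verbatim fs/misc/layout_vxattrs.sh; it only shows why the "No such attribute" error aborts the run on the RHEL 7.3 kernel client.

```bash
#!/bin/bash -ex
# Simplified sketch of the failing check. The "+" trace output in the log
# above indicates the workunit runs with -x, and the run stops at the first
# failing command, consistent with -e.

touch file file2

# Layout vxattrs that the 3.10.0-514.el7 kernel client does expose:
getfattr -n ceph.file.layout file | grep -q object_size=
getfattr -n ceph.file.layout.pool file

# The pool_namespace vxattr is not implemented by this kernel client:
# getfattr prints "file: ceph.file.layout.pool_namespace: No such attribute"
# and exits non-zero, which is where the test run above aborts.
getfattr -n ceph.file.layout.pool_namespace file
```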