Bug 1388210 - [10.2.3-10.el7cp - kcephfs/rhel7.3] file: ceph.file.layout.pool_namespace: No such attribute
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 2.2
Assignee: Bara Ancincova
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-24 18:22 UTC by Vasu Kulkarni
Modified: 2017-03-21 23:50 UTC
CC: 4 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: The CephFS kernel client in Red Hat Enterprise Linux 7.3 does not support the pool_namespace file layout setting.
Consequence: Files written from FUSE clients with a pool namespace set may not be accessible from RHEL 7.3 kernel clients. Attempts to read or set the ceph.file.layout.pool_namespace extended attribute from a RHEL 7.3 kernel client fail with "No such attribute" (see the illustration following this field list).
Workaround (if any):
Result:
Clone Of:
Environment:
Last Closed: 2017-03-21 23:50:08 UTC
Target Upstream Version:
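
A minimal illustration of the consequence described in the Doc Text, assuming a ceph-fuse mount at /mnt/cephfs-fuse and a RHEL 7.3 kernel mount at /mnt/cephfs-kernel of the same file system (both mount points are hypothetical names):

# On the ceph-fuse mount, pool_namespace can be set and read back
# ("myns" is an arbitrary example namespace):
$ cd /mnt/cephfs-fuse
$ setfattr -n ceph.file.layout.pool_namespace -v myns file
$ getfattr -n ceph.file.layout.pool_namespace file
# file: file
ceph.file.layout.pool_namespace="myns"

# On the RHEL 7.3 kernel mount (3.10.0-514.el7), the same attribute is
# unrecognized, matching the error in the log below:
$ cd /mnt/cephfs-kernel
$ getfattr -n ceph.file.layout.pool_namespace file
file: ceph.file.layout.pool_namespace: No such attribute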



Description Vasu Kulkarni 2016-10-24 18:22:28 UTC
Description of problem:

The layout_vxattrs.sh workunit fails at the pool namespace step.

Kernel: 3.10.0-514.el7.x86_64

2016-10-21T20:20:35.235 INFO:tasks.workunit.client.0.pluto008.stderr:++ getfattr -n ceph.dir.layout.pool ./../.. --only-values
2016-10-21T20:20:35.235 INFO:tasks.workunit.client.0.pluto008.stderr:+ datapool=cephfs_data
2016-10-21T20:20:35.236 INFO:tasks.workunit.client.0.pluto008.stderr:+ break
2016-10-21T20:20:35.236 INFO:tasks.workunit.client.0.pluto008.stderr:+ rm -f file file2
2016-10-21T20:20:35.237 INFO:tasks.workunit.client.0.pluto008.stderr:+ touch file file2
2016-10-21T20:20:35.238 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
2016-10-21T20:20:35.238 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q object_size=
2016-10-21T20:20:35.239 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
2016-10-21T20:20:35.240 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
2016-10-21T20:20:35.240 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q stripe_count=
2016-10-21T20:20:35.241 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
2016-10-21T20:20:35.242 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q stripe_unit=
2016-10-21T20:20:35.242 INFO:tasks.workunit.client.0.pluto008.stderr:+ grep -q pool=
2016-10-21T20:20:35.243 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout file
2016-10-21T20:20:35.244 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout.pool file
2016-10-21T20:20:35.244 INFO:tasks.workunit.client.0.pluto008.stderr:+ getfattr -n ceph.file.layout.pool_namespace file
2016-10-21T20:20:35.245 INFO:tasks.workunit.client.0.pluto008.stderr:file: ceph.file.layout.pool_namespace: No such attribute
2016-10-21T20:20:35.246 INFO:tasks.workunit.client.0.pluto008.stdout:.
2016-10-21T20:20:35.247 INFO:tasks.workunit.client.0.pluto008.stdout:./..
2016-10-21T20:20:35.247 INFO:tasks.workunit.client.0.pluto008.stdout:./../..
2016-10-21T20:20:35.248 INFO:tasks.workunit.client.0.pluto008.stdout:# file: file
2016-10-21T20:20:35.248 INFO:tasks.workunit.client.0.pluto008.stdout:ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"
2016-10-21T20:20:35.249 INFO:tasks.workunit.client.0.pluto008.stdout:
2016-10-21T20:20:35.250 INFO:tasks.workunit.client.0.pluto008.stdout:# file: file
2016-10-21T20:20:35.250 INFO:tasks.workunit.client.0.pluto008.stdout:ceph.file.layout.pool="cephfs_data"
2016-10-21T20:20:35.251 INFO:tasks.workunit.client.0.pluto008.stdout:
2016-10-21T20:20:35.252 INFO:tasks.workunit:Stopping ['fs/misc'] on client.0...
2016-10-21T20:20:35.252 INFO:teuthology.orchestra.run.pluto008:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone.client.0'
2016-10-21T20:20:35.306 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_rh22/teuthology/parallel.py", line 83, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_rh22/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_rh22/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/ceph-qa-suite_rh22/tasks/workunit.py", line 404, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/teuthology_rh22/teuthology/orchestra/remote.py", line 194, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_rh22/teuthology/orchestra/run.py", line 402, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_rh22/teuthology/orchestra/run.py", line 166, in wait
    label=self.label)


If the pool namespace attribute is not supported by the latest RHEL 7.3 kernel, should we document that limitation?
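
For reference, here is a sketch of how a test could probe for client support before exercising the namespace checks. The probe below is illustrative only and is not the actual layout_vxattrs.sh code; it assumes setfattr fails on clients that do not understand the attribute, as the log above shows for the RHEL 7.3 kernel client:

# Illustrative probe, not the actual workunit code:
touch probe_file
if setfattr -n ceph.file.layout.pool_namespace -v testns probe_file 2>/dev/null
then
    echo "client supports pool_namespace; running the namespace checks"
else
    echo "client lacks pool_namespace support (e.g. RHEL 7.3 kernel); skipping"
fi
rm -f probe_file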

Comment 2 John Spray 2016-10-24 19:01:25 UTC
Yes, if the downstream documentation mentions this field then it should warn about this.

Comment 3 John Spray 2017-01-06 12:35:13 UTC
Bara: I've added notes in the doc field; could you pick this up?

Comment 5 John Spray 2017-01-09 13:32:05 UTC
I'd say just the limitations section should be fine.

Comment 7 John Spray 2017-01-09 15:58:34 UTC
Looks good, thanks!

