Bug 1447628

Summary: [RGW]: "error in read_id for object name: default : (2) No such file or directory" message seen when rgw commands are run
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RGW
Reporter: Tejas <tchandra>
Assignee: Casey Bodley <cbodley>
Status: CLOSED ERRATA
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: low
Priority: unspecified
Version: 2.3
CC: cbodley, ceph-eng-bugs, edonnell, hnallurv, kbader, mbenjamin, owasserm, razvan, sweil, tserlin, vakulkar
Target Milestone: rc
Target Release: 2.3
Hardware: Unspecified
OS: Linux
Fixed In Version: RHEL: ceph-10.2.7-16.el7cp Ubuntu: ceph_10.2.7-18redhat1
Last Closed: 2017-06-19 13:32:43 UTC
Type: Bug

Description Tejas 2017-05-03 11:07:37 UTC
Description of problem:
   This error message is seen when most radosgw-admin commands are run.

magna077 ~]# radosgw-admin bucket list --cluster slave1
2017-05-03 09:32:10.580572 7f01a73dd9c0  0 error in read_id for object name: default : (2) No such file or directory   <----------
[
    "lop",
    "bad",
    "new1",
    "bigbucket4",
    "bigbucket3"
]


Version-Release number of selected component (if applicable):
ceph version 10.2.7-13.el7cp (4955aa6a90abc27bc043729db19df24e1c840eac)

How reproducible:
Always


Additional info:

This seems to be resolved upstream:
http://tracker.ceph.com/issues/15776

If that is the case, we just need to pull it downstream.

Comment 2 Tejas 2017-05-03 11:10:57 UTC
I forgot to mention that this error message was seen on a multisite setup.
Also, it does not affect the command output.

Thanks,
Tejas

Comment 9 Casey Bodley 2017-05-12 16:25:03 UTC
The tracker issue and upstream fix referred to here were just to provide a 'more descriptive error message' for this: all they did was change what the error message says. That change is indeed present in the downstream build being tested here.

But the original reported issue is that the error message is shown in the first place. Removing that error message (by reporting it at a higher log level) would be an additional code change that has not been made upstream. I'll propose such a change.
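The proposed change amounts to re-logging the message at a higher debug level so it is suppressed at default verbosity, rather than deleting the diagnostic. A minimal sketch of that level-gating idea, loosely modeled on Ceph's ldout(cct, level) pattern (the Logger type and names below are illustrative, not Ceph's actual logging API):

```cpp
#include <string>
#include <vector>

// Illustrative level-gated logger: a message is emitted only if its
// level is at or below the configured threshold. In Ceph, a message
// logged at level 0 appears even at default settings; re-logging it
// at a higher level (e.g. 10) hides it unless the operator raises
// the relevant debug setting (debug_rgw here).
struct Logger {
    int threshold;                      // configured debug level
    std::vector<std::string> emitted;   // messages actually shown

    void log(int level, const std::string& msg) {
        if (level <= threshold)         // gate on the threshold
            emitted.push_back(msg);
    }
};
```

With the default threshold of 0, a message moved from level 0 to level 10 simply stops appearing in normal command output, while remaining available for debugging at higher verbosity.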

Comment 14 Tejas 2017-05-15 11:24:39 UTC
Verified in ceph version:
ceph version 10.2.7-16.el7cp

>radosgw-admin bucket list --cluster master
[]

Comment 15 Razvan Musaloiu-E. 2017-05-23 14:25:57 UTC
(In reply to Casey Bodley from comment #9)
> The tracker issue and upstream fix being referred to here was just to
> provide a 'more descriptive error message' for this - all it did was change
> what the error message says. That change is indeed present in the downstream
> build being tested here.
> 
> But the original reported issue is that the error message is shown in the
> first place. Removing that error message (by reporting it at a higher log
> level) would be an additional code change that has not been made upstream.
> I'll propose such a change.

I think https://github.com/ceph/ceph/pull/9686 fixes the issue on
master. Backporting that PR may be all that needs to be done here.

Comment 18 errata-xmlrpc 2017-06-19 13:32:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1497