Bug 1262284 - Getting an Error after running `rbd-replay --read-only replay.bin`
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: rc
Target Release: 1.3.1
Assigned To: Josh Durgin
Depends On:
Reported: 2015-09-11 06:50 EDT by Tanay Ganguly
Modified: 2017-07-30 11:32 EDT (History)
7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2015-10-29 14:33:32 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
librbd API script (575 bytes, text/x-python)
2015-09-11 06:50 EDT, Tanay Ganguly

External Trackers
Tracker ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 13220 None None None Never

Description Tanay Ganguly 2015-09-11 06:50:44 EDT
Created attachment 1072520 [details]
librbd API script

Description of problem:
Getting an Error after running `rbd-replay --read-only replay.bin`

Version-Release number of selected component (if applicable):
ceph version 0.94.2

How reproducible:

Steps to Reproduce:
0. mkdir -p ~/traces
1. lttng create -o traces librbd
2. lttng enable-event -u 'librbd:*'
3. lttng add-context -u -t pthread_id
4. lttng start
5. Run the attached script (PFA)
6. lttng stop
7. rbd-replay-prep ~/traces/ust/uid/*/* replay.bin
8. rbd-replay --read-only replay.bin

Actual results:
After running the rbd-replay command, an error is reported:

rbd-replay --read-only replay.bin 
Unable to create IoCtx: -2

I also tried running rbd-replay-prep against the channel file directly, since lttng reported `UST event librbd:*` created in channel channel0:

[root@hp-ms-01-c04 ~]# rbd-replay-prep ~/traces/ust/uid/0/64-bit/channel0_0 replay1.bin


[error] Unable to open trace directory "/root/traces/ust/uid/0/64-bit/channel0_0".
[warning] [Context] Cannot open_trace of format ctf at path /root/traces/ust/uid/0/64-bit/channel0_0.
rbd_replay/rbd-replay-prep.cc: In function 'void Processor::run(std::vector<std::basic_string<char> >)' thread 7fee3fbfa7c0 time 2015-09-11 06:49:00.214968
rbd_replay/rbd-replay-prep.cc: 189: FAILED assert(trace_handle >= 0)
Assertion details: trace_handle = -1
 ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
 1: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0xeb) [0x41830b]
 2: (Processor::run(std::vector<std::string, std::allocator<std::string> >)+0x86e) [0x417d6e]
 3: (main()+0x26b) [0x40e71b]
 4: (__libc_start_main()+0xf5) [0x7fee3e3c0af5]
 5: rbd-replay-prep() [0x40eb41]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'


Expected results:
The replay should complete without any errors.

Additional info:
Comment 3 Josh Durgin 2015-09-23 21:27:43 EDT
rbd-replay-prep just needs the base directory, not the channel file.

This works:

rbd-replay-prep ~/traces/ust/uid/0/64-bit replay.bin
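In other words, given the channel-file path from the failed attempt above, the argument rbd-replay-prep needs is that file's parent directory. A minimal shell sketch (the path is the one from this report; dirname is a pure string operation, so nothing needs to exist on disk):

```shell
# Path lttng reported for the channel file (from this report):
channel_file=~/traces/ust/uid/0/64-bit/channel0_0

# rbd-replay-prep expects the directory containing the channel
# files, not an individual channel file:
trace_dir=$(dirname "$channel_file")
echo "$trace_dir"   # .../traces/ust/uid/0/64-bit
```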
Comment 4 Josh Durgin 2015-09-23 21:29:39 EDT
For rbd-replay, it tries to replay against the 'rbd' pool by default. You'll need to specify an existing pool that contains the images, e.g.:

rbd-replay -p Tanay-RBD --read-only replay.bin
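As a sketch of the fixed invocation (Tanay-RBD is the reporter's pool name; substitute whichever pool holds the traced images). The original failure, `Unable to create IoCtx: -2`, is -ENOENT: the default 'rbd' pool did not exist on this cluster:

```shell
# Pool that actually contains the traced images; the 'rbd' default
# does not exist here, hence the -2 (-ENOENT) IoCtx error.
pool=Tanay-RBD
rbd-replay -p "$pool" --read-only replay.bin
```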
Comment 5 Josh Durgin 2015-09-23 21:37:31 EDT
Added http://tracker.ceph.com/issues/13220 and http://tracker.ceph.com/issues/13221 for better error reporting.
Comment 6 Tanay Ganguly 2015-10-29 06:25:00 EDT
Followed the Document and it worked as expected, hence marking this Bug as Verified.


rbd-replay works fine after specifying the pool name.
Comment 7 Jason Dillaman 2015-10-29 14:30:13 EDT
Since no code change took place, should this just be closed instead of flagged verified?
Comment 8 Ken Dreyer (Red Hat) 2015-10-29 14:33:32 EDT
(In reply to Jason Dillaman from comment #7)
> Since no code change took place, should this just be closed instead of
> flagged verified?

Great catch, thanks.
