Bug 1298738

Summary: block device /dev/rbd/$pool/$image is not present
Product: Red Hat Ceph Storage Reporter: Rachana Patel <racpatel>
Component: RBD    Assignee: Josh Durgin <jdurgin>
Status: CLOSED NOTABUG QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 1.3.2    CC: asriram, ceph-eng-bugs, hnallurv, jdillama, jdurgin, kdreyer, ngoswami, racpatel
Target Milestone: rc    Keywords: Documentation, ZStream
Target Release: 1.3.2   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-02-12 19:42:33 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Rachana Patel 2016-01-14 21:21:27 UTC
Description of problem:
======================
In the Installation document, the chapter BLOCK DEVICE QUICK START has an example of creating a file system using a block device.
"
Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
"

Couldn't find /dev/rbd/rbd/foo, and this command fails with the error 'Could not stat /dev/rbd/rbd/foo --- Not a directory'.


Version-Release number of selected component (if applicable):
=============================================================
0.94.5-1.el7cp.x86_64

How reproducible:
=================
always

Steps to Reproduce:
1. Follow the instructions mentioned in 'BLOCK DEVICE QUICK START'
2. The steps below fail:
===
4. Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
This may take a few moments.
5. Mount the file system on the ceph-client node.
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
===
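For context, the surrounding quick-start sequence is roughly the following (a sketch, not a verbatim copy of the document; the `--size 4096` argument is an assumption, and the commands are echoed rather than executed so the flow can be read end to end):

```shell
#!/bin/sh
# Dry-run sketch of the Block Device Quick Start sequence (pool "rbd",
# image "foo", as in the doc). The run() wrapper echoes each command;
# remove it to actually execute them on a ceph-client node.
run() { echo "+ $*"; }

POOL=rbd
IMAGE=foo

run rbd create "$IMAGE" --size 4096 --pool "$POOL"     # create the image
run sudo rbd map "$IMAGE" --pool "$POOL"               # map via the rbd kernel module
run sudo mkfs.ext4 -m0 "/dev/rbd/$POOL/$IMAGE"         # step 4: create the file system
run sudo mkdir /mnt/ceph-block-device
run sudo mount "/dev/rbd/$POOL/$IMAGE" /mnt/ceph-block-device  # step 5: mount it
```

Steps 4 and 5 depend on the /dev/rbd/$POOL/$IMAGE symlink existing, which is exactly what is missing here.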

Actual results:
==============
Unable to create the file system and unable to mount it.


Additional info:
=================
It should be '/dev/rbd#' instead of '/dev/rbd/rbd/foo'.

Comment 2 Ken Dreyer (Red Hat) 2016-02-04 03:37:10 UTC
Jason/Josh, could you look briefly over the suggested change and confirm that /dev/rbd# is correct?

Comment 3 Josh Durgin 2016-02-04 03:51:39 UTC
The documented '/dev/rbd/$pool/$image' path is correct - that it isn't present indicates that the udev rule setting it up was not installed or not working.

Comment 7 Ken Dreyer (Red Hat) 2016-02-09 18:17:59 UTC
Rachana, can you reproduce this bug? Can you also verify that 50-rbd.rules is present on your system?
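A quick way to check for the rules file (a sketch; the candidate directories are assumptions, since the rules file location varies between releases):

```shell
#!/bin/sh
# Look for the rbd udev rule that creates the /dev/rbd/$pool/$image
# symlinks. Directories to search can be passed as arguments; by
# default the usual udev rules locations are checked.
find_rbd_rules() {
    dirs="$*"
    [ -z "$dirs" ] && dirs="/usr/lib/udev/rules.d /lib/udev/rules.d /etc/udev/rules.d"
    for d in $dirs; do
        [ -e "$d/50-rbd.rules" ] && echo "$d/50-rbd.rules"
    done
    return 0
}

find_rbd_rules   # prints each location where 50-rbd.rules is installed
```

If the function prints nothing, the rules file is not installed, which would explain the missing symlink.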

Comment 10 Ken Dreyer (Red Hat) 2016-02-09 19:19:34 UTC
Could this be an issue in ceph-rbdnamer? I don't know how to debug udev issues :(

Comment 11 Jason Dillaman 2016-02-09 19:34:38 UTC
If you run "/usr/bin/ceph-rbdnamer <device number>", you should be able to see it dump the pool and image name:

# /usr/bin/ceph-rbdnamer 0
rbd foo

You can test udev via:

# sudo udevadm test --action=add `udevadm info -q path -n /dev/rbdX`
... snip ...
.ID_FS_TYPE_NEW=
ACTION=add
DEVLINKS=/dev/rbd/rbd/foo
DEVNAME=/dev/rbd0
DEVPATH=/devices/virtual/block/rbd0
DEVTYPE=disk
ID_FS_TYPE=
MAJOR=251
MINOR=0
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=699743934832

The important line is the DEVLINKS entry showing that it is creating the necessary symlink.
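When scanning that output in a script, the symlink check can be automated with a small filter (a sketch; `get_devlinks` is a hypothetical helper, and the sample input is a fragment of the output quoted above):

```shell
#!/bin/sh
# Pull the DEVLINKS entry out of `udevadm test` output to confirm that
# the /dev/rbd/$pool/$image symlink would be created for the device.
get_devlinks() {
    sed -n 's/^DEVLINKS=//p'
}

# Example with a fragment of the output quoted above:
printf 'ACTION=add\nDEVLINKS=/dev/rbd/rbd/foo\nDEVNAME=/dev/rbd0\n' | get_devlinks
```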

Comment 12 Rachana Patel 2016-02-12 19:42:33 UTC
Unable to reproduce with the current build; with the latest build the link is created, so closing the bug.

Re-testing done with versions:
ceph-deploy-1.5.27.4-3.el7cp.noarch
ceph-common-0.94.5-9.el7cp.x86_64