Bug 2017620 - rbd-nbd: generate and send device cookie with netlink connect request
Summary: rbd-nbd: generate and send device cookie with netlink connect request
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.1
Assignee: Ilya Dryomov
QA Contact: Preethi
URL:
Whiteboard:
Depends On:
Blocks: 1851289
 
Reported: 2021-10-27 03:09 UTC by Prasanna Kumar Kalever
Modified: 2022-04-04 10:22 UTC (History)
3 users

Fixed In Version: ceph-16.2.7-48.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:22:04 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-2106 0 None None None 2021-10-27 03:09:32 UTC
Red Hat Product Errata RHSA-2022:1174 0 None None None 2022-04-04 10:22:28 UTC

Description Prasanna Kumar Kalever 2021-10-27 03:09:09 UTC
Description of problem:

On remap/attach of a device, rbd-nbd has no way to verify that the requested backend storage matches the backend storage of the initial map.

Say an initial map request for backend "pool1/image1" got mapped to /dev/nbd0 and the userspace process is then terminated or detached. A subsequent remap/attach request within reattach-timeout is allowed to reuse /dev/nbd0 for a different backend, "pool1/image2".

For example, an operation like below could be dangerous:

$ sudo rbd-nbd map --try-netlink rbd-pool/ext4-image
/dev/nbd0
$ sudo blkid /dev/nbd0
/dev/nbd0: UUID="bfc444b4-64b1-418f-8b36-6e0d170cfc04" TYPE="ext4"
$ sudo pkill -15 rbd-nbd    # terminate the nodeplugin
$ sudo rbd-nbd attach --try-netlink --device /dev/nbd0 rbd-pool/xfs-image
/dev/nbd0
$ sudo blkid /dev/nbd0
/dev/nbd0: UUID="d29bf343-6570-4069-a9ea-2fa156ced908" TYPE="xfs"

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
Provided above.

Actual results:
Attach accepts other backend images, which is dangerous.

Expected results:
Attach should reject other backends for a given device.


Additional info:
https://tracker.ceph.com/issues/53046
https://github.com/ceph/ceph/pull/41323
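
The fix referenced above has rbd-nbd generate a per-device cookie at map time (sent with the netlink connect request) and require a matching cookie on attach. A minimal Python sketch of that validation logic follows; the class and method names are illustrative only, not the actual Ceph implementation:

```python
# Hypothetical model of cookie-based attach validation, mirroring the
# behavior shown in Comment 8 (names are illustrative, not Ceph code).
import uuid


class NbdDevice:
    def __init__(self):
        self.cookie = None
        self.image = None

    def map(self, image):
        """Map an image and generate a per-device cookie
        (analogous to the cookie sent with the netlink connect request)."""
        self.image = image
        self.cookie = str(uuid.uuid4())
        return self.cookie

    def attach(self, image, cookie=None, force=False):
        """Reattach: refuse unless the caller presents the matching cookie
        (or explicitly forces, accepting the risk of data corruption)."""
        if self.cookie is not None and not force:
            if cookie is None:
                raise ValueError("must specify --cookie <arg> or --force to proceed")
            if cookie != self.cookie:
                raise ValueError("cookie mismatch")
        self.image = image


dev = NbdDevice()
c = dev.map("rbd-pool/ext4-image")
try:
    dev.attach("rbd-pool/xfs-image")            # no cookie: rejected
except ValueError as e:
    print(e)
dev.attach("rbd-pool/ext4-image", cookie=c)     # matching cookie: allowed
```

This mirrors the three outcomes in the verification transcript: attach without a cookie is rejected, a mismatched cookie is rejected, and the cookie returned by map (via --show-cookie) is accepted.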

Comment 1 RHEL Program Management 2021-10-27 03:09:15 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 8 Preethi 2022-02-21 09:11:18 UTC
Working as expected. Verified with the kernel and ceph versions below.

[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device -t nbd map test/image2 --options try-netlink --show-cookie
/dev/nbd0 5e4e3320-24ee-4948-a670-20387a75d8cc
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device detach -t nbd test/image2
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device attach -t nbd --device /dev/nbd0 test/image2
rbd: could not validate attach request
rbd: mismatching the image and the device may lead to data corruption
rbd: must specify --cookie <arg> or --force to proceed
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device attach -t nbd --device /dev/nbd0  --cookie 5e4e3320-24ee-4948-a670-20387a75d8cc test/image2
/dev/nbd0
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device detach -t nbd test/image2
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device attach -t nbd --device /dev/nbd0 --cookie 5e4e3320-24ee-4948-a670-20387a75d8dd test/image2
rbd-nbd: cookie mismatch
rbd: rbd-nbd failed with error: /usr/bin/rbd-nbd: exit status: 1
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device attach -t nbd --device /dev/nbd0 --cookie 5e4e3320-24ee-4948-a670-20387a75d8cc test/image2
/dev/nbd0
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# rbd device unmap -t nbd test/image2
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# 
[root@ceph-scale-test-1gjvin-node1-installer cephuser]#


Worked as expected from step 1 to step 7.
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# uname -r
4.18.0-368.el8.x86_64
[root@ceph-scale-test-1gjvin-node1-installer cephuser]# ceph version
ceph version 16.2.7-65.el8cp (499bc1e23ab4671631da5affff6e1c772b8fe42d) pacific (stable)
[root@ceph-scale-test-1gjvin-node1-installer cephuser]#

Comment 10 errata-xmlrpc 2022-04-04 10:22:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

