Bug 1725052 - kvdo does not prevent 2 VDOs from opening the same storage
Summary: kvdo does not prevent 2 VDOs from opening the same storage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: kmod-kvdo
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: Sweet Tea Dorminy
QA Contact: Filip Suba
URL:
Whiteboard:
Depends On:
Blocks: 1755139
 
Reported: 2019-06-28 10:36 UTC by Jakub Krysl
Modified: 2021-09-06 15:25 UTC
CC List: 3 users

Fixed In Version: 6.2.2.113
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 16:42:57 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-23939 0 None None None 2021-09-06 15:25:37 UTC
Red Hat Product Errata RHBA-2020:1782 0 None None None 2020-04-28 16:43:13 UTC

Description Jakub Krysl 2019-06-28 10:36:36 UTC
Description of problem:
Using device-mapper (dmsetup) directly, it is possible to create two VDO devices on top of the same storage with little to no complaint. The second VDO fails to load the UDS index, but the device is usable by both VDOs at the same time.

# vdoformat /dev/sda
Logical blocks defaulted to 2435905347 blocks.
# dmsetup create vdo0 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo0"
# dmsetup create vdo1 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo1"
# dmsetup table
vdo1: 0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo1
vdo0: 0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo0
# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   9.1T  0 disk 
├─vdo0      253:0    0   500K  0 dm   
└─vdo1      253:1    0   500K  0 dm   

syslog:
[ 6740.323954] kvdo2:dmsetup: WARNING: Running in sync mode atop a device supporting flushes is dangerous!
[ 6751.612181] kvdo3:dmsetup: WARNING: Running in sync mode atop a device supporting flushes is dangerous!
[ 6751.688810] kvdo3:reqQ: Device was dirty, rebuilding reference counts
[ 6753.464640] uds: kvdo3:dedupeQ: index could not be loaded: UDS Error: Index not saved cleanly (1069)
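
Until the module itself rejects the second table load, a guard in the calling script is one way to avoid this. A minimal sketch, assuming the backing device is referenced by path in existing tables (as it is in the dmsetup table output above); the script name and message are illustrative:

  # guard-vdo-create.sh: refuse to stack a second VDO on a device that an
  # existing device-mapper table already references by path
  DEV=/dev/sda
  if dmsetup table | grep -qw "$DEV"; then
      echo "refusing: $DEV is already referenced by an existing dm target" >&2
      exit 1
  fi
  dmsetup create vdo1 --table "0 1000 vdo V2 $DEV 2441609216 4096 32768 16380 off sync vdo1"

Note the path match is best-effort: a table that references the device by major:minor number would slip past the grep.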

Version-Release number of selected component (if applicable):
kmod-kvdo-6.2.1.102-53.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. vdoformat /dev/sda
2. dmsetup create vdo0 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo0"
3. dmsetup create vdo1 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo1"

Actual results:
Both VDO devices are created and running.

Expected results:
The second VDO fails to create because the device is already in use by another VDO.

Additional info:

Comment 1 Jakub Krysl 2019-10-15 14:38:54 UTC
Mass migration to Filip.

Comment 6 Filip Suba 2020-03-16 13:49:23 UTC
Verified with vdo-6.2.2.117-13.el8.

# vdoformat /dev/sda
Logical blocks defaulted to 2435905347 blocks.
The VDO volume can address 9 TB in 4655 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
# dmsetup create vdo0 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo0"
# dmsetup create vdo1 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo1"
device-mapper: reload ioctl on vdo1 (253:11) failed: Input/output error
Command failed.

[ 2527.805784] kvdo3:dmsetup: loading device 'vdo1'
[ 2527.810410] kvdo3:dmsetup: Existing layer named vdo0 already uses device /dev/sda
[ 2527.817899] kvdo3:dmsetup: Could not create kernel physical layer. (VDO error 2053, message Cannot share storage device with already-running VDO)
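
A quick follow-up check from the shell (a sketch, not part of the original verification run; device names follow the steps above). Expect only vdo0 to be listed, since the rejected vdo1 leaves no table behind:

# dmsetup ls --target vdo

Removing vdo0 releases /dev/sda, after which the same create line should succeed, confirming the check only applies while another VDO actually holds the storage:

# dmsetup remove vdo0
# dmsetup create vdo1 --table "0 1000 vdo V2 /dev/sda 2441609216 4096 32768 16380 off sync vdo1"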

Comment 8 errata-xmlrpc 2020-04-28 16:42:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1782

