Bug 1510581 - issues when attempting vdo creation on live filesystem
Summary: issues when attempting vdo creation on live filesystem
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: vdo
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ken Raeburn
QA Contact: Jakub Krysl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-07 17:28 UTC by Corey Marthaler
Modified: 2019-03-06 00:35 UTC (History)
CC: 3 users

Fixed In Version: 6.1.0.69
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:47:46 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0871 0 None None None 2018-04-10 15:48:08 UTC

Description Corey Marthaler 2017-11-07 17:28:37 UTC
Description of problem:

[root@host-115 ~]# pvcreate /dev/sd[abcd]1
  Physical volume "/dev/sda1" successfully created.
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/sdd1" successfully created.

[root@host-115 ~]# vgcreate VG /dev/sd[abcd]1
  Volume group "VG" successfully created

[root@host-115 ~]# lvcreate --type raid1 -m 1 -n my_raid -L 10G VG
  Logical volume "my_raid" created.

[root@host-115 ~]# lvs -a -o +devices
  LV                 VG  Attr       LSize   Cpy%Sync Devices
  my_raid            VG  rwi-a-r---  10.00g 0.00     my_raid_rimage_0(0),my_raid_rimage_1(0)
  [my_raid_rimage_0] VG  Iwi-aor---  10.00g          /dev/sda1(1)
  [my_raid_rimage_1] VG  Iwi-aor---  10.00g          /dev/sdb1(1)
  [my_raid_rmeta_0]  VG  ewi-aor---   4.00m          /dev/sda1(0)
  [my_raid_rmeta_1]  VG  ewi-aor---   4.00m          /dev/sdb1(0)

[root@host-115 ~]# mkfs.ext4 /dev/VG/my_raid 
[...]
[root@host-115 ~]# mount /dev/VG/my_raid /mnt/test

# FS size is 9.8G
[root@host-115 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VG-my_raid           9.8G   37M  9.2G   1% /mnt/test

[root@host-115 ~]# vdo create --name my_vdo --device /dev/VG/my_raid 
Creating VDO my_vdo
Starting VDO my_vdo
vdo: ERROR - Could not set up device mapper for my_vdo
Removing VDO my_vdo
Stopping VDO my_vdo
vdo: ERROR - device-mapper: reload ioctl on my_vdo  failed: Invalid argument

# FS size is now 64Z?
[root@host-115 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VG-my_raid            64Z   64Z  9.7G 100% /mnt/test

# The existing data still appears correct
[root@host-115 ~]# /usr/tests/sts-rhel7.5/bin/checkit -w /mnt/test -f /tmp/vdo -v
checkit starting with:
VERIFY
Verify XIOR Stream: /tmp/vdo
Working dir:        /mnt/test


# Any additional data attempted appears to fail due to a "full" file system
[root@host-115 ~]# /usr/tests/sts-rhel7.5/bin/checkit -w /mnt/test -f /tmp/vdo2
checkit starting with:
CREATE
Num files:          100
Random Seed:        2084
Verify XIOR Stream: /tmp/vdo2
Working dir:        /mnt/test
kilcpmovaeqjpoxxlfpgiojr: open() error: Input/output error


Nov  7 11:01:01 host-115 systemd: Starting Session 5 of user root.
Nov  7 11:01:42 host-115 kernel: EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
Nov  7 11:02:12 host-115 kernel: md: mdX: resync done.
Nov  7 11:02:12 host-115 lvm[1980]: raid1 array, VG-my_raid, is now in-sync.
Nov  7 11:04:15 host-115 kernel: uds: loading out-of-tree module taints kernel.
Nov  7 11:04:15 host-115 kernel: uds: module verification failed: signature and/or required key missing - tainting kernel
Nov  7 11:04:15 host-115 kernel: uds: modprobe: uds starting
Nov  7 11:04:15 host-115 kernel: kvdo: modprobe: loaded version
Nov  7 11:04:15 host-115 UDS/vdoformat[2064]: ERROR  (vdoformat/2064) loadVolumeGeometry ID mismatch, expected 5, got 0: VDO Status: Component id mismatch in decoder (2059)
Nov  7 11:04:15 host-115 UDS/vdoformat[2064]: ERROR  (vdoformat/2064) decodeSuperBlock version mismatch, expected 12.0, got 0.0: VDO Status: Unsupported component version (2058)
Nov  7 11:04:16 host-115 kernel: kvdo0:dmsetup: starting device 'my_vdo' device instantiation 0 (ti=ffffc269c0247040) write policy sync
Nov  7 11:04:16 host-115 kernel: kvdo0:dmsetup: couldn't open device "/dev/VG/my_raid": error -16
Nov  7 11:04:16 host-115 kernel: device-mapper: table: 253:7: dedupe: Device lookup failed
Nov  7 11:04:16 host-115 kernel: device-mapper: ioctl: error adding target to table
Nov  7 11:04:16 host-115 vdo: ERROR - Could not set up device mapper for my_vdo
Nov  7 11:04:16 host-115 multipathd: dm-7: remove map (uevent)
Nov  7 11:04:16 host-115 multipathd: dm-7: remove map (uevent)
Nov  7 11:04:16 host-115 vdo: ERROR - device-mapper: reload ioctl on my_vdo  failed: Invalid argument
Nov  7 11:05:50 host-115 kernel: EXT4-fs error (device dm-6): ext4_map_blocks:565: inode #2: block 9251: comm checkit: lblock 0 mapped to illegal pblock (length 1)
Nov  7 11:05:50 host-115 kernel: EXT4-fs error (device dm-6): ext4_map_blocks:565: inode #2: block 9251: comm checkit: lblock 0 mapped to illegal pblock (length 1)
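
For reference, "error -16" in the kvdo log above is -EBUSY: the kernel refuses to open a block device that something else already claims. A small inspection sketch (my illustration, assuming GNU coreutils and Linux sysfs, not part of any vdo tooling) for listing the usual claimants of a device:

```shell
# Sketch: "error -16" is -EBUSY, i.e. the device is already claimed.
# The common claimants can be inspected from sysfs (block-layer
# holders, e.g. another dm table) and from the mount table.
dev="/dev/VG/my_raid"
real=$(readlink -m -- "$dev")      # resolve symlink, e.g. -> /dev/dm-6
name=$(basename -- "$real")
echo "holders of $name:"
ls "/sys/class/block/$name/holders" 2>/dev/null
grep -- "$real" /proc/self/mounts || echo "$real not in mount table"
```

On the reporter's host this would have shown the ext4 mount of /dev/VG/my_raid, which is what made the exclusive open fail.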




Version-Release number of selected component (if applicable):
3.10.0-772.el7.x86_64

lvm2-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-libs-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-cluster-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-lockd-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-python-boom-0.8-2.el7    BUILT: Fri Nov  3 07:48:54 CDT 2017
cmirror-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-libs-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-event-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-event-libs-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
vdo-6.1.0.34-8    BUILT: Fri Nov  3 06:58:45 CDT 2017
kmod-kvdo-6.1.0.34-7.el7    BUILT: Fri Nov  3 06:44:06 CDT 2017

Comment 6 Jakub Krysl 2017-12-04 14:57:48 UTC
Tested with vdo-6.1.0.72-12. VDO now runs 'pvcreate --test' during creation to check whether the device can be opened exclusively before creating the VDO there.


# mkfs.xfs -K /dev/mapper/mpatha
[...]
# mkdir mpatha
# mount /dev/mapper/mpatha mpatha
# vdo create --device=/dev/mapper/mpatha --name=vdo --verbose --indexMem 2
Creating VDO vdo
    pvcreate -qq --test /dev/mapper/mpatha
vdo: ERROR -   Can't open /dev/mapper/mpatha exclusively.  Mounted filesystem?
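
Before this fix, the same protection had to be scripted by hand. A minimal pre-flight guard (my sketch, not part of the vdo tooling) that consults the mount table before formatting:

```shell
# Hypothetical guard for older vdo builds: refuse to proceed if the
# backing device appears in the kernel's mount table.
dev="/dev/mapper/mpatha"
if grep -qw -- "$dev" /proc/self/mounts; then
    echo "refusing: $dev is mounted"
else
    echo "ok: $dev is not mounted"
fi
```

This only catches mounted filesystems; the pvcreate --test check that vdo now performs is stronger, since an exclusive open also fails for devices claimed in other ways (e.g. stacked in another dm table).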

Also, because no other operations are performed on the mounted filesystem, it does not get corrupted:
# df -h
Filesystem                                       Size  Used Avail Use% Mounted on
/dev/mapper/rhel_storageqe--75-root               50G  2.1G   48G   5% /
devtmpfs                                         3.8G     0  3.8G   0% /dev
tmpfs                                            3.8G     0  3.8G   0% /dev/shm
tmpfs                                            3.8G  9.1M  3.8G   1% /run
tmpfs                                            3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/sda1                                       1014M  151M  864M  15% /boot
/dev/mapper/rhel_storageqe--75-home              407G   33M  407G   1% /home
na3170b.lab.bos.redhat.com:/qe-data/kdump_cores  973G  237G  737G  25% /var/crash
tmpfs                                            770M     0  770M   0% /run/user/0
/dev/mapper/mpatha                               100G   33M  100G   1% /root/mpatha

Comment 9 errata-xmlrpc 2018-04-10 15:47:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0871

