Bug 1599668 - Current merge function causes issues when stacked on top of RAID50 [rhel-7.5.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kmod-kvdo
Version: 7.6
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: bjohnsto
QA Contact: Jakub Krysl
Docs Contact: Marek Suchánek
URL:
Whiteboard:
Depends On: 1593444
Blocks:
 
Reported: 2018-07-10 10:58 UTC by Jaroslav Reznik
Modified: 2021-09-03 11:54 UTC
CC List: 5 users

Fixed In Version: 6.1.0.178-16
Doc Type: If docs needed, set a value
Doc Text:
Previously, creating a VDO volume on top of a RAID 50 array caused the system to halt unexpectedly. With this update, the problem has been fixed, and creating a VDO volume on RAID 50 no longer crashes the system. (A reproduction sketch follows the fields below.)
Clone Of: 1593444
Environment:
Last Closed: 2018-08-16 14:19:04 UTC
Target Upstream Version:
Embargoed:
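
A minimal reproduction sketch for the Doc Text scenario above (the device names /dev/sd[b-g] and md numbers are hypothetical, not taken from this bug; the topology QA actually used is in comment 3):

    # two 3-disk RAID5 legs striped together by RAID0, i.e. RAID50
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
    # with kmod-kvdo older than the Fixed In Version (6.1.0.178-16), this step
    # could halt the system; with the fix it completes normally
    vdo create --name=vdo1 --device=/dev/md0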


Attachments: none


Links
Red Hat Product Errata RHBA-2018:2450 (last updated 2018-08-16 14:19:13 UTC)

Description Jaroslav Reznik 2018-07-10 10:58:12 UTC
This bug has been copied from bug #1593444 and has been proposed to be backported to 7.5 z-stream (EUS).

Comment 3 Jakub Krysl 2018-07-23 12:57:29 UTC
kernel-3.10.0-862.10.2.el7
kmod-kvdo-kmod-kvdo-6.1.0.171-17.el7_5

# vdo create -n vdo1 --device /dev/md50 --activate=enabled --compression=enabled --deduplication=enabled --sparseIndex=enabled --vdoLogicalSize=20T --verbose
Creating VDO vdo1
    grep MemAvailable /proc/meminfo
    pvcreate -qq --test /dev/md50
    modprobe kvdo
    vdoformat --uds-checkpoint-frequency=0 --uds-memory-size=0.25 --uds-sparse --logical-size=20T /dev/md50
    vdodumpconfig /dev/md50
Starting VDO vdo1
    dmsetup status vdo1
    grep MemAvailable /proc/meminfo
    modprobe kvdo
    vdodumpconfig /dev/md50
    dmsetup create vdo1 --uuid VDO-aa204d8f-f552-48b6-a31d-083405eca2c5 --table '0 42949672960 vdo /dev/md50 4096 disabled 0 32768 16380 on auto vdo1 ack=1,bio=4,bioRotationInterval=64,cpu=2,hash=1,logical=1,physical=1'
    dmsetup status vdo1
Starting compression on VDO vdo1
    dmsetup message vdo1 0 compression on
    dmsetup status vdo1
VDO instance 82 volume is ready at /dev/mapper/vdo1

# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb                           8:16   0     2T  0 disk
├─vg-lv1                    253:3    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv2                    253:4    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv3                    253:5    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv4                    253:6    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv5                    253:7    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv6                    253:8    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv7                    253:9    0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv8                    253:10   0  39.1G  0 lvm
│ └─md51                      9:51   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv9                    253:11   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv10                   253:12   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv11                   253:13   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv12                   253:14   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv13                   253:15   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv14                   253:16   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv15                   253:17   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv16                   253:18   0  39.1G  0 lvm
│ └─md52                      9:52   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv17                   253:19   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv18                   253:20   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv19                   253:21   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv20                   253:22   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv21                   253:23   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv22                   253:24   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
├─vg-lv23                   253:25   0  39.1G  0 lvm
│ └─md53                      9:53   0 273.2G  0 raid5
│   └─md50                    9:50   0 819.3G  0 raid0
│     └─vdo1                253:27   0    20T  0 vdo
└─vg-lv24                   253:26   0  39.1G  0 lvm
  └─md53                      9:53   0 273.2G  0 raid5
    └─md50                    9:50   0 819.3G  0 raid0
      └─vdo1                253:27   0    20T  0 vdo
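
Reading the tree bottom-up: 24 LVs on sdb back three 8-device RAID5 arrays (md51, md52, md53), which are striped into the RAID0 md50 that vdo1 sits on, i.e. RAID50. A sketch of how such a stack could be assembled (the exact mdadm invocations are not recorded in this log; LV paths assume a VG named "vg", matching the vg-lvN names above):

    # three 8-device RAID5 legs built from the LVs
    mdadm --create /dev/md51 --level=5 --raid-devices=8 /dev/vg/lv{1..8}
    mdadm --create /dev/md52 --level=5 --raid-devices=8 /dev/vg/lv{9..16}
    mdadm --create /dev/md53 --level=5 --raid-devices=8 /dev/vg/lv{17..24}
    # striping the three RAID5 legs gives the RAID50 device used by the
    # vdo create command above
    mdadm --create /dev/md50 --level=0 --raid-devices=3 /dev/md51 /dev/md52 /dev/md53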

Comment 4 Jakub Krysl 2018-07-23 12:58:35 UTC
(In reply to Jakub Krysl from comment #3)
> kmod-kvdo-kmod-kvdo-6.1.0.171-17.el7_5
sorry, should have been kmod-kvdo-6.1.0.181-17.el7_5
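
Given the package-name typo corrected above, a quick way to confirm which build is installed and which kvdo module the kernel actually loaded (a sketch; these commands are not part of the original verification log):

    rpm -q kmod-kvdo
    modinfo kvdo | grep -i '^version'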

Comment 9 errata-xmlrpc 2018-08-16 14:19:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2450
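
To pick up the fixed build on an affected RHEL 7.5.z system, a sketch (assumes the matching z-stream repository is already enabled):

    yum update kmod-kvdo
    rpm -q kmod-kvdo    # expect the Fixed In Version build (6.1.0.178-16) or later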

