Bug 1599668
| | | | |
|---|---|---|---|
| Summary: | Current merge function causes issues when stacked on top of RAID50 [rhel-7.5.z] | | |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jaroslav Reznik <jreznik> |
| Component: | kmod-kvdo | Assignee: | bjohnsto |
| Status: | CLOSED ERRATA | QA Contact: | Jakub Krysl <jkrysl> |
| Severity: | high | Docs Contact: | Marek Suchánek <msuchane> |
| Priority: | high | | |
| Version: | 7.6 | CC: | awalsh, bjohnsto, jkrysl, limershe, rhandlin |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | 6.1.0.178-16 | Doc Type: | If docs needed, set a value |
| Doc Text: | Previously, creating a VDO volume on top of a RAID 50 array caused the system to halt unexpectedly. With this update, the problem has been fixed, and creating a VDO volume on RAID 50 no longer crashes the system. | | |
| Story Points: | --- | | |
| Clone Of: | 1593444 | Environment: | |
| Last Closed: | 2018-08-16 14:19:04 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1593444 | | |
| Bug Blocks: | | | |
Description (Jaroslav Reznik, 2018-07-10 10:58:12 UTC)
```
kernel-3.10.0-862.10.2.el7
kmod-kvdo-kmod-kvdo-6.1.0.171-17.el7_5
```

```
# vdo create -n vdo1 --device /dev/md50 --activate=enabled --compression=enabled --deduplication=enabled --sparseIndex=enabled --vdoLogicalSize=20T --verbose
Creating VDO vdo1
    grep MemAvailable /proc/meminfo
    pvcreate -qq --test /dev/md50
    modprobe kvdo
    vdoformat --uds-checkpoint-frequency=0 --uds-memory-size=0.25 --uds-sparse --logical-size=20T /dev/md50
    vdodumpconfig /dev/md50
Starting VDO vdo1
    dmsetup status vdo1
    grep MemAvailable /proc/meminfo
    modprobe kvdo
    vdodumpconfig /dev/md50
    dmsetup create vdo1 --uuid VDO-aa204d8f-f552-48b6-a31d-083405eca2c5 --table '0 42949672960 vdo /dev/md50 4096 disabled 0 32768 16380 on auto vdo1 ack=1,bio=4,bioRotationInterval=64,cpu=2,hash=1,logical=1,physical=1'
    dmsetup status vdo1
Starting compression on VDO vdo1
    dmsetup message vdo1 0 compression on
    dmsetup status vdo1
VDO instance 82 volume is ready at /dev/mapper/vdo1
```

```
# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb               8:16   0     2T  0 disk
├─vg-lv1        253:3    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv2        253:4    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv3        253:5    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv4        253:6    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv5        253:7    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv6        253:8    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv7        253:9    0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv8        253:10   0  39.1G  0 lvm
│ └─md51          9:51   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv9        253:11   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv10       253:12   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv11       253:13   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv12       253:14   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv13       253:15   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv14       253:16   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv15       253:17   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv16       253:18   0  39.1G  0 lvm
│ └─md52          9:52   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv17       253:19   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv18       253:20   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv19       253:21   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv20       253:22   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv21       253:23   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv22       253:24   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
├─vg-lv23       253:25   0  39.1G  0 lvm
│ └─md53          9:53   0 273.2G  0 raid5
│   └─md50        9:50   0 819.3G  0 raid0
│     └─vdo1    253:27   0    20T  0 vdo
└─vg-lv24       253:26   0  39.1G  0 lvm
  └─md53          9:53   0 273.2G  0 raid5
    └─md50        9:50   0 819.3G  0 raid0
      └─vdo1    253:27   0    20T  0 vdo
```

(In reply to Jakub Krysl from comment #3)
> kmod-kvdo-kmod-kvdo-6.1.0.171-17.el7_5

Sorry, should have been kmod-kvdo-6.1.0.181-17.el7_5

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2450
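As a side note on the geometry in the reproducer: the dmsetup table line begins with `0 42949672960 vdo ...`, where the second field is the target length in 512-byte sectors. This matches the requested `--vdoLogicalSize=20T` when `T` is read as TiB. A minimal sketch of that arithmetic:

```python
# Sanity-check the sector count in the dmsetup table against the
# requested logical size (20T is treated as 20 TiB here).
SECTOR_SIZE = 512                      # bytes per sector in device-mapper tables
logical_bytes = 20 * 2**40             # 20 TiB in bytes
sectors = logical_bytes // SECTOR_SIZE
print(sectors)                         # 42949672960, as in the table line
```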
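The lsblk tree also shows the RAID 50 layout behind `/dev/md50`: three 8-member RAID 5 sets (md51, md52, md53) striped together by the RAID 0 array md50. A rough capacity model of that stack, ignoring mdadm metadata overhead (which is why lsblk reports 273.2G per set rather than exactly 7 x 39.1G):

```python
# Rough capacity model for the RAID 50 stack shown in lsblk.
# Ignores mdadm superblock overhead, so real arrays report slightly less.
def raid5_capacity(members: int, member_size: float) -> float:
    """RAID 5 reserves one member's worth of space for parity."""
    return (members - 1) * member_size

def raid0_capacity(member_sizes: list) -> float:
    """RAID 0 stripes its members, so capacities simply add up."""
    return sum(member_sizes)

lv_size = 39.1  # GiB, per vg-lv* logical volume in the lsblk output
raid5_sets = [raid5_capacity(8, lv_size) for _ in ("md51", "md52", "md53")]
print(round(raid5_sets[0], 1))             # 273.7 GiB per set (lsblk: 273.2G)
print(round(raid0_capacity(raid5_sets), 1))  # 821.1 GiB for md50 (lsblk: 819.3G)
```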