Bug 1593444
| Summary: | Current merge function causes issues when stacked on top of RAID50 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | bjohnsto |
| Component: | kmod-kvdo | Assignee: | bjohnsto |
| Status: | CLOSED ERRATA | QA Contact: | Jakub Krysl <jkrysl> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.6 | CC: | awalsh, jkrysl, jreznik, limershe, rhandlin |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | Flags: | rhandlin: needinfo+ |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | 6.1.1.103 | Doc Type: | If docs needed, set a value |
| Doc Text: | Previously, creating a VDO volume on top of a RAID 50 array caused the system to halt unexpectedly. With this update, the problem has been fixed, and creating a VDO volume on RAID 50 no longer crashes the system. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Clones: | 1599662, 1599668 (view as bug list) | | |
| Last Closed: | 2018-10-30 09:39:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1599662, 1599668 | | |
**Description** (bjohnsto, 2018-06-20 20:35:24 UTC)

Tested package versions:

```
kernel-3.10.0-915.el7
vdo-6.1.1.111-3.el7
kmod-kvdo-6.1.1.111.1-el7
```

Regression testing passed on both a usual device and on RAID 50:
```
# mdadm --create /dev/md50 --level=0 --raid-devices=3 /dev/md5{1..3}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md50 started.
```
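The transcript above captures only the top RAID 0 layer; the three RAID 5 legs md51, md52 and md53 already existed. For context, here is a minimal sketch of building the whole RAID 50 stack from scratch, inferred from the lsblk output further below; the /dev/vg/lv* member paths are assumptions based on that output, not commands from the report:

```
# Three 8-member RAID 5 legs (LV paths assumed from the lsblk output below)
mdadm --create /dev/md51 --level=5 --raid-devices=8 /dev/vg/lv{1..8}
mdadm --create /dev/md52 --level=5 --raid-devices=8 /dev/vg/lv{9..16}
mdadm --create /dev/md53 --level=5 --raid-devices=8 /dev/vg/lv{17..24}

# Stripe the three legs into a RAID 0 array: RAID 5+0, i.e. RAID 50
mdadm --create /dev/md50 --level=0 --raid-devices=3 /dev/md5{1..3}
```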
```
# vdo create -n vdo1 --device /dev/md50 --activate=enabled --compression=enabled --deduplication=enabled --sparseIndex=enabled --vdoLogicalSize=20T --verbose
Creating VDO vdo1
    grep MemAvailable /proc/meminfo
    pvcreate -qq --test /dev/md50
    modprobe kvdo
    vdoformat --uds-checkpoint-frequency=0 --uds-memory-size=0.25 --uds-sparse --logical-size=20T /dev/disk/by-id/md-uuid-48a2b43e:052de42b:0dabe16c:0988fc32
    vdodumpconfig /dev/disk/by-id/md-uuid-48a2b43e:052de42b:0dabe16c:0988fc32
Starting VDO vdo1
    dmsetup status vdo1
    grep MemAvailable /proc/meminfo
    modprobe kvdo
    vdodumpconfig /dev/disk/by-id/md-uuid-48a2b43e:052de42b:0dabe16c:0988fc32
    dmsetup create vdo1 --uuid VDO-035ef5c9-a203-467b-b18e-cd2934b024a0 --table '0 42949672960 vdo /dev/disk/by-id/md-uuid-48a2b43e:052de42b:0dabe16c:0988fc32 4096 disabled 0 32768 16380 on auto vdo1 ack=1,bio=4,bioRotationInterval=64,cpu=2,hash=1,logical=1,physical=1'
    dmsetup status vdo1
Starting compression on VDO vdo1
    dmsetup message vdo1 0 compression on
    vdodmeventd -r vdo1
    dmsetup status vdo1
VDO instance 89 volume is ready at /dev/mapper/vdo1
```
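Once `vdo create` reports the volume ready, the stack can be sanity-checked before use. A minimal sketch, assuming the vdo1 name from the output above; the XFS filesystem, /mnt mount point, and test file are illustrative assumptions, not steps from the report:

```
# Verify the device-mapper target is online and healthy
dmsetup status vdo1

# Show physical usage and space savings behind the 20T logical address space
vdostats --human-readable /dev/mapper/vdo1

# Exercise the full stack; -K skips the long initial discard on the thin volume
mkfs.xfs -K /dev/mapper/vdo1
mount -o discard /dev/mapper/vdo1 /mnt
dd if=/dev/urandom of=/mnt/testfile bs=1M count=1024 oflag=direct
```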
```
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 2T 0 disk
├─vg-lv1 253:2 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv2 253:4 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv3 253:5 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv4 253:6 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv5 253:7 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv6 253:8 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv7 253:9 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv8 253:10 0 39.1G 0 lvm
│ └─md51 9:51 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv9 253:11 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv10 253:12 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv11 253:13 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv12 253:14 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv13 253:15 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv14 253:16 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv15 253:17 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv16 253:18 0 39.1G 0 lvm
│ └─md52 9:52 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv17 253:19 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv18 253:20 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv19 253:21 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv20 253:22 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv21 253:23 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv22 253:24 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
├─vg-lv23 253:25 0 39.1G 0 lvm
│ └─md53 9:53 0 273.2G 0 raid5
│ └─md50 9:50 0 819.3G 0 raid0
│ └─vdo1 253:27 0 20T 0 vdo
└─vg-lv24 253:26 0 39.1G 0 lvm
└─md53 9:53 0 273.2G 0 raid5
└─md50 9:50 0 819.3G 0 raid0
      └─vdo1 253:27 0 20T 0 vdo
```
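After verification, the stack tears down in the reverse order it was built; a minimal sketch, assuming the names used above:

```
# Remove the VDO volume before stopping the arrays beneath it
umount /mnt
vdo remove -n vdo1
mdadm --stop /dev/md50
mdadm --stop /dev/md5{1..3}
```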
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3094