Bug 1191604
| Summary: | DM RAID - Add support for 'raid0' mappings to device-mapper raid target | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Heinz Mauelshagen <heinzm> |
| Component: | kernel | Assignee: | Mike Snitzer <msnitzer> |
| kernel sub component: | RAID | QA Contact: | Zhang Yi <yizhan> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, xni, yanwang |
| Version: | 7.2 | Keywords: | Tracking |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | kernel-3.10.0-284.el7 | Doc Type: | Enhancement |
| Doc Text: | Cause: support for 'raid0' mappings was missing from the device-mapper raid target. Consequence: conversions from 'striped' to 'raid0' or from 'raid[45]' to 'raid0' were not possible. Fix: 'raid0' mapping support was added to the device-mapper raid target (kernel-3.10.0-284.el7). Result: the device-mapper raid target now handles 'raid0' mappings, enabling the conversions above (see the conversion sketch below this table). | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-19 21:29:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1189124, 1346081 | | |
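The 'striped' to 'raid0' conversions mentioned in the Doc Text are normally driven through LVM on top of this kernel target. The following is a minimal sketch, not part of the original report: the volume group `vg0` and logical volume `lv0` are hypothetical, and it assumes an lvm2 build that supports raid0 takeover via `lvconvert --type raid0`.

```bash
# Create a 4-way striped LV (hypothetical VG "vg0", 128 KiB stripe size).
lvcreate --type striped -i 4 -I 128k -L 4G -n lv0 vg0

# Convert the striped LV to raid0; this relies on the dm-raid 'raid0'
# mapping added by this enhancement (lvm2 raid0 takeover assumed).
lvconvert --type raid0 vg0/lv0

# Inspect the resulting segment type and the underlying dm table.
lvs -o name,segtype vg0/lv0
dmsetup table vg0-lv0
```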
Description
Heinz Mauelshagen
2015-02-11 15:06:48 UTC
Patch(es) available on kernel-3.10.0-284.el7.

Verified this issue with kernel 3.10.0-307.el7.x86_64. The patch from comment 5 exists in kernel-3.10.0-309.el7. Running mkfs.ext4 on /dev/mapper/test-raid0 triggered a BUG; I will file a separate bug to track it. The dm raid0 device can be created with the steps below:

```bash
#!/bin/bash
# Create four 1200 MiB backing files plus a test file, in parallel.
for i in `seq 0 3`; do
    dd if=/dev/zero of=/tmp/$i.tmp bs=1M count=1200 &
done
dd if=/dev/urandom of=bigfile bs=1M count=1024 &
wait

# Attach the backing files to loop devices.
for i in `seq 0 3`; do
    losetup /dev/loop$i /tmp/$i.tmp
done

# Carve each loop device into a metadata area and a data area.
for S in `seq 0 3`; do
    dmsetup create test-raid-metadata$S --table "0 8192 linear /dev/loop$S 0"
    dmsetup create test-raid-data$S --table "0 1953125 linear /dev/loop$S 8192"
done

# Assemble the four data devices into a dm-raid 'raid0' mapping
# (chunk size 128 sectors, 4 stripes).
dmsetup create test-raid0 --table '0 7812096 raid raid0 1 128 4 - /dev/mapper/test-raid-data0 - /dev/mapper/test-raid-data1 - /dev/mapper/test-raid-data2 - /dev/mapper/test-raid-data3'
```

(A cleanup sketch for this setup is appended at the end of this report.)

Log:

```
# lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                     7:0    0   1.2G  0 loop
├─test-raid-metadata0   253:3    0     4M  0 dm
└─test-raid-data0       253:4    0 953.7M  0 dm
  └─test-raid0          253:11   0   3.7G  0 dm
loop1                     7:1    0   1.2G  0 loop
├─test-raid-metadata1   253:5    0     4M  0 dm
└─test-raid-data1       253:6    0 953.7M  0 dm
  └─test-raid0          253:11   0   3.7G  0 dm
loop2                     7:2    0   1.2G  0 loop
├─test-raid-metadata2   253:7    0     4M  0 dm
└─test-raid-data2       253:8    0 953.7M  0 dm
  └─test-raid0          253:11   0   3.7G  0 dm
loop3                     7:3    0   1.2G  0 loop
├─test-raid-metadata3   253:9    0     4M  0 dm
└─test-raid-data3       253:10   0 953.7M  0 dm
  └─test-raid0          253:11   0   3.7G  0 dm

# dmsetup status /dev/mapper/test-raid0
0 7812096 raid raid0 4 AAAA 1953024/1953024 idle 0

[  168.815923] raid6: sse2x1    gen()  5750 MB/s
[  168.832934] raid6: sse2x2    gen()  7441 MB/s
[  168.849948] raid6: sse2x4    gen()  8648 MB/s
[  168.849949] raid6: using algorithm sse2x4 gen() (8648 MB/s)
[  168.849951] raid6: using ssse3x2 recovery algorithm
[  168.886191] md: raid6 personality registered for level 6
[  168.886194] md: raid5 personality registered for level 5
[  168.886195] md: raid4 personality registered for level 4
[  168.891382] device-mapper: raid: Loading target version 1.7.0
[  168.891566] device-mapper: raid: Choosing default region size of 4MiB
[  168.901764] md: raid0 personality registered for level 0
[  168.901967] md/raid0:mdX: md_size is 7812096 sectors.
[  168.901969] md: RAID0 configuration for mdX - 1 zone
[  168.901971] md: zone0=[dm-4/dm-6/dm-8/dm-10]
[  168.901975]       zone-offset=         0KB, device-offset=         0KB, size=   3906048KB
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2152.html
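For completeness, the reproduction setup above can be torn down roughly as follows. This is a hedged sketch, not part of the original verification comment, and it assumes the device and file names used in the script above.

```bash
# Remove the stacked dm devices top-down, then detach the loop devices
# and delete the backing files created by the reproduction script.
dmsetup remove test-raid0
for S in `seq 0 3`; do
    dmsetup remove test-raid-data$S
    dmsetup remove test-raid-metadata$S
done
for i in `seq 0 3`; do
    losetup -d /dev/loop$i
    rm -f /tmp/$i.tmp
done
```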