Bug 1191604 - DM RAID - Add support for 'raid0' mappings to device-mapper raid target
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Mike Snitzer
QA Contact: Zhang Yi
Keywords: Tracking
Depends On:
Blocks: 1189124 1346081
Reported: 2015-02-11 10:06 EST by Heinz Mauelshagen
Modified: 2016-06-13 17:46 EDT
CC List: 6 users

See Also:
Fixed In Version: kernel-3.10.0-284.el7
Doc Type: Enhancement
Doc Text:
Cause: Support for 'raid0' mappings was missing from the device-mapper raid target.
Consequence: Conversions from 'striped' to 'raid0' or from 'raid[45]' to 'raid0' were not possible.
Fix:
Result:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-19 16:29:39 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers
Tracker: Red Hat Product Errata RHSA-2015:2152
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: kernel security, bug fix, and enhancement update
Last Updated: 2015-11-19 19:56:02 EST

Description Heinz Mauelshagen 2015-02-11 10:06:48 EST
Description of problem:
The device-mapper raid target does not support 'raid0' mappings based on the respective MD personality.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Use dmsetup to create a 'raid0' mapping (see the hypothetical sketch under Additional info below).

Actual results:
Error

Expected results:
Success

Additional info:
The raid target needs to be enhanced to support 'raid0' in addition to the already supported 'raid1/raid10/raid4/raid5/raid6' mappings.
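For illustration, a minimal sketch of what such a table load could look like, following the dm-raid table layout (start length raid <raid_type> <#raid_params> <raid_params> <#raid_devs> <metadata_dev|-> <data_dev> ...); the device names and sizes below are hypothetical:
----------------------------------------------------------------
#!/bin/bash
# Hypothetical two-leg raid0 over two placeholder devices; '-' means a
# leg has no metadata device, and 128 sectors = 64KiB chunk size.
# Assumes each device can supply at least 1048576 sectors (512MiB).
dmsetup create raid0-demo --table \
	'0 2097152 raid raid0 1 128 2 - /dev/sdb - /dev/sdc'
# Without raid0 support in the dm-raid target, the table load is
# rejected; with it, /dev/mapper/raid0-demo appears as a striped device.
----------------------------------------------------------------
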
Comment 4 Rafael Aquini 2015-06-28 11:30:13 EDT
Patch(es) available on kernel-3.10.0-284.el7
Comment 8 Zhang Yi 2015-08-31 03:50:12 EDT
Verified this issue with kernel 3.10.0-307.el7.x86_64.
The patch from comment 5 exists in kernel-3.10.0-309.el7.

I tried running mkfs.ext4 on /dev/mapper/test-raid0 and observed a kernel BUG; I will file a separate bug to track it.

The dm raid0 device can be created with the following steps:
----------------------------------------------------------------
#!/bin/bash
# Create four 1200MiB backing files, plus a 1GiB file of random data
# written in parallel (extra concurrent I/O; it is not used below).
for i in `seq 0 3`; do
	dd if=/dev/zero of=/tmp/$i.tmp bs=1M count=1200 &
done
dd if=/dev/urandom of=bigfile bs=1M count=1024 &
wait

# Attach each backing file to a loop device.
for i in `seq 0 3`; do
	losetup /dev/loop$i /tmp/$i.tmp
done

# Carve each loop device into a 4MiB (8192-sector) metadata area and a
# 1953125-sector (~954MiB) data area using linear mappings. The metadata
# devices are created for completeness; the raid0 table below passes '-'
# and does not use them.
for S in `seq 0 3`; do
	dmsetup create test-raid-metadata$S --table "0 8192 linear /dev/loop$S 0"
	dmsetup create test-raid-data$S --table "0 1953125 linear /dev/loop$S 8192"
done

# Assemble the four data legs into a raid0 mapping with a 128-sector
# (64KiB) chunk size.
dmsetup create test-raid0 --table '0 7812096 raid raid0 1 128 4 - /dev/mapper/test-raid-data0 - /dev/mapper/test-raid-data1 - /dev/mapper/test-raid-data2 - /dev/mapper/test-raid-data3'
-----------------------------------------------------------------------------
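Unpacking that raid0 table line (field meanings per the kernel's dm-raid target documentation; the sector arithmetic is our annotation):
----------------------------------------------------------------
# 0 7812096 raid raid0 1 128 4 - <data0> - <data1> - <data2> - <data3>
#
# 0        start sector of the mapping
# 7812096  length: 4 legs x 1953024 usable sectors each (1953125 rounded
#          down to a multiple of the 128-sector chunk)
# raid     target name
# raid0    raid level, handled by the MD raid0 personality
# 1        number of raid parameters that follow
# 128      chunk size in 512-byte sectors (64KiB)
# 4        number of raid devices
# - <dev>  one pair per leg: metadata device ('-' = none), then data device
----------------------------------------------------------------
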
Log:
# lsblk 
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0   1.2G  0 loop 
├─test-raid-metadata0     253:3    0     4M  0 dm   
└─test-raid-data0         253:4    0 953.7M  0 dm   
  └─test-raid0            253:11   0   3.7G  0 dm   
loop1                       7:1    0   1.2G  0 loop 
├─test-raid-metadata1     253:5    0     4M  0 dm   
└─test-raid-data1         253:6    0 953.7M  0 dm   
  └─test-raid0            253:11   0   3.7G  0 dm   
loop2                       7:2    0   1.2G  0 loop 
├─test-raid-metadata2     253:7    0     4M  0 dm   
└─test-raid-data2         253:8    0 953.7M  0 dm   
  └─test-raid0            253:11   0   3.7G  0 dm   
loop3                       7:3    0   1.2G  0 loop 
├─test-raid-metadata3     253:9    0     4M  0 dm   
└─test-raid-data3         253:10   0 953.7M  0 dm   
  └─test-raid0            253:11   0   3.7G  0 dm   
# dmsetup status /dev/mapper/test-raid0 
0 7812096 raid raid0 4 AAAA 1953024/1953024 idle 0
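A key to that status line, per the dm-raid target's status format:
----------------------------------------------------------------
# 0 7812096 raid raid0 4 AAAA 1953024/1953024 idle 0
#
# raid0            raid level
# 4                number of raid devices
# AAAA             one health character per device ('A' = alive, in-sync)
# 1953024/1953024  resync progress; trivially complete for raid0, which
#                  has no redundancy to synchronize
# idle             current sync action
# 0                mismatch count
----------------------------------------------------------------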


[  168.815923] raid6: sse2x1   gen()  5750 MB/s
[  168.832934] raid6: sse2x2   gen()  7441 MB/s
[  168.849948] raid6: sse2x4   gen()  8648 MB/s
[  168.849949] raid6: using algorithm sse2x4 gen() (8648 MB/s)
[  168.849951] raid6: using ssse3x2 recovery algorithm
[  168.886191] md: raid6 personality registered for level 6
[  168.886194] md: raid5 personality registered for level 5
[  168.886195] md: raid4 personality registered for level 4
[  168.891382] device-mapper: raid: Loading target version 1.7.0
[  168.891566] device-mapper: raid: Choosing default region size of 4MiB
[  168.901764] md: raid0 personality registered for level 0
[  168.901967] md/raid0:mdX: md_size is 7812096 sectors.
[  168.901969] md: RAID0 configuration for mdX - 1 zone
[  168.901971] md: zone0=[dm-4/dm-6/dm-8/dm-10]
[  168.901975]       zone-offset=         0KB, device-offset=         0KB, size=   3906048KB


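For completeness, a sketch of tearing the test stack down afterwards (mirrors the creation script above; order matters, top of the stack first):
----------------------------------------------------------------
#!/bin/bash
# Remove the raid0 mapping, then its linear legs, then detach the loop
# devices and delete the backing files.
dmsetup remove test-raid0
for S in `seq 0 3`; do
	dmsetup remove test-raid-data$S
	dmsetup remove test-raid-metadata$S
	losetup -d /dev/loop$S
	rm -f /tmp/$S.tmp
done
----------------------------------------------------------------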
Comment 9 errata-xmlrpc 2015-11-19 16:29:39 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2152.html
