Bug 1368915

Summary: Creating RAID 5 with two disks should fail in Cockpit, but it succeeds.
Product: Red Hat Enterprise Linux 7
Component: cockpit
Version: 7.2
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: medium
Priority: unspecified
Target Milestone: pre-dev-freeze
Target Release: ---
Reporter: Yihui Zhao <yzhao>
Assignee: Marius Vollmer <mvollmer>
QA Contact: qe-baseos-daemons
Docs Contact:
CC: bugs, cshao, danken, fche, fdeutsch, huzhao, jiawu, leiwang, michal.skrivanek, weiwang, yaniwang, ycui, yzhao
Keywords: Extras
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-20 14:21:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1329957
Attachments (all flags: none):
- three_disks_raid5_success.png
- two_disks_raid5_failed.png
- two_disks_raid5_cockpit1.png
- two_disks_raid5_cockpit2.png
- ks file

Description Yihui Zhao 2016-08-22 06:07:50 UTC
Created attachment 1192772 [details]
three_disks_raid5_success.png

Description of problem:
Creating RAID 5 with two disks succeeds in Cockpit on RHVH 4.0.
But in fact, RAID 5 should require at least three disks.

Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.0-20160817.0
cockpit-0.114-2.el7.x86_64
cockpit-ovirt-dashboard-0.10.6-1.3.6.el7ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install NGN 4.0.
2. Open the Cockpit client in Firefox/Chrome (ip:9090).
3. Switch to the Storage page.
4. Create a RAID 5 array with two disks.
5. After about twenty minutes, the RAID 5 array is created successfully (see the verification sketch below).
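
After step 5, the new array can be confirmed from a shell on the host (a minimal sketch; /dev/md127 is a hypothetical device name and may differ on your system):

# cat /proc/mdstat            # lists active md arrays and their member disks
# mdadm --detail /dev/md127   # shows the RAID level, device count, and array state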

Actual results:
Creating RAID 5 with two disks succeeds in Cockpit on RHVH 4.0.


Expected results:
Creating RAID 5 with two disks should fail in Cockpit on RHVH 4.0, since RAID 5 should require at least three disks. For details, see the attachments:
two_disks_raid5_cockpit1.png, two_disks_raid5_cockpit2.png, two_disks_raid5_failed.png, three_disks_raid5_success.png

Additional info:
In my testing environment, RAID 5 was created successfully in Cockpit using two clean 100G iSCSI disks.

Comment 1 Red Hat Bugzilla Rules Engine 2016-08-22 06:07:58 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in the MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 2 Yihui Zhao 2016-08-22 06:08:54 UTC
Created attachment 1192773 [details]
two_disks_raid5_failed.png

Comment 3 Yihui Zhao 2016-08-22 06:09:29 UTC
Created attachment 1192774 [details]
two_disks_raid5_cockpit1.png

Comment 4 Yihui Zhao 2016-08-22 06:09:58 UTC
Created attachment 1192775 [details]
two_disks_raid5_cockpit2.png

Comment 5 Yihui Zhao 2016-08-22 06:10:51 UTC
Created attachment 1192776 [details]
ks file

Comment 6 Fabian Deutsch 2016-08-22 06:33:17 UTC
Maybe this should even go to storaged...

Comment 8 Marius Vollmer 2016-09-13 12:17:33 UTC
The mdraid subsystem allows creating raid level 5 devices with only two disks.  

It might be a bad idea, but it can be done.  I think it treats it as a three-drive array where one drive has failed.

We might want to warn against this use, of course.  I'll ask around for more input.

Comment 9 Frank Ch. Eigler 2016-09-13 21:19:19 UTC
> I think it treats it as a three-drive array where one drive has failed. 

When run manually:

# dd if=/dev/zero of=DISK2 bs=1048576 count=128
# dd if=/dev/zero of=DISK1 bs=1048576 count=128
# losetup -f DISK2
# losetup -f DISK1
# mdadm --create /dev/md/test -l 5 --raid-disks 2 /dev/loop0 /dev/loop1

results in an md device with 128 MB capacity, so a happy redundant RAID 5, basically equivalent to a RAID 1 mirror.
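
For reference, the array's level and capacity can be checked, and the experiment torn down, as follows (a sketch assuming the two backing files were attached as /dev/loop0 and /dev/loop1, as above):

# mdadm --detail /dev/md/test        # should report "Raid Level : raid5" with 2 raid devices
# mdadm --stop /dev/md/test          # stop and disassemble the array
# losetup -d /dev/loop0 /dev/loop1   # detach the loop devices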

Comment 10 Marius Vollmer 2016-09-20 14:21:17 UTC
(In reply to Frank Ch. Eigler from comment #9)
> > I think it treats it as a three-drive array where one drive has failed. 
> 
> When run manually:
> 
> # dd if=/dev/zero of=DISK2 bs=1048576 count=128
> # dd if=/dev/zero of=DISK1 bs=1048576 count=128
> # losetup -f DISK2
> # losetup -f DISK1
> # mdadm --create /dev/md/test -l 5 --raid-disks 2 /dev/loop0 /dev/loop1
> 
> results in an md device with 128 MB capacity, so a happy redundant RAID 5,
> basically equivalent to a RAID 1 mirror.

Thanks.  Let's keep this as a feature then.