
Bug 1396071

Summary: [RHEL7.3] Cannot get RAID detail information even though the RAID was created successfully
Product: Red Hat Enterprise Linux 7 Reporter: guazhang <guazhang>
Component: kernel    Assignee: XiaoNi <xni>
kernel sub component: Multiple Devices (MD) QA Contact: guazhang <guazhang>
Status: CLOSED NOTABUG Docs Contact:
Severity: unspecified    
Priority: unspecified CC: guazhang, xni
Version: 7.3   
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-05-21 08:50:04 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
sosreport none

Description guazhang@redhat.com 2016-11-17 12:02:53 UTC
Description of problem:
Cannot get RAID detail information even though the RAID was created successfully.

Version-Release number of selected component (if applicable):
 3.10.0-514.el7.x86_64

How reproducible:
sometimes 

Steps to Reproduce:
1. Create a RAID array from several disks:
mdadm --create --run /dev/md0 --level 1 --metadata 1.2 --raid-devices 5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 --spare-devices 1 /dev/sdg1 --chunk 512

2. mdadm --detail /dev/md0
INFO: Found free iSCSI disks: sdb sdc sdd sde sdf sdg sdh sdi
INFO: iSCSI is setuped on storageqe-55.rhts.eng.pek2.redhat.com for mdadm
INFO: Executing MD_Create_RAID() to create raid 1
INFO: Created md raid with these raid devices " /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1".
INFO: Created md raid with these spare disks " /dev/sdg1".
mdadm: array /dev/md0 started.
[INFO][23:42:51]INFO: Successfully created md raid /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
state is 
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory
mdadm: cannot open /dev/md0: No such file or directory


Actual results:
mdadm --detail cannot report the RAID details.

Expected results:
mdadm --detail reports the RAID details.

Additional info:

[root@storageqe-55 grow-add-disk]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : inactive sdi1[6](S)
      10477568 blocks super 1.2
       
md0 : active raid1 sdg1[5](S) sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      10477568 blocks super 1.2 [5/5] [UUUUU]
      [====>................]  resync = 24.2% (2545792/10477568) finish=8.3min speed=15795K/sec
      
unused devices: <none>


[root@storageqe-55 grow-add-disk]# mdadm --detail /dev/md127 
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 1
    Persistence : Superblock is persistent

          State : inactive

           Name : 0
           UUID : 3da95f68:1c79c02e:4f0fbd2f:1d751f45
         Events : 259

    Number   Major   Minor   RaidDevice

       -       8      129        -        /dev/sdi1


[root@storageqe-55 grow-add-disk]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                           8:0    0 279.4G  0 disk  
├─sda1                        8:1    0     1G  0 part  /boot
└─sda2                        8:2    0 278.4G  0 part  
  ├─rhel_storageqe--55-root 253:0    0    50G  0 lvm   /
  ├─rhel_storageqe--55-swap 253:1    0    28G  0 lvm   [SWAP]
  └─rhel_storageqe--55-home 253:2    0 200.4G  0 lvm   /home
sdb                           8:16   0    20G  0 disk  
└─sdb1                        8:17   0    10G  0 part  
  └─md0                       9:0    0    10G  0 raid1 
sdc                           8:32   0    20G  0 disk  
└─sdc1                        8:33   0    10G  0 part  
  └─md0                       9:0    0    10G  0 raid1 
sdd                           8:48   0    20G  0 disk  
└─sdd1                        8:49   0    10G  0 part  
  └─md0                       9:0    0    10G  0 raid1 
sde                           8:64   0    20G  0 disk  
└─sde1                        8:65   0    10G  0 part  
  └─md0                       9:0    0    10G  0 raid1 
sdf                           8:80   0    20G  0 disk  
└─sdf1                        8:81   0    10G  0 part  
  └─md0                       9:0    0    10G  0 raid1 
sdg                           8:96   0    20G  0 disk  
└─sdg1                        8:97   0    10G  0 part  
  └─md0                       9:0    0    10G  0 raid1 
sdh                           8:112  0    20G  0 disk  
└─sdh1                        8:113  0    10G  0 part  
sdi                           8:128  0    20G  0 disk  
└─sdi1                        8:129  0    10G  0 part 

job:
https://beaker.engineering.redhat.com/recipes/3244084#task47921946

Comment 1 guazhang@redhat.com 2016-11-17 12:38:05 UTC
Created attachment 1221541 [details]
sosreport

Comment 2 Jes Sorensen 2016-11-17 13:11:25 UTC
Could you please clarify your bug report:

This doesn't make sense as written - what happens as a result of step 1?

The INFO: messages do not come from the mdadm command as your report indicates.
Could you please provide the /proc/mdstat output between command 1 and
command 2?

Is this run in a script or on the command line? If you run command 1 and 2
in a script you may simply be racing the creation of the device node in /dev

Jes
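
One way to test the udev-race hypothesis above is to force a settle step between the create and the query. This is only a sketch, not part of the original test case; run_after_settle is a hypothetical helper name:

```shell
# Hypothetical helper: run a "settle" command first, then the real query.
# If the query stops failing once a settle step sits in between, the
# failure was a race against udev creating the /dev node.
run_after_settle() {
    settle_cmd=$1
    shift
    $settle_cmd || return 1   # e.g. "udevadm settle --timeout=30"
    "$@"
}

# Intended use on the test host (requires root):
#   run_after_settle "udevadm settle --timeout=30" mdadm --detail /dev/md0
```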


Comment 3 guazhang@redhat.com 2016-11-18 01:58:35 UTC
1. Creating the array (raid 1 or raid 10) succeeds. I hit this issue on raid1 and raid10 while running the /kernel/storage/mdadm/grow-add-disk test case:

mdadm --create --run /dev/md0 --level 1 --metadata 1.2 --raid-devices 5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 --spare-devices 1 /dev/sdg1 --chunk 512


2. Then I query the array with "mdadm --detail /dev/md0", but it fails like this:

[root@storageqe-55 grow-add-disk]# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

[root@storageqe-55 grow-add-disk]# ls /dev/md*
/dev/md127

But md0 does appear in /proc/mdstat:
[root@storageqe-55 grow-add-disk]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : inactive sdi1[6](S)
      10477568 blocks super 1.2
       
md0 : active raid1 sdg1[5](S) sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      10477568 blocks super 1.2 [5/5] [UUUUU]
      [===========>.........]  resync = 56.7% (5949248/10477568) finish=4.8min speed=15547K/sec
      
unused devices: <none>

The INFO messages were output from my test case when the issue occurred.

This issue is hard to reproduce, so I'm not sure I can capture the output between command 1 and command 2.
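
The discrepancy in this comment (md0 present in /proc/mdstat but /dev/md0 missing) can be logged from a script without mdadm at all. A sketch, with md_known as an illustrative name; the mdstat path is a parameter so the check can also be run against a saved copy of /proc/mdstat:

```shell
# Check whether the kernel knows an md array, independent of the /dev node.
# name: array name (e.g. md0); file: mdstat path (defaults to /proc/mdstat).
md_known() {
    name=$1
    file=${2:-/proc/mdstat}
    grep -q "^${name} :" "$file"
}

# Example against the live system:
#   if md_known md0 && [ ! -e /dev/md0 ]; then
#       echo "md0 active in kernel but /dev/md0 node is missing"
#   fi
```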

Comment 4 Jes Sorensen 2016-11-18 02:10:46 UTC
I see,

There is a possibility that mdadm --create completes before the /dev/md0
device node is created. If you add a small delay after the first command,
are you still able to reproduce the problem?

Thanks,
Jes
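
Rather than a fixed sleep, the small delay suggested above can be a bounded poll for the device node. This is a sketch only; wait_for_node is an invented helper name, not an mdadm feature:

```shell
# Poll for a device node instead of sleeping a fixed time: succeed as soon
# as udev creates the node, give up only after the time bound expires.
wait_for_node() {
    node=$1
    tries=${2:-30}    # roughly how many seconds to wait at most
    i=0
    while [ "$i" -lt "$tries" ]; do
        if [ -e "$node" ]; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Usage after "mdadm --create --run /dev/md0 ...":
#   wait_for_node /dev/md0 && mdadm --detail /dev/md0
```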

Comment 5 guazhang@redhat.com 2016-11-18 08:21:22 UTC
I added a 30-second delay after the first failed attempt to query the RAID status and then retried, but I still could not get the RAID details.