Bug 1330161

Summary: blivet.errors.DeviceError: ('array is not fully defined', 'home')
Product: Fedora
Reporter: Menanteau Guy <menantea>
Component: mdadm
Assignee: Nigel Croxon <ncroxon>
Status: CLOSED CURRENTRELEASE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 24
CC: agk, anaconda-maint-list, dan, dledford, g.kaviyarasu, Jes.Sorensen, jonathan, menantea, ncroxon, vanmeeuwen+fedora, xni
Hardware: ppc64   
OS: Unspecified   
Whiteboard: abrt_hash:92f71cdc726e5d0be411a39aee9953dea19ca19e63c9900fadf9c212b5777008;
Doc Type: Bug Fix
Last Closed: 2017-07-20 13:45:46 UTC
Bug Blocks: 1071880, 1320886    
Attachments:
File: anaconda-tb
File: anaconda.log
File: dnf.log
File: environ
File: lsblk_output
File: lvm.log
File: nmcli_dev_list
File: os_info
File: program.log
File: storage.log
File: syslog
File: ifcfg.log
File: packaging.log

Description Menanteau Guy 2016-04-25 13:32:14 UTC
Description of problem:
The problem happened on ppc64 during the "Testcase Partitioning On Software RAID" test with the f24 Alpha 1.3 ISO:
http://ppc.koji.fedoraproject.org/compose/24/Fedora-24-20160330.0/compose/Server/ppc64/iso/Fedora-Server-dvd-ppc64-24_Alpha-1.3.iso

Here are my steps to reproduce:
On "Installation Destination" panel, I selected 2 blank disks (disks 2x20G)
and set "I will configure partitioning"

Then I used "Click here to create them automatically"
I get a / partition of 35.5 GB (LVM)
I reduced the / partition size to 10G (LVM)
"Update Settings"

Then I created a /home partition of 10G (LVM)
and I changed it to select "RAID" in "Device Type" and "RAID1" in "RAID Level"
"Update Settings"

My setup was all done, so I clicked "Done" and then "Begin Installation".


Version-Release number of selected component:
anaconda-24.13-1

The following was filed automatically by anaconda:
anaconda 24.13-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python3.5/site-packages/blivet/devices/md.py", line 281, in mdadmConfEntry
    raise errors.DeviceError("array is not fully defined", self.name)
  File "/usr/lib/python3.5/site-packages/blivet/osinstall.py", line 782, in mdadmConf
    conf += array.mdadmConfEntry
  File "/usr/lib/python3.5/site-packages/blivet/osinstall.py", line 724, in write
    mdadm_conf = self.mdadmConf()
  File "/usr/lib/python3.5/site-packages/blivet/blivet.py", line 1383, in write
    self.fsset.write()
  File "/usr/lib64/python3.5/site-packages/pyanaconda/packaging/__init__.py", line 631, in writeStorageEarly
    self.storage.write()
  File "/usr/lib64/python3.5/site-packages/pyanaconda/install.py", line 196, in doInstall
    payload.writeStorageEarly()
  File "/usr/lib64/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.5/site-packages/pyanaconda/threads.py", line 253, in run
    threading.Thread.run(self, *args, **kwargs)
blivet.errors.DeviceError: ('array is not fully defined', 'home')

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=/ppc/ppc64/vmlinuz ro
dnf.rpm.log:    Apr 25 13:15:33 INFO --- logging initialized ---
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.5.0-0.rc7.git0.2.fc24.ppc64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        24
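
The traceback above boils down to blivet refusing to write an mdadm.conf entry for the 'home' array because the array's UUID, normally learned from udev right after creation, is missing. A minimal sketch of that kind of check, as hypothetical code rather than the verbatim blivet source:

# Hypothetical sketch (not the verbatim blivet source): an mdadm.conf entry
# needs the array's UUID, so an array whose UUID was never learned from udev
# cannot be written out and triggers "array is not fully defined".
class MDRaidArray:
    def __init__(self, name, path, uuid=None, member_devices=None):
        self.name = name
        self.path = path
        self.uuid = uuid                      # expected to be filled in from udev
        self.member_devices = member_devices

    @property
    def mdadm_conf_entry(self):
        if self.member_devices is None or not self.uuid:
            raise RuntimeError("array is not fully defined", self.name)
        return "ARRAY %s UUID=%s\n" % (self.path, self.uuid)

# On ppc64 the UUID stays unset, so reading the entry raises
# RuntimeError: ('array is not fully defined', 'home'); on ppc64le it succeeds.
home = MDRaidArray("home", "/dev/md/home", uuid=None, member_devices=2)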

Comment 1 Menanteau Guy 2016-04-25 13:32:23 UTC
Created attachment 1150477 [details]
File: anaconda-tb

Comment 2 Menanteau Guy 2016-04-25 13:32:26 UTC
Created attachment 1150478 [details]
File: anaconda.log

Comment 3 Menanteau Guy 2016-04-25 13:32:28 UTC
Created attachment 1150479 [details]
File: dnf.log

Comment 4 Menanteau Guy 2016-04-25 13:32:29 UTC
Created attachment 1150480 [details]
File: environ

Comment 5 Menanteau Guy 2016-04-25 13:32:31 UTC
Created attachment 1150481 [details]
File: lsblk_output

Comment 6 Menanteau Guy 2016-04-25 13:32:33 UTC
Created attachment 1150482 [details]
File: lvm.log

Comment 7 Menanteau Guy 2016-04-25 13:32:35 UTC
Created attachment 1150483 [details]
File: nmcli_dev_list

Comment 8 Menanteau Guy 2016-04-25 13:32:36 UTC
Created attachment 1150484 [details]
File: os_info

Comment 9 Menanteau Guy 2016-04-25 13:32:38 UTC
Created attachment 1150485 [details]
File: program.log

Comment 10 Menanteau Guy 2016-04-25 13:32:42 UTC
Created attachment 1150486 [details]
File: storage.log

Comment 11 Menanteau Guy 2016-04-25 13:32:45 UTC
Created attachment 1150487 [details]
File: syslog

Comment 12 Menanteau Guy 2016-04-25 13:32:47 UTC
Created attachment 1150488 [details]
File: ifcfg.log

Comment 13 Menanteau Guy 2016-04-25 13:32:48 UTC
Created attachment 1150489 [details]
File: packaging.log

Comment 14 Menanteau Guy 2016-04-25 13:46:09 UTC
Note that this problem is not present on ppc64le.

Comment 15 David Lehman 2016-04-25 14:45:42 UTC
Seems likely that udev is not reporting the array's UUID when we ask after creating the array.
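
For illustration, here is a hypothetical diagnostic (not from the attached logs) that asks udev directly what it knows about the new array, using pyudev. The MD_* properties are the ones the mdadm udev rules are expected to set; if MD_UUID is missing here, the installer has nothing to copy into the device's uuid attribute.

# Hypothetical diagnostic: query udev's view of the array with pyudev.
import pyudev

context = pyudev.Context()
md = pyudev.Devices.from_device_file(context, "/dev/md/home")

for key in ("MD_NAME", "MD_LEVEL", "MD_UUID"):
    # the device object behaves like a mapping of udev properties
    print(key, "=", md.get(key, "<missing>"))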

Comment 16 Menanteau Guy 2016-05-02 13:30:11 UTC
On the machine where the install failed, I can get the UUID through the following command:
[anaconda root@localhost tmp]# mdadm --examine --scan 
ARRAY /dev/md/home  metadata=1.2 UUID=67a04a5b:38a1de07:9c3eba89:3e1aaed3 name=localhost:home

But in program.log I don't see the UUID in the mdadm --detail output.
Also, if I run the command manually, I get:
[anaconda root@localhost tmp]# mdadm --detail --test /dev/md/home
/dev/md/home:
        Version : 1.2
  Creation Time : Fri Apr 29 11:02:57 2016
     Raid Level : raid1
     Array Size : 10485760 (10.00 GiB 10.74 GB)
  Used Dev Size : 10485760 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Apr 29 10:13:58 2016
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       18        1      active sync   /dev/sdb2

There is no UUID !!!!

I expected some more lines like:
Name : localhost:home  (local to host localhost)
UUID : 67a04a5b:38a1de07:9c3eba89:3e1aaed3 
Events : ...

Comment 17 Menanteau Guy 2016-05-03 08:43:27 UTC
I transferred the bug to the mdadm component because I reproduced the problem on an already-installed ppc64 VM when I created RAID1 partitions manually.

To compare outputs, I did the same process on a ppc64 VM and a ppc64le VM.
First I installed the ppc64 VM and the ppc64le VM.
Both machines were installed with 2 disks of 20G and the following partitions:
Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1  *       2048    10239     8192    4M 41 PPC PReP Boot
/dev/sda2         10240  1034239  1024000  500M 83 Linux
/dev/sda3       1034240 15734783 14700544    7G 8e Linux LVM

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 14702591 14700544   7G 8e Linux LVM

Then I manually created an extended partition sda4 using fdisk
/dev/sda4       15734784 41943039 26208256 12.5G  5 Extended

And one 10G RAID member partition on each disk, sda5 and sdb2:
/dev/sda5       15736832 36708351 20971520   10G fd Linux raid autodetect
/dev/sdb2       14702592 35674111 20971520   10G fd Linux raid autodetect

Then I used this command:
mdadm --create /dev/md/home --run --level=raid1 --raid-devices=2 --metadata=default --bitmap=internal /dev/sda5 /dev/sdb2

When I used the following command:
mdadm --detail /dev/md/home
I got different output on ppc64 and on ppc64le.

Specifically, on ppc64le I got the UUID:
/dev/md/home:
        Version : 1.2
  Creation Time : Tue May  3 04:11:21 2016
     Raid Level : raid1
     Array Size : 10477568 (9.99 GiB 10.73 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue May  3 04:30:43 2016
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : home
           UUID : 1afa7914:13b82dee:89aa091f:01ce6bd8
         Events : 225

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       18        1      active sync   /dev/sdb2


Whereas on ppc64 I don't get the UUID:
/dev/md/home:
        Version : 1.2
  Creation Time : Mon May  2 12:05:12 2016
     Raid Level : raid1
     Array Size : 10477568 (9.99 GiB 10.73 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon May  2 12:19:34 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       18        1      active sync   /dev/sdb2
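
For reference, the array UUID can also be read straight from the on-disk v1.2 superblock of a member device, independent of what mdadm --detail prints. A hypothetical check follows, with offsets per my reading of the md v1.x superblock layout in linux/raid/md_p.h: for metadata 1.2 the superblock sits 4 KiB into the member device, with the magic at offset 0 and the 16-byte set_uuid at offset 16.

# Hypothetical check, not from this report: read the array UUID from the raw
# v1.2 superblock of a member device, bypassing mdadm's --detail output.
import struct

MEMBER = "/dev/sda5"        # member device used in this report
SB_OFFSET = 4096            # metadata 1.2: superblock 4 KiB from device start
MD_SB_MAGIC = 0xa92b4efc    # v1.x superblock magic, stored little-endian

with open(MEMBER, "rb") as dev:
    dev.seek(SB_OFFSET)
    sb = dev.read(64)

magic, = struct.unpack_from("<I", sb, 0)
if magic != MD_SB_MAGIC:
    raise SystemExit("no v1.x md superblock at the metadata 1.2 offset")

set_uuid = sb[16:32]
# Print the raw bytes in groups of four; mdadm may order bytes within each
# group differently when it formats the UUID for display.
print(":".join(set_uuid[i:i + 4].hex() for i in range(0, 16, 4)))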

Comment 18 Menanteau Guy 2016-06-16 09:52:16 UTC
Unable to install Fedora on RAID disks. The problem is still present in f24 RC1.

Comment 19 Nigel Croxon 2017-07-19 20:42:25 UTC
None of the MD code is processor-specific.

Comment 20 Nigel Croxon 2017-07-20 13:04:34 UTC
Tested on Fedora release 26

On "Installation Destination" panel, I selected 2 blank disks (disks 2x20G)
and set "I will configure partitioning"

Then I used "Click here to create them automatically"

I get a /boot partition of 1024 MiB
I get a / partition of 15 GB 
I get a swap partition of 3.98 GB

I then reduced / partition size to 10G (LVM)
"Update Settings"

Then I created a /home partition of 10G (LVM)
and I changed it to select "RAID" in "Device Type" and "RAID1" in "RAID Level"
"Update Settings"

After the reboot, I see the following from mdadm --detail /dev/md127:

/dev/md127:
        Version : 1.2
  Creation Time : Thu Jul 20 08:32:00 2017
     Raid Level : raid1
     Array Size : 10485760 (10.00 GiB 10.74 GB)
  Used Dev Size : 10485760 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Jul 20 08:59:02 2017
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost:home
           UUID : 8adf2e4c:f32ded4a:2b8a9005:10ee1149
         Events : 29

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       18        1      active sync   /dev/sdb2

Comment 21 Menanteau Guy 2017-07-20 13:45:46 UTC
I confirm it is working fine now on an f26 ppc64 VM.
I am closing this problem.