Bug 213457 - cannot find disks for installation on 1027
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: anaconda
Hardware: ppc64 Linux
Severity: urgent
Assigned To: Anaconda Maintenance Team
Blocks: 212626
Reported: 2006-11-01 10:58 EST by Janice Girouard - IBM on-site partner
Modified: 2009-06-19 05:43 EDT (History)
2 users

Fixed In Version: RHEL5B2
Doc Type: Bug Fix
Last Closed: 2006-11-14 23:50:59 EST

Attachments
Patch for this issue (11.42 KB, patch)
2006-11-02 17:39 EST, Bill Nottingham

External Trackers
IBM Linux Technology Center 28848

Description Janice Girouard - IBM on-site partner 2006-11-01 10:58:58 EST
Description of problem:
I attempted to install 1027 on the i825 in the lab using the boot.img file.  The
installation gets to the point where it asks which disks to include and lists
the available disks.  None are shown, even though they are present.

Version-Release number of selected component (if applicable):
The 1020 build worked on this system.  The 1027 installation failed on the exact
same hardware configuration.  The failing boot.img was taken from

How reproducible:
The i825 has a boot.img in the /home/ directory.  Partition 2 is set up to boot
this image and start the installation.

Steps to Reproduce:
1. To access the management box i825:
telnet i825
Password: SECOFR

2. To see the current kernel booted on this box, after step 1 above:
At the ===> prompt, enter wrkcfgsts *nws (with the CD in the CD-ROM drive).  A
"Work with Configuration Status" menu will appear.  Make sure the partition
(LINUX02) is "VARIED OFF"; if not, enter "2=Vary off" in the Opt column first.

Enter 8 in the opt field for the partition,
Enter 2 in the opt field for change

PgDn to the fields for IPL Source, 
	Verify the ipl stream file is set to '/home/boot.img'

3.  If you wish to put another boot.img on this host, you can:
- ftp into i825, user QSECOFR/SECOFR
- cd /home before the put
- update the boot file as above if the file has another name.

4. Vary on the partition to start the boot process.
- I used NFS mount options for the 1027 images off of bigpapi.

5. To access the serial console of i825-lp1:
telnet i825 2301
> select 2<cr> for LINUX02 as Guest partition Console
> enter OS/400 service tool login as 'LINUX02'
> enter OS/400 service tools admin as 'redhat'
This will lead you to "console connected" and should show you the installation
screen.

6.  Note: there is a redhat DST/SST account on the i825 with the password linux,
in case you would like to update the hardware configuration via the i825.

Actual results:

Expected results:

Additional info:
Comment 1 Bill Nottingham 2006-11-01 12:00:47 EST
What sorts of disks are not being seen - connected via what adapters, using what
driver?
Comment 2 IBM Bug Proxy 2006-11-01 13:28:33 EST
viodasd (virtual disks)
Comment 3 Bill Nottingham 2006-11-01 15:23:49 EST
So, the probing code from the 20061027 tree, when run on an installed iSeries
partition, correctly finds viodasd disks.

Is viodasd being loaded on the box in question? This could be a kernel issue.
Has the kernel from this tree been tested outside the installer environment?
Comment 4 IBM Bug Proxy 2006-11-01 17:19:59 EST
I booted the new kernel ok on a non-lvm partition:
[root@bmw ~]# cat /proc/version
Linux version 2.6.18-1.2739.el5 (brewbuilder@js20-bc2-11.build.redhat.com) (gcc
version 4.1.1 20061011 (Red Hat 4.1.1-30)) #1 SMP Thu Oct 26 16:07:54 EDT 2006
[root@bmw ~]# cat /proc/cmdline
ro root=LABEL=/ rhgb quiet
[root@bmw ~]#

The viodasd module is in the ramdisk.image, so that's not the problem.  Should
have the cds in a little while...
Comment 5 IBM Bug Proxy 2006-11-01 18:28:35 EST
I set up an install source with the 20061027 tree and just the first CD (all
that I have downloaded so far).  I got this message, too, so I pressed Ctrl-Z
to get a shell:
sh-3.1# lsmod | grep viodasd
viodasd                35313  0
sh-3.1# fdisk -l
sh-3.1# cat /proc/partitions
major minor  #blocks  name

   7     0      84028 loop0
 112     0    7173022 iseries/vda
 112     1      40131 iseries/vda1
 112     2    7132860 iseries/vda2
 112     8    5124735 iseries/vdb
 112     9    5124703 iseries/vdb1
 112    16    5124735 iseries/vdc
 112    17    5124703 iseries/vdc1
sh-3.1# fdisk /dev/iseries/vda

Unable to open /dev/iseries/vda

ibmvscsic is not loaded (which is correct).  I see the following in dmesg:
<7>vio_register_driver: driver viodasd registering
<4>blk_queue_max_sectors: set to minimum 128
<6>viod: disk 0: 14346045 sectors (7004 MB) CHS=893/255/63 sector size 512
<6> iseries/vda: iseries/vda1 iseries/vda2
<4>blk_queue_max_sectors: set to minimum 128
<6>viod: disk 1: 10249470 sectors (5004 MB) CHS=638/255/63 sector size 512
<6> iseries/vdb: iseries/vdb1
<4>blk_queue_max_sectors: set to minimum 128
<6>viod: disk 2: 10249470 sectors (5004 MB) CHS=638/255/63 sector size 512
<6> iseries/vdc: iseries/vdc1
<6>viocd: vers 1.06, hosting partition 0

I booted my existing partition fine with just viodasd (no ibmvscsic) and fdisk
shows the disks.
Comment 6 IBM Bug Proxy 2006-11-01 18:53:18 EST
There were no devices in /dev/iseries.  Once I created the nodes, fdisk saw the
disks in the shell.  However, I couldn't get them to show up in anaconda...
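For reference, here is a minimal sketch (not from the bug itself) of how the missing /dev/iseries nodes could be regenerated from /proc/partitions. The major number 112 and the iseries/ names come from the listings in this bug; the function name and parsing approach are purely illustrative:

```c
#include <stdio.h>
#include <string.h>

/* Given one data line of /proc/partitions ("major minor #blocks name"),
 * write the mknod command that would recreate the block device node.
 * Returns 1 if the line described an iseries/ device, 0 otherwise. */
static int mknod_cmd_for_line(const char *line, char *out, size_t outlen)
{
    unsigned major, minor;
    unsigned long blocks;
    char name[64];

    if (sscanf(line, "%u %u %lu %63s", &major, &minor, &blocks, name) != 4)
        return 0;                  /* header or blank line */
    if (strncmp(name, "iseries/", 8))
        return 0;                  /* not a viodasd/viocd device */
    snprintf(out, outlen, "mknod /dev/%s b %u %u", name, major, minor);
    return 1;
}
```

Fed the listing from this bug, the first data line would produce `mknod /dev/iseries/vda b 112 0`, which is the node fdisk needed.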
Comment 7 Bill Nottingham 2006-11-02 11:42:07 EST
Ah, wait, the box I was testing on didn't have viodasd.

Janice, any chance we could get some virtual disks added to i825-lp1?
Comment 8 Janice Girouard - IBM on-site partner 2006-11-02 14:32:38 EST
Hm, i825-lp1 is the linux1 system; it only has virtual disks.  Are we talking
about the same system?

[root@i825-lp1 etc]# cat /proc/partitions
major minor  #blocks  name
 112     0   30724312 iseries/vda
 112     1      16033 iseries/vda1
 112     2   29166007 iseries/vda2
 112     3    1534207 iseries/vda3
 112     8   10241437 iseries/vdb
 112     9      64228 iseries/vdb1
   8     0    1048576 sda
   8    16   10241437 sdb
   8    17      64228 sdb1
[root@i825-lp1 etc]#
[root@i825-lp1 etc]# lspci
1c:1c.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
(rev 44)
[root@i825-lp1 etc]#

Comment 9 Janice Girouard - IBM on-site partner 2006-11-02 15:07:43 EST
FYI, at the time of the failure, I see the following on the screen:

   ┌─────────────────────────┤ Partitioning Type ├─────────────────────────┐
   │                                                                       │
   │    Installation requires partitioning of your hard drive.  The        │
   │    default layout is reasonable for most users.  You can either       │
   │    choose to use this or create your own.                             │
   │                                                                       │
   │ Remove all partitions on selected drives and create default layout.   │
   │ Remove linux partitions on selected drives and create default layout. │
   │ Use free space on selected drives and create default layout.          │
   │ Create custom layout.                                                 │
   │                                                                       │
   │       Which drive(s) do you want to use for this installation?        │
   │                                  ↑                                    │
   │                                  ▮                                    │
   │                                                                       │
   │                          ┌────┐   ┌──────┐                            │
   │                          │ OK │   │ Back │                            │
   │                          └────┘   └──────┘                            │
   │                                                                       │
   │                                                                       │

If I press OK anyway (I believe the disks are there, but invisible), I receive the msg:

Welcome to Red Hat Enterprise Linux Server

                 ┌────────────┤ No Drives Found ├─────────────┐
                 │                                            │
                 │ An error has occurred - no valid devices   │
                 │ were found on which to create new file     │
                 │ systems. Please check your hardware for    │
                 │ the cause of this problem.                 │
                 │                                            │
                 │                  ┌────┐                    │
                 │                  │ OK │                    │
                 │                  └────┘                    │
                 │                                            │
                 │                                            │

  <Tab>/<Alt-Tab> between elements   |  <Space> selects   |  <F12> next screen

If I then enter Ctrl-Z, I see:

# cat /proc/partitions
major minor  #blocks  name

   7     0      84028 loop0
 112     0   10241437 iseries/vda
 112     1      16033 iseries/vda1
 112     2   10225372 iseries/vda2
 112     8   10241437 iseries/vdb
 112     9   10241406 iseries/vdb1
sh-3.1# ls /dev/iseries/
vcda  vcdb
Comment 10 Bill Nottingham 2006-11-02 15:12:02 EST
OK, I've poked some more. So, I'm not sure how i825-lp1 actually got installed,
if I'm reading this right - the vdX probing doesn't work at all currently.

Did something change in the device layer for viodasd recently (/proc layout,
/sys layout, etc.)?
Comment 11 Bill Nottingham 2006-11-02 15:27:53 EST
What we currently have is:

      if (!access("/sys/bus/vio/drivers/viodasd", R_OK)) {
        DIR * dir;
        struct dirent * ent;
        int ctlNum;

        dir = opendir("/sys/bus/vio/drivers/viodasd");
        while ((ent = readdir(dir))) {
            if (strncmp("viodasd", ent->d_name, 7))
                continue;
            ctlNum = atoi(ent->d_name + 7);
            viodev = vioNewDevice(NULL);
            viodev->device = malloc(20);
            if (ctlNum < 26) {
              snprintf(viodev->device, 19, "iseries/vd%c", 'a' + ctlNum);
            } else {
              snprintf(viodev->device, 19, "iseries/vda%c", 'a' + ctlNum - 26);
            }
            viodev->desc = strdup("IBM Virtual DASD");
            viodev->type = CLASS_HD;
            viodev->driver = strdup("viodasd");
            if (devlist)
              viodev->next = devlist;
            devlist = (struct device *) viodev;
        }
      }

However, the sysfs layout for this directory just has links such as:

12 -> ../../../../devices/vio/12
13 -> ../../../../devices/vio/13

So, the code as written will never find disks.

This is fixable, of course - working on that. But assuming that the sysfs layout
has been the same for all of the RHEL5 alpha/beta kernels, I don't see how we
could have ever installed. (The code reading /sys/bus/vio/drivers/viodasd
originated on RHEL4; presumably that has the layout the code is expecting.)
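The attached patch is not reproduced in this bug, but as a sketch of one direction such a rewrite could take (my assumption, not the actual fix): since the driver-directory entries are now bare device numbers, the probe could stop decoding entry names and instead enumerate disks from /sys/block, where sysfs encodes the '/' in iseries/vda as '!'. The ctlNum-to-name mapping below mirrors the snippet above:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Map a viodasd controller number to its device name, using the same
 * scheme as the snippet above (vda..vdz, then vdaa..). */
static void vio_disk_name(int ctlNum, char *buf, size_t len)
{
    if (ctlNum < 26)
        snprintf(buf, len, "iseries/vd%c", 'a' + ctlNum);
    else
        snprintf(buf, len, "iseries/vda%c", 'a' + ctlNum - 26);
}

/* Count viodasd disks by scanning a /sys/block-style directory for
 * entries named "iseries!vd*" (sysfs replaces '/' with '!'). */
static int count_viodasd_disks(const char *sysblock)
{
    DIR *dir = opendir(sysblock);
    struct dirent *ent;
    int n = 0;

    if (!dir)
        return 0;
    while ((ent = readdir(dir)))
        if (!strncmp(ent->d_name, "iseries!vd", 10))
            n++;
    closedir(dir);
    return n;
}
```

This avoids depending on the driver directory's entry-name format entirely, which is what broke the original code.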
Comment 12 Bill Nottingham 2006-11-02 15:35:00 EST
OK, I've read the anaconda log.

So, what appears to have happened is that anaconda loaded viodasd. It did not
find any drives.

It then probed for and found ibmvscsic (as noted, we used to show this available
on iSeries.). When it loaded ibmvscsic, the iSeries viodasd disks then show up
as scsi disks (/dev/sda, etc.). anaconda then installed onto that. It then
rebooted with viodasd, and used the installed system that way.

For the 1027 tree, we no longer have ibmvscsic on iSeries, as requested. So
anaconda now can't see the disks at all, where before it saw them only by accident.
Comment 13 IBM Bug Proxy 2006-11-02 16:45:16 EST
Sending this over manually, since mirroring isn't working so hot (again):
------- Additional Comment #4 From Michael Ranweiler  2006-11-02 16:29 EDT -------  Internal Only

Makes sense.  If you attach it we'll try it out, too.

I'm confused, though, about this kernel.  If I extract the modules out of the
milestone8 ramdisk.image.gz:
mranweil@redsfan8:~/m8/modules> find -name "ipr*"
mranweil@redsfan8:~/m8/modules> find -name "ibmvscsi*"

But you're right, I don't see ibmvscsic loaded.  It's in /modules/modules.dep
and  /modules/modules.alias in the installer, though.
Comment 15 RHEL Product and Program Management 2006-11-02 17:28:22 EST
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux release.  Product Management has requested further review
of this request by Red Hat Engineering.  This request is not yet committed for
inclusion in release.
Comment 16 Bill Nottingham 2006-11-02 17:39:35 EST
Created attachment 140194 [details]
Patch for this issue

Here's a kudzu patch that redoes most of the vio probing into something
smaller, simpler, and working (in theory).

To test this in the installer, you'd need to rebuild kudzu, then rebuild the
installer against that, and then rebuild the boot images. If you've got an
installed iSeries, you can just build a new kudzu and compare the output of
'kudzu -p -b vio' to see if it properly picks up the viocd and viodasd devices.
Comment 17 IBM Bug Proxy 2006-11-02 18:25:59 EST
----- Additional Comments From saleon@us.ibm.com  2006-11-02 18:21 EDT -------
Glen, it looks like there is some problem with the mirroring process.  The last
three comments before this one are not relevant to this bug.  This is not a
kudzu bug; it is an ibmveth bug.
Comment 18 IBM Bug Proxy 2006-11-02 19:10:53 EST
----- Additional Comments From mranweil@us.ibm.com (prefers email at mjr@us.ibm.com)  2006-11-02 19:06 EDT -------
I did get better results.  Here's the old kudzu:
[root@metro kudzu-]# kudzu -p -b vio
class: NETWORK
bus: VIO
detached: 0
device: eth2
driver: iseries_veth
desc: "IBM Virtual Ethernet"
class: NETWORK
bus: VIO
detached: 0
device: eth0
driver: iseries_veth
desc: "IBM Virtual Ethernet"
class: SCSI
bus: VIO
detached: 0
driver: ibmvscsic
desc: "IBM Virtual SCSI"

And the new one:
[root@metro kudzu-]# ./kudzu -p -b vio
class: NETWORK
bus: VIO
detached: 0
device: eth2
driver: iseries_veth
desc: "IBM Virtual Ethernet"
network.hwaddr: 02:01:ff:00:ff:06
class: NETWORK
bus: VIO
detached: 0
device: eth1
driver: iseries_veth
desc: "IBM Virtual Ethernet"
network.hwaddr: 02:01:ff:03:ff:06
class: SCSI
bus: VIO
detached: 0
driver: viodasd
desc: "IBM Virtual DASD"
class: HD
bus: VIO
detached: 0
device: iseries/vda
desc: "IBM Virtual DASD"
class: HD
bus: VIO
detached: 0
device: iseries/vdb
desc: "IBM Virtual DASD"
class: HD
bus: VIO
detached: 0
device: iseries/vdc
desc: "IBM Virtual DASD"
[root@metro kudzu-]# 
Comment 21 Janice Girouard - IBM on-site partner 2006-11-07 19:01:43 EST
This bug is not fixed in 20061102.2/4.92/, even for the iSeries-in-Westford
case.  I assume the above was never checked in?
Comment 22 Bill Nottingham 2006-11-07 23:07:26 EST
That version is not in that tree.
Comment 23 IBM Bug Proxy 2006-11-08 10:56:10 EST
----- Additional Comments From rayda@us.ibm.com  2006-11-08 10:49 EDT -------
OK, so the milestone 9 does not have this fix (verified using partition bmw).  
But it also didn't show me any ipr disks yet.  Was this fix necessary to 
correctly detect those also?  This version was supposed to have the ipr fix in 
the config file...

Will this fix be in the special build for iSeries we are waiting for? 
Comment 24 Janice Girouard - IBM on-site partner 2006-11-08 11:58:26 EST
This is in the MODIFIED state.  Can you tell me what package/version will
contain this fix?
Comment 25 Bill Nottingham 2006-11-08 12:27:20 EST
Comment 26 IBM Bug Proxy 2006-11-08 13:31:51 EST
----- Additional Comments From rayda@us.ibm.com  2006-11-08 13:28 EDT -------
Oops, my mistake.  What I downloaded was NOT milestone 9, so the jury is still 
out.  Sorry to raise the flag. 
Comment 27 Janice Girouard - IBM on-site partner 2006-11-08 23:18:08 EST
I tried both the 1107 and 1108 nightly builds and the disks were found on the
i825 system.  So this problem appears to be partially fixed.

I'll continue testing the nightly builds.  I did see the following error with
both the 1107 and 1108 nightly builds:

Welcome to Red Hat Enterprise Linux Server

      ┌────────────────────────────┤ Error ├────────────────────────────┐
      │                                                                 │
      │ Unable to read package metadata. This may be due to a missing   │
      │ repodata directory.  Please ensure that your install tree has   │
      │ been correctly generated.  Cannot open/read repomd.xml file     │
      │ for repository: anaconda-base-200611070133.ppc                  │
      │                                                                 │
      │                           ┌───────┐                             │
      │                           │ Abort │                             │
      │                           └───────┘                             │
      │                                                                 │
      │                                                                 │

  <Tab>/<Alt-Tab> between elements   |  <Space> selects   |  <F12> next s
Comment 28 Bill Nottingham 2006-11-09 10:55:21 EST
That would be a different error - please open that as a different issue/bug.
Comment 29 Janice Girouard - IBM on-site partner 2006-11-09 13:54:38 EST
Agreed.   I just opened 214836 to track the above error.

Since this is in the MODIFIED state, is it easier if we close this defect and
open a new one if a similar problem occurs but requires more cards than we
have here?  I think Mike saw a problem but hasn't had a chance to reproduce the
error and retest with this code.
Comment 30 Bill Nottingham 2006-11-09 13:57:59 EST
Comment 31 Janice Girouard - IBM on-site partner 2006-11-09 17:04:11 EST
This is fixed in the 20061107/08 builds.   Have you actually seen this problem
in one of the partner builds?   If not, I'd like to close this as fixed, as I
have confirmed it's working in the nightly builds.
Comment 32 IBM Bug Proxy 2006-11-09 20:11:10 EST
----- Additional Comments From rayda@us.ibm.com  2006-11-09 17:13 EDT -------
This is fixed in m9 + the new kudzu.  We will close when we see in m10/Beta2
that we don't have to patch the install source.
Comment 33 IBM Bug Proxy 2006-11-09 20:12:12 EST

           What    |Removed                     |Added
           Severity|block                       |high
             Impact|------                      |Installability

------- Additional Comments From rayda@us.ibm.com  2006-11-09 17:14 EDT -------
Lowering from block to high based on fix we have seen. 
Comment 34 IBM Bug Proxy 2006-11-10 10:37:13 EST
----- Additional Comments From yongwenw@us.ibm.com  2006-11-10 10:33 EDT -------
As Ray said in comment #21, the build we use is milestone 9 + a new kudzu. We 
would like to see the fix in an official drop before we close this bug. Or if 
it is more convenient to you, we can close this now and reopen it if we meet 
the problem again in the official drop. 
Comment 35 IBM Bug Proxy 2006-11-14 16:06:31 EST

           What    |Removed                     |Added
             Status|ACCEPTED                    |CLOSED

------- Additional Comments From rayda@us.ibm.com  2006-11-14 16:03 EDT -------
Yes, this is definitely in this build.  Closing. 
