Bug 243751 - Multipath not "seeing" last luns when creating mpath devices
Summary: Multipath not "seeing" last luns when creating mpath devices
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: device-mapper-multipath
Version: 9
Hardware: x86_64
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Ben Marzinski
QA Contact: Corey Marthaler
URL:
Whiteboard: bzcl34nup
Depends On:
Blocks: 445585
 
Reported: 2007-06-11 18:15 UTC by Claudio Cuqui
Modified: 2009-07-14 17:48 UTC
CC: 12 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2009-07-14 17:48:31 UTC
Type: ---
Embargoed:



Description Claudio Cuqui 2007-06-11 18:15:35 UTC
Description of problem:

HBA: Qlogic 2300

Storage: EMC Clariion CX700 - Flarecode = 02,19,700,5030


1-) Multipath is unable to create all mpath devices. The server "sees" 20 LUN
pairs, but multipath creates only 19 mpath devices (mpath0-18), and I get
erratic results from running the multipath command. Ex:

[root@pop-16 mapper]# multipath
error calling out /sbin/scsi_id -g -u -s /block/sda
mpath17: remove (wwid changed)
create: mpath17 (3600601603d34130058c23305c612dc11)  DGC,RAID 5
[size=50G][features=0][hwhandler=1 emc]
\_ round-robin 0 [prio=1][undef]
 \_ 1:0:1:17 sdam 66:96  [undef][ready]
\_ round-robin 0 [prio=0][undef]
 \_ 1:0:0:17 sds  65:32  [undef][ready]
[root@pop-16 mapper]#

Running the same command 1 second later:

[root@pop-16 mapper]# multipath
error calling out /sbin/scsi_id -g -u -s /block/sda
mpath17: remove (wwid changed)
create: mpath17 (3600601601034130078a83aeac412dc11)  DGC,RAID 5
[size=50G][features=0][hwhandler=1 emc]
\_ round-robin 0 [prio=0][undef]
 \_ 1:0:1:18 sdan 66:112 [undef][ready]
\_ round-robin 0 [prio=1][undef]
 \_ 1:0:0:18 sdt  65:48  [undef][ready]
[root@pop-16 mapper]#

and again:

[root@pop-16 mapper]# multipath
error calling out /sbin/scsi_id -g -u -s /block/sda
mpath17: remove (wwid changed)
create: mpath17 (3600601603d34130058c23305c612dc11)  DGC,RAID 5
[size=50G][features=0][hwhandler=1 emc]
\_ round-robin 0 [prio=1][undef]
 \_ 1:0:1:17 sdam 66:96  [undef][ready]
\_ round-robin 0 [prio=0][undef]
 \_ 1:0:0:17 sds  65:32  [undef][ready]
[root@pop-16 mapper]#
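
Note that the WWID multipath assigns to mpath17 flaps between
3600601603d34130058c23305c612dc11 (the LUN 17 paths) and
3600601601034130078a83aeac412dc11 (the LUN 18 paths) on successive runs. As a
sanity check (a sketch, using the same getuid callout configured below on the
path names from the output above), the per-path WWIDs can be queried directly:

[root@pop-16 mapper]# /sbin/scsi_id -g -u -s /block/sdam
[root@pop-16 mapper]# /sbin/scsi_id -g -u -s /block/sdan

Going by the paths list further down, these should print
3600601603d34130058c23305c612dc11 and 3600601601034130078a83aeac412dc11
respectively; if they do so consistently, the callout itself is stable and the
flapping is happening inside multipath.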

2-) Lots of error messages when running LVM commands (even after filtering out
the passive devices in lvm.conf):

filter = [ "r|/dev/cdrom|", "r|/dev/sdc|",  "r|/dev/sdd|", "r|/dev/sde|", 
"r|/dev/sdf|", "r|/dev/sdg|", "r|/dev/sdh|", "r|/dev/sdi|", "r|/dev/sdj|",
"r|/dev/sdk|", "r|/dev/sdl|", "r|/dev/sdm|", "r|/dev/sdn|", "r|/dev/sdo|",
"r|/dev/sdp|", "r|/dev/sdq|", "r|/dev/sdr|", "r|/dev/sds|", "r|/dev/sdt|",
"r|/dev/sdu|", "r|/dev/sdv|", "r|/dev/sdw|", "r|/dev/sdx|", "r|/dev/sdy|",
"r|/dev/sdz|", "r|/dev/sdaa|", "r|/dev/sdab|", "r|/dev/sdac|", "r|/dev/sdad|",
"r|/dev/sdaf|", "r|/dev/sdag|", "r|/dev/sdah|", "r|/dev/sdai|", "r|/dev/sdaj|",
 "r|/dev/sdak|",  "r|/dev/sdal|",  "r|/dev/sdam|",  "r|/dev/sdan|", "a/.*/" ]


[root@pop-16 mapper]# vgdisplay -v pop_16-c3-data
    Using volume group(s) on command line
    Finding volume group "pop_16-c3-data"
  /dev/sdq: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdah: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdah1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdc: read failed after 0 of 4096 at 0: Input/output error
  /dev/sds: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdc1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdaj: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdaj1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sde: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdu: read failed after 0 of 4096 at 0: Input/output error
  /dev/sde1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdv: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdal: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdal1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdg: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdx: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdan: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdi: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdz: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdk: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdab: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdab1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdad: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdad1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdo: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdaf: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdaf1: read failed after 0 of 4096 at 0: Input/output error
  --- Volume group ---
  VG Name               pop_16-c3-data
  System ID
  Format                lvm2
  Metadata Areas        19
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                19
  Act PV                19
  VG Size               949.41 GB
  PE Size               32.00 MB
  Total PE              30381
  Alloc PE / Size       0 / 0
  Free  PE / Size       30381 / 949.41 GB
  VG UUID               3dwnTE-OAUx-ArDu-rjT5-AfhE-IfO3-fRwNTB

  --- Physical volumes ---
  PV Name               /dev/mapper/mpath0p1
  PV UUID               VWhmRd-JivO-8omn-fVou-yjEA-0euc-3KXrvh
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath1p1
  PV UUID               13dA0R-fO0J-c7iJ-xywx-zgDO-SCxe-WBn8hh
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath2p1
  PV UUID               f0VMoA-B22o-EmHv-bhc7-kdhP-8pKt-nNZsRw
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath3p1
  PV UUID               jkexds-FRF6-h7II-Pvo4-Egt2-sYF7-VYxbsw
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath4p1
  PV UUID               EioQEG-QkyY-0jrq-VRIn-Q9eB-kZYG-pJsy41
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath5p1
  PV UUID               0dKa9d-1pGf-QKyr-uXF3-6zXi-3zFn-kreM6b
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath6p1
  PV UUID               BvRVTl-FbX0-gV95-mRw9-VZ3f-bvCX-tK3efR
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath7p1
  PV UUID               XQh0xL-q9K6-A0fh-SDlY-xXjt-3rVH-2HRXJC
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath8p1
  PV UUID               2mnlzf-f4hY-nY1n-OXzz-y7ZE-rQo6-yQi0Ah
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath9p1
  PV UUID               coOwD2-vH3X-6z12-w0h9-Agsn-Ev7a-69wtrj
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath10p1
  PV UUID               bsP07P-QWFW-Or1f-KYp9-fRwR-VLLx-GL3UsF
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath11p1
  PV UUID               OkU3MI-YqIy-pXjk-hsd3-Fohv-vwtJ-YC1jOi
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath12p1
  PV UUID               orunjk-eCaq-iaQ2-iZwT-ln1D-M2Tq-OUFLOf
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath13p1
  PV UUID               bYUcec-anGk-gMK3-6Ttd-5iKS-3YKd-x3Ze2G
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath14p1
  PV UUID               1jzOd8-sfA7-RQCK-3biI-fehy-i98t-AU3tMA
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath15p1
  PV UUID               eWWFSH-0TWX-ov7V-n32f-Hjka-aqkg-c9Kf8F
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/mapper/mpath16p1
  PV UUID               WQnpWM-vwTw-JQue-GxA5-3Vp1-gTiD-G887LI
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

  PV Name               /dev/sdam1
  PV UUID               tFprMl-oO6o-gQUV-E9Zg-Vztr-iD3w-f6isoZ
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599

(When this volume group was created with vgcreate, I specified
/dev/mapper/mpath17p1, not /dev/sdam1 - probably related to problem no. 1.)

  PV Name               /dev/mapper/mpath18p1
  PV UUID               HVjnhn-KbhX-ysyZ-FZCB-uOk3-yB2W-FqIDVZ
  PV Status             allocatable
  Total PE / Free PE    1599 / 1599


Version-Release number of selected component (if applicable):

lvm2-2.02.17-1.fc6
device-mapper-multipath-0.4.7-5
device-mapper-1.02.13-1.fc6
kernel-2.6.20-1.2952.fc6

Distro Fully updated

How reproducible:

Always

Steps to Reproduce:

Just run the multipath command or any LVM command
  
Actual results:

1-) All mpath devices are created correctly for all LUNs except the last ones;
2-) Commands run, but with lots of error messages;

Expected results:

1-) All mpath devices created for ALL LUNs;
2-) Commands run without any error messages;

What I "think" is causing this problem:

===== paths list =====
uuid                              hcil     dev  dev_t  pri dm_st  chk_st  vend
3600601603d3413000c3492f6c512dc11 1:0:1:5  sdaa 65:160 1   [undef][ready] DGC,
36006016010341300f6eb2cdcc412dc11 1:0:1:6  sdab 65:176 0   [undef][ready] DGC,
3600601603d3413000d3492f6c512dc11 1:0:1:7  sdac 65:192 1   [undef][ready] DGC,
36006016010341300f7eb2cdcc412dc11 1:0:1:8  sdad 65:208 0   [undef][ready] DGC,
3600601603d341300f6a2dafdc512dc11 1:0:1:9  sdae 65:224 1   [undef][ready] DGC,
36006016010341300f8eb2cdcc412dc11 1:0:1:10 sdaf 65:240 0   [undef][ready] DGC,
3600601603d341300f7a2dafdc512dc11 1:0:1:11 sdag 66:0   1   [undef][ready] DGC,
3600601601034130026e740e3c412dc11 1:0:1:12 sdah 66:16  0   [undef][ready] DGC,
3600601603d341300f8a2dafdc512dc11 1:0:1:13 sdai 66:32  1   [undef][ready] DGC,
3600601601034130027e740e3c412dc11 1:0:1:14 sdaj 66:48  0   [undef][ready] DGC,
3600601603d341300f9a2dafdc512dc11 1:0:1:15 sdak 66:64  1   [undef][ready] DGC,
3600601601034130028e740e3c412dc11 1:0:1:16 sdal 66:80  0   [undef][ready] DGC,
3600601603d34130058c23305c612dc11 1:0:1:17 sdam 66:96  1   [undef][ready] DGC,
3600601601034130078a83aeac412dc11 1:0:1:18 sdan 66:112 0   [undef][ready] DGC,
3600601603d34130059c23305c612dc11 1:0:1:19 sdao 66:128 1   [undef][ready] DGC,
                                  0:2:0:0  sda  8:0    0   [undef][ready] Mega
360060160103413007ab51dd5c412dc11 1:0:0:0  sdb  8:16   1   [undef][ready] DGC,
3600601603d3413000a3492f6c512dc11 1:0:0:1  sdc  8:32   0   [undef][ready] DGC,
360060160103413007bb51dd5c412dc11 1:0:0:2  sdd  8:48   1   [undef][ready] DGC,
3600601603d3413000b3492f6c512dc11 1:0:0:3  sde  8:64   0   [undef][ready] DGC,
360060160103413007cb51dd5c412dc11 1:0:0:4  sdf  8:80   1   [undef][ready] DGC,
3600601603d3413000c3492f6c512dc11 1:0:0:5  sdg  8:96   0   [undef][ready] DGC,
36006016010341300f6eb2cdcc412dc11 1:0:0:6  sdh  8:112  1   [undef][ready] DGC,
3600601603d3413000d3492f6c512dc11 1:0:0:7  sdi  8:128  0   [undef][ready] DGC,
36006016010341300f7eb2cdcc412dc11 1:0:0:8  sdj  8:144  1   [undef][ready] DGC,
3600601603d341300f6a2dafdc512dc11 1:0:0:9  sdk  8:160  0   [undef][ready] DGC,
36006016010341300f8eb2cdcc412dc11 1:0:0:10 sdl  8:176  1   [undef][ready] DGC,
3600601603d341300f7a2dafdc512dc11 1:0:0:11 sdm  8:192  0   [undef][ready] DGC,
3600601601034130026e740e3c412dc11 1:0:0:12 sdn  8:208  1   [undef][ready] DGC,
3600601603d341300f8a2dafdc512dc11 1:0:0:13 sdo  8:224  0   [undef][ready] DGC,
3600601601034130027e740e3c412dc11 1:0:0:14 sdp  8:240  1   [undef][ready] DGC,
3600601603d341300f9a2dafdc512dc11 1:0:0:15 sdq  65:0   0   [undef][ready] DGC,
3600601601034130028e740e3c412dc11 1:0:0:16 sdr  65:16  1   [undef][ready] DGC,
3600601603d34130058c23305c612dc11 1:0:0:17 sds  65:32  0   [undef][ready] DGC,
==================== Note the sequence of the pri column here ===================
3600601601034130078a83aeac412dc11 1:0:0:18 sdt  65:48  1   [undef][ready] DGC,
3600601603d34130059c23305c612dc11 1:0:0:19 sdu  65:64  0   [undef][ready] DGC,
360060160103413007ab51dd5c412dc11 1:0:1:0  sdv  65:80  0   [undef][ready] DGC,
3600601603d3413000a3492f6c512dc11 1:0:1:1  sdw  65:96  1   [undef][ready] DGC,
=============================================================================
360060160103413007bb51dd5c412dc11 1:0:1:2  sdx  65:112 0   [undef][ready] DGC,
3600601603d3413000b3492f6c512dc11 1:0:1:3  sdy  65:128 1   [undef][ready] DGC,
360060160103413007cb51dd5c412dc11 1:0:1:4  sdz  65:144 0   [undef][ready] DGC,
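
The pri column comes from the configured prio_callout, so the values in the
highlighted region can be cross-checked directly (a sketch, using the
/sbin/mpath_prio_emc callout from the devices section below on the two paths
of LUN 18):

[root@pop-16 mapper]# /sbin/mpath_prio_emc /dev/sdt
[root@pop-16 mapper]# /sbin/mpath_prio_emc /dev/sdan

The path on the storage processor that currently owns the LUN should report
the higher priority; if the results change from one invocation to the next,
that would match the flapping seen with the multipath command above.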

Another thing I noticed: when I run dmsetup info on any of the devices (except
mpath17), I get something like:

[root@pop-16 mapper]# dmsetup info mpath0
Name:              mpath0
State:             ACTIVE
Tables present:    LIVE
Open count:        1
Event number:      2
Major, minor:      253, 23
Number of targets: 1
UUID: mpath-360060160103413007ab51dd5c412dc11

But when I run it for mpath17, I get:

[root@pop-16 mapper]# dmsetup info mpath17
Name:              mpath17
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      1
Major, minor:      253, 20
Number of targets: 1
UUID: mpath-3600601601034130078a83aeac412dc11


[root@pop-16 mapper]# cat /etc/multipath.conf | grep -v ^#

defaults {
        user_friendly_names yes
}
defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        #path_grouping_policy   multibus
        path_grouping_policy    failover
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            /bin/true
        path_checker            readsector0
        rr_min_io               100
        rr_weight               priorities
        failback                immediate
        no_path_retry           fail
        user_friendly_name      yes
}
devnode_blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|sda|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"
}
devices {
       device {
               vendor                  "DGC"
               product                 "*"
               bl_product              "LUN_Z"
               path_grouping_policy    failover
               getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
               prio_callout            "/sbin/mpath_prio_emc /dev/%n"
               hardware_handler        "1 emc"
               features                "0"
               path_checker            emc_clariion
               failback                immediate
       }
}
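
Two things stand out in this file: there are two separate defaults sections,
and the second one spells user_friendly_name without the trailing 's' (the
correct keyword, as used in the first section, is user_friendly_names), so it
is ambiguous which settings are actually in effect. A merged sketch of the
intended defaults section (assuming the values above are the ones wanted):

defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    failover
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            /bin/true
        path_checker            readsector0
        rr_min_io               100
        rr_weight               priorities
        failback                immediate
        no_path_retry           fail
        user_friendly_names     yes
}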

[root@pop-16 mapper]# ls -1 mpath*
mpath0
mpath0p1
mpath1
mpath1p1
mpath2
mpath2p1
mpath3
mpath3p1
mpath4
mpath4p1
mpath5
mpath5p1
mpath6
mpath6p1
mpath7
mpath7p1
mpath8
mpath8p1
mpath9
mpath9p1
mpath10
mpath10p1
mpath11
mpath11p1
mpath12
mpath12p1
mpath13
mpath13p1
mpath14
mpath14p1
mpath15
mpath15p1
mpath16
mpath16p1
mpath17
mpath18
mpath18p1
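
mpath17 is the only map without a partition device (there is no mpath17p1),
which matches the "Open count: 0" reported by dmsetup info above. As a
workaround sketch (assuming the partition table on that LUN is intact), the
partition mapping can be recreated by hand with kpartx:

[root@pop-16 mapper]# kpartx -a /dev/mapper/mpath17

though given the WWID flapping shown earlier, the map may well be replaced
again on the next multipath run.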

[root@pop-16 mapper]# cat /proc/scsi/scsi

Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: PE/PV    Model: 1x3 SCSI BP      Rev: 1.1
  Type:   Processor                        ANSI SCSI revision: 02
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: MegaRAID Model: LD 0 RAID1   34G Rev: 422A
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 01
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 02
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 03
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 04
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 05
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 06
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 07
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 08
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 09
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 10
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 11
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 12
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 13
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 14
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 15
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 16
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 17
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 18
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 00 Lun: 19
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 00
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 01
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 02
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 03
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 04
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 05
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 06
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 07
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 08
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 09
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 10
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 11
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 12
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 13
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 14
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 15
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 16
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 17
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 18
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi1 Channel: 00 Id: 01 Lun: 19
  Vendor: DGC      Model: RAID 5           Rev: 0219
  Type:   Direct-Access                    ANSI SCSI revision: 04
[root@pop-16 mapper]#

Please let me know if you need further information.

Regards,

Claudio Cuqui

Comment 1 Bug Zapper 2008-04-04 07:24:04 UTC
Fedora apologizes that these issues have not been resolved yet. We're
sorry it's taken so long for your bug to be properly triaged and acted
on. We appreciate the time you took to report this issue and want to
make sure no important bugs slip through the cracks.

If you're currently running a version of Fedora Core between 1 and 6,
please note that Fedora no longer maintains these releases. We strongly
encourage you to upgrade to a current Fedora release. In order to
refocus our efforts as a project we are flagging all of the open bugs
for releases which are no longer maintained and closing them.
http://fedoraproject.org/wiki/LifeCycle/EOL

If this bug is still open against Fedora Core 1 through 6, thirty days
from now, it will be closed 'WONTFIX'. If you can reproduce this bug in
the latest Fedora version, please change to the respective version. If
you are unable to do this, please add a comment to this bug requesting
the change.

Thanks for your help, and we apologize again that we haven't handled
these issues to this point.

The process we are following is outlined here:
http://fedoraproject.org/wiki/BugZappers/F9CleanUp

We will be following the process here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.

And if you'd like to join the bug triage team to help make things
better, check out http://fedoraproject.org/wiki/BugZappers

Comment 2 Bug Zapper 2008-05-06 19:40:57 UTC
This bug is open for a Fedora version that is no longer maintained and
will not be fixed by Fedora. Therefore we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 3 Dave Wysochanski 2008-05-07 18:09:20 UTC
I believe this may still be a problem with rawhide and is potentially related
to the user_friendly_names setting.  Can you try with user_friendly_names set to 'no'?

As for the lvm errors, try removing /etc/lvm/.cache.  You might also want
something like this to use LVM over multipath:
filter = [ "r|/dev/sd.*|", "a|/dev/mapper|" ]
preferred_names = [ "/dev/mapper" ]
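
For example (a sketch; these are the stock locations on Fedora), after making
those edits to /etc/lvm/lvm.conf:

rm -f /etc/lvm/.cache
vgscan
pvs -o pv_name,vg_name

pvs should then list every PV by its /dev/mapper name rather than as a bare
/dev/sd* device.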



Comment 4 Bug Zapper 2008-05-14 02:59:07 UTC
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 5 Bug Zapper 2009-06-09 22:39:06 UTC
This message is a reminder that Fedora 9 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 9.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '9'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 9's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 9 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 6 Bug Zapper 2009-07-14 17:48:31 UTC
Fedora 9 changed to end-of-life (EOL) status on 2009-07-10. Fedora 9 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

