Bug 178435 - pvcreate returns invalid physical volume on /dev/ddb
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: lvm
Version: 3.0
Hardware: i386 Linux
Priority: medium
Severity: medium
Assigned To: LVM and device-mapper development team
QA Contact: Brian Brock
Depends On:
Blocks:
 
Reported: 2006-01-20 10:48 EST by Patrick Bienati
Modified: 2007-11-30 17:07 EST
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2007-10-19 14:48:26 EDT

Attachments: None
Description Patrick Bienati 2006-01-20 10:48:12 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.8) Gecko/20051111 Firefox/1.5

Description of problem:
This bug is similar to bug #124647.
My server (RHAS 3 U6) is connected to a NEC S1300 disk subsystem via two Fibre Channel links. The dual path is managed by NEC Storage PathManager. The physical disk devices, initially seen as /dev/sdb, appear as /dev/ddb through the dual-path manager.
If I create a PV with "pvcreate /dev/sdb", all is good.
When I try to create a PV with "pvcreate /dev/ddb", I get the error message:
 pvcreate -- invalid physical volume "/dev/ddb"
Since I want a minimum of redundancy on my server, I would like to run LVM on top of the dual-path management driver.
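For reference, LVM2 (unlike the LVM1 1.0.8 shipped with RHEL 3, whose accepted device majors are compiled in) lets an unrecognized block-device type be whitelisted through the `types` directive in the devices section of /etc/lvm/lvm.conf. A sketch, assuming the dualpath driver registers itself under the name "dd" in /proc/devices (that name is an assumption for illustration):

```
# /etc/lvm/lvm.conf (LVM2 only) -- hypothetical fragment
devices {
    # Accept block devices of type "dd" (name must match the driver's
    # entry in /proc/devices), allowing up to 16 partitions per device.
    types = [ "dd", 16 ]
}
```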

Version-Release number of selected component (if applicable):
lvm-1.0.8-14

How reproducible:
Always

Steps to Reproduce:
1. Set up NEC Storage PathManager so that the disks appear as /dev/dd* devices.
2. Run "pvcreate /dev/ddb".

Actual Results:  pvcreate -- invalid physical volume "/dev/ddb"

Expected Results:  pvcreate completes successfully and initializes /dev/ddb as a physical volume

Additional info:
Comment 1 Patrick Bienati 2006-01-23 07:35:55 EST
[root@clio2 root]# lvmdiskscan 
lvmdiskscan -- reading all disks / partitions (this may take a while...)
lvmdiskscan -- /dev/sda1  [     101.94 MB] free whole disk
lvmdiskscan -- /dev/sdb1  [      95.98 MB] free whole disk
lvmdiskscan -- /dev/sdc1  [         50 GB] free whole disk
lvmdiskscan -- /dev/sdd   [        150 GB] new LVM whole disk
lvmdiskscan -- /dev/sde   [        200 GB] new LVM whole disk
lvmdiskscan -- /dev/sdf   [        200 GB] free whole disk
lvmdiskscan -- /dev/sdg1  [      95.98 MB] free whole disk
lvmdiskscan -- /dev/sdh1  [         50 GB] free whole disk
lvmdiskscan -- /dev/sdi   [        150 GB] new LVM whole disk
lvmdiskscan -- /dev/sdj   [        200 GB] new LVM whole disk
lvmdiskscan -- /dev/sdk   [        200 GB] free whole disk
lvmdiskscan -- 11 disks
lvmdiskscan -- 11 whole disks
lvmdiskscan -- 0 loop devices
lvmdiskscan -- 0 multiple devices
lvmdiskscan -- 0 network block devices
lvmdiskscan -- 10 partitions
lvmdiskscan -- 0 LVM physical volume partitions


[root@clio2 root]# 
Comment 2 Patrick Bienati 2006-01-24 07:19:34 EST
As you can see in the output of the lvmdiskscan command, the /dev/dd* devices are not
seen by LVM.
Here is the content of /proc/partitions. The dd* devices (major 251) are the dualpath devices, which LVM does not see:

[root@clio2 root]# cat /proc/partitions
major minor  #blocks  name     rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

 251     0     209920 dda 27 193 440 20 0 0 0 0 0 20 20
 251     1      98288 dda1 13 95 216 0 0 0 0 0 0 0 0
 251     2      98304 dda2 13 95 216 20 0 0 0 0 0 20 20
 251    16   52430848 ddb 18 126 288 30 0 0 0 0 0 30 30
 251    17   52428096 ddb1 14 106 240 20 0 0 0 0 0 20 20
 251    32  157287424 ddc 1 3 8 0 0 0 0 0 0 0 0
 251    48  209717248 ddd 1 3 8 0 0 0 0 0 0 0 0
 251    64  209717248 dde 1 3 8 0 0 0 0 0 0 0 0
   8     0   71483392 sda 42959 23813 531554 185370 104258 147796 2027430 16363440 0 687820 16619610
   8     1     104391 sda1 35 89 248 310 25 18 86 11460 0 8500 11770
   8     2   33415200 sda2 42850 23476 530578 184650 104233 147778 2027344 16351980 0 687380 16607430
   8     3    2040255 sda3 17 69 256 70 0 0 0 0 0 70 70
   8     4   35921340 sda4 14 58 144 140 0 0 0 0 0 140 140
   8    16     209920 sdb 97 595 1384 70 0 0 0 0 0 70 70
   8    17      98288 sdb1 38 250 576 30 0 0 0 0 0 30 30
   8    18      98304 sdb2 36 244 560 20 0 0 0 0 0 20 20
   8    32   52430848 sdc 51 301 704 10 0 0 0 0 0 10 10
   8    33   52428096 sdc1 38 250 576 0 0 0 0 0 0 0 0
   8    48  157287424 sdd 6 18 48 10 0 0 0 0 0 10 10
   8    64  209717248 sde 6 18 48 10 0 0 0 0 0 10 10
   8    80  209717248 sdf 6 18 48 10 0 0 0 0 0 10 10
   8    96     209920 sdg 96 584 1360 60 0 0 0 0 0 60 60
   8    97      98288 sdg1 38 250 576 20 0 0 0 0 0 20 20
   8    98      98304 sdg2 36 244 560 10 0 0 0 0 0 10 10
   8   112   52430848 sdh 51 301 704 60 0 0 0 0 0 60 60
   8   113   52428096 sdh1 38 250 576 50 0 0 0 0 0 50 50
   8   128  157287424 sdi 6 18 48 10 0 0 0 0 0 10 10
   8   144  209717248 sdj 6 18 48 0 0 0 0 0 0 0 0
   8   160  209717248 sdk 6 18 48 0 0 0 0 0 0 0 0
[root@clio2 root]# 
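The listing above boils down to which block-device majors are present: the dd* devices sit on major 251, which LVM1's compiled-in device list does not recognize, while the sd* devices sit on major 8, which it does. A minimal sketch of pulling the distinct majors out of a /proc/partitions-style listing (sample data is inlined here rather than read from the live /proc/partitions):

```shell
#!/bin/sh
# Not from the original report: summarize a /proc/partitions-style
# listing by printing each distinct major once, with an example device.
cat <<'EOF' > /tmp/partitions.sample
major minor  #blocks  name

 251     0     209920 dda
 251    16   52430848 ddb
   8     0   71483392 sda
   8    16     209920 sdb
EOF
# Skip the header and blank line; print each major the first time it appears.
majors=$(awk 'NR > 2 && NF >= 4 { if (!seen[$1]++) print $1, $4 }' /tmp/partitions.sample)
echo "$majors"
```

On the sample data this prints one line per major (251 and 8); running the same awk against the real /proc/partitions would show every major the kernel knows about, whether or not LVM accepts it.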
Comment 3 RHEL Product and Program Management 2007-10-19 14:48:26 EDT
This bug is filed against RHEL 3, which is in maintenance phase.
During the maintenance phase, only security errata and select mission
critical bug fixes will be released for enterprise products. Since
this bug does not meet those criteria, it is now being closed.
 
For more information on the RHEL errata support policy, please visit:
http://www.redhat.com/security/updates/errata/
 
If you feel this bug is indeed mission critical, please contact your
support representative. You may be asked to provide detailed
information on how this bug is affecting you.
