Bug 163221 - pool commands report Unable to open device "/dev/lvma":
Status: CLOSED ERRATA
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: gfs
Version: 3
Hardware: i386 Linux
Priority: medium
Severity: medium
Assigned To: AJ Lewis
QA Contact: GFS Bugs
 
Reported: 2005-07-14 02:47 EDT by Thomas von Steiger
Modified: 2010-01-11 22:06 EST

Fixed In Version: RHBA-2005-723
Doc Type: Bug Fix
Last Closed: 2005-09-30 10:57:22 EDT


Attachments
Ignore pool devices in /proc/partitions (482 bytes, patch)
2005-07-15 13:10 EDT, AJ Lewis

Description Thomas von Steiger 2005-07-14 02:47:30 EDT
Description of problem:

Hello,

I'm using LVM for local filesystems and GFS for shared filesystems.
Whenever I execute a pool command, I get messages like:

Unable to open device "/dev/lvma": No such file or directory

There are no /dev/lvm[a-j] device nodes, but cat /proc/partitions lists
lvm[a-j].

If I redirect the CCA config or pool config to a file, all of these
messages end up inside the config file.

[root@hihhlx08 gfs]# pool_tool -s
Unable to open device "/dev/lvma": No such file or directory
Unable to open device "/dev/lvmb": No such file or directory
Unable to open device "/dev/lvmc": No such file or directory
Unable to open device "/dev/lvmd": No such file or directory
Unable to open device "/dev/lvme": No such file or directory
Unable to open device "/dev/lvmf": No such file or directory
Unable to open device "/dev/lvmg": No such file or directory
Unable to open device "/dev/lvmh": No such file or directory
Unable to open device "/dev/lvmi": No such file or directory
  Device                                            Pool Label
  ======                                            ==========
  /dev/pool/aim_corba                     <- GFS filesystem ->
  /dev/pool/aim_data01                    <- GFS filesystem ->
  /dev/pool/aim_data02                    <- GFS filesystem ->
  /dev/pool/aim_spool                     <- GFS filesystem ->
  /dev/pool/oracle_arch                   <- GFS filesystem ->
  /dev/pool/oracle_backup                 <- GFS filesystem ->
  /dev/pool/oracle_data1a                 <- GFS filesystem ->
  /dev/pool/oracle_index1a                <- GFS filesystem ->
  /dev/pool/oracle_redo1                  <- GFS filesystem ->
  /dev/pool/oracle_redo2                  <- GFS filesystem ->
  /dev/pool/pacs01_cca                        <- CCA device ->
  /dev/lvma                                        <- error ->
  /dev/lvmb                                        <- error ->
  /dev/lvmc                                        <- error ->
  /dev/lvmd                                        <- error ->
  /dev/lvme                                        <- error ->
  /dev/lvmf                                        <- error ->
  /dev/lvmg                                        <- error ->
  /dev/lvmh                                        <- error ->
  /dev/lvmi                                        <- error ->
  /dev/sda                         <- partition information ->
  /dev/sda1                                          aim_corba
  /dev/sda2                                          aim_corba
  /dev/sdb                         <- partition information ->
  /dev/sdb1                                         aim_data01
  /dev/sdc                         <- partition information ->
  /dev/sdc1                                      <- unknown ->
  /dev/sdc2                                      oracle_backup
  /dev/sdc3                                      <- unknown ->
  /dev/sdd                         <- partition information ->
  /dev/sdd1                                       oracle_redo1
  /dev/sdd2                                     oracle_index1a
  /dev/sdd3                                      oracle_data1a
  /dev/sde                         <- partition information ->
  /dev/sde1                                       oracle_redo2
  /dev/sde2                                        oracle_arch
  /dev/sdf                         <- partition information ->
  /dev/sdf1                                         pacs01_cca
  /dev/sdf2                                      <- unknown ->
  /dev/sdg                         <- partition information ->
  /dev/sdg1                                      <- unknown ->
  /dev/sdh                         <- partition information ->
  /dev/sdh1                                          aim_spool
  /dev/sdi                         <- partition information ->
  /dev/sdi1                                         aim_data02
  /dev/sdj                         <- partition information ->
  /dev/sdj1                                         aim_data02
  /dev/sdk                         <- partition information ->
  /dev/sdk1                                         aim_data01
  /dev/sdl                         <- partition information ->
  /dev/sdl1                                      <- unknown ->
  /dev/sdm                         <- partition information ->
  /dev/sdm1                                      <- unknown ->
  /dev/cciss/c0d0                  <- partition information ->
  /dev/cciss/c0d0p1                    <- EXT2/3 filesystem ->
  /dev/cciss/c0d0p2                    <- EXT2/3 filesystem ->
  /dev/cciss/c0d0p3                          <- swap device ->
  /dev/cciss/c0d0p4                <- partition information ->
  /dev/cciss/c0d0p5                          <- swap device ->
  /dev/cciss/c0d0p6                          <- swap device ->
  /dev/cciss/c0d0p7                       <- lvm1 subdevice ->
[root@hihhlx08 gfs]# pool_tool -c aim_icoserve.pool
Unable to open device "/dev/lvma": No such file or directory
Unable to open device "/dev/lvmb": No such file or directory
Unable to open device "/dev/lvmc": No such file or directory
Unable to open device "/dev/lvmd": No such file or directory
Unable to open device "/dev/lvme": No such file or directory
Unable to open device "/dev/lvmf": No such file or directory
Unable to open device "/dev/lvmg": No such file or directory
Unable to open device "/dev/lvmh": No such file or directory
Unable to open device "/dev/lvmi": No such file or directory
Pool label written successfully from aim_icoserve.pool
[root@hihhlx08 gfs]# pool_assemble aim_icoserve
Unable to open device "/dev/lvma": No such file or directory
Unable to open device "/dev/lvmb": No such file or directory
Unable to open device "/dev/lvmc": No such file or directory
Unable to open device "/dev/lvmd": No such file or directory
Unable to open device "/dev/lvme": No such file or directory
Unable to open device "/dev/lvmf": No such file or directory
Unable to open device "/dev/lvmg": No such file or directory
Unable to open device "/dev/lvmh": No such file or directory
Unable to open device "/dev/lvmi": No such file or directory
aim_icoserve assembled.
[root@hihhlx08 gfs]# ls -la /dev/lvma
ls: /dev/lvma: No such file or directory

Version-Release number of selected component (if applicable):

GFS-6.0.2-17

How reproducible:

always

Steps to Reproduce:
1. Execute pool_tool, pool_info, or pool_assemble.

  
Actual results:

Every pool command (pool_tool, pool_info, pool_assemble) prints
"Unable to open device "/dev/lvm[a-i]": No such file or directory"
before its normal output.

Expected results:

No error messages for LVM devices that have no corresponding /dev node.

Additional info:
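A minimal sketch of the scanning behavior that appears to produce these
errors, assuming pool builds its device list from /proc/partitions and then
tries to open the matching node under /dev (illustrative C only, not the
actual pool_tool source):

/* Illustrative only: a name listed in /proc/partitions (e.g. lvma) that
 * has no /dev node makes open("/dev/lvma") fail with ENOENT, which is
 * exactly the "Unable to open device" message shown above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    FILE *fp = fopen("/proc/partitions", "r");
    char line[256], name[64], path[192];
    unsigned int maj, mnr;
    unsigned long long blocks;
    int fd;

    if (!fp)
        return 1;

    while (fgets(line, sizeof(line), fp)) {
        /* Skip the header and blank lines; data lines look like
         *   "  58     0     409600 lvma 0 0 0 ..." */
        if (sscanf(line, " %u %u %llu %63s", &maj, &mnr, &blocks, name) != 4)
            continue;

        snprintf(path, sizeof(path), "/dev/%s", name);
        fd = open(path, O_RDONLY);
        if (fd < 0) {
            /* For lvm[a-i] on this system, no such node exists in /dev */
            fprintf(stderr, "Unable to open device \"%s\": %s\n",
                    path, strerror(errno));
            continue;
        }
        close(fd);
    }
    fclose(fp);
    return 0;
}

On this system the LVM1 logical volumes (major 58) are listed in
/proc/partitions, but only the /dev/lvm control node exists, so every scan
hits the ENOENT path for lvm[a-i].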
Comment 1 AJ Lewis 2005-07-14 13:01:57 EDT
Interesting...my RHEL3 system has the /dev/lvm* files symlinked to their actual
LVs.  I'll make sure I have the latest LVM1 tools and try again - which version
of the LVM1 RPM do you have?
Comment 2 AJ Lewis 2005-07-14 13:07:38 EDT
Never mind - I must have been messing around with some interaction between pool
and lvm1 earlier.
Comment 4 Thomas von Steiger 2005-07-15 02:07:16 EDT
OK, this is an RHAS 3 Update 4 Kickstart installation:

[root@hihhlx08 root]# rpm -qa | grep lvm
lvm-1.0.8-9

--snip from ks
clearpart --drives=cciss/c0d0 --all --initlabel
part /boot --fstype ext3 --size=200 --ondisk cciss/c0d0
part swap --size=2000 --ondisk cciss/c0d0
part swap --size=2000 --ondisk cciss/c0d0
part swap --size=2000 --ondisk cciss/c0d0
part / --fstype ext3 --size=2000 --ondisk cciss/c0d0
part pv.8 --size=100 --grow --ondisk cciss/c0d0
volgroup rhvg pv.8
logvol /home --fstype ext3 --name=homelv --vgname=rhvg --size=400
logvol /tmp --fstype ext3 --name=tmplv --vgname=rhvg --size=10000
logvol /usr --fstype ext3 --name=usrlv --vgname=rhvg --size=5000
logvol /usr/local --fstype ext3 --name=usrloclv --vgname=rhvg --size=200
logvol /opt --fstype ext3 --name=optlv --vgname=rhvg --size=400
logvol /var --fstype ext3 --name=varlv --vgname=rhvg --size=600
logvol /var/log --fstype ext3 --name=varloglv --vgname=rhvg --size=1000
logvol /var/icoserve --fstype ext3 --name=varicolv --vgname=rhvg --size=25000
--
[root@hihhlx08 root]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p2      2015920    101888   1811624   6% /
/dev/cciss/c0d0p1       197546     11518    175829   7% /boot
/dev/rhvg/homelv        396672      8247    367945   3% /home
/dev/rhvg/optlv         396672     45009    331183  12% /opt
none                   2015512         0   2015512   0% /dev/shm
/dev/rhvg/tmplv       10079084    260404   9306680   3% /tmp
/dev/rhvg/usrlv        5039616   1293352   3490264  28% /usr
/dev/rhvg/usrloclv      198337      7747    180350   5% /usr/local
/dev/rhvg/varlv         604736     47968    526048   9% /var
/dev/rhvg/varicolv    25197676     33780  23883896   1% /var/icoserve
/dev/rhvg/varloglv     1007896    109960    846736  12% /var/log
/dev/pool/aim_icoserve
                       3538544    812532   2726012  23% /opt/icoserve
/dev/pool/aim_datalta
                     104194176        76 104194100   1% /aim/datalta
/dev/pool/aim_data01 2096272576       792 2096271784   1% /aim/data01
[root@hihhlx08 root]# cat /proc/partitions | grep lvm
  58     0     409600 lvma 0 0 0 0 0 0 0 0 0 0 0
  58     1     409600 lvmb 0 0 0 0 0 0 0 0 0 0 0
  58     3   10240000 lvmd 0 0 0 0 0 0 0 0 0 0 0
  58     4    5120000 lvme 0 0 0 0 0 0 0 0 0 0 0
  58     5     204800 lvmf 0 0 0 0 0 0 0 0 0 0 0
  58     6     614400 lvmg 0 0 0 0 0 0 0 0 0 0 0
  58     7   25600000 lvmh 0 0 0 0 0 0 0 0 0 0 0
  58     8    1024000 lvmi 0 0 0 0 0 0 0 0 0 0 0
[root@hihhlx08 root]# ls -la /dev/lvm*
crw-r-----    1 root     root     109,   0 Jul  5 16:53 /dev/lvm
Comment 6 AJ Lewis 2005-07-15 13:10:32 EDT
Created attachment 116810 [details]
Ignore pool devices in /proc/partitions

This makes pool completely ignore all lvm devices it sees in /proc/partitions -
you will not get a listing for lvm devices in pool_tool -s, and you will not be
able to create/activate pools on lvm devices after this is applied.
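
For reference, a minimal sketch of the kind of filter this describes,
assuming the fix simply skips any /proc/partitions entry whose name starts
with "lvm" before pool opens or labels it (hypothetical helper name, not the
actual 482-byte patch):

#include <string.h>

/* Hypothetical helper: LVM1 logical volumes appear as lvma, lvmb, ... in
 * /proc/partitions even when no /dev/lvm[a-z] node exists, so drop them
 * from the scan entirely. */
static int skip_proc_partitions_entry(const char *name)
{
    return strncmp(name, "lvm", 3) == 0;
}

Applied before the open/labeling step, a check like this matches the
behavior described: lvm devices no longer appear in pool_tool -s output and
pools cannot be created or activated on them.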
Comment 7 AJ Lewis 2005-07-15 16:34:53 EDT
Patch from comment #6 checked into GFS RHEL3 branch.
Comment 13 Red Hat Bugzilla 2005-09-30 10:57:23 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2005-723.html
