Bug 1303940

Summary: [HC] vdsm should be able to consume a gluster storage domain, even if an underlying thin LV with lvm-cache is defined, or allow disabling multipath.
Product: [oVirt] vdsm
Reporter: Sahina Bose <sabose>
Component: Gluster
Assignee: Sahina Bose <sabose>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Priority: urgent
Docs Contact:
Version: 4.17.18
CC: agk, amureini, bazulay, bmarzins, bmcclain, bugs, dfediuck, fdeutsch, gklein, knarra, lsurette, nsoffer, pcuzner, rcyriac, sabose, sasundar, srevivo, teigland, tnisan, ybronhei, ycui, ykaul, ylavi, zkabelac
Target Milestone: ovirt-4.1.1
Keywords: TestOnly
Target Release: 4.19.8
Flags: rule-engine: ovirt-4.1+
       bmcclain: blocker+
       ylavi: planning_ack+
       rule-engine: devel_ack+
       sasundar: testing_ack+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-04-21 09:29:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1374545
Bug Blocks: 1277939
Attachments:
  vdsm.log (no flags)
  required information (no flags)

Description Sahina Bose 2016-02-02 13:15:49 UTC
Created attachment 1120424 [details]
vdsm.log

Description of problem:

On a RHEL 7.2 server, create a dm-cache backed thin-pool and create thin LVs.
Mount the LVs and start the vdsm service.

All vdsm commands fail in this setup with "Recovering from crash or Initializing".

For instance

# vdsClient -s 0 list
Recovering from crash or Initializing

# vdsClient -s 0 getVdsCaps
Recovering from crash or Initializing

Version-Release number of selected component (if applicable):
lvm2-2.02.130-5.el7.x86_64
vdsm-4.17.19-0.el7ev.noarch

How reproducible:
Tried twice, and faced the issue both times

Steps to Reproduce:
I had 2 disks - /dev/sdb and /dev/sdc:

pvcreate --dataalignment 256K /dev/sdb
pvcreate --dataalignment 256K /dev/sdc    # to be used for the cache
vgcreate RHS_vg1 /dev/sdb /dev/sdc
lvcreate --thinpool RHS_vg1/vg1_pool --size 1.6T --chunksize 256K --poolmetadatasize 16G
lvchange --zero n RHS_vg1/vg1_pool
lvcreate -L 200G -n rhs_cache RHS_vg1 /dev/sdc
lvcreate -L 200M -n rhs_cache_meta RHS_vg1 /dev/sdc
lvconvert --type cache-pool --cachemode writethrough --poolmetadata RHS_vg1/rhs_cache_meta RHS_vg1/rhs_cache
lvconvert --type cache --cachepool RHS_vg1/rhs_cache RHS_vg1/vg1_pool

lvcreate --thin --name engine --virtualsize 100G RHS_vg1/vg1_pool
mkfs.xfs -f -i size=512 -n size=8192 /dev/RHS_vg1/engine
lvcreate --thin --name data --virtualsize 1.5T RHS_vg1/vg1_pool
mkfs.xfs -f -i size=512 -n size=8192 /dev/RHS_vg1/data

mkdir -p /rhgs/engine  /rhgs/data
mount /dev/RHS_vg1/engine /rhgs/engine
mount /dev/RHS_vg1/data /rhgs/data
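
For reference, a quick way to confirm that the cache pool is actually attached to the thin-pool data LV in a setup like this (assuming the VG/LV names above) is:

lvs -a -o name,vg_name,attr,size,pool_lv,segtype RHS_vg1    # the vg1_pool_tdata sub-LV should report segtype "cache"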


Additional info:
vdsm works as expected when the thin pool is created without an lvm cache and the LVs are recreated (same versions of all components)

Comment 1 Sahina Bose 2016-02-03 06:04:09 UTC
Lowering severity, as the same setup worked for Paul C. No vdsm errors.

Script used:
lvcreate -n cachedisk -L 180g rhgs_vg1 /dev/sdc

lvcreate -n metadisk -L 5g rhgs_vg1 /dev/sdc

lvconvert --type cache-pool --poolmetadata /dev/rhgs_vg1/metadisk --poolmetadataspare n  --cachemode $MODE /dev/rhgs_vg1/cachedisk -y

lvconvert --type cache /dev/rhgs_vg1/thinpool --cachepool /dev/rhgs_vg1/cachedisk

Comment 2 Tal Nisan 2016-02-03 09:01:59 UTC
Sahina, since the same setup worked for someone else, it might be an environmental issue and thus not a bug?

Comment 3 Sahina Bose 2016-02-03 10:48:49 UTC
(In reply to Tal Nisan from comment #2)
> Sahina, since the same setup worked for someone else it might be an
> environmental issue and thus not a bug?

Quite possible. But can we leave this open till I try to reproduce again with a clean install?

Comment 4 Red Hat Bugzilla Rules Engine 2016-02-03 10:52:27 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.

Comment 5 Tal Nisan 2016-02-03 11:14:40 UTC
Sure

Comment 6 Yaniv Kaul 2016-02-04 08:23:20 UTC
Restoring needinfo on the reporter - still need to reproduce this.

Comment 7 Sahina Bose 2016-02-08 11:34:20 UTC
I reproduced this on a clean install of RHEL 7.2

- subscribe to the rhel-7, rhev and gluster channels, then yum update -y && yum install vdsm gluster
- create thin-LV with cache and mount directory as per comment 1
- restart vdsm
- execute command vdsClient -s 0 list

vdsm has updated the multipath configuration; multipath devices now appear on top of the local disks, as seen below.

[root@rhsdev9 ~]# vdsClient -s 0 list
Recovering from crash or Initializing
[root@rhsdev9 ~]# lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0 278.9G  0 disk  
├─sda1                                8:1    0   500M  0 part  /boot
└─sda2                                8:2    0 278.4G  0 part  
  ├─rhel_rhsdev9-root               253:0    0    50G  0 lvm   /
  ├─rhel_rhsdev9-swap               253:1    0     4G  0 lvm   [SWAP]
  └─rhel_rhsdev9-home               253:4    0 224.3G  0 lvm   /home
sdb                                   8:16   0   1.8T  0 disk  
└─36c81f660c9bcaa001d65c9720fe36369 253:11   0   1.8T  0 mpath 
  └─RHS_vg1-vg1_pool_tdata_corig    253:6    0   1.6T  0 lvm   
    └─RHS_vg1-vg1_pool_tdata        253:7    0   1.6T  0 lvm   
      └─RHS_vg1-vg1_pool-tpool      253:8    0   1.6T  0 lvm   
        ├─RHS_vg1-vg1_pool          253:9    0   1.6T  0 lvm   
        └─RHS_vg1-engine            253:10   0   100G  0 lvm   /rhgs/engine
sdc                                   8:32   0   1.8T  0 disk  
├─RHS_vg1-rhs_cache_cdata           253:3    0   200G  0 lvm   
│ └─RHS_vg1-vg1_pool_tdata          253:7    0   1.6T  0 lvm   
│   └─RHS_vg1-vg1_pool-tpool        253:8    0   1.6T  0 lvm   
│     ├─RHS_vg1-vg1_pool            253:9    0   1.6T  0 lvm   
│     └─RHS_vg1-engine              253:10   0   100G  0 lvm   /rhgs/engine
├─RHS_vg1-rhs_cache_cmeta           253:5    0     2G  0 lvm   
│ └─RHS_vg1-vg1_pool_tdata          253:7    0   1.6T  0 lvm   
│   └─RHS_vg1-vg1_pool-tpool        253:8    0   1.6T  0 lvm   
│     ├─RHS_vg1-vg1_pool            253:9    0   1.6T  0 lvm   
│     └─RHS_vg1-engine              253:10   0   100G  0 lvm   /rhgs/engine
└─36c81f660c9bcaa001d65c9be146d3c97 253:12   0   1.8T  0 mpath 
  ├─RHS_vg1-vg1_pool_tmeta          253:2    0  15.8G  0 lvm   
  │ └─RHS_vg1-vg1_pool-tpool        253:8    0   1.6T  0 lvm   
  │   ├─RHS_vg1-vg1_pool            253:9    0   1.6T  0 lvm   
  │   └─RHS_vg1-engine              253:10   0   100G  0 lvm   /rhgs/engine
  ├─RHS_vg1-rhs_cache_cdata         253:3    0   200G  0 lvm   
  │ └─RHS_vg1-vg1_pool_tdata        253:7    0   1.6T  0 lvm   
  │   └─RHS_vg1-vg1_pool-tpool      253:8    0   1.6T  0 lvm   
  │     ├─RHS_vg1-vg1_pool          253:9    0   1.6T  0 lvm   
  │     └─RHS_vg1-engine            253:10   0   100G  0 lvm   /rhgs/engine
  └─RHS_vg1-rhs_cache_cmeta         253:5    0     2G  0 lvm   
    └─RHS_vg1-vg1_pool_tdata        253:7    0   1.6T  0 lvm   
      └─RHS_vg1-vg1_pool-tpool      253:8    0   1.6T  0 lvm   
        ├─RHS_vg1-vg1_pool          253:9    0   1.6T  0 lvm   
        └─RHS_vg1-engine            253:10   0   100G  0 lvm   /rhgs/engine


Retried the same flow, adding the following to /etc/multipath.conf:
blacklist {
        devnode "*"
}

This time, vdsm commands execute without error:

[root@rhsdev9 ~]# vdsClient -s 0 list
[root@rhsdev9 ~]# lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                          8:0    0 278.9G  0 disk 
├─sda1                       8:1    0   500M  0 part /boot
└─sda2                       8:2    0 278.4G  0 part 
  ├─rhel_rhsdev9-root      253:0    0    50G  0 lvm  /
  ├─rhel_rhsdev9-swap      253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rhsdev9-home      253:6    0 224.3G  0 lvm  /home
sdb                          8:16   0   1.8T  0 disk 
└─RHS_vg1-vg1_pool_tdata   253:3    0   1.6T  0 lvm  
  └─RHS_vg1-vg1_pool-tpool 253:4    0   1.6T  0 lvm  
    ├─RHS_vg1-vg1_pool     253:5    0   1.6T  0 lvm  
    └─RHS_vg1-engine       253:7    0   100G  0 lvm  /rhgs/engine
sdc                          8:32   0   1.8T  0 disk 
└─RHS_vg1-vg1_pool_tmeta   253:2    0  15.8G  0 lvm  
  └─RHS_vg1-vg1_pool-tpool 253:4    0   1.6T  0 lvm  
    ├─RHS_vg1-vg1_pool     253:5    0   1.6T  0 lvm  
    └─RHS_vg1-engine       253:7    0   100G  0 lvm  /rhgs/engine



Should multipath be disabled for this setup to work?
Paul, did you disable multipath updates in your setup?

Comment 8 Yaniv Kaul 2016-02-08 11:58:11 UTC
I'd agree that in the specific case of HC, it makes sense to disable it.
But in the general case, we need to selectively disable (or enable) multipath - we do not (yet) support dm-cache (nor do we configure it, etc.).

Comment 9 Itamar Heim 2016-02-09 16:44:07 UTC
(In reply to Yaniv Kaul from comment #8)
> I'd agree that in the specific case of HC, it makes sense to disable it.
> But in the general case, we need to selectively disable (or enable)
> multipath - we do not (yet) support dm-cache (nor do we configure it, etc.).

what happens if we have mixed gluster / shared (block) storage in the same host?

Comment 10 Yaniv Kaul 2016-02-09 16:45:32 UTC
(In reply to Itamar Heim from comment #9)
> (In reply to Yaniv Kaul from comment #8)
> > I'd agree that in the specific case of HC, it makes sense to disable it.
> > But in the general case, we need to selectively disable (or enable)
> > multipath - we do not (yet) support dm-cache (nor do we configure it, etc.).
> 
> what happens if we have mixed gluster / shared (block) storage in same host?

This is exactly what 'selectively' means - you need to configure multipath properly for some devices and blacklist others.
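
As a rough illustration of such a selective setup, a host mixing local Gluster bricks with shared block storage could blacklist just the local disks by WWID in /etc/multipath.conf (using the WWIDs from the lsblk output in comment 7 purely as an example):

blacklist {
        # local disks backing the Gluster bricks (WWIDs from comment 7)
        wwid "36c81f660c9bcaa001d65c9720fe36369"
        wwid "36c81f660c9bcaa001d65c9be146d3c97"
}

After editing the file, the already-created maps would typically be flushed with "multipath -F" and multipathd reloaded ("systemctl reload multipathd") for the blacklist to take effect.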

Comment 12 Sahina Bose 2016-03-22 08:14:14 UTC
Since I was able to reproduce this again and isolate the issue, the needinfo was not required.
Paul, could you confirm if you had multipathd disabled when you tried this setup?

Comment 13 Nir Soffer 2016-03-30 11:09:39 UTC
The attached vdsm log is too short - we see the pvs command start, but we don't see
it finish. We need a complete log showing how the storage refresh thread that runs
during startup is failing.

Comment 14 Sahina Bose 2016-04-05 07:18:01 UTC
Nir, that was the entire log that I had at the time. I don't have the setup now.
But Kasturi has run into the same issue, so I will ask her to provide the logs.

Comment 15 RamaKasturi 2016-04-05 07:51:48 UTC
I tried to set up a thin LV with lvmcache. I see that the command vdsClient -s 0 getStorageDomainsList hangs and hosted-engine --deploy gets stuck when the storage domain is provided as glusterfs.

output from lsblk:
=====================
[root@sulphur ~]# lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0 931.5G  0 disk  
├─sda1                                8:1    0   500M  0 part  /boot
└─sda2                                8:2    0   931G  0 part  
  ├─rhel_dhcp35--223-root           253:0    0    50G  0 lvm   /
  ├─rhel_dhcp35--223-swap           253:1    0     4G  0 lvm   [SWAP]
  └─rhel_dhcp35--223-home           253:2    0   877G  0 lvm   /home
sdb                                   8:16   0   3.7T  0 disk  
├─RHS_vg1-vg1_pool_tdata_corig      253:7    0   3.5T  0 lvm   
│ └─RHS_vg1-vg1_pool_tdata          253:8    0   3.5T  0 lvm   
│   └─RHS_vg1-vg1_pool-tpool        253:9    0   3.5T  0 lvm   
│     ├─RHS_vg1-vg1_pool            253:10   0   3.5T  0 lvm   
│     ├─RHS_vg1-data                253:11   0     3T  0 lvm   /rhgs/data
│     └─RHS_vg1-vmstore             253:12   0   400G  0 lvm   /rhgs/vmstore
└─3600508b1001c168403137fd6ec720d83 253:13   0   3.7T  0 mpath 
  ├─RHS_vg1-engine                  253:3    0    50G  0 lvm   /rhgs/engine
  ├─RHS_vg1-vg1_pool_tmeta          253:4    0  15.8G  0 lvm   
  │ └─RHS_vg1-vg1_pool-tpool        253:9    0   3.5T  0 lvm   
  │   ├─RHS_vg1-vg1_pool            253:10   0   3.5T  0 lvm   
  │   ├─RHS_vg1-data                253:11   0     3T  0 lvm   /rhgs/data
  │   └─RHS_vg1-vmstore             253:12   0   400G  0 lvm   /rhgs/vmstore
  └─RHS_vg1-vg1_pool_tdata_corig    253:7    0   3.5T  0 lvm   
    └─RHS_vg1-vg1_pool_tdata        253:8    0   3.5T  0 lvm   
      └─RHS_vg1-vg1_pool-tpool      253:9    0   3.5T  0 lvm   
        ├─RHS_vg1-vg1_pool          253:10   0   3.5T  0 lvm   
        ├─RHS_vg1-data              253:11   0     3T  0 lvm   /rhgs/data
        └─RHS_vg1-vmstore           253:12   0   400G  0 lvm   /rhgs/vmstore
sdc                                   8:32   0 223.6G  0 disk  
└─3600508b1001c6315db88677b1347bafc 253:14   0 223.6G  0 mpath 
  ├─RHS_vg1-rhs_cache_cdata         253:5    0   200G  0 lvm   
  │ └─RHS_vg1-vg1_pool_tdata        253:8    0   3.5T  0 lvm   
  │   └─RHS_vg1-vg1_pool-tpool      253:9    0   3.5T  0 lvm   
  │     ├─RHS_vg1-vg1_pool          253:10   0   3.5T  0 lvm   
  │     ├─RHS_vg1-data              253:11   0     3T  0 lvm   /rhgs/data
  │     └─RHS_vg1-vmstore           253:12   0   400G  0 lvm   /rhgs/vmstore
  └─RHS_vg1-rhs_cache_cmeta         253:6    0     2G  0 lvm   
    └─RHS_vg1-vg1_pool_tdata        253:8    0   3.5T  0 lvm   
      └─RHS_vg1-vg1_pool-tpool      253:9    0   3.5T  0 lvm   
        ├─RHS_vg1-vg1_pool          253:10   0   3.5T  0 lvm   
        ├─RHS_vg1-data              253:11   0     3T  0 lvm   /rhgs/data
        └─RHS_vg1-vmstore           253:12   0   400G  0 lvm   /rhgs/vmstore

'pvs' command hangs on the system.


vdsm.log can be found in the link http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/1303940/

Comment 16 Nir Soffer 2016-04-06 07:28:02 UTC
(In reply to RamaKasturi from comment #15)
> 'pvs' command hangs on the system.

The pvs command from the shell hangs?

This sounds like an lvm issue, not vdsm - vdsm assumes that lvm commands do not hang.

Zdenek, can you take a look at this?

Comment 17 Paul Cuzner 2016-04-06 09:06:18 UTC
(In reply to Sahina Bose from comment #12)
> Since I was able to reproduce again, and isolated the issue - the needinfo
> was not required.
> Paul, could you confirm if you had multipathd disabled when you tried this
> setup?

Yes, I had multipathd disabled before I would have encountered this. I noticed that I was getting mpath devices for local drives, so I disabled multipathd earlier in the process. With multipathd disabled, there were no issues with vdsm. Sorry for the delay... missed the email!
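
For completeness, fully disabling multipath on such a host, as described above, would typically amount to something like the following (a rough sketch, assuming systemd on RHEL 7):

systemctl stop multipathd
systemctl disable multipathd
multipath -F        # flush any multipath maps that were already created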

Comment 18 Zdenek Kabelac 2016-04-06 09:17:39 UTC
At this moment I would not recommend using a thin-pool with caching, unless you are willing to live with or work around the limitations associated with this use case.

Primarily, a thin-pool using a cached data LV cannot be resized (as we do not yet support resizing a cached LV),
so whenever thin-pool data usage grows above the threshold, the pool will not be able to grow.

A hanging 'pvs' without -vvvv traces and the states of dmsetup table, dmsetup status and dmsetup info -c is unfortunately not resolvable by the lvm2 developers...

So please provide this mandatory info so we can see the possible reason for the hang...
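
(A rough way to collect all of that into one file, purely for convenience, running the potentially hanging "pvs -vvvv" last:)

{ dmsetup info -c; dmsetup table; dmsetup status; pvs -vvvv; } > /tmp/lvm-hang-debug.txt 2>&1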

Comment 19 Nir Soffer 2016-04-06 09:47:32 UTC
(In reply to Zdenek Kabelac from comment #18)
> Hanging  'pvs'  without showing/providing -vvvv traces and states of 
> dmsetup table, dmsetup status, dmsetup info -c    is unfortunately not
> resolvable by lvm2 developers....

Thanks Zdenek. RamaKasturi, can you provide the info?

Comment 20 RamaKasturi 2016-04-07 05:38:18 UTC
(In reply to Nir Soffer from comment #19)
> (In reply to Zdenek Kabelac from comment #18)
> > Hanging  'pvs'  without showing/providing -vvvv traces and states of 
> > dmsetup table, dmsetup status, dmsetup info -c    is unfortunately not
> > resolvable by lvm2 developers....
> 
> Thanks Zdenek, RamaKasturi, can you provide the info?

I see that when I run the 'pvs' command, the following output gets displayed and then the command hangs.

WARNING: duplicate PV kQ37t6V3DkDZhmXe1ijOafE6Lk8jARRF is being used from both devices /dev/mapper/3600508b1001cc2b61a4c189537531fac and /dev/sdc
  Found duplicate PV kQ37t6V3DkDZhmXe1ijOafE6Lk8jARRF: using /dev/mapper/3600508b1001cc2b61a4c189537531fac not /dev/sdc
  Using duplicate PV /dev/mapper/3600508b1001cc2b61a4c189537531fac from subsystem DM, ignoring /dev/sdc
  WARNING: duplicate PV z1EsUDFqGdXhQpuYwKs2ubStCJRnlhOP is being used from both devices /dev/mapper/3600508b1001c33b4f84cafef54e24026 and /dev/sdb

output from dmsetup table:
============================

[root@tettnang ~]# dmsetup table
RHS_vg1-data: 0 5368709120 thin 253:10 1
RHS_vg1-engine: 0 104857600 linear 253:14 2048
RHS_vg1-rhs_cache_cdata: 0 419430400 linear 253:15 512
rhel_tettnang-home: 0 1258291200 linear 8:2 8390656
RHS_vg1-rhs_cache_cmeta: 0 4194304 linear 253:15 419430912
RHS_vg1-vg1_pool-tpool: 0 7516192768 thin-pool 253:5 253:9 2048 0 1 skip_block_zeroing 
RHS_vg1-vg1_pool_tdata: 0 7516192768 cache 253:7 253:6 253:8 128 1 writethrough smq 0
RHS_vg1-vmstore: 0 2147483648 thin 253:10 2
RHS_vg1-vg1_pool_tmeta: 0 33161216 linear 253:14 7654606848
RHS_vg1-vg1_pool: 0 7516192768 linear 253:10 0
rhel_tettnang-swap: 0 8388608 linear 8:2 2048
rhel_tettnang-root: 0 104857600 linear 8:2 1847445504
3600508b1001cc2b61a4c189537531fac: 0 468792000 multipath 0 0 1 1 service-time 0 1 2 8:32 1 1 
3600508b1001c33b4f84cafef54e24026: 0 7813834160 multipath 0 0 1 1 service-time 0 1 2 8:16 1 1 
RHS_vg1-vg1_pool_tdata_corig: 0 7516192768 linear 253:14 138414080
rhel_tettnang-engine: 0 104857600 linear 8:2 1266681856

dmsetup status:
================

[root@tettnang ~]# dmsetup status
RHS_vg1-data: 0 5368709120 thin 2691072 5368709119
RHS_vg1-engine: 0 104857600 linear 
RHS_vg1-rhs_cache_cdata: 0 419430400 linear 
rhel_tettnang-home: 0 1258291200 linear 
RHS_vg1-rhs_cache_cmeta: 0 4194304 linear 
RHS_vg1-vg1_pool-tpool: 0 7516192768 thin-pool 2 504/4145152 1860/3670016 - rw discard_passdown queue_if_no_space - 

After it prints the above output, the command hangs.

output from dmsetup info -c:
============================

[root@tettnang ~]# dmsetup info -c
Name                              Maj Min Stat Open Targ Event  UUID                                                                      
RHS_vg1-data                      253  12 L-sw    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnG1WOWiQy1ITCblu13iLFde2T28QQKRqn      
RHS_vg1-engine                    253   4 L--w    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOn500miZgR8AL0BgEvpSRZZDhPa1jXwj7A      
RHS_vg1-rhs_cache_cdata           253   6 LIsw    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnxUV6fk53tXxQ80a13knuKo4oMeIesqe7-cdata
rhel_tettnang-home                253   2 L--w    1    1      0 LVM-6auugx5Em2TY1b6eXZE9nHqYHLSC5MnQOEAIctV7iwkHdzd8ueUNDOFBLTCeLTKE      
RHS_vg1-rhs_cache_cmeta           253   7 LIsw    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOndod3wVBv5kynlR5Q33NMIUZpwqyJ3DXp-cmeta
RHS_vg1-vg1_pool-tpool            253  10 L--w    3    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnkw0DBWOTgRiqFTDXH2KeLmAv7374huKx-tpool
RHS_vg1-vg1_pool_tdata            253   9 L--w    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnFEsjFKHNRx8ZXNZK3esgY8SddKF050lk-tdata
RHS_vg1-vmstore                   253  13 L--w    2    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnWpPdsUIw8CXQ3Hc7jv0HMWq0drBJQnHb      
RHS_vg1-vg1_pool_tmeta            253   5 L--w    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnnvORLrHdw8IWTzFIaJRjgP4XNrw7Zlca-tmeta
RHS_vg1-vg1_pool                  253  11 L--w    0    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnkw0DBWOTgRiqFTDXH2KeLmAv7374huKx-pool 
rhel_tettnang-swap                253   1 L--w    2    1      0 LVM-6auugx5Em2TY1b6eXZE9nHqYHLSC5MnQkSe9R7HdzxapHnjKFeNpL8UkZADn4cV2      
rhel_tettnang-root                253   0 L--w    1    1      0 LVM-6auugx5Em2TY1b6eXZE9nHqYHLSC5MnQ8LBd3uLeFI5Xfw11NeT1TFELuUorMv6D      
3600508b1001cc2b61a4c189537531fac 253  15 L--w    3    1      0 mpath-3600508b1001cc2b61a4c189537531fac                                   
3600508b1001c33b4f84cafef54e24026 253  14 L--w    5    1      0 mpath-3600508b1001c33b4f84cafef54e24026                                   
RHS_vg1-vg1_pool_tdata_corig      253   8 L-sw    1    1      0 LVM-RsF9S2ztGpvXya5YZJcGZfHdkmpbKCOnVJg8lJugnR5iUKNPsOY53s05B0DWgJLx-real 
rhel_tettnang-engine              253   3 L--w    0    1      0 LVM-6auugx5Em2TY1b6eXZE9nHqYHLSC5MnQcAxq0wsuZJiE1Qz7uX7c6wt2jcDB3ekI  

Hope I have provided all the information that is required. Please let me know in case you need more info.

Comment 21 Zdenek Kabelac 2016-04-09 13:00:54 UTC
This looks like misconfigured multipath - recent upstream changes in lvm2 focus on early detection of this scenario.


The warning about duplicates is a serious one.


Please attach output of:

'pvs -vvvv'

So we can see more closely where it has frozen.
But you should also look at your multipath configuration - lvm should not see /dev/sdc and the multipath device on top of it at the same time; something is misconfigured on this system.

Comment 23 RamaKasturi 2016-04-11 08:59:36 UTC
setting the flags back.

Comment 24 RamaKasturi 2016-04-11 09:26:00 UTC
Hi Zdenek Kabelac,

  The output in comment 22 was taken from a system where pvs was not hanging but was showing duplicate PV entries. Here is the output of pvs.

[root@zod ~]# pvs
  WARNING: duplicate PV 7HfJMpapMT3P52OFKuWEZsuHPendcWvG is being used from both devices /dev/mapper/3600508b1001c8c800e29ed75e437f364 and /dev/sdc
  Found duplicate PV 7HfJMpapMT3P52OFKuWEZsuHPendcWvG: using /dev/mapper/3600508b1001c8c800e29ed75e437f364 not /dev/sdc
  Using duplicate PV /dev/mapper/3600508b1001c8c800e29ed75e437f364 from subsystem DM, ignoring /dev/sdc
  WARNING: duplicate PV 0RQl2mlIkOQHJP8Zgi1ShkwPw1u2qRHN is being used from both devices /dev/mapper/3600508b1001c7cf75875d4c2d5b9de9c and /dev/sdb
  WARNING: duplicate PV 0RQl2mlIkOQHJP8Zgi1ShkwPw1u2qRHN is being used from both devices /dev/mapper/3600508b1001c7cf75875d4c2d5b9de9c and /dev/sdb
  WARNING: duplicate PV 7HfJMpapMT3P52OFKuWEZsuHPendcWvG is being used from both devices /dev/mapper/3600508b1001c8c800e29ed75e437f364 and /dev/sdc
  PV                                            VG              Fmt  Attr PSize   PFree  
  /dev/mapper/3600508b1001c7cf75875d4c2d5b9de9c RHS_vg1         lvm2 a--    3.64t  59.92g
  /dev/mapper/3600508b1001c8c800e29ed75e437f364 RHS_vg1         lvm2 a--  223.54g  21.54g
  /dev/sda2                                     rhel_dhcp35-135 lvm2 a--  930.99g 226.99g

Comment 25 Zdenek Kabelac 2016-04-12 09:36:24 UTC
Please attach a trace when the system is blocked.

Also add:

dmsetup info -c
dmsetup table
dmsetup status


The provided output only shows a misconfigured mpath system.

Comment 26 RamaKasturi 2016-04-26 11:32:51 UTC
Hi Zdenek,

   I see that pvs is again hanging on the system, and I have captured the output of all the commands you listed in comment 25. Hope this helps.

output of dmsetup info -c:
=========================

[root@rhsqa1 ~]# dmsetup info -c
Name                              Maj Min Stat Open Targ Event  UUID                                                                      
rhel_rhsqa1-swap                  253   1 L--w    2    1      0 LVM-S2kHiFuto442LH3IHf3C1e5jv1Xbln5eZ8ytjqNoyiCFcTpy1A3c2ADCp5HWZydL      
rhel_rhsqa1-root                  253   0 L--w    1    1      0 LVM-S2kHiFuto442LH3IHf3C1e5jv1Xbln5eVRupsQxzbDusBCrYu6A0LDgl9HX7Yh8T      
RHS_vg1-rhs_cache_cdata           253   5 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTLL2pafDrp5t8J1qKI0geCrAix1jAoNi1-cdata
RHS_vg1-rhs_cache_cmeta           253   7 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTZpzXEYpdQkpDoMoM529n2lHeqZOnFeXg-cmeta
RHS_vg1-vg1_pool-tpool            253  10 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTzbJt8gbiBqa7Mk3XWMJ8bzEscPmFixYA-tpool
RHS_vg1-vg1_pool_tdata            253   9 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT7gxcUkIAwptJAguoptvqrkBTDurBW1VJ-tdata
RHS_vg1-vg1_pool_tmeta            253   4 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTRtBmPGvHxMbocM6yQ4AEA3RN5wdJGGpj-tmeta
RHS_vg1-vg1_pool                  253  11 L--w    0    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTzbJt8gbiBqa7Mk3XWMJ8bzEscPmFixYA-pool 
RHS_vg1-engine_lv                 253   3 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTBcT6ziOv3lRifdQPDSZFN6fJFupd3PlL      
36005076058045f581eb079b12107eb77 253   6 L--w    3    1      0 mpath-36005076058045f581eb079b12107eb77                                   
rhel_rhsqa1-home                  253   2 L--w    1    1      0 LVM-S2kHiFuto442LH3IHf3C1e5jv1Xbln5eqx6FCCM4LDc1fZHhPhvfxCdmlwppo82n      
RHS_vg1-vg1_pool_tdata_corig      253   8 L--w    1    1      0 LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT75GtT9zmDMKxMjcL7zo3Yz50LTelOOZ8-real 

output from dmsetup table
======================================================================
[root@rhsqa1 ~]# dmsetup table
rhel_rhsqa1-swap: 0 65929216 linear 8:2 2048
rhel_rhsqa1-root: 0 104857600 linear 8:2 1846429696
RHS_vg1-rhs_cache_cdata: 0 838860800 linear 8:32 512
RHS_vg1-rhs_cache_cmeta: 0 4194304 linear 8:32 838861312
RHS_vg1-vg1_pool-tpool: 0 1468006400 thin-pool 253:4 253:9 512 0 1 skip_block_zeroing 
RHS_vg1-vg1_pool_tdata: 0 1468006400 cache 253:7 253:5 253:8 128 1 writethrough smq 0
RHS_vg1-vg1_pool_tmeta: 0 33161216 linear 253:6 1711278080
RHS_vg1-vg1_pool: 0 1468006400 linear 253:10 0
RHS_vg1-engine_lv: 0 209715200 linear 253:6 2048
36005076058045f581eb079b12107eb77: 0 1952448512 multipath 0 0 1 1 service-time 0 1 2 8:16 1 1 
rhel_rhsqa1-home: 0 1780498432 linear 8:2 65931264
RHS_vg1-vg1_pool_tdata_corig: 0 1468006400 linear 253:6 243271680

dmsetup status:
=====================================
[root@rhsqa1 ~]# dmsetup status
rhel_rhsqa1-swap: 0 65929216 linear 
rhel_rhsqa1-root: 0 104857600 linear 
RHS_vg1-rhs_cache_cdata: 0 838860800 linear 
RHS_vg1-rhs_cache_cmeta: 0 4194304 linear 
RHS_vg1-vg1_pool-tpool: 0 1468006400 thin-pool 2 468/4145152 3516/2867200 - rw discard_passdown queue_if_no_space - 
RHS_vg1-vg1_pool_tdata: 0 1468006400 cache 8 19507/524288 128 164/6553600 580 11 253 20528 0 89 0 1 writethrough 2 migration_threshold 2048 smq 0 rw - 
RHS_vg1-vg1_pool_tmeta: 0 33161216 linear 
RHS_vg1-vg1_pool: 0 1468006400 linear 
RHS_vg1-engine_lv: 0 209715200 linear 
36005076058045f581eb079b12107eb77: 0 1952448512 multipath 2 0 0 0 1 1 A 0 1 2 8:16 A 0 0 1 
rhel_rhsqa1-home: 0 1780498432 linear 
RHS_vg1-vg1_pool_tdata_corig: 0 1468006400 linear

output of pvs -vvvv: 
==========================
#lvmcmdline.c:1581         DEGRADED MODE. Incomplete RAID LVs will be processed.
#libdm-config.c:1013       Setting activation/monitoring to 1
#lvmcmdline.c:1587         Processing: pvs -vvvv
#lvmcmdline.c:1588         system ID: 
#lvmcmdline.c:1591         O_DIRECT will be used
#libdm-config.c:949       Setting global/locking_type to 1
#libdm-config.c:1013       Setting global/wait_for_locks to 1
#locking/locking.c:129       File-based locking selected.
#libdm-config.c:1013       Setting global/prioritise_write_locks to 1
#libdm-config.c:918       Setting global/locking_dir to /run/lock/lvm
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm to system_u:object_r:lvm_lock_t:s0.
#libdm-common.c:929         Resetting SELinux context to default value.
#libdm-config.c:1013       Setting global/use_lvmlockd to 0
#libdm-config.c:1026       report/aligned not found in config: defaulting to 1
#libdm-config.c:1026       report/buffered not found in config: defaulting to 1
#libdm-config.c:1026       report/headings not found in config: defaulting to 1
#libdm-config.c:927       report/separator not found in config: defaulting to  
#libdm-config.c:1026       report/prefixes not found in config: defaulting to 0
#libdm-config.c:1026       report/quoted not found in config: defaulting to 1
#libdm-config.c:1026       report/colums_as_rows not found in config: defaulting to 0
#libdm-config.c:927       report/pvs_sort not found in config: defaulting to pv_name
#libdm-config.c:927       report/pvs_cols_verbose not found in config: defaulting to pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid
#toollib.c:2553     Using physical volume(s) on command line.
#cache/lvmetad.c:742         Asking lvmetad for complete list of known PVs


I see the above output and then the system hangs.

Comment 27 RamaKasturi 2016-04-26 11:35:05 UTC
lsblk output:
==================
[root@rhsqa1 ~]# lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0   931G  0 disk  
├─sda1                                8:1    0   500M  0 part  /boot
└─sda2                                8:2    0 930.5G  0 part  
  ├─rhel_rhsqa1-root                253:0    0    50G  0 lvm   /
  ├─rhel_rhsqa1-swap                253:1    0  31.4G  0 lvm   [SWAP]
  └─rhel_rhsqa1-home                253:2    0   849G  0 lvm   /home
sdb                                   8:16   0   931G  0 disk  
└─36005076058045f581eb079b12107eb77 253:6    0   931G  0 mpath 
  ├─RHS_vg1-engine_lv               253:3    0   100G  0 lvm   /rhgs/engine
  ├─RHS_vg1-vg1_pool_tmeta          253:4    0  15.8G  0 lvm   
  │ └─RHS_vg1-vg1_pool-tpool        253:10   0   700G  0 lvm   
  │   └─RHS_vg1-vg1_pool            253:11   0   700G  0 lvm   
  └─RHS_vg1-vg1_pool_tdata_corig    253:8    0   700G  0 lvm   
    └─RHS_vg1-vg1_pool_tdata        253:9    0   700G  0 lvm   
      └─RHS_vg1-vg1_pool-tpool      253:10   0   700G  0 lvm   
        └─RHS_vg1-vg1_pool          253:11   0   700G  0 lvm   
sdc                                   8:32   0 446.6G  0 disk  
├─RHS_vg1-rhs_cache_cdata           253:5    0   400G  0 lvm   
│ └─RHS_vg1-vg1_pool_tdata          253:9    0   700G  0 lvm   
│   └─RHS_vg1-vg1_pool-tpool        253:10   0   700G  0 lvm   
│     └─RHS_vg1-vg1_pool            253:11   0   700G  0 lvm   
└─RHS_vg1-rhs_cache_cmeta           253:7    0     2G  0 lvm   
  └─RHS_vg1-vg1_pool_tdata          253:9    0   700G  0 lvm   
    └─RHS_vg1-vg1_pool-tpool        253:10   0   700G  0 lvm   
      └─RHS_vg1-vg1_pool            253:11   0   700G  0 lvm   
loop1                                 7:1    0     2G  0 loop  /rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpSIAGQl

Comment 28 Zdenek Kabelac 2016-04-26 13:37:56 UTC
So it seems a more interactive session would be needed here (next time try to find me on #lvm), as we would preferably like to grab a gdb backtrace of the blocked processes (i.e. lvmetad).

But there are more points here -

vdsm appears to be 'RHEV' clustering, so it usually runs on multiple machines.

In such a case, however, you may NOT use 'lvmetad' unless there is some extra tool maintaining a synchronous state across the nodes.

Also, a thin-pool must always be exclusively active on a single node.
You can't operate a thin-pool/thin-volumes on multiple nodes at the same time.
Thin-volumes from a single thin-pool must always run on exactly one node.

Also, when you see 'pvs' blocked - isn't there any kernel error trace?

Also, 'mpath' for the pool combined with a cache on a non-mpath device looks like a suspicious setup here?

So more and more questions...
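
A rough sketch of how that information could be gathered (the exact commands are only a suggestion, assuming gdb is installed and the kernel hung-task detector is enabled):

# backtrace of all lvmetad threads while pvs is blocked
gdb -p $(pidof lvmetad) -batch -ex "set pagination off" -ex "thread apply all bt"

# any kernel hung-task traces from around the time pvs blocks
dmesg | grep -iA 20 "blocked for more than"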

Comment 29 Zdenek Kabelac 2016-04-26 14:07:15 UTC
Hmm, so here we have the lvmetad 'bt all':

Thread 25 (Thread 0x7fedbcfbf700 (LWP 27959)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebf84d in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_found (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:1080
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1522
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a0f2c0) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 24 (Thread 0x7fedb7fff700 (LWP 4300)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fedb0004de0, vg=vg@entry=0x7fedb0004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc19c63d0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 23 (Thread 0x7fedb77fe700 (LWP 4559)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7feda8004de0, vg=vg@entry=0x7feda8004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc19c62a0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 22 (Thread 0x7fedb6ffd700 (LWP 5063)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a0fb50) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 21 (Thread 0x7fedb67fc700 (LWP 5187)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a0fea0) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 20 (Thread 0x7fedb5ffb700 (LWP 5358)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a101f0) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 19 (Thread 0x7fedb57fa700 (LWP 5472)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed98004de0, vg=vg@entry=0x7fed98004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a10540) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 18 (Thread 0x7fedb4ff9700 (LWP 5816)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed9c004de0, vg=vg@entry=0x7fed9c004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a10890) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 17 (Thread 0x7fed97fff700 (LWP 5907)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed90004de0, vg=vg@entry=0x7fed90004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a10be0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 16 (Thread 0x7fed977fe700 (LWP 6633)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed88004de0, vg=vg@entry=0x7fed88004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a10f30) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 15 (Thread 0x7fed96ffd700 (LWP 7012)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed8c004de0, vg=vg@entry=0x7fed8c004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a11280) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 14 (Thread 0x7fed967fc700 (LWP 8094)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed80004de0, vg=vg@entry=0x7fed80004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a115d0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 13 (Thread 0x7fed95ffb700 (LWP 8125)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed84004de0, vg=vg@entry=0x7fed84004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a11920) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 12 (Thread 0x7fed957fa700 (LWP 9585)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed78004de0, vg=vg@entry=0x7fed78004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a11c70) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 11 (Thread 0x7fed94ff9700 (LWP 14763)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed7c004de0, vg=vg@entry=0x7fed7c004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a11fc0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7fed77fff700 (LWP 25344)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed70004de0, vg=vg@entry=0x7fed70004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a12310) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7fed777fe700 (LWP 25621)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a12660) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7fed76ffd700 (LWP 31563)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed6c004de0, vg=vg@entry=0x7fed6c004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a129b0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7fed767fc700 (LWP 1720)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a12d00) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fed75ffb700 (LWP 2067)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a13050) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7fed757fa700 (LWP 24233)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed58004de0, vg=vg@entry=0x7fed58004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a133a0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7fed74ff9700 (LWP 26737)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed5c004de0, vg=vg@entry=0x7fed5c004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a136f0) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7fed57fff700 (LWP 6858)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfec0403 in lock_pvid_to_pvmeta (s=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  pv_list (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:425
#5  handler (s=..., h=..., r=...) at lvmetad-core.c:1546
#6  0x00007fedbfec1896 in client_thread (state=0x7fedc1a13a40) at daemon-server.c:457
#7  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#8  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7fed577fe700 (LWP 7668)):
#0  0x00007fedbf1e5f4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fedbf1e1d1d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007fedbf1e1c3a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fedbfebe0bd in lock_pvid_to_pvmeta (s=s@entry=0x7ffdb66b0f80) at lvmetad-core.c:187
#4  update_pv_status (s=s@entry=0x7ffdb66b0f80, cft=0x7fed48004de0, vg=vg@entry=0x7fed48004ea8, act=act@entry=1) at lvmetad-core.c:317
#5  0x00007fedbfec01e5 in vg_lookup (s=0x7ffdb66b0f80, r=...) at lvmetad-core.c:692
#6  handler (s=..., h=..., r=...) at lvmetad-core.c:1543
#7  0x00007fedbfec1896 in client_thread (state=0x7fedc1a13d90) at daemon-server.c:457
#8  0x00007fedbf1dfdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fedbef0d28d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7fedbfe83880 (LWP 18660)):
#0  0x00007fedbef049b3 in select () from /lib64/libc.so.6
#1  0x00007fedbfec1f66 in daemon_start (s=...) at daemon-server.c:629
#2  0x00007fedbfebda40 in main (argc=2, argv=0x7ffdb66b1238) at lvmetad-core.c:1672

Comment 30 Zdenek Kabelac 2016-04-26 14:34:19 UTC
pvs --config 'global{use_lvmetad=0}' -vvvv

  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
#lvmcmdline.c:1581         DEGRADED MODE. Incomplete RAID LVs will be processed.
#libdm-config.c:1013       Setting activation/monitoring to 1
#lvmcmdline.c:1587         Processing: pvs --config global{use_lvmetad=0} -vvvv
#lvmcmdline.c:1588         system ID: 
#lvmcmdline.c:1591         O_DIRECT will be used
#libdm-config.c:949       Setting global/locking_type to 1
#libdm-config.c:1013       Setting global/wait_for_locks to 1
#locking/locking.c:129       File-based locking selected.
#libdm-config.c:1013       Setting global/prioritise_write_locks to 1
#libdm-config.c:918       Setting global/locking_dir to /run/lock/lvm
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm to system_u:object_r:lvm_lock_t:s0.
#libdm-common.c:929         Resetting SELinux context to default value.
#libdm-config.c:1013       Setting global/use_lvmlockd to 0
#libdm-config.c:1026       report/aligned not found in config: defaulting to 1
#libdm-config.c:1026       report/buffered not found in config: defaulting to 1
#libdm-config.c:1026       report/headings not found in config: defaulting to 1
#libdm-config.c:927       report/separator not found in config: defaulting to  
#libdm-config.c:1026       report/prefixes not found in config: defaulting to 0
#libdm-config.c:1026       report/quoted not found in config: defaulting to 1
#libdm-config.c:1026       report/colums_as_rows not found in config: defaulting to 0
#libdm-config.c:927       report/pvs_sort not found in config: defaulting to pv_name
#libdm-config.c:927       report/pvs_cols_verbose not found in config: defaulting to pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid
#misc/lvm-flock.c:200       Locking /run/lock/lvm/P_global RB
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm/P_global to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_global:aux WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_global:aux
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_global RB
#libdm-common.c:929         Resetting SELinux context to default value.
#toollib.c:2553     Using physical volume(s) on command line.
#device/dev-cache.c:350         /dev/sda: Added to device cache (8:0)
#device/dev-cache.c:346         /dev/disk/by-id/scsi-36005076058045f581eb0791717dcef4b: Aliased to /dev/sda in device cache (8:0)
#device/dev-cache.c:346         /dev/disk/by-id/wwn-0x6005076058045f581eb0791717dcef4b: Aliased to /dev/sda in device cache (8:0)
#device/dev-cache.c:346         /dev/disk/by-path/pci-0000:16:00.0-scsi-0:2:0:0: Aliased to /dev/sda in device cache (8:0)
#device/dev-cache.c:350         /dev/sda1: Added to device cache (8:1)
#device/dev-cache.c:346         /dev/disk/by-id/scsi-36005076058045f581eb0791717dcef4b-part1: Aliased to /dev/sda1 in device cache (8:1)
#device/dev-cache.c:346         /dev/disk/by-id/wwn-0x6005076058045f581eb0791717dcef4b-part1: Aliased to /dev/sda1 in device cache (8:1)
#device/dev-cache.c:346         /dev/disk/by-path/pci-0000:16:00.0-scsi-0:2:0:0-part1: Aliased to /dev/sda1 in device cache (8:1)
#device/dev-cache.c:346         /dev/disk/by-uuid/9ea21eaf-cf33-4b84-bc66-a17248c96285: Aliased to /dev/sda1 in device cache (8:1)
#device/dev-cache.c:350         /dev/sda2: Added to device cache (8:2)
#device/dev-cache.c:346         /dev/disk/by-id/lvm-pv-uuid-ZrluWB-TGzf-N8Pa-iljo-wiLP-eztX-oF2qpo: Aliased to /dev/sda2 in device cache (8:2)
#device/dev-cache.c:346         /dev/disk/by-id/scsi-36005076058045f581eb0791717dcef4b-part2: Aliased to /dev/sda2 in device cache (8:2)
#device/dev-cache.c:346         /dev/disk/by-id/wwn-0x6005076058045f581eb0791717dcef4b-part2: Aliased to /dev/sda2 in device cache (8:2)
#device/dev-cache.c:346         /dev/disk/by-path/pci-0000:16:00.0-scsi-0:2:0:0-part2: Aliased to /dev/sda2 in device cache (8:2)
#device/dev-cache.c:350         /dev/sdb: Added to device cache (8:16)
#device/dev-cache.c:346         /dev/disk/by-id/scsi-36005076058045f581eb079b12107eb77: Aliased to /dev/sdb in device cache (8:16)
#device/dev-cache.c:346         /dev/disk/by-id/wwn-0x6005076058045f581eb079b12107eb77: Aliased to /dev/sdb in device cache (8:16)
#device/dev-cache.c:346         /dev/disk/by-path/pci-0000:16:00.0-scsi-0:2:1:0: Aliased to /dev/sdb in device cache (8:16)
#device/dev-cache.c:350         /dev/sdc: Added to device cache (8:32)
#device/dev-cache.c:346         /dev/disk/by-id/scsi-36005076058045f581eb07a052607af53: Aliased to /dev/sdc in device cache (8:32)
#device/dev-cache.c:346         /dev/disk/by-id/wwn-0x6005076058045f581eb07a052607af53: Aliased to /dev/sdc in device cache (8:32)
#device/dev-cache.c:346         /dev/disk/by-path/pci-0000:16:00.0-scsi-0:2:2:0: Aliased to /dev/sdc in device cache (8:32)
#device/dev-cache.c:350         /dev/loop0: Added to device cache (7:0)
#device/dev-cache.c:350         /dev/loop1: Added to device cache (7:1)
#device/dev-cache.c:346         /dev/disk/by-uuid/6270bf55-a882-4852-9fae-883d79973953: Aliased to /dev/loop1 in device cache (7:1)
#device/dev-cache.c:350         /dev/dm-0: Added to device cache (253:0)
#device/dev-cache.c:346         /dev/disk/by-id/dm-name-rhel_rhsqa1-root: Aliased to /dev/dm-0 in device cache (preferred name) (253:0)
#device/dev-cache.c:346         /dev/disk/by-id/dm-uuid-LVM-S2kHiFuto442LH3IHf3C1e5jv1Xbln5eVRupsQxzbDusBCrYu6A0LDgl9HX7Yh8T: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-root in device cache (253:0)
#device/dev-cache.c:346         /dev/disk/by-uuid/76b57433-39dc-4520-b53a-98e5641976fc: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-root in device cache (253:0)
#device/dev-cache.c:346         /dev/mapper/rhel_rhsqa1-root: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-root in device cache (preferred name) (253:0)
#device/dev-cache.c:346         /dev/rhel_rhsqa1/root: Aliased to /dev/mapper/rhel_rhsqa1-root in device cache (preferred name) (253:0)
#device/dev-cache.c:350         /dev/dm-1: Added to device cache (253:1)
#device/dev-cache.c:346         /dev/disk/by-id/dm-name-rhel_rhsqa1-swap: Aliased to /dev/dm-1 in device cache (preferred name) (253:1)
#device/dev-cache.c:346         /dev/disk/by-id/dm-uuid-LVM-S2kHiFuto442LH3IHf3C1e5jv1Xbln5eZ8ytjqNoyiCFcTpy1A3c2ADCp5HWZydL: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-swap in device cache (253:1)
#device/dev-cache.c:346         /dev/disk/by-uuid/82acb5a4-b57b-43d1-a742-058d9591e198: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-swap in device cache (253:1)
#device/dev-cache.c:346         /dev/mapper/rhel_rhsqa1-swap: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-swap in device cache (preferred name) (253:1)
#device/dev-cache.c:346         /dev/rhel_rhsqa1/swap: Aliased to /dev/mapper/rhel_rhsqa1-swap in device cache (preferred name) (253:1)
#device/dev-cache.c:350         /dev/dm-10: Added to device cache (253:10)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-vg1_pool-tpool: Aliased to /dev/dm-10 in device cache (preferred name) (253:10)
#device/dev-cache.c:350         /dev/dm-11: Added to device cache (253:11)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-vg1_pool: Aliased to /dev/dm-11 in device cache (preferred name) (253:11)
#device/dev-cache.c:350         /dev/dm-2: Added to device cache (253:2)
#device/dev-cache.c:346         /dev/disk/by-id/dm-name-rhel_rhsqa1-home: Aliased to /dev/dm-2 in device cache (preferred name) (253:2)
#device/dev-cache.c:346         /dev/disk/by-id/dm-uuid-LVM-S2kHiFuto442LH3IHf3C1e5jv1Xbln5eqx6FCCM4LDc1fZHhPhvfxCdmlwppo82n: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-home in device cache (253:2)
#device/dev-cache.c:346         /dev/disk/by-uuid/b245e0e8-240d-43dd-a1ad-0c446d482218: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-home in device cache (253:2)
#device/dev-cache.c:346         /dev/mapper/rhel_rhsqa1-home: Aliased to /dev/disk/by-id/dm-name-rhel_rhsqa1-home in device cache (preferred name) (253:2)
#device/dev-cache.c:346         /dev/rhel_rhsqa1/home: Aliased to /dev/mapper/rhel_rhsqa1-home in device cache (preferred name) (253:2)
#device/dev-cache.c:350         /dev/dm-3: Added to device cache (253:3)
#device/dev-cache.c:346         /dev/RHS_vg1/engine_lv: Aliased to /dev/dm-3 in device cache (preferred name) (253:3)
#device/dev-cache.c:346         /dev/disk/by-id/dm-name-RHS_vg1-engine_lv: Aliased to /dev/RHS_vg1/engine_lv in device cache (253:3)
#device/dev-cache.c:346         /dev/disk/by-id/dm-uuid-LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTBcT6ziOv3lRifdQPDSZFN6fJFupd3PlL: Aliased to /dev/RHS_vg1/engine_lv in device cache (253:3)
#device/dev-cache.c:346         /dev/disk/by-uuid/8364d5a6-0c77-496a-9427-ca722d41ff27: Aliased to /dev/RHS_vg1/engine_lv in device cache (253:3)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-engine_lv: Aliased to /dev/RHS_vg1/engine_lv in device cache (253:3)
#device/dev-cache.c:350         /dev/dm-4: Added to device cache (253:4)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-vg1_pool_tmeta: Aliased to /dev/dm-4 in device cache (preferred name) (253:4)
#device/dev-cache.c:350         /dev/dm-5: Added to device cache (253:5)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-rhs_cache_cdata: Aliased to /dev/dm-5 in device cache (preferred name) (253:5)
#device/dev-cache.c:350         /dev/dm-6: Added to device cache (253:6)
#device/dev-cache.c:346         /dev/disk/by-id/dm-name-36005076058045f581eb079b12107eb77: Aliased to /dev/dm-6 in device cache (preferred name) (253:6)
#device/dev-cache.c:346         /dev/disk/by-id/dm-uuid-mpath-36005076058045f581eb079b12107eb77: Aliased to /dev/disk/by-id/dm-name-36005076058045f581eb079b12107eb77 in device cache (253:6)
#device/dev-cache.c:346         /dev/disk/by-id/lvm-pv-uuid-IbzFLQ-ytzw-lXGV-6dE5-G0AA-EOgV-o5U8cR: Aliased to /dev/disk/by-id/dm-name-36005076058045f581eb079b12107eb77 in device cache (253:6)
#device/dev-cache.c:346         /dev/mapper/36005076058045f581eb079b12107eb77: Aliased to /dev/disk/by-id/dm-name-36005076058045f581eb079b12107eb77 in device cache (preferred name) (253:6)
#device/dev-cache.c:350         /dev/dm-7: Added to device cache (253:7)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-rhs_cache_cmeta: Aliased to /dev/dm-7 in device cache (preferred name) (253:7)
#device/dev-cache.c:350         /dev/dm-8: Added to device cache (253:8)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-vg1_pool_tdata_corig: Aliased to /dev/dm-8 in device cache (preferred name) (253:8)
#device/dev-cache.c:350         /dev/dm-9: Added to device cache (253:9)
#device/dev-cache.c:346         /dev/mapper/RHS_vg1-vg1_pool_tdata: Aliased to /dev/dm-9 in device cache (preferred name) (253:9)
#device/dev-io.c:533         Opened /dev/loop0 RO O_DIRECT
#device/dev-io.c:310       /dev/loop0: size is 0 sectors
#filters/filter-usable.c:44         /dev/loop0: Skipping: Too small to hold a PV
#device/dev-io.c:587         Closed /dev/loop0
#device/dev-io.c:533         Opened /dev/sda RO O_DIRECT
#device/dev-io.c:310       /dev/sda: size is 1952448512 sectors
#device/dev-io.c:587         Closed /dev/sda
#device/dev-io.c:533         Opened /dev/sda RO O_DIRECT
#device/dev-io.c:144         /dev/sda: block size is 4096 bytes
#device/dev-io.c:155         /dev/sda: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/sda
#filters/filter-partitioned.c:27         /dev/sda: Skipping: Partition table signature found
#ioctl/libdm-iface.c:1832         dm version   OF   [16384] (*1)
#ioctl/libdm-iface.c:1832         dm status   (253:0) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:310       /dev/rhel_rhsqa1/root: size is 104857600 sectors
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/root: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/root: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:310       /dev/rhel_rhsqa1/root: size is 104857600 sectors
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/root: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/root: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-cache.c:1088         Using /dev/rhel_rhsqa1/root
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/root: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/root: physical block size is 512 bytes
#label/label.c:189       /dev/rhel_rhsqa1/root: No label detected
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:533         Opened /dev/loop1 RO O_DIRECT
#device/dev-io.c:310       /dev/loop1: size is 4194304 sectors
#device/dev-io.c:587         Closed /dev/loop1
#device/dev-io.c:533         Opened /dev/loop1 RO O_DIRECT
#device/dev-io.c:587         Closed /dev/loop1
#device/dev-io.c:310       /dev/loop1: size is 4194304 sectors
#device/dev-io.c:533         Opened /dev/loop1 RO O_DIRECT
#device/dev-io.c:144         /dev/loop1: block size is 4096 bytes
#device/dev-io.c:155         /dev/loop1: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/loop1
#device/dev-cache.c:1088         Using /dev/loop1
#device/dev-io.c:533         Opened /dev/loop1 RO O_DIRECT
#device/dev-io.c:144         /dev/loop1: block size is 4096 bytes
#device/dev-io.c:155         /dev/loop1: physical block size is 512 bytes
#label/label.c:189       /dev/loop1: No label detected
#device/dev-io.c:587         Closed /dev/loop1
#device/dev-io.c:533         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:310       /dev/sda1: size is 1024000 sectors
#device/dev-io.c:587         Closed /dev/sda1
#filters/filter-mpath.c:196         /dev/sda1: Device is a partition, using primary device sda for mpath component detection
#device/dev-io.c:533         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:587         Closed /dev/sda1
#device/dev-io.c:310       /dev/sda1: size is 1024000 sectors
#device/dev-io.c:533         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:144         /dev/sda1: block size is 512 bytes
#device/dev-io.c:155         /dev/sda1: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/sda1
#device/dev-cache.c:1088         Using /dev/sda1
#device/dev-io.c:533         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:144         /dev/sda1: block size is 512 bytes
#device/dev-io.c:155         /dev/sda1: physical block size is 512 bytes
#label/label.c:189       /dev/sda1: No label detected
#device/dev-io.c:587         Closed /dev/sda1
#ioctl/libdm-iface.c:1832         dm status   (253:1) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:310       /dev/rhel_rhsqa1/swap: size is 65929216 sectors
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/swap: block size is 4096 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/swap: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:310       /dev/rhel_rhsqa1/swap: size is 65929216 sectors
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/swap: block size is 4096 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/swap: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-cache.c:1088         Using /dev/rhel_rhsqa1/swap
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/swap: block size is 4096 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/swap: physical block size is 512 bytes
#label/label.c:189       /dev/rhel_rhsqa1/swap: No label detected
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:533         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:310       /dev/sda2: size is 1951422464 sectors
#device/dev-io.c:587         Closed /dev/sda2
#filters/filter-mpath.c:196         /dev/sda2: Device is a partition, using primary device sda for mpath component detection
#device/dev-io.c:533         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:587         Closed /dev/sda2
#device/dev-io.c:310       /dev/sda2: size is 1951422464 sectors
#device/dev-io.c:533         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:144         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:155         /dev/sda2: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/sda2
#device/dev-cache.c:1088         Using /dev/sda2
#device/dev-io.c:533         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:144         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:155         /dev/sda2: physical block size is 512 bytes
#label/label.c:167       /dev/sda2: lvm2 label detected at sector 1
#cache/lvmcache.c:1443         lvmcache: /dev/sda2: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
#format_text/text_label.c:422         /dev/sda2: PV header extension version 1 found
#format_text/format-text.c:1234         /dev/sda2: Found metadata at 8704 size 1710 (in area at 4096 size 1044480) for rhel_rhsqa1 (S2kHiF-uto4-42LH-3IHf-3C1e-5jv1-Xbln5e)
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "rhel_rhsqa1"
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "rhel_rhsqa1"
#cache/lvmcache.c:1443         lvmcache: /dev/sda2: now in VG rhel_rhsqa1 with 1 mdas
#cache/lvmcache.c:1220         lvmcache: /dev/sda2: setting rhel_rhsqa1 VGID to S2kHiFuto442LH3IHf3C1e5jv1Xbln5e
#cache/lvmcache.c:1480         lvmcache: /dev/sda2: VG rhel_rhsqa1: Set creation host to rhsqa1.lab.eng.blr.redhat.com.
#cache/lvmcache.c:1518         lvmcache: /dev/sda2: VG rhel_rhsqa1: Stored metadata checksum 2983801653 with size 1710.
#device/dev-io.c:587         Closed /dev/sda2
#ioctl/libdm-iface.c:1832         dm status   (253:2) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:310       /dev/rhel_rhsqa1/home: size is 1780498432 sectors
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/home: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/home: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-io.c:310       /dev/rhel_rhsqa1/home: size is 1780498432 sectors
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/home: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/home: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-cache.c:1088         Using /dev/rhel_rhsqa1/home
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/home: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/home: physical block size is 512 bytes
#label/label.c:189       /dev/rhel_rhsqa1/home: No label detected
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#ioctl/libdm-iface.c:1832         dm status   (253:3) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:310       /dev/RHS_vg1/engine_lv: size is 209715200 sectors
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:144         /dev/RHS_vg1/engine_lv: block size is 512 bytes
#device/dev-io.c:155         /dev/RHS_vg1/engine_lv: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-io.c:310       /dev/RHS_vg1/engine_lv: size is 209715200 sectors
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:144         /dev/RHS_vg1/engine_lv: block size is 512 bytes
#device/dev-io.c:155         /dev/RHS_vg1/engine_lv: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-cache.c:1088         Using /dev/RHS_vg1/engine_lv
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:144         /dev/RHS_vg1/engine_lv: block size is 512 bytes
#device/dev-io.c:155         /dev/RHS_vg1/engine_lv: physical block size is 512 bytes
#label/label.c:189       /dev/RHS_vg1/engine_lv: No label detected
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#ioctl/libdm-iface.c:1832         dm status   (253:4) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool_tmeta: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTRtBmPGvHxMbocM6yQ4AEA3RN5wdJGGpj-tmeta on internal LV device RHS_vg1-vg1_pool_tmeta not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool_tmeta: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:5) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-rhs_cache_cdata: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTLL2pafDrp5t8J1qKI0geCrAix1jAoNi1-cdata on internal LV device RHS_vg1-rhs_cache_cdata not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-rhs_cache_cdata: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:6) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:310       /dev/mapper/36005076058045f581eb079b12107eb77: size is 1952448512 sectors
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:144         /dev/mapper/36005076058045f581eb079b12107eb77: block size is 4096 bytes
#device/dev-io.c:155         /dev/mapper/36005076058045f581eb079b12107eb77: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:310       /dev/mapper/36005076058045f581eb079b12107eb77: size is 1952448512 sectors
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:144         /dev/mapper/36005076058045f581eb079b12107eb77: block size is 4096 bytes
#device/dev-io.c:155         /dev/mapper/36005076058045f581eb079b12107eb77: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-cache.c:1088         Using /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:144         /dev/mapper/36005076058045f581eb079b12107eb77: block size is 4096 bytes
#device/dev-io.c:155         /dev/mapper/36005076058045f581eb079b12107eb77: physical block size is 512 bytes
#label/label.c:167       /dev/mapper/36005076058045f581eb079b12107eb77: lvm2 label detected at sector 1
#cache/lvmcache.c:1443         lvmcache: /dev/mapper/36005076058045f581eb079b12107eb77: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
#format_text/text_label.c:422         /dev/mapper/36005076058045f581eb079b12107eb77: PV header extension version 1 found
#format_text/format-text.c:1234         /dev/mapper/36005076058045f581eb079b12107eb77: Found metadata at 65024 size 4586 (in area at 4096 size 1044480) for RHS_vg1 (MkXgyv-Yram-6Mia-uXGZ-NOn7-ZXN8-lR5oLT)
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "RHS_vg1"
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "RHS_vg1"
#cache/lvmcache.c:1443         lvmcache: /dev/mapper/36005076058045f581eb079b12107eb77: now in VG RHS_vg1 with 1 mdas
#cache/lvmcache.c:1220         lvmcache: /dev/mapper/36005076058045f581eb079b12107eb77: setting RHS_vg1 VGID to MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT
#cache/lvmcache.c:1480         lvmcache: /dev/mapper/36005076058045f581eb079b12107eb77: VG RHS_vg1: Set creation host to rhsqa1.lab.eng.blr.redhat.com.
#cache/lvmcache.c:1518         lvmcache: /dev/mapper/36005076058045f581eb079b12107eb77: VG RHS_vg1: Stored metadata checksum 3162653530 with size 4586.
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#ioctl/libdm-iface.c:1832         dm status   (253:7) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-rhs_cache_cmeta: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTZpzXEYpdQkpDoMoM529n2lHeqZOnFeXg-cmeta on internal LV device RHS_vg1-rhs_cache_cmeta not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-rhs_cache_cmeta: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:8) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool_tdata_corig: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT75GtT9zmDMKxMjcL7zo3Yz50LTelOOZ8-real on internal LV device RHS_vg1-vg1_pool_tdata_corig not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool_tdata_corig: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:9) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool_tdata: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT7gxcUkIAwptJAguoptvqrkBTDurBW1VJ-tdata on internal LV device RHS_vg1-vg1_pool_tdata not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool_tdata: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:10) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool-tpool: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTzbJt8gbiBqa7Mk3XWMJ8bzEscPmFixYA-tpool on internal LV device RHS_vg1-vg1_pool-tpool not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool-tpool: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:11) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTzbJt8gbiBqa7Mk3XWMJ8bzEscPmFixYA-pool on internal LV device RHS_vg1-vg1_pool not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool: Skipping unusable device.
#device/dev-io.c:533         Opened /dev/sdb RO O_DIRECT
#device/dev-io.c:310       /dev/sdb: size is 1952448512 sectors
#device/dev-io.c:587         Closed /dev/sdb
#ioctl/libdm-iface.c:1832         dm status   (253:6) OF   [16384] (*1)
#filters/filter-mpath.c:253         /dev/sdb: Skipping mpath component device
#device/dev-io.c:533         Opened /dev/sdc RO O_DIRECT
#device/dev-io.c:310       /dev/sdc: size is 936640512 sectors
#device/dev-io.c:587         Closed /dev/sdc
#device/dev-io.c:533         Opened /dev/sdc RO O_DIRECT
#device/dev-io.c:144         /dev/sdc: block size is 4096 bytes
#device/dev-io.c:155         /dev/sdc: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/sdc
#device/dev-io.c:533         Opened /dev/sdc RO O_DIRECT
#device/dev-io.c:587         Closed /dev/sdc
#device/dev-io.c:310       /dev/sdc: size is 936640512 sectors
#device/dev-io.c:533         Opened /dev/sdc RO O_DIRECT
#device/dev-io.c:144         /dev/sdc: block size is 4096 bytes
#device/dev-io.c:155         /dev/sdc: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/sdc
#device/dev-cache.c:1088         Using /dev/sdc
#device/dev-io.c:533         Opened /dev/sdc RO O_DIRECT
#device/dev-io.c:144         /dev/sdc: block size is 4096 bytes
#device/dev-io.c:155         /dev/sdc: physical block size is 512 bytes
#label/label.c:167       /dev/sdc: lvm2 label detected at sector 1
#cache/lvmcache.c:1443         lvmcache: /dev/sdc: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
#format_text/text_label.c:422         /dev/sdc: PV header extension version 1 found
#format_text/format-text.c:1234         /dev/sdc: Using cached metadata at 39936 size 4586 (in area at 4096 size 258048) for RHS_vg1 (MkXgyv-Yram-6Mia-uXGZ-NOn7-ZXN8-lR5oLT)
#cache/lvmcache.c:1443         lvmcache: /dev/sdc: now in VG RHS_vg1 (MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT) with 1 mdas
#device/dev-io.c:587         Closed /dev/sdc
#filters/filter-persistent.c:280         /dev/loop0: Skipping (cached)
#filters/filter-persistent.c:280         /dev/sda: Skipping (cached)
#ioctl/libdm-iface.c:1832         dm status   (253:0) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:310       /dev/rhel_rhsqa1/root: size is 104857600 sectors
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/root: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/root: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-io.c:310       /dev/rhel_rhsqa1/root: size is 104857600 sectors
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/root RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/root: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/root: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/root
#device/dev-cache.c:1088         Using /dev/rhel_rhsqa1/root
#device/dev-cache.c:1088         Using /dev/loop1
#device/dev-cache.c:1088         Using /dev/sda1
#ioctl/libdm-iface.c:1832         dm status   (253:1) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:310       /dev/rhel_rhsqa1/swap: size is 65929216 sectors
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/swap: block size is 4096 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/swap: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-io.c:310       /dev/rhel_rhsqa1/swap: size is 65929216 sectors
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/swap RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/swap: block size is 4096 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/swap: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/swap
#device/dev-cache.c:1088         Using /dev/rhel_rhsqa1/swap
#device/dev-cache.c:1088         Using /dev/sda2
#ioctl/libdm-iface.c:1832         dm status   (253:2) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:310       /dev/rhel_rhsqa1/home: size is 1780498432 sectors
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/home: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/home: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-io.c:310       /dev/rhel_rhsqa1/home: size is 1780498432 sectors
#device/dev-io.c:533         Opened /dev/rhel_rhsqa1/home RO O_DIRECT
#device/dev-io.c:144         /dev/rhel_rhsqa1/home: block size is 512 bytes
#device/dev-io.c:155         /dev/rhel_rhsqa1/home: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/rhel_rhsqa1/home
#device/dev-cache.c:1088         Using /dev/rhel_rhsqa1/home
#ioctl/libdm-iface.c:1832         dm status   (253:3) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:310       /dev/RHS_vg1/engine_lv: size is 209715200 sectors
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:144         /dev/RHS_vg1/engine_lv: block size is 512 bytes
#device/dev-io.c:155         /dev/RHS_vg1/engine_lv: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-io.c:310       /dev/RHS_vg1/engine_lv: size is 209715200 sectors
#device/dev-io.c:533         Opened /dev/RHS_vg1/engine_lv RO O_DIRECT
#device/dev-io.c:144         /dev/RHS_vg1/engine_lv: block size is 512 bytes
#device/dev-io.c:155         /dev/RHS_vg1/engine_lv: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/RHS_vg1/engine_lv
#device/dev-cache.c:1088         Using /dev/RHS_vg1/engine_lv
#ioctl/libdm-iface.c:1832         dm status   (253:4) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool_tmeta: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTRtBmPGvHxMbocM6yQ4AEA3RN5wdJGGpj-tmeta on internal LV device RHS_vg1-vg1_pool_tmeta not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool_tmeta: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:5) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-rhs_cache_cdata: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTLL2pafDrp5t8J1qKI0geCrAix1jAoNi1-cdata on internal LV device RHS_vg1-rhs_cache_cdata not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-rhs_cache_cdata: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:6) OF   [16384] (*1)
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:310       /dev/mapper/36005076058045f581eb079b12107eb77: size is 1952448512 sectors
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:144         /dev/mapper/36005076058045f581eb079b12107eb77: block size is 4096 bytes
#device/dev-io.c:155         /dev/mapper/36005076058045f581eb079b12107eb77: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:310       /dev/mapper/36005076058045f581eb079b12107eb77: size is 1952448512 sectors
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:144         /dev/mapper/36005076058045f581eb079b12107eb77: block size is 4096 bytes
#device/dev-io.c:155         /dev/mapper/36005076058045f581eb079b12107eb77: physical block size is 512 bytes
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-cache.c:1088         Using /dev/mapper/36005076058045f581eb079b12107eb77
#ioctl/libdm-iface.c:1832         dm status   (253:7) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-rhs_cache_cmeta: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTZpzXEYpdQkpDoMoM529n2lHeqZOnFeXg-cmeta on internal LV device RHS_vg1-rhs_cache_cmeta not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-rhs_cache_cmeta: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:8) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool_tdata_corig: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT75GtT9zmDMKxMjcL7zo3Yz50LTelOOZ8-real on internal LV device RHS_vg1-vg1_pool_tdata_corig not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool_tdata_corig: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:9) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool_tdata: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLT7gxcUkIAwptJAguoptvqrkBTDurBW1VJ-tdata on internal LV device RHS_vg1-vg1_pool_tdata not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool_tdata: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:10) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool-tpool: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTzbJt8gbiBqa7Mk3XWMJ8bzEscPmFixYA-tpool on internal LV device RHS_vg1-vg1_pool-tpool not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool-tpool: Skipping unusable device.
#ioctl/libdm-iface.c:1832         dm status   (253:11) OF   [16384] (*1)
#activate/dev_manager.c:654         /dev/mapper/RHS_vg1-vg1_pool: Reserved uuid LVM-MkXgyvYram6MiauXGZNOn7ZXN8lR5oLTzbJt8gbiBqa7Mk3XWMJ8bzEscPmFixYA-pool on internal LV device RHS_vg1-vg1_pool not usable.
#filters/filter-usable.c:151         /dev/mapper/RHS_vg1-vg1_pool: Skipping unusable device.
#filters/filter-persistent.c:280         /dev/sdb: Skipping (cached)
#device/dev-cache.c:1088         Using /dev/sdc
#misc/lvm-flock.c:200       Locking /run/lock/lvm/V_RHS_vg1 RB
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm/V_RHS_vg1 to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/V_RHS_vg1:aux WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/V_RHS_vg1:aux
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/V_RHS_vg1 RB
#libdm-common.c:929         Resetting SELinux context to default value.
#label/label.c:275         Using cached label for /dev/mapper/36005076058045f581eb079b12107eb77
#label/label.c:275         Using cached label for /dev/sdc
#device/dev-io.c:533         Opened /dev/mapper/36005076058045f581eb079b12107eb77 RO O_DIRECT
#device/dev-io.c:144         /dev/mapper/36005076058045f581eb079b12107eb77: block size is 4096 bytes
#device/dev-io.c:155         /dev/mapper/36005076058045f581eb079b12107eb77: physical block size is 512 bytes
#metadata/vg.c:70         Allocated VG RHS_vg1 at 0x7f4a92e238b0.
#label/label.c:275         Using cached label for /dev/mapper/36005076058045f581eb079b12107eb77
#label/label.c:275         Using cached label for /dev/sdc
#format_text/import_vsn1.c:687         Logical volume lvol0_pmspare is pool metadata spare.
#metadata/lv_manip.c:1200       Stack vg1_pool:0[0] on LV vg1_pool_tdata:0
#metadata/lv_manip.c:801       Adding vg1_pool:0 as an user of vg1_pool_tdata
#metadata/lv_manip.c:801       Adding vg1_pool:0 as an user of vg1_pool_tmeta
#metadata/lv_manip.c:801       Adding data:0 as an user of vg1_pool
#metadata/lv_manip.c:801       Adding vmstore:0 as an user of vg1_pool
#libdm-config.c:918       Setting data to rhs_cache_cdata
#libdm-config.c:918       Setting metadata to rhs_cache_cmeta
#libdm-config.c:918       Setting cache_mode to writethrough
#libdm-config.c:918       Setting policy to smq
#metadata/lv_manip.c:1200       Stack rhs_cache:0[0] on LV rhs_cache_cdata:0
#metadata/lv_manip.c:801       Adding rhs_cache:0 as an user of rhs_cache_cdata
#metadata/lv_manip.c:801       Adding rhs_cache:0 as an user of rhs_cache_cmeta
#libdm-config.c:918       Setting cache_pool to rhs_cache
#libdm-config.c:918       Setting origin to vg1_pool_tdata_corig
#metadata/lv_manip.c:1200       Stack vg1_pool_tdata:0[0] on LV vg1_pool_tdata_corig:0
#metadata/lv_manip.c:801       Adding vg1_pool_tdata:0 as an user of vg1_pool_tdata_corig
#metadata/lv_manip.c:801       Adding vg1_pool_tdata:0 as an user of rhs_cache
#format_text/format-text.c:548         Read RHS_vg1 metadata (25) from /dev/mapper/36005076058045f581eb079b12107eb77 at 65024 size 4586
#device/dev-io.c:533         Opened /dev/sdc RO O_DIRECT
#device/dev-io.c:144         /dev/sdc: block size is 4096 bytes
#device/dev-io.c:155         /dev/sdc: physical block size is 512 bytes
#format_text/format-text.c:553         Skipped reading metadata from /dev/sdc at 39936 size 4586 with matching checksum.
#cache/lvmcache.c:235         Metadata cache: VG RHS_vg1 (MkXgyv-Yram-6Mia-uXGZ-NOn7-ZXN8-lR5oLT) stored (4586 bytes).
#metadata/pv_manip.c:420         /dev/mapper/36005076058045f581eb079b12107eb77 0:      0  25600: engine_lv(0:0)
#metadata/pv_manip.c:420         /dev/mapper/36005076058045f581eb079b12107eb77 1:  25600   4096: lvol0_pmspare(0:0)
#metadata/pv_manip.c:420         /dev/mapper/36005076058045f581eb079b12107eb77 2:  29696 179200: vg1_pool_tdata_corig(0:0)
#metadata/pv_manip.c:420         /dev/mapper/36005076058045f581eb079b12107eb77 3: 208896   4096: vg1_pool_tmeta(0:0)
#metadata/pv_manip.c:420         /dev/mapper/36005076058045f581eb079b12107eb77 4: 212992  25343: NULL(0:0)
#metadata/pv_manip.c:420         /dev/sdc 0:      0 102400: rhs_cache_cdata(0:0)
#metadata/pv_manip.c:420         /dev/sdc 1: 102400    512: rhs_cache_cmeta(0:0)
#metadata/pv_manip.c:420         /dev/sdc 2: 102912  11423: NULL(0:0)
#metadata/vg.c:70         Allocated VG RHS_vg1 at 0x7f4a92e378d0.
#label/label.c:275         Using cached label for /dev/mapper/36005076058045f581eb079b12107eb77
#label/label.c:275         Using cached label for /dev/sdc
#format_text/import_vsn1.c:687         Logical volume lvol0_pmspare is pool metadata spare.
#metadata/lv_manip.c:1200       Stack vg1_pool:0[0] on LV vg1_pool_tdata:0
#metadata/lv_manip.c:801       Adding vg1_pool:0 as an user of vg1_pool_tdata
#metadata/lv_manip.c:801       Adding vg1_pool:0 as an user of vg1_pool_tmeta
#metadata/lv_manip.c:801       Adding data:0 as an user of vg1_pool
#metadata/lv_manip.c:801       Adding vmstore:0 as an user of vg1_pool
#libdm-config.c:918       Setting data to rhs_cache_cdata
#libdm-config.c:918       Setting metadata to rhs_cache_cmeta
#libdm-config.c:918       Setting cache_mode to writethrough
#libdm-config.c:918       Setting policy to smq
#metadata/lv_manip.c:1200       Stack rhs_cache:0[0] on LV rhs_cache_cdata:0
#metadata/lv_manip.c:801       Adding rhs_cache:0 as an user of rhs_cache_cdata
#metadata/lv_manip.c:801       Adding rhs_cache:0 as an user of rhs_cache_cmeta
#libdm-config.c:918       Setting cache_pool to rhs_cache
#libdm-config.c:918       Setting origin to vg1_pool_tdata_corig
#metadata/lv_manip.c:1200       Stack vg1_pool_tdata:0[0] on LV vg1_pool_tdata_corig:0
#metadata/lv_manip.c:801       Adding vg1_pool_tdata:0 as an user of vg1_pool_tdata_corig
#metadata/lv_manip.c:801       Adding vg1_pool_tdata:0 as an user of rhs_cache
#toollib.c:2787       Processing PV /dev/mapper/36005076058045f581eb079b12107eb77 in VG RHS_vg1.
#device/dev-io.c:310       /dev/mapper/36005076058045f581eb079b12107eb77: size is 1952448512 sectors
#toollib.c:2787       Processing PV /dev/sdc in VG RHS_vg1.
#device/dev-io.c:310       /dev/sdc: size is 936640512 sectors
#mm/memlock.c:549         Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:489         Syncing device names
#misc/lvm-flock.c:71       Unlocking /run/lock/lvm/V_RHS_vg1
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/V_RHS_vg1
#device/dev-io.c:587         Closed /dev/mapper/36005076058045f581eb079b12107eb77
#device/dev-io.c:587         Closed /dev/sdc
#metadata/vg.c:85         Freeing VG RHS_vg1 at 0x7f4a92e378d0.
#metadata/vg.c:85         Freeing VG RHS_vg1 at 0x7f4a92e238b0.
#locking/lvmlockd.c:1661         VG RHS_vg1 unlock skipped: lockd_state is failed
#misc/lvm-flock.c:200       Locking /run/lock/lvm/V_rhel_rhsqa1 RB
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm/V_rhel_rhsqa1 to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/V_rhel_rhsqa1:aux WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/V_rhel_rhsqa1:aux
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/V_rhel_rhsqa1 RB
#libdm-common.c:929         Resetting SELinux context to default value.
#label/label.c:275         Using cached label for /dev/sda2
#device/dev-io.c:533         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:144         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:155         /dev/sda2: physical block size is 512 bytes
#metadata/vg.c:70         Allocated VG rhel_rhsqa1 at 0x7f4a92e206b0.
#label/label.c:275         Using cached label for /dev/sda2
#format_text/format-text.c:548         Read rhel_rhsqa1 metadata (4) from /dev/sda2 at 8704 size 1710
#cache/lvmcache.c:235         Metadata cache: VG rhel_rhsqa1 (S2kHiF-uto4-42LH-3IHf-3C1e-5jv1-Xbln5e) stored (1710 bytes).
#metadata/pv_manip.c:420         /dev/sda2 0:      0   8048: swap(0:0)
#metadata/pv_manip.c:420         /dev/sda2 1:   8048 217346: home(0:0)
#metadata/pv_manip.c:420         /dev/sda2 2: 225394  12800: root(0:0)
#metadata/pv_manip.c:420         /dev/sda2 3: 238194     16: NULL(0:0)
#metadata/vg.c:70         Allocated VG rhel_rhsqa1 at 0x7f4a92e478e0.
#label/label.c:275         Using cached label for /dev/sda2
#toollib.c:2787       Processing PV /dev/sda2 in VG rhel_rhsqa1.
#device/dev-io.c:310       /dev/sda2: size is 1951422464 sectors
#mm/memlock.c:549         Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:489         Syncing device names
#misc/lvm-flock.c:71       Unlocking /run/lock/lvm/V_rhel_rhsqa1
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/V_rhel_rhsqa1
#device/dev-io.c:587         Closed /dev/sda2
#metadata/vg.c:85         Freeing VG rhel_rhsqa1 at 0x7f4a92e478e0.
#metadata/vg.c:85         Freeing VG rhel_rhsqa1 at 0x7f4a92e206b0.
#locking/lvmlockd.c:1661         VG rhel_rhsqa1 unlock skipped: lockd_state is failed
#misc/lvm-flock.c:200       Locking /run/lock/lvm/P_orphans RB
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm/P_orphans to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans:aux WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans:aux
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans RB
#libdm-common.c:929         Resetting SELinux context to default value.
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "#orphans"
#misc/lvm-flock.c:71       Unlocking /run/lock/lvm/P_orphans
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "#orphans"
#misc/lvm-flock.c:200       Locking /run/lock/lvm/P_orphans RB
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm/P_orphans to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans:aux WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans:aux
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans RB
#libdm-common.c:929         Resetting SELinux context to default value.
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "#orphans"
#misc/lvm-flock.c:71       Unlocking /run/lock/lvm/P_orphans
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "#orphans"
#misc/lvm-flock.c:200       Locking /run/lock/lvm/P_orphans RB
#libdm-common.c:926         Preparing SELinux context for /run/lock/lvm/P_orphans to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans:aux WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans:aux
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans RB
#libdm-common.c:929         Resetting SELinux context to default value.
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "#orphans"
#misc/lvm-flock.c:71       Unlocking /run/lock/lvm/P_orphans
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans
#cache/lvmcache.c:498         Metadata cache has no info for vgname: "#orphans"
#libdm-config.c:1026       report/compact_output not found in config: defaulting to 0
#libdm-report.c:3985   PV                                            VG          Fmt  Attr PSize   PFree  DevSize PV UUID                               
#libdm-report.c:4308   /dev/mapper/36005076058045f581eb079b12107eb77 RHS_vg1     lvm2 a--  931,00g 99,00g 931,00g IbzFLQ-ytzw-lXGV-6dE5-G0AA-EOgV-o5U8cR
#libdm-report.c:4308   /dev/sda2                                     rhel_rhsqa1 lvm2 a--  930,51g 64,00m 930,51g ZrluWB-TGzf-N8Pa-iljo-wiLP-eztX-oF2qpo
#libdm-report.c:4308   /dev/sdc                                      RHS_vg1     lvm2 a--  446,62g 44,62g 446,62g eH6Lql-watf-A16u-CIyZ-fxXz-4oFb-1rRAiZ
#misc/lvm-flock.c:71       Unlocking /run/lock/lvm/P_global
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_global
#commands/toolcontext.c:2002     Reloading config files
#activate/fs.c:489         Syncing device names
#cache/lvmcache.c:1965     Wiping internal VG cache
#cache/lvmcache.c:190         Metadata cache: VG RHS_vg1 wiped.
#cache/lvmcache.c:190         Metadata cache: VG rhel_rhsqa1 wiped.
#metadata/vg.c:85         Freeing VG #orphans_lvm1 at 0x7f4a92d8c160.
#metadata/vg.c:85         Freeing VG #orphans_pool at 0x7f4a92d90170.
#metadata/vg.c:85         Freeing VG #orphans_lvm2 at 0x7f4a92d94ba0.
#config/config.c:281       Loading config file: /etc/lvm/lvm.conf
#device/dev-io.c:533         Opened /etc/lvm/lvm.conf RO
#device/dev-io.c:587         Closed /etc/lvm/lvm.conf
#libdm-config.c:1013       Setting log/syslog to 1
#libdm-config.c:949       Setting log/level to 0
#libdm-config.c:1013       Setting log/silent to 0
#libdm-config.c:1013       Setting log/verbose to 0
[root@rhsqa1 yum.repos.d]#

Comment 31 Zdenek Kabelac 2016-04-26 14:58:18 UTC
It should also be mentioned here that the following command was seen running on the machine:


vgdisplay -vv --config=global{locking_type=0}


This is, in general, an unsupportable use case - it's unclear how it would be related to the 'deadlock' state of lvmetad.

The lvm2 version in use:   lvm2-2.02.130-5.el7_2.1.x86_64

Comment 32 David Teigland 2016-04-26 14:58:37 UTC
"Found duplicate PV kQ37t6V3DkDZhmXe1ijOafE6Lk8jARRF: using /dev/mapper/3600508b1001cc2b61a4c189537531fac not /dev/sdc"

This means that lvm was processing both the multipath dev and the underlying dev (sdc), which appear to lvm like duplicate PVs.  If you have multipath_component_detection set to 1 in lvm.conf (or --config), then lvm should not process the underlying dev (sdc), and not see duplicate PVs.  Please check if that helps.

As for the stuck commands, lvmetad has known problems when exposed to duplicate PVs, and in this case a mutex has probably been left locked or destroyed, causing other threads that need that mutex to be blocked forever.  A fix for this is complete and planned for merging this week.  It involves disabling lvmetad when duplicate PVs are seen, and not attempting to manage them in lvmetad, reverting to disk scanning where we can properly choose among available duplicates for the best device.
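
For reference, a minimal sketch of the multipath_component_detection check suggested above (the --config override is illustrative only, not a recommended permanent setting):

    # check whether the option is set explicitly in lvm.conf
    grep -E '^\s*multipath_component_detection' /etc/lvm/lvm.conf

    # or force it on for a single command via --config
    pvs --config 'devices { multipath_component_detection = 1 }'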

Comment 33 Yaniv Kaul 2016-04-26 15:04:51 UTC
(In reply to David Teigland from comment #32)
> "Found duplicate PV kQ37t6V3DkDZhmXe1ijOafE6Lk8jARRF: using
> /dev/mapper/3600508b1001cc2b61a4c189537531fac not /dev/sdc"
> 
> This means that lvm was processing both the multipath dev and the underlying
> dev (sdc), which appear to lvm like duplicate PVs.  If you have
> multipath_component_detection set to 1 in lvm.conf (or --config), then lvm
> should not process the underlying dev (sdc), and not see duplicate PVs. 
> Please check if that helps.
> 
> As for the stuck commands, lvmetad has known problems when exposed to
> duplicate PVs, and in this case a mutex has probably been left locked or
> destroyed, causing other threads that need that mutex to be blocked forever.
> A fix for this is complete and planned for merging this week.  It involves
> disabling lvmetad when duplicate PVs are seen, and not attempting to manage
> them in lvmetad, reverting to disk scanning where we can properly choose
> among available duplicates for the best device.

Note that VDSM never uses lvmetad.

Comment 34 Zdenek Kabelac 2016-04-26 15:17:00 UTC
It should not - but unfortunately it does.


The blocked commands are all waiting on 'deadlocked' lvmetad.


So I'd advise ensuring that lvm.conf always has use_lvmetad=0, and that lvmetad.service
and .socket are disabled, and possibly even masked, on such a setup (so they are not re-enabled on upgrade).

Comment 35 Yaniv Kaul 2016-04-26 15:47:14 UTC
(In reply to Zdenek Kabelac from comment #34)
> It should not have - but  unfortunately it does.
> 
> 
> The blocked commands are all waiting on 'deadlocked' lvmetad.
> 
> 
> So I'd advice to ensure  lvm.conf has always  use_lvmetad=0
> and lvmetad.service  and .socket are disabled  and possibly even masked on
> such setup  (so they are surely not reenabled on upgrade)

Nir - thoughts?

Comment 36 Nir Soffer 2016-04-26 16:00:47 UTC
(In reply to Yaniv Kaul from comment #35)
> (In reply to Zdenek Kabelac from comment #34)
> > So I'd advice to ensure  lvm.conf has always  use_lvmetad=0
> > and lvmetad.service  and .socket are disabled  and possibly even masked on
> > such setup  (so they are surely not reenabled on upgrade)
> 
> Nir - thoughts?

We have tried to avoid touching the host's lvm configuration by passing our own
configuration to lvm commands via --config. If this is not good enough, we can change
to configuring lvm.conf, just like we configure multipath and libvirt.

I think we need to add an lvm configurator to vdsm-tool, and make sure vdsm
will not start if lvm is not configured.

Yaniv what do you think?
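
For illustration only, a rough sketch of the --config approach described above - this is not vdsm's actual configuration string; the filter simply reuses the two RHS_vg1 PVs visible in the log attached to this bug:

    # hypothetical example: restrict lvm scanning to the known PVs and bypass
    # lvmetad for this single command
    lvs --config 'devices { filter = [ "a|^/dev/mapper/36005076058045f581eb079b12107eb77$|", "a|^/dev/sdc$|", "r|.*|" ] }
                  global { use_lvmetad = 0 }' RHS_vg1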

Comment 37 Zdenek Kabelac 2016-04-26 17:22:59 UTC
As a side note - it would probably be best to provide a more complete overview of the needed features and put them in a single RFE.

Lvm2 now has support for profiles?

The other 'feature' in progress is support for lvmlockd - maybe there is a use case for vdsm?

Possibly other features like the 'dBus' interface?
---


Back to the BZ - configuring use_lvmetad=0 is mandatory when a single disk can be changed by multiple hosts on which that disk is available.

Also, we likely need a separate new BZ for the lvmetad 'deadlock', referencing this original problem (lvmetad should either die or disable itself in the case of duplicated PVs - but never deadlock waiting on a mutex).

Comment 38 RamaKasturi 2016-04-27 04:58:09 UTC
(In reply to Zdenek Kabelac from comment #37)
> As side note - it would be probably best to provide some more complex
> overview of needed feature and put them in single RFE.
> 
> Lvm2 has now support for profiles?
> 
> The other 'feature' in progress is support for lvmlockd - maybe there is
> use-case vsdm ?
> 
> Possibly other features like 'dBus' interface ?
> ---
> 
> 
> Back to BZ  -  configure use_lvmetad  is mandatory in case of a single disk
> can changed by multiple hosts where such disk is available.
> 
> Also we likely do need separate new BZ for lvmetad 'deadlock' referencing
> this original problem (lvmetad should either die or disable itself in case
> of duplicated PVs - but never deadlock waiting on mutex)

Hi Zdenek,

    would logging a new bz against RHEL 7 with the component lvm2 and sub-component lvmetad work?

Thanks
kasturi

Comment 39 Zdenek Kabelac 2016-04-27 08:13:20 UTC
Yes please, thanks

Comment 40 Nir Soffer 2016-04-27 10:40:40 UTC
Adding back needinfo for Yaniv for comment 36.

Comment 41 Yaniv Bronhaim 2016-05-01 06:43:40 UTC
go for it... sounds reasonable to me

Comment 42 Sandro Bonazzola 2016-05-02 09:47:14 UTC
Moving from 4.0 alpha to 4.0 beta, since 4.0 alpha has already been released and the bug is not ON_QA.

Comment 44 Yaniv Lavi 2016-05-23 13:12:24 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 45 Yaniv Lavi 2016-05-30 11:43:21 UTC
We don't support dm-cache yet and HC has a workaround, so moving this to 4.1. Did you open an RFE to add support for dm-cache?

Comment 46 Sahina Bose 2016-06-08 08:24:16 UTC
Changing this bug to an RFE, as it has all the relevant context.

Comment 47 Nir Soffer 2016-09-12 16:01:51 UTC
Sahina, can you check if disabling lvmetad solves this issue?

Stop and mask the service

    systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
    systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

Edit /etc/lvm/lvm.conf:

    use_lvmetad = 0

We plan to do these changes during vdsm configuration in the next release.
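
A quick way to verify the change took effect (a sketch; the exact output strings may vary by systemd version):

    systemctl is-enabled lvm2-lvmetad.service lvm2-lvmetad.socket   # expect "masked"
    systemctl is-active lvm2-lvmetad.service                        # expect "inactive"
    grep -E '^\s*use_lvmetad' /etc/lvm/lvm.conf                     # expect use_lvmetad = 0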

Comment 48 Sahina Bose 2016-09-14 11:30:05 UTC
Yes, disabling lvmetad does solve the issue.

Though multipath entries are created once vdsm is configured, the vdsm and pvs commands continue to work as expected:

[root@rhsdev-grafton1 ~]# lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0   931G  0 disk  
├─sda1                                8:1    0   500M  0 part  /boot
├─sda2                                8:2    0   204G  0 part  
│ ├─rhel_rhsdev--grafton1-root      253:0    0    50G  0 lvm   /
│ ├─rhel_rhsdev--grafton1-swap      253:1    0     4G  0 lvm   [SWAP]
│ └─rhel_rhsdev--grafton1-home      253:2    0   150G  0 lvm   /home
└─sda3                                8:3    0 726.5G  0 part  
  └─rhs--vg1-bricks                 253:3    0 726.5G  0 lvm   /bricks
sdb                                   8:16   0 185.8G  0 disk  
└─3600304801acb77011f2abb3c04721341 253:12   0 185.8G  0 mpath 
  ├─rhs--vg2-vg1--pool_tmeta        253:4    0     7G  0 lvm   
  │ └─rhs--vg2-vg1--pool-tpool      253:9    0   170G  0 lvm   
  │   ├─rhs--vg2-vg1--pool          253:10   0   170G  0 lvm   
  │   └─rhs--vg2-data               253:11   0   100G  0 lvm   /rhgs/data
  └─rhs--vg2-vg1--pool_tdata_corig  253:7    0   170G  0 lvm   
    └─rhs--vg2-vg1--pool_tdata      253:8    0   170G  0 lvm   
      └─rhs--vg2-vg1--pool-tpool    253:9    0   170G  0 lvm   
        ├─rhs--vg2-vg1--pool        253:10   0   170G  0 lvm   
        └─rhs--vg2-data             253:11   0   100G  0 lvm   /rhgs/data
sdc                                   8:32   0 185.8G  0 disk  
└─3600304801acb77011f2abb490535df82 253:13   0 185.8G  0 mpath 
  ├─rhs--vg2-rhs_cache_cdata        253:5    0    50G  0 lvm   
  │ └─rhs--vg2-vg1--pool_tdata      253:8    0   170G  0 lvm   
  │   └─rhs--vg2-vg1--pool-tpool    253:9    0   170G  0 lvm   
  │     ├─rhs--vg2-vg1--pool        253:10   0   170G  0 lvm   
  │     └─rhs--vg2-data             253:11   0   100G  0 lvm   /rhgs/data
  └─rhs--vg2-rhs_cache_cmeta        253:6    0     2G  0 lvm   
    └─rhs--vg2-vg1--pool_tdata      253:8    0   170G  0 lvm   
      └─rhs--vg2-vg1--pool-tpool    253:9    0   170G  0 lvm   
        ├─rhs--vg2-vg1--pool        253:10   0   170G  0 lvm   
        └─rhs--vg2-data             253:11   0   100G  0 lvm   /rhgs/data
[root@rhsdev-grafton1 ~]# vdsClient -s 0 list
/usr/share/vdsm/vdsClient.py:33: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
  from vdsm import utils, vdscli, constants
[root@rhsdev-grafton1 ~]# pvs
  PV                                            VG                   Fmt  Attr PSize   PFree  
  /dev/mapper/3600304801acb77011f2abb3c04721341 rhs-vg2              lvm2 a--  185.78g   1.78g
  /dev/mapper/3600304801acb77011f2abb490535df82 rhs-vg2              lvm2 a--  185.78g 133.78g
  /dev/sda2                                     rhel_rhsdev-grafton1 lvm2 a--  204.00g      0 
  /dev/sda3                                     rhs-vg1              lvm2 a--  726.50g      0

Comment 49 Yaniv Lavi 2016-09-14 13:41:49 UTC
Nir, can you block this bug on that change?

Comment 50 SATHEESARAN 2016-09-15 09:26:43 UTC
(In reply to Nir Soffer from comment #47)
> Shaina, can you check if disabling lvmetad solve this issue?
> 
> Stop and mask the service
> 
>     systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>     systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
> 
> Edit /etc/lvm/lvm.conf:
> 
>     use_lvmetad = 0
> 
> We plan to do these changes during vdsm configuration in the next release.

Nir,

Does this have any performance drop or impact?
With this change, the VG metadata is not cached.

Comment 51 Nir Soffer 2016-09-15 16:35:15 UTC
(In reply to SATHEESARAN from comment #50)
> Does this have any perf drop or impact ?
> as with this change vg metadata is not cached

There is no impact on vdsm, since vdsm always accesses storage directly and maintains
its own cache.

lvm operations on the host may be slower since they will always access the storage, but on a hypervisor we don't expect a significant amount of local lvm operations.

Basically, lvmetad is not compatible with shared lvm storage unless you use the new
lvmlockd, but vdsm does not use lvmlockd; it uses sanlock directly.

Adding David to add more info on this.

Comment 52 David Teigland 2016-09-15 17:03:50 UTC
Yes, set use_lvmetad=0.  The caching done by lvmetad is not only dangerous when using shared devices, but it's unlikely to have any real benefit for local operations.

Apart from the caching issues, activation of LVs is also an issue to check and verify.  When use_lvmetad=1, lvm/systemd/udev attempt to autoactivate LVs, but when use_lvmetad=0, they do not attempt to autoactivate LVs.  (It's unfortunate and confusing that the activation logic is tied to the lvmetad/caching setting; it's an lvm design problem that will be cleaned up eventually.)  Obviously, vdsm must control the activation of its own LVs, and never wants those LVs automatically activated by lvm/systemd/udev.  However, you should verify that other LVs used by the host (apart from rhev/vdsm LVs) are being properly activated after setting use_lvmetad=0.
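
A minimal check for the activation point above, assuming the host VGs are named as in the log attached to this bug (rhel_rhsqa1 and RHS_vg1):

    # after a reboot with use_lvmetad=0, the host's own LVs should still come up
    # active - the 5th character of lv_attr should be 'a'
    lvs -o vg_name,lv_name,lv_attr rhel_rhsqa1 RHS_vg1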

Comment 53 David Teigland 2016-09-15 18:24:23 UTC
> Apart from the caching issues, activation of LVs is also an issue to check
> and verify.  When use_lvmetad=1, lvm/systemd/udev attempt to autoactivate
> LVs, but when use_lvmetad=0, they do not attempt to autoactivate LVs.

This is not entirely accurate.  The lvm/systemd/udev mechanism used for autoactivation changes when the use_lvmetad setting changes.  When use_lvmetad=0, systemd still does autoactivation, but it does that once during boot using "vgchange -aay".  See man lvm2-activation-generator for more details.

When use_lvmetad=1, then autoactivation happens at boot and when devices are hotplugged later, using pvscan --cache -aay.
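
For reference, a minimal sketch of how the activation behaviour can be verified after switching use_lvmetad off (assuming a systemd-based RHEL 7 host; the VG/LV names in the last command are placeholders, not from this setup):

    # effective setting as LVM resolves it (lvm.conf plus compiled-in defaults)
    lvmconfig global/use_lvmetad

    # with use_lvmetad=0, boot-time activation comes from units created by
    # lvm2-activation-generator (see its man page)
    systemctl list-units 'lvm2-activation*'

    # any LV the host needs that was not auto-activated can be activated explicitly
    lvchange -ay my_vg/my_lv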

Comment 54 SATHEESARAN 2016-09-16 01:59:15 UTC
Thanks Nir and David for the explanation.

Comment 55 Sahina Bose 2016-11-09 15:33:29 UTC
Is the change mentioned in Comment 47 targeted for 4.1?

Comment 56 Yaniv Kaul 2016-11-21 12:30:45 UTC
(In reply to Sahina Bose from comment #55)
> Is the change mentioned in Comment 47 targeted for 4.1?

I don't think we have a separate bug for it. I've asked Scale QE to test its effect on scale and I believe storage QE would need to run their tier 1 & 2 on it to make sure there are no regressions as well.

Comment 57 Nir Soffer 2016-12-06 10:50:53 UTC
Sahina, did you try David's advice about multipath detection? See comment 32.

Comment 58 Sahina Bose 2016-12-07 11:16:11 UTC
(In reply to Nir Soffer from comment #57)
> Sahina, did you try David advice about multipath detection? See command 32.

This is set to 1 by default:
[root@dhcp37-161 ~]# cat /etc/lvm/lvm.conf | grep multipath_component_detection
	# Configuration option devices/multipath_component_detection.
	multipath_component_detection = 1
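
For reference, the effective values LVM actually uses (lvm.conf plus compiled-in defaults) can also be queried directly; a small sketch, assuming the lvmconfig command shipped with these lvm2 versions:

    lvmconfig devices/multipath_component_detection
    lvmconfig global/use_lvmetad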

Comment 59 Nir Soffer 2017-02-14 11:30:39 UTC
Should be solved in 4.1.

Comment 60 Nir Soffer 2017-02-20 18:35:02 UTC
Resetting owner after the bug was moved back to gluster.

Comment 61 Yaniv Lavi 2017-02-26 12:07:42 UTC
I believe this is fixed with the latest 4.1.1 build. Can you test?

Comment 62 Nir Soffer 2017-02-27 15:11:59 UTC
This is not a vdsm bug; we have never seen any issue with vdsm consuming gluster storage.

The only issue that may be related to vdsm here is lvmetad. Now that lvmetad is
disabled, gluster folks should test their storage again.

Yaniv, can you move this bug to the proper component?

Comment 63 Yaniv Lavi 2017-03-01 09:54:57 UTC
Sahina, can you please move it to the correct product/component?

Comment 64 SATHEESARAN 2017-03-07 02:06:01 UTC
I have checked it with RHEL 7.3 and RHV 4.1.1.

The following changes are in place

1. multipath_component_detection set to 1 in /etc/lvm/lvm.conf
2. use_lvmetad=0 is set in /etc/lvm/lvm.conf
3. lvmetad.socket and lvmetad.service were stopped and masked.

I am waiting to test the same with RHVH 4.1.1, as there are no RHVH 4.1.1 builds available yet.

Comment 65 SATHEESARAN 2017-03-07 02:09:10 UTC
One problem I can still see is that mpath names appear for the devices.
Is it OK that the devices get mpath names?

Referencing the device by its name ( /dev/sdc ) for pvcreate throws an error:

[root@ ~]# lsblk /dev/sdc
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdc                                   8:32   0 223.6G  0 disk  
└─3600508b1001cb48647aab5cfe2384999 253:3    0 223.6G  0 mpath 

[root@ ~]# pvcreate /dev/sdc
  Device /dev/sdc not found (or ignored by filtering).

Comment 66 Nir Soffer 2017-03-07 06:48:30 UTC
(In reply to SATHEESARAN from comment #64)
> I have checked it with RHEL 7.3 and RHV 4.1.1.

...

> The following changes are in place
> 
> 1. multipath_component_detection set to 1 in /etc/lvm/lvm.conf
> 2. use_lvmetad=0 is set in /etc/lvm/lvm.conf
> 3. lvmetad.socket and lvmetad.service were stopped and masked.

None of these changes is needed, vdsm should handle all of this.

> I am waiting to test the same with RHVH 4.1.1 as there are no available
> builds for RHVH 4.1.1

You just said you checked 4.1.1?

Comment 67 Nir Soffer 2017-03-07 06:53:47 UTC
(In reply to SATHEESARAN from comment #65)
> One problem still I could see is mpath names are seen for the devices.
> Is that ok that devices get mpath names ?

Yes, this is normal on a hypervisor managed by vdsm. multipath is configured to
grab all devices, regardless of whether they have multiple paths or not.

> The reference to the devices with the ( /dev/sdc ) for pvcreate throws error:
> 
> [root@ ~]# lsblk /dev/sdc
> NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sdc                                   8:32   0 223.6G  0 disk  
> └─3600508b1001cb48647aab5cfe2384999 253:3    0 223.6G  0 mpath 
> 
> [root@ ~]# pvcreate /dev/sdc
>   Device /dev/sdc not found (or ignored by filtering).

What is /dev/sdc? On a hypervisor managed by vdsm, all devices are accessed via
the multipath device.
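
A quick way to see whether multipath has claimed a given device, as a sketch using standard multipath-tools commands (the device name is the one from the output above):

    # list the current multipath maps and their path devices
    multipath -ll

    # check whether /dev/sdc is treated as a valid multipath path
    # (exit status 0 means multipath would claim it)
    multipath -c /dev/sdc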

Comment 68 SATHEESARAN 2017-03-07 08:45:03 UTC
(In reply to Nir Soffer from comment #66)
> (In reply to SATHEESARAN from comment #64)
> > I have checked it with RHEL 7.3 and RHV 4.1.1.
> 
> ...
> 
> > The following changes are in place
> > 
> > 1. multipath_component_detection set to 1 in /etc/lvm/lvm.conf
> > 2. use_lvmetad=0 is set in /etc/lvm/lvm.conf
> > 3. lvmetad.socket and lvmetad.service were stopped and masked.
> 
> None of these changes is needed, vdsm should handle all of this.

Yes, I haven't made these changes myself. I noticed they were already in place after hosted-engine deployment.
> 
> > I am waiting to test the same with RHVH 4.1.1 as there are no available
> > builds for RHVH 4.1.1
> 
> You just said you checked 4.1.1?

I have checked RHV 4.1.1 with RHEL 7.3 hosts as hypervisors, not RHVH NG, because RHVH 4.1.1 is not yet available.

> > The reference to the devices with the ( /dev/sdc ) for pvcreate throws error:
> > 
> > [root@ ~]# lsblk /dev/sdc
> > NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> > sdc                                   8:32   0 223.6G  0 disk  
> > └─3600508b1001cb48647aab5cfe2384999 253:3    0 223.6G  0 mpath 
> > 
> > [root@ ~]# pvcreate /dev/sdc
> >   Device /dev/sdc not found (or ignored by filtering).
> 
> What is /dev/sdc? In hypervisor managed by vdsm all devices are accessed via
> the multipath device.

/dev/sdc is the SSD device. I wanted to create dm-cache or lvmcache with this device. But when I try to create a PV on this device, I am unable to do so, as the device is no longer identified.

[root@ ~]# pvcreate /dev/sdc
   Device /dev/sdc not found (or ignored by filtering).

Comment 69 SATHEESARAN 2017-03-07 08:51:52 UTC
I think the problem would be solved if both the mpath name ( 3600508b1001cb48647aab5cfe2384999 ) and the device name ( /dev/sdc ) could be used to access the device when creating the PV.

@Nir - What do you think ?
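
As long as the device stays under multipath and LVM filters out the underlying sd node, the PV would normally be created on the multipath node instead; a sketch using the WWID from the lsblk output above:

    pvcreate /dev/mapper/3600508b1001cb48647aab5cfe2384999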

Comment 70 Nir Soffer 2017-03-08 11:48:08 UTC
(In reply to SATHEESARAN from comment #68)
> >> What is /dev/sdc? In hypervisor managed by vdsm all devices are accessed via
> >> the multipath device.
> 
> /dev/sdc is the SSD device. I wanted to create dm-cache or lvmcache with
> this device. But I see that when I create pv with this device, I am unable
> to do so as  this device is no longer identified.
> 
> [root@ ~]# pvcreate /dev/sdc
>    Device /dev/sdc not found (or ignored by filtering).

Multipath should not grab local devices. This is probably a multipath configuration
issue in this setup.

Please attach the output of the following commands to this bug:

lsblk
multipath -v3
tree /sys/class/

Comment 71 SATHEESARAN 2017-03-09 04:57:20 UTC
Created attachment 1261428 [details]
required information

Comment 72 Nir Soffer 2017-03-10 13:54:43 UTC
So, looking at /dev/sdc (the SSD used for cache):

# lsblk

sdc                                                   8:32   0 223.6G  0 disk  
└─3600508b1001cb48647aab5cfe2384999                 253:14   0 223.6G  0 mpath 
  └─gluster_vg_sdb-lvcache_cdata                    253:10   0   220G  0 lvm   
    ...

# multipath -v3

Mar 09 10:18:00 | sdc: not found in pathvec
Mar 09 10:18:00 | sdc: mask = 0x3f
Mar 09 10:18:00 | sdc: dev_t = 8:32
Mar 09 10:18:00 | sdc: size = 468792000
Mar 09 10:18:00 | sdc: vendor = HP      
Mar 09 10:18:00 | sdc: product = LOGICAL VOLUME  
Mar 09 10:18:00 | sdc: rev = 1.34
Mar 09 10:18:00 | sdc: h:b:t:l = 4:1:0:2
Mar 09 10:18:00 | sdc: path state = running

Mar 09 10:18:00 | sdc: 29180 cyl, 255 heads, 63 sectors/track, start at 0
Mar 09 10:18:00 | sdc: serial = PDNMF0ARH7X102  
Mar 09 10:18:00 | sdc: get_state
Mar 09 10:18:00 | sdc: path checker = tur (controller setting)
Mar 09 10:18:00 | sdc: checker timeout = 30000 ms (sysfs setting)
Mar 09 10:18:00 | sdc: state = up
Mar 09 10:18:00 | sdc: uid_attribute = ID_SERIAL (internal default)
Mar 09 10:18:00 | sdc: uid = 3600508b1001cb48647aab5cfe2384999 (udev)
Mar 09 10:18:00 | sdc: detect_prio = 1 (config file default)
Mar 09 10:18:00 | sdc: prio = const (controller setting)
Mar 09 10:18:00 | sdc: prio args = (null) (controller setting)
Mar 09 10:18:00 | sdc: const prio = 1

# tree /sys/class
...
├── block
...
│   ├── sda -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:0/block/sda
│   ├── sda1 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:0/block/sda/sda1
│   ├── sda2 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:0/block/sda/sda2
│   ├── sdb -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:1/block/sdb
│   └── sdc -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:2/block/sdc
...
├── iscsi_connection
├── iscsi_endpoint
├── iscsi_host
├── iscsi_iface
├── iscsi_session
├── iscsi_transport
│   └── bnx2i -> ../../devices/virtual/iscsi_transport/bnx2i
...
├── sas_device
│   └── end_device-4:1 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/port-4:1/end_device-4:1/sas_device/end_device-4:1
├── sas_end_device
│   └── end_device-4:1 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/port-4:1/end_device-4:1/sas_end_device/end_device-4:1
├── sas_expander
├── sas_host
│   └── host4 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/sas_host/host4
├── sas_phy
│   └── phy-4:0 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/phy-4:0/sas_phy/phy-4:0
├── sas_port
│   ├── port-4:0 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/port-4:0/sas_port/port-4:0
│   └── port-4:1 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/port-4:1/sas_port/port-4:1
├── scsi_host
...
│   ├── host4 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/scsi_host/host4
...
├── scsi_device
...
│   └── 4:1:0:2 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:2/scsi_device/4:1:0:2
├── scsi_disk
...
│   └── 4:1:0:2 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:2/scsi_disk/4:1:0:2
├── scsi_generic
...
│   └── sg3 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/target4:1:0/4:1:0:2/scsi_generic/sg3
├── scsi_host
...
│   ├── host4 -> ../../devices/pci0000:00/0000:00:01.0/0000:06:00.0/host4/scsi_host/host4
...

Is /dev/sdc a local SAS disk?

Comment 73 Nir Soffer 2017-03-10 13:59:18 UTC
Ben, do you think it makes sense that multipath grabs /dev/sdc?
(see comment 72).

Can we configure multipath to grab only iSCSI and FC devices, or do we have to
blacklist specific devices?

Comment 74 Nir Soffer 2017-03-10 14:03:08 UTC
Sahina, can you explain what is /dev/sdb (local disk?),
and why do you use multipath with this disk?

sdb                                                   8:16   0   3.7T  0 disk  
└─3600508b1001c62d37236307d4264f319                 253:13   0   3.7T  0 mpath 
  ├─gluster_vg_sdb-gluster_thinpool_sdb_tmeta       253:3    0  15.8G  0 lvm   
  ...
  ├─gluster_vg_sdb-gluster_lv_engine                253:6    0   100G  0 lvm   /gluster_bricks/engine
  ├─gluster_vg_sdb-lvcache_cmeta                    253:11   0    42M  0 lvm   
  ...
  └─gluster_vg_sdb-gluster_thinpool_sdb_tdata_corig 253:12   0     3T  0 lvm   
    ...

Comment 75 Sahina Bose 2017-03-10 14:21:25 UTC
(In reply to Nir Soffer from comment #74)
> Sahina, can you explain what is /dev/sdb (local disk?),
> and why do you use multipath with this disk?

Yes, /dev/sdb is the local SAS disk and /dev/sdc is the local SSD disk.

We do not use or need multipath on either of these devices. But it looks like the multipath entries are created once vdsm is configured.
> 
> sdb                                                   8:16   0   3.7T  0
> disk  
> └─3600508b1001c62d37236307d4264f319                 253:13   0   3.7T  0
> mpath 
>   ├─gluster_vg_sdb-gluster_thinpool_sdb_tmeta       253:3    0  15.8G  0 lvm
> 
>   ...
>   ├─gluster_vg_sdb-gluster_lv_engine                253:6    0   100G  0 lvm
> /gluster_bricks/engine
>   ├─gluster_vg_sdb-lvcache_cmeta                    253:11   0    42M  0 lvm
> 
>   ...
>   └─gluster_vg_sdb-gluster_thinpool_sdb_tdata_corig 253:12   0     3T  0 lvm
> 
>     ...

Comment 76 Ben Marzinski 2017-03-10 18:41:27 UTC
(In reply to Nir Soffer from comment #73)
> Ben, do you think it makes sense that multipath grabs /dev/sdc?
> (see comment 72).
> 
> Can configure multipath to grab only iSCSI and FC devices, or we have to
> blacklist
> specific devices?

It looks like there is only one path to sdc, so if setting

find_multipaths yes

is an option, then that should fix it.  You will also need to run

# multipath -w /dev/sdc

This will remove the device WWID from the wwids file. If you don't do this, multipath will still recognize it as a multipath path device and will create a multipath device for it, even though there is only one path (it will assume that
the other path must be down).

If you can't use find_multipaths to get the correct devices, you need to blacklist them.  There's no way to blacklist by transport protocol, but you don't have to blacklist one device at a time.  You can blacklist either by wwid or by vendor/product.  You can also blacklist all devices and add exceptions for certain ones if that's easier.
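
As a sketch, the find_multipaths variant could look roughly like this in /etc/multipath.conf (illustrative only; the rest of the existing configuration on the host still applies):

    defaults {
        find_multipaths yes
    }

and, as noted above, the recorded WWID would also need to be dropped so the single-path map is not recreated:

    multipath -w /dev/sdc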

Comment 77 Nir Soffer 2017-03-11 12:25:45 UTC
Sahina, I think you need to configure multipath for your special setup.
Vdsm does not know anything about your special devices and cannot configure
multipath for them.

The local devices used for local storage should be blacklisted in 
/etc/multipath.conf.

This means that you have to use a private multipath configuration, starting
with these lines:

# VDSM REVISION 1.3
# VDSM PRIVATE

Vdsm will never touch this file, but it will also never update it when vdsm
changes its multipath configuration. The user or your deployment solution is
responsible for updating the multipath configuration as needed.
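
A minimal private /etc/multipath.conf along these lines might look like the following sketch (the WWIDs are the local sdb/sdc devices from the lsblk output above; any other settings the deployment needs would have to be merged in as well):

    # VDSM REVISION 1.3
    # VDSM PRIVATE

    blacklist {
        wwid 3600508b1001c62d37236307d4264f319
        wwid 3600508b1001cb48647aab5cfe2384999
    }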

Comment 78 SATHEESARAN 2017-04-05 09:50:14 UTC
There are no more hangs or duplicate PV messages seen.
lvm2-lvmetad.socket and lvm2-lvmetad.service are masked and not running with the latest RHVH 4.1.

But local disks still need to be blacklisted, as mpath names are populated for the devices.