Bug 1571939

Summary: after vgcreate, lvcreate hangs on semop(65536, [{0, 0, 0}], 1
Product: [Community] LVM and device-mapper
Component: lvm2
Sub component: Default / Unclassified
Version: 2.02.171
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: urgent
Priority: unspecified
Reporter: Brian J. Murrell <brian.murrell>
Assignee: Joe Thornber <thornber>
QA Contact: cluster-qe <cluster-qe>
CC: agk, brian.murrell, heinzm, jbrassow, msnitzer, prajnoha, thornber, zkabelac
Flags: rule-engine: lvm-technical-solution?
       rule-engine: lvm-test-coverage?
Type: Bug
Last Closed: 2018-04-25 21:06:15 UTC
Attachments:
  vgcreate -vvvv output
  lvcreate -vvvv output
  vgcreate -vvvv output
  lvcreate -vvvv output

Description Brian J. Murrell 2018-04-25 18:33:55 UTC
Created attachment 1426783 [details]
vgcreate -vvvv output

Description of problem:
Intermittently, but it seems somewhat frequently, an lvcreate after a vgcreate will hang on what strace reports as:

semop(65536, [{0, 0, 0}], 1

Version-Release number of selected component (if applicable):
lvm2-2.02.171-8.el7.x86_64

How reproducible:
Intermittently

Steps to Reproduce:
1. vgcreate -vvvv --yes vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk1
2. lvcreate -vvvv --yes --wipesignatures n -l 100%FREE --name lv_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1 vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1

Actual results:
lvcreate hangs.  When observed with strace it's hanging on:

semop(65536, [{0, 0, 0}], 1

Expected results:
lvcreate should create the LV and return.

Additional info:
Please find attached the -vvvv detailed debug for both vgcreate and lvcreate

Comment 1 Brian J. Murrell 2018-04-25 18:34:44 UTC
Created attachment 1426784 [details]
lvcreate -vvvv output

Comment 2 Brian J. Murrell 2018-04-25 19:13:45 UTC
Just hit another one.  Here's some dmsetup information for this instance:

# dmsetup table
vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk11-lv_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk11: 0 20963328 linear 8:64 2048
vg_00-lv_swap: 0 4104192 linear 252:2 2048
vg_00-lv_root: 0 36806656 linear 252:2 4106240
# dmsetup status
vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk11-lv_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk11: 0 20963328 linear 
vg_00-lv_swap: 0 4104192 linear 
vg_00-lv_root: 0 36806656 linear 
# dmsetup info -c
Name                                                                                      Maj Min Stat Open Targ Event  UUID                                                                
vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk11-lv_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk11 253   2 L--w    0    1      0 LVM-gKPZvhgpg4wI7diAuPi0zG9b50tH95Lj7voaQFBBFzO0wNrdkVMKwmRnJsl2HYrR
vg_00-lv_swap                                                                             253   1 L--w    2    1      0 LVM-LalLNtcRR88ct9zVi7LZsCPqMh0xk8BwygFpRrso5B2MhWxBRnFAZaKa7zx3CU4B
vg_00-lv_root                                                                             253   0 L--w    1    1      0 LVM-LalLNtcRR88ct9zVi7LZsCPqMh0xk8Bw1kbP9byPmrYSewsEgHdmkr9QyNBkVJdu
# dmsetup udevcookies
Cookie       Semid      Value      Last semop time           Last change time
0xd4db847    65536      1          Wed Apr 25 11:21:28 2018  Wed Apr 25 11:21:28 2018

This is all while the lvcreate is still hanging on the semop.  I will attach the -vvvv debug logs.

Comment 3 Brian J. Murrell 2018-04-25 19:14:15 UTC
Created attachment 1426793 [details]
vgcreate -vvvv output

Comment 4 Brian J. Murrell 2018-04-25 19:14:40 UTC
Created attachment 1426794 [details]
lvcreate -vvvv output

Comment 5 Zdenek Kabelac 2018-04-25 19:22:31 UTC
Well, the hang is just a wait on udev rules to confirm the transaction.

So is the clean & clear installation on  VM ?
Udev rules have not been modified ?

Aren't you trying to use lvm2 in some cgroup environment ?
(lvm2 must be only run within the same world udev is running)

Is udev actually running ?

Can you grab log of udev in this system ?

Comment 6 Brian J. Murrell 2018-04-25 20:09:02 UTC
(In reply to Zdenek Kabelac from comment #5)
> 
> So is the clean & clear installation on  VM ?

Yes.

> Udev rules have not been modified ?

What kind of modifications are you referring to?  No admin/user has touched Udev rules after the installation, no, but of course various software packages installed are going to have installed their own udev rules.
 
> Aren't you trying to use lvm2 in some cgroup environment ?

I don't do anything specific about cgroups, so I have to assume they are in the same cgroup.

Given that this happens intermittently when running the exact same recipe (i.e. scripted) I have to assume that generally things are right given that it doesn't fail on every run of the script.

> Is udev actually running ?

Yes.

> Can you grab log of udev in this system ?

Since I have to wait for occurrences of this to happen and then release the host once I have gathered information, I want to make sure I am grabbing what you are looking for on the next occurrence.  Can you give me the command(s) you would like the output from?

Comment 7 Brian J. Murrell 2018-04-25 20:16:43 UTC
Perhaps this is what you are looking for:

Apr 25 11:56:41 localhost systemd[1]: Starting udev Kernel Device Manager...
Apr 25 11:56:41 localhost systemd-udevd[247]: starting version 219
Apr 25 11:56:41 localhost systemd-udevd[247]: Network interface NamePolicy= disabled on kernel command line, ignoring.
Apr 25 11:56:41 localhost systemd[1]: Started udev Kernel Device Manager.
Apr 25 11:56:42 localhost systemd[1]: Stopping udev Kernel Device Manager...
Apr 25 11:56:43 localhost systemd[1]: Stopped udev Kernel Device Manager.
Apr 25 11:56:43 localhost systemd[1]: Starting udev Kernel Device Manager...
Apr 25 11:56:43 localhost systemd-udevd[500]: starting version 219
Apr 25 11:56:43 localhost systemd-udevd[500]: Network interface NamePolicy= disabled on kernel command line, ignoring.
Apr 25 11:56:43 localhost systemd[1]: Started udev Kernel Device Manager.
Apr 25 11:58:29 lotus-59vm5 systemd-udevd[500]: worker [1339] /devices/virtual/block/dm-2 is taking a long time
Apr 25 12:00:30 lotus-59vm5 systemd-udevd[1339]: timeout '/usr/sbin/vgs --no-headings --units b -o size vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1'
Apr 25 12:00:30 lotus-59vm5 systemd-udevd[1339]: slow: '/usr/sbin/vgs --no-headings --units b -o size vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1' [1380]
Apr 25 12:00:31 lotus-59vm5 systemd-udevd[500]: worker [1339] /devices/virtual/block/dm-2 timeout; kill it
Apr 25 12:00:31 lotus-59vm5 systemd-udevd[500]: seq 1825 '/devices/virtual/block/dm-2' killed
Apr 25 12:00:31 lotus-59vm5 systemd-udevd[500]: worker [1339] terminated by signal 9 (Killed)

On the same system:

root      1355  1319  0 11:57 ?        00:00:00 lvcreate -vvvv --yes --wipesignatures n -l 100%FREE --name lv_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1 vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1

But is the "worker" (vgs I suppose) just also blocking on the same semaphore?  Notice that the lvcreate was started before the complaints from udev.

Comment 8 Zdenek Kabelac 2018-04-25 20:23:10 UTC
Yep, your comment 7 seems to explain a lot.

The device you are creating is likely located on some unusable or very slow PV device - thus udev is not able to finish processing of this device within 30 seconds and kills the whole rule.

This leaves the whole transaction unfinished.


So what is the disk you are trying to use ?

Can you do some read of the device ?

Like  'dd if=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk1 of=...'

Do you obtain a lot of data this way ?

How fast ?

Comment 9 Brian J. Murrell 2018-04-25 20:35:38 UTC
# dd if=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 51.789 s, 207 MB/s

# /usr/sbin/vgs --no-headings --units b -o size vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1
^C  Interrupted...
  Giving up waiting for lock.
  /run/lock/lvm/V_vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1: flock failed: Interrupted system call
  Can't get lock for vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1
  Cannot process volume group vg_devdiskbyidscsi0QEMU_QEMU_HARDDISK_disk1

So vgs is still hanging but it's not "slow" it's hanging on the same lock as lvcreate.  Did you notice that all of the complaints from udev about vgs (starting at 11:58:29) are after lvcreate was started (at 11:57)?  What makes you think that vgs is not just being blocked by the same lock as/from lvcreate?

Comment 10 Brian J. Murrell 2018-04-25 20:44:57 UTC
That vgs command is coming from the following udev rule:

ACTION=="add|change", ENV{DM_VG_NAME}=="?*", PROGRAM="/usr/sbin/vgs --no-headings --units b -o size $env{DM_VG_NAME}", RESULT=="?*", ENV{IML_DM_VG_SIZE}="$result"

Is it possible that vgs being run on that udev event is triggering some kind of race?

Comment 11 Zdenek Kabelac 2018-04-25 20:49:33 UTC
Hmm from where you get that rule ???

When I've asked if this is a clean installation - I'm pretty sure this does not come with the default lvm2 package.

You cannot run 'vgs' command in the middle of udev rule.

While udev rule is executed -  lvcreate still holds  'write' lock for VG - thus  'vgs' command cannot obtain  'read' lock for VG - thus it deadlocks.


So yep - please remove your invalid udev rule - and lvcreate should start to work.


If the udev rule comes from some other RHEL package - we will need to reassign this BZ to that broken package.

Comment 12 Brian J. Murrell 2018-04-25 20:56:59 UTC
(In reply to Zdenek Kabelac from comment #11)
> Hmm from where you get that rule ???

It comes from a software package we are developing.

> When I've asked if this is clean installation

What is a "clean installation"?  Base O/S + LVM and nothing else?  What use is such a system?  Is LVM only expected to work on installations with no other software?  I'm not trying to be difficult.  I'm just trying to point out that LVM has to be able to live with other software to be useful.

> You cannot run 'vgs' command in the middle of udev rule.
> 
> While udev rule is executed -  lvcreate still holds  'write' lock for VG -
> thus  'vgs' command cannot obtain  'read' lock for VG - thus it deadlocks.

Why is that a "deadlock"?  Doesn't vgs's read lock just block until the lvcreate write lock is released?

How is this not just a basic race?  What if two users on the same system were to race in the same way, with one running vgs right in the middle of another user running vgcreate; lvcreate?

> So yep - please remove your invalid udev rule - and lvcreate should start to
> work.

Let's first answer how this is not just a basic race that could happen without udev before we start to call the udev rule invalid, yes?

Comment 13 Zdenek Kabelac 2018-04-25 21:06:15 UTC
(In reply to Brian J. Murrell from comment #12)
> (In reply to Zdenek Kabelac from comment #11)
> > Hmm from where you get that rule ???
> 
> It comes from a software package we are developing.
> 
> > When I've asked if this is clean installation
> 
> What is a "clean installation"?  Base O/S + LVM and nothing else?  What use

The point was having an unmodified 'clean & clear' RHEL (so udev rules come from the distro)


> > While udev rule is executed -  lvcreate still holds  'write' lock for VG -
> > thus  'vgs' comamnd cannot obtain  'read' lock for VG - thus it deadlocks.
> 
> Why is that a "deadlock"?  Doesn't vgs's read lock just block until the
> lvcreate write lock is released?

Nope - as said - lvcreate waits till 'udev' is finished - so the device is ready for the user - otherwise 'lvcreate' could have exited too early - before all the symlinks are there - so you would be able to hit a race with commands like:

lvcreate && mkfs -   where mkfs might not find the symlink /dev/vg/lv

> How is this not just a basic race?  What if two users on the same system were to
> race in the same way with one running vgs right in the middle of another
> user running vgcreate; lvcreate?

udev rules are very specific - and only a limited set of commands is allowed
to be executed there - you can't put any random command there.

If you insist on running 'vgs' inside udev rules (though I've actually no idea why this can be useful in any way) - there are 2 things you need to be aware of:

1st.  a udev rule should NOT access any other disk except the one currently scanned -  a very important rule - and vgs will try to access all devices on your system!

2nd.  'vgs' can run lockless by setting  locking_type=0 via the --config option - but it's still a seriously bad plan - so please consider solving this issue differently, without modifying udev rules.

Comment 14 Brian J. Murrell 2018-04-26 12:04:53 UTC
FWIW, we use vgs in a udev rule because the size of the VG isn't provided as a property in any other existing udev event (as far as we have been able to determine) and the size of the VG is useful to us.

Comment 15 Zdenek Kabelac 2018-04-26 12:43:17 UTC
(In reply to Brian J. Murrell from comment #14)
> FWIW, we use vgs in a udev rule because the size of the VG isn't provided as
> a property in any other existing udev event (as far as we have been able to
> determine) and the size of the VG is useful to us.

I'm pretty sure that whatever you do here with the VG size does *NOT* belong to udev rule processing.

There is a pretty strict rule - whatever a udev rule is doing with a block device - it should always access only that one single device and nothing else - and this clearly does not apply to commands like 'vgs'.

You most probably want to have something like a systemd generator service - to fire an action once the device is ready....
But surely there are many other solutions - just none of them can use 'vgs' in the middle of udev rules...

Comment 16 Brian J. Murrell 2018-04-26 12:54:19 UTC
I think my point about VG size is that it would be useful if that were added as one of the properties to the udev event that gets fired when an LV is created.  Better would be a udev event when the VG was created so that we'd only have to deal with it once, but that's a bigger change.

Are systemd generators really the solution?

Generators are small executables that live in /usr/lib/systemd/system-generators/ and other directories listed above. systemd(1) will execute those binaries very early at bootup and at configuration reload time.

It doesn't seem like generators are run when block devices "appear".  Indeed, that seems to fit very well into the job of udev rather than systemd.  Or is there something about systemd generators and block device appearance that is not being covered in https://www.freedesktop.org/software/systemd/man/systemd.generator.html?

I'm still not sure I agree that this is not a bug, when it comes right down to it.

I understand how udev needs to be able to create the links when an lvcreate is run, and I am by no means suggesting that that should be asynchronous as you are implying that I am.

I also completely understand that in order to create those links, udev runs LVM (and other) commands to gather the information needed to create the links.

So ultimately vgs is deadlocking with some other LVM command (run by udev) is it not?

Also, given that we run vgs at udev rules position 99, why is LVM's udev processing not complete by the time our udev rule is being run?

Comment 17 Zdenek Kabelac 2018-04-26 13:02:04 UTC
lvm2 is using this mechanism to fire  'pvscan'  whenever a new device appears in the system.


> I'm still not sure I agree that this is not a bug, when it comes right down to it.

I'm afraid there is not much else you can do...
It's the core design of udev.

> So ultimately vgs is deadlocking with some other LVM command (run by udev) is it not?


Yes, 'vgs' is waiting for the VG lock till 'lvcreate' drops the 'write' VG lock - which is held till the created LV device is ready in the system (so the write lock is dropped once udev confirms the transaction).


Please drop the idea of using  'vgs' inside udev rules - it's not meant to work.

Comment 18 Brian J. Murrell 2018-04-26 13:30:57 UTC
(In reply to Zdenek Kabelac from comment #17)
> 
> Yes, 'vgs' is waiting for the VG lock till 'lvcreate' drops the 'write' VG lock
> - which is held till the created LV device is ready in the system (so the
> write lock is dropped once udev confirms the transaction).

And so why is this write lock not being dropped eventually, allowing vgs to proceed?  If vgs is waiting for a lock, why is it able to prevent lvcreate (and the rest of the lvm commands that udev runs to create the device links) from completing?

> Please drop the idea of using  'vgs' inside udev rules - it's not meant to
> work.

My point here isn't entirely on pursuing using vgs in udev rules.  It's to determine if there is a more fundamental deadlock bug in LVM here that us wanting to run vgs in udev has uncovered.

It's also still not clear why lvcreate and its udev link creation is not complete before our vgs is even being run at udev rule position 99.

Comment 19 Zdenek Kabelac 2018-04-26 13:46:19 UTC
1. There is no locking bug in lvm2.

2. The only bug I can see is the bug in your udev rules - where you are using the wrong command.

3. The deadlock chain has already been explained in the comments above.

4. There is no plan to drop the 'VG' write lock in lvcreate prior to waiting on udev, just to let users run lvm commands inside udev rules while processing created LVs - it's there for a very good reason.

Comment 20 Brian J. Murrell 2018-04-26 14:21:46 UTC
(In reply to Zdenek Kabelac from comment #19)
> 1. There is no locking bug in lvm2.

How can you be so sure of that?  You haven't really concretely answered all of my questions which raise the issue of a potential bug.

From the very first response you have been defensive and dismissive to the point of not even wanting to consider that there might be a bug causing the deadlock.

Why can't you stop being so defensive and just step back and look objectively at the situation for a moment?

> 2. The only bug I can see is the bug in your udev rules - where you are
> using the wrong command.

Yet it has not been completely explained why it is wrong and why the problem it's causing is not a bug.
 
> 3. The deadlock chain has already been explained in the comments above.

No it hasn't.  There are hand-wavy explanations above but nothing that concretely explains why a deadlock occurs and why it's not a bug.

Locking on a resource is supposed to arbitrate access to the resource, not deadlock consumers of the resource.

> 4. There is no plan to drop the 'VG' write lock in lvcreate prior to waiting
> on udev, just to let users run lvm commands inside udev rules while
> processing created LVs - it's there for a very good reason.

I'm not asking you to drop locking.  I don't see anywhere above where I asked for that.

Comment 21 Joe Thornber 2018-04-26 14:42:49 UTC
Hi Brian,

The issue is occurring because lvcreate is waiting on udev before it releases the vg lock, and your new udev rule is calling vgs which needs that lock.

so we have a loop:

lvcreate (hold vg lock) -> udev -> vgs (wants vg lock) ... hang


lvcreate holding the vg lock while activation completes is pretty fundamental in the design, so that's not going to change I'm afraid.  You'll have to be more circumspect with your udev rules.

As a side issue you should consider 'vgs' to be an expensive operation; potentially scanning all disks.  Running it every time a dev appears seems overkill.

Comment 22 Brian J. Murrell 2018-04-26 14:54:26 UTC
(In reply to Joe Thornber from comment #21)
> Hi Brian,

Hi Joe.
 
> The issue is occurring because lvcreate is waiting on udev before it
> releases the vg lock, and your new udev rule is calling vgs which needs that
> lock.

Right.  I understand that much.
 
> so we have a loop:
> 
> lvcreate (hold vg lock) -> udev -> vgs (wants vg lock) ... hang

Why does vgs *wanting* a lock that lvcreate is holding cause a hang though?  (Preaching to the choir, but to lay some groundwork) the point of locking is so that one consumer (lvcreate) can continue uninterrupted while it does its work, and any other consumer (vgs) that would otherwise contend will either be refused the resource (the lock) or (told/choose to) wait for it.  But that second consumer simply *wanting* the lock should not be able to interfere with the first one who already has it.

> lvcreate holding the vg lock while activation completes is pretty
> fundamental in the design, so that's not going to change I'm afraid.

Absolutely agreed.  I'm not asking for that to change.  I'm trying to understand why vgs is able to deadlock lvcreate once lvcreate already has the lock.

> As a side issue you should consider 'vgs' to be an expensive operation;
> potentially scanning all disks.  Running it every time a dev appears seems
> overkill.

I don't disagree.  Unfortunately in the udev event that fires when the LV is created, the size of the VG is not available and there is no udev event when a VG is created, which admittedly would seem like a more logical place for the VG size.

Comment 23 Joe Thornber 2018-04-26 15:09:06 UTC
> Absolutely agreed.  I'm not asking for that to change.  I'm trying to understand why vgs is able to deadlock lvcreate once lvcreate already has the lock.

i) lvcreate has the lock
ii) lvcreate is waiting for udev
iii) udev is waiting for vgs
iv) vgs is waiting for the lock
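The cycle can be reproduced in miniature with flock(), which is what these VG locks are built on (the error in comment 9 shows the flock on /run/lock/lvm/V_<vgname>).  A minimal sketch, using a temporary file in place of the real lock file and a non-blocking attempt so the demo fails fast instead of hanging:

```python
import fcntl
import os
import tempfile

# Stand-in for /run/lock/lvm/V_<vgname> (a hypothetical path for the demo).
lockfile = os.path.join(tempfile.mkdtemp(), "V_vg_demo")

writer = open(lockfile, "w")           # plays the role of lvcreate
fcntl.flock(writer, fcntl.LOCK_EX)     # lvcreate takes the VG write lock

reader = open(lockfile, "w")           # plays the role of vgs, run by udev
try:
    # vgs asks for a shared (read) lock; non-blocking here so the demo
    # returns instead of hanging the way the real vgs does
    fcntl.flock(reader, fcntl.LOCK_SH | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True                     # the real vgs blocks here forever

print("vgs would block:", blocked)
```

On its own this is just ordinary lock contention; it becomes a deadlock only because the writer's release (step ii) is itself waiting on the reader's caller (step iii).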

Comment 24 Peter Rajnoha 2018-04-26 15:25:37 UTC
(In reply to Brian J. Murrell from comment #10)
> ACTION=="add|change", ENV{DM_VG_NAME}=="?*", PROGRAM="/usr/sbin/vgs
> --no-headings --units b -o size $env{DM_VG_NAME}", RESULT=="?*",
> ENV{IML_DM_VG_SIZE}="$result"

How are you using that information? In another udev rule? Or are you accessing the udev database through libudev then to get this information?

The usual way in these cases is to create a udev monitor (using libudev); then, when the uevent has finished processing the udev rules, you get notified and can execute your actions if you need to...
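That monitor approach might be sketched with the pyudev bindings for libudev (a sketch only, assuming pyudev is available; the vgs invocation is the one from the rule in comment 10, and the print stands in for whatever would consume IML_DM_VG_SIZE):

```python
import subprocess

import pyudev

context = pyudev.Context()
# The default netlink source is 'udev': events are delivered only after
# udev has finished processing all rules for the uevent.
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by('block')

for device in iter(monitor.poll, None):
    vg = device.get('DM_VG_NAME')
    if device.action in ('add', 'change') and vg:
        # vgs may still wait briefly for the VG lock here, but it can no
        # longer block udev itself, so the deadlock cannot occur.
        size = subprocess.check_output(
            ['/usr/sbin/vgs', '--no-headings', '--units', 'b', '-o', 'size', vg])
        print(vg, size.strip().decode())
```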

Comment 25 Brian J. Murrell 2018-04-26 15:39:28 UTC
(In reply to Joe Thornber from comment #23)
> 
> i) lvcreate has the lock
> ii) lvcreate is waiting for udev
> iii) udev is waiting for vgs
> iv) vgs is waiting for the lock

In the above, "waiting for udev" really means waiting for some LVM commands that udev runs?  Those must be commands that don't want the lock that lvcreate took out but must block while vgs is running, yes?  Perhaps another lock?

With our vgs rule at position 99 in the udev rules list, why is it being executed before lvcreate (and its udev-triggered "workers") is complete and has released the lock?

Is something that udev runs, and that lvcreate is waiting on, running asynchronously - so that it (the thing udev runs) returns before it's actually complete (and before letting lvcreate complete), allowing udev to continue processing and reach our rule calling vgs?

Comment 26 Brian J. Murrell 2018-04-26 15:47:10 UTC
(In reply to Peter Rajnoha from comment #24)
> 
> How are you using that information?

I have asked the developer that is writing that bit of code why we need it.

But generally speaking we are writing a disk management system and so we use udev to build and maintain a representation of the devices in the machine for the management interface.  We of course need to catalogue not only the disks but how they are being used, with LVM devices being just one possibility.

> The usual way in these cases is to create a udev monitor (using libudev) and
> then when the uevent finished processing the udev rules, you can get
> notified and then you can execute your actions if you need to...

But if the rules are ordered correctly (ours is at 99) and run sequentially, how is that any different than just having udev execute the action directly?

Comment 27 Zdenek Kabelac 2018-04-26 16:00:26 UTC
So let's put here some udev knowhow:

PROGRAM rules are executed in place to collect their output and use it in the rule.

RUN rules are collected and executed after all udev rules are processed and links are created.

Your 'vgs' is using 'PROGRAM'
lvm2 95 udev notification uses 'RUN'

I hope this makes it clear where the deadlock is coming from.

As a suggestion for:

https://github.com/intel-hpdd/device-scanner/pull/162

an ugly hack can be (as mentioned in comment 13):

'vgs --config 'global/locking_type=0'....'

but as has also been pointed out several times - 'vgs' will scan all devices in your system.
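The PROGRAM/RUN distinction can be illustrated with a hypothetical rule pair (the DEMO_READY key and the logger call are invented for illustration, not part of any shipped rules):

```
# PROGRAM runs inline, while rules for this uevent are still being
# processed and before the device links exist; its output is consumed
# by this same rule via RESULT:
ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/echo inline", RESULT=="inline", ENV{DEMO_READY}="1"

# RUN+= is merely collected at this point; udev executes it only after
# all rules for the uevent have been processed and the links created:
ACTION=="add|change", KERNEL=="dm-*", ENV{DEMO_READY}=="1", RUN+="/usr/bin/logger demo device ready"
```

Since the vgs rule in comment 10 uses PROGRAM, it runs before lvm2's own 95-position RUN notification can confirm the transaction.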

Comment 28 Alasdair Kergon 2018-04-27 00:29:40 UTC
Seriously, don't mess about setting locking_type like that!  It will just cause you more problems.

We added a supported --readonly option for these situations.

      --readonly
              Run the command in a special read-only mode which will read
              on-disk metadata without needing to take any locks.  This can
              be used to peek inside metadata used by a virtual machine
              image while the virtual machine is running.  It can also be
              used to peek inside the metadata of clustered Volume Groups
              when clustered locking is not configured or running.  No
              attempt will be made to communicate with the device-mapper
              kernel driver, so this option is unable to report whether or
              not Logical Volumes are actually in use.
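Applied to the rule quoted in comment 10, that would look something like the following (a sketch only; per comments 15 and 21, vgs still scans all devices, so running it from a udev rule remains discouraged even with --readonly):

```
ACTION=="add|change", ENV{DM_VG_NAME}=="?*", PROGRAM="/usr/sbin/vgs --readonly --no-headings --units b -o size $env{DM_VG_NAME}", RESULT=="?*", ENV{IML_DM_VG_SIZE}="$result"
```

Because --readonly reads on-disk metadata without taking the VG lock, this sidesteps the lvcreate -> udev -> vgs lock cycle described above.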