Description of problem:
Running "pvs" takes an exclusive lock which prevents running multiple pvs commands concurrently.
Even if different filters are used (which contain entirely different devices) the commands are still serialized.
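For example (a reproduction sketch, not from the original report; /dev/sdb and /dev/sdc are placeholder devices), two invocations whose filters select disjoint devices still serialize on the global lock:

  pvs --config 'devices { filter = [ "a|/dev/sdb$|", "r|.*|" ] }' /dev/sdb &
  pvs --config 'devices { filter = [ "a|/dev/sdc$|", "r|.*|" ] }' /dev/sdc &
  wait
  # The second pvs blocks until the first releases the global lock.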
What is the full 'pvs' command line you are using? (Perhaps an alternative command would suffice, or perhaps that specific form of the command doesn't need the lock even though it takes it by default.)
pvs --config (filter with one device) PV_NAME
with or without additional fields, but I don't think that should matter.
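Spelled out, that form looks something like this (an illustration; the device path is a placeholder):

  pvs --config 'devices { filter = [ "a|/dev/sdb$|", "r|.*|" ] }' /dev/sdb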
Which fields? PV fields? VG fields? LV fields? It makes a difference.
And do you know anything about the PV in advance?
For example, do you know that it always belongs to a VG - or might it be orphaned? If it belongs to a VG, do you know the name of that VG already or not?
And many times we know the name of the VG, yes, but if this PV does not contain an mda...
If you're only accessing PV fields, it's irrelevant whether there's an mda or not.
Well - please just provide the list of fields so we can see if there's anything we can do. The global lock is certainly required by pvs in the general case, but perhaps your specific case doesn't need it - I don't know.
What we run today is:

  lvm pvs --noheadings --units b --nosuffix --separator '|' -o uuid,name,size,vg_name,vg_uuid,pe_count,pe_alloc_count,mda_count
So it prints fields that have to be parsed from the metadata (not only fields from the PV header). This is basically a request either for some unlocked way to read the metadata, or for a performance improvement when reading it.
So *if* lvmetad is in use, can we skip setting lock_global in process_each_pv() and rely on individual locks instead?
Hmm. I think it should be OK to not take the lock if we are talking to lvmetad, but we should double-check that no other code that relies on the lock (beyond just reading metadata) has crept in.
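For anyone verifying this: a quick way to confirm that lvmetad is actually in use on a host, assuming a recent lvm2 build that ships lvmconfig and the lvm2-lvmetad socket unit:

  lvmconfig global/use_lvmetad             # expect: use_lvmetad=1
  systemctl is-active lvm2-lvmetad.socket  # expect: active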
Pushed upstream as e5709a3..b19f840.
Marking as SanityOnly; QA will mark it as verified upon completion of the last REG test run.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.