Bug 1187481 - lvm2 2.02.115 breaks storaged, which breaks Cockpit
Summary: lvm2 2.02.115 breaks storaged, which breaks Cockpit
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 21
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Peter Rajnoha
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On: 1211287 1220344
Blocks: 1211183
 
Reported: 2015-01-30 07:34 UTC by Marius Vollmer
Modified: 2015-05-11 11:48 UTC
14 users

Fixed In Version: lvm2-2.02.116-3.fc21
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1211183 (view as bug list)
Environment:
Last Closed: 2015-02-02 17:23:39 UTC
Type: Bug



Description Marius Vollmer 2015-01-30 07:34:41 UTC
Description of problem:

lvm2app has regressed with regard to the "lv_attr" property: lvm_lv_get_property no longer delivers a value for it.

This means that storaged treats all logical volumes as inactive and doesn't recognize thin pools and snapshots anymore, among other things. 

Switching from lvm_lv_get_property to lvm_lv_get_attr is not enough.  The latter returns an incomplete attribute string in which the 'active' and 'open' fields are set to "unknown".

Version-Release number of selected component (if applicable):

lvm2-2.02.115-1.fc21.x86_64

How reproducible:

Always.

Steps to Reproduce:
1. Create a thin pool with Cockpit

Actual results:

The thin pool is created but shown as an inactive plain logical volume.

Expected results:

The pool is shown as an active thin pool.

Additional info:

https://www.redhat.com/archives/lvm-devel/2015-January/msg00075.html

Comment 1 Marius Vollmer 2015-01-30 07:45:55 UTC
Looks like lvm2app is unmaintained and storaged should be ported away from it.  Would it make sense to remove lvm2app from Fedora (and maybe the world)?

Comment 2 Peter Rajnoha 2015-01-30 09:03:32 UTC
Currently, there are several changes underway in the reporting area of the LVM2 code. One of those changes is that we're trying to reuse the same "info" and "status" ioctl results for each reporting line, removing the need to issue several ioctls per line of report output, which can save some time and resources.

Unfortunately, in one of these recent changes, where we converted the lv_attr field (and hence its lvm2app counterpart, lvm_lv_get_attr), we introduced a regression that went unnoticed: it caused some of the "info" and "status" attributes in lv_attr (and in the string returned by lvm_lv_get_attr) to carry the "unknown" ("X") value.

Sorry for that! It should be fixed by this patch now:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=531cc58d899b7dd1421bfe83f5503aa266390334

I'll do a new update with this fix in for f21.

Comment 3 Marius Vollmer 2015-01-30 09:11:07 UTC
> It should be fixed by this patch now:

Great, thanks!

From reading the patch, I think this doesn't fully fix the regression, unfortunately.

lvm_lv_get_property can't find the "lv_attr" attribute at all anymore, because its type was changed from LVS to LVSSTATUSINFO, as far as I can tell.

We can easily update storaged to use lvm_lv_get_attr, though.

What do you propose?

Comment 4 Marius Vollmer 2015-01-30 09:14:03 UTC
Correction: snapshots are recognized.

Comment 5 Peter Rajnoha 2015-01-30 09:34:04 UTC
I've just looked at lvm_lv_get_property. You can also use lvm_lv_get_attr - internally, the same code produces the value. lvm_lv_get_attr is a direct call that acquires the attr string, while lvm_lv_get_property is the generic interface for all fields, with some extra wrappers to handle them in a general way; but underneath, for lv_attr, it returns exactly the same value as lvm_lv_get_attr.

Comment 6 Marius Vollmer 2015-01-30 10:04:15 UTC
> You can also use lvm_lv_get_attr - internally, it's the same code which gets the value 

Yes, but we need to change storaged for that and rush the new version into Fedora.  That's fine, but if you are going to fix the lvm_lv_get_property regression as well, we don't even need to do that.

Comment 7 Peter Rajnoha 2015-01-30 10:08:58 UTC
Yes, if you wait a bit, I think I can fix that. Hopefully today.

Comment 8 Peter Rajnoha 2015-01-30 10:22:54 UTC
So this one should fix the issue with lvm_lv_get_property:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=8650404df1884f1e040aad4343c8db0eac71e125

I've tested that on my machine, but I'll do a scratch build for you to test too, just to be sure.

Comment 9 Peter Rajnoha 2015-01-30 10:36:12 UTC
F21 scratch build here:
http://koji.fedoraproject.org/koji/taskinfo?taskID=8781186

Comment 10 Marius Vollmer 2015-01-30 10:44:50 UTC
> F21 scratch build here:

Awesome!  Testing...

Comment 11 Marius Vollmer 2015-01-30 10:55:14 UTC
(In reply to Marius Vollmer from comment #10)
> > F21 scratch build here:
> 
> Awesome!  Testing...

Works as expected.  lvm_lv_get_property returns good values for "lv_attr" and storaged recognizes volumes as thin pools again.

Thanks!

Comment 12 Peter Rajnoha 2015-01-30 11:35:50 UTC
OK, there's going to be a new upstream release today anyway, so I'll do rawhide and F21 pkgs then.

Comment 13 Fedora Update System 2015-01-30 17:42:54 UTC
lvm2-2.02.116-3.fc21 has been submitted as an update for Fedora 21.
https://admin.fedoraproject.org/updates/lvm2-2.02.116-3.fc21

Comment 14 Fedora Update System 2015-02-01 00:26:24 UTC
Package lvm2-2.02.116-3.fc21:
* should fix your issue,
* was pushed to the Fedora 21 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing lvm2-2.02.116-3.fc21'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2015-1504/lvm2-2.02.116-3.fc21
then log in and leave karma (feedback).

Comment 15 Fedora Update System 2015-02-02 17:23:39 UTC
lvm2-2.02.116-3.fc21 has been pushed to the Fedora 21 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 16 Marius Vollmer 2015-02-03 08:02:46 UTC
Thanks, excellent service, will shop again!

