Bug 1573960 - lvconvert - don't return success doing -m conversion on degraded raid1 LV
Summary: lvconvert - don't return success doing -m conversion on degraded raid1 LV
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-02 15:19 UTC by Heinz Mauelshagen
Modified: 2021-09-03 12:39 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.178-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:02:26 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Product Errata  RHBA-2018:3193  0        None      None    None     2018-10-30 11:03:20 UTC

Description Heinz Mauelshagen 2018-05-02 15:19:34 UTC
Description of problem:
"lvconvert -mN RaidLV" was used to try repairing a degraded raid1 LV
instead of the recommended "lvconvert --repair RaidLV"

Version-Release number of selected component (if applicable):
2.02.177(2)-RHEL7

How reproducible:
Always

Steps to Reproduce:
1. create e.g. 2-legged raid1 LV in VG with 2 PVs
2. fail 1 PV
3. run "lvconvert -m1 RaidLV"

Actual results:
Leaves the raid1 LV degraded but returns success

Expected results:
The command should not return success, because the degraded raid1 LV still has 2 legs (one of them inoperative). An error should be reported in this degraded situation.

Additional info:
https://bugzilla.redhat.com/show_bug.cgi?id=1572528 documents the bogus "lvconvert -m ..." usage and behaviour.

Comment 2 Heinz Mauelshagen 2018-05-03 16:52:21 UTC
lvm2 upstream commit 4ebfd8e8eb68442efc334b35bc1f22eda3e4dd3d
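
To inspect that commit locally, something like the following should work (the sourceware.org clone URL is an assumption; use whichever lvm2 upstream mirror you normally pull from):

# git clone https://sourceware.org/git/lvm2.git
# cd lvm2
# git show 4ebfd8e8eb68442efc334b35bc1f22eda3e4dd3d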

Comment 3 Red Hat Bugzilla Rules Engine 2018-05-03 16:52:30 UTC
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Comment 7 Roman Bednář 2018-08-07 09:32:33 UTC
Verified.


# lvs -a -o lv_name,devices
  LV               Devices                            
  root             /dev/vda2(205)                     
  swap             /dev/vda2(0)                       
  raid1            raid1_rimage_0(0),raid1_rimage_1(0)
  [raid1_rimage_0] /dev/sda1(1)                       
  [raid1_rimage_1] /dev/sdb1(1)                       
  [raid1_rmeta_0]  /dev/sda1(0)                       
  [raid1_rmeta_1]  /dev/sdb1(0) 


# echo "offline" > /sys/block/sda/device/state

# pvscan
  /dev/sda: open failed: No such device or address
  Error reading device /dev/sda1 at 0 length 4.
  Error reading device /dev/sda1 at 4096 length 4.
  PV /dev/sda1   VG vg              lvm2 [<29.99 GiB / 28.98 GiB free]
  PV /dev/sdb1   VG vg              lvm2 [<29.99 GiB / 28.98 GiB free]
  ....
  ....
  Total: 11 [306.97 GiB] / in use: 3 [66.97 GiB] / in no VG: 8 [<240.00 GiB]

# vgreduce --removemissing -f vg
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  Couldn't find device with uuid xK7r9b-RZ0o-NBM8-6RH5-Pnwm-ovOF-FUNNYX.
  WARNING: Couldn't find all devices for LV vg/raid1_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV vg/raid1_rmeta_0 while checking used and assumed devices.
  Wrote out consistent volume group vg.


# vgextend vg /dev/sdc1
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  Volume group "vg" successfully extended

# lvconvert -y -m 1 vg/raid1 /dev/sdc1
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  Can't change number of mirrors of degraded vg/raid1.
  Please run "lvconvert --repair vg/raid1" first.
  WARNING: vg/raid1 already has image count of 2.

# lvconvert --repair vg/raid1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg/raid1 successfully replaced.

# lvs -a -o lv_name,devices
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  LV               Devices                            
  root             /dev/vda2(205)                     
  swap             /dev/vda2(0)                       
  raid1            raid1_rimage_0(0),raid1_rimage_1(0)
  [raid1_rimage_0] /dev/sdc1(1)                       
  [raid1_rimage_1] /dev/sdb1(1)                       
  [raid1_rmeta_0]  /dev/sdc1(0)                       
  [raid1_rmeta_1]  /dev/sdb1(0)  


3.10.0-926.el7.x86_64

lvm2-2.02.180-1.el7    BUILT: Fri Jul 20 19:21:35 CEST 2018

Comment 9 errata-xmlrpc 2018-10-30 11:02:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193

