Bug 1573960
| Summary: | lvconvert - don't return success doing -m conversion on degraded raid1 LV | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Heinz Mauelshagen <heinzm> |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | ||
| Priority: | unspecified | CC: | agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, rbednar, rhandlin, zkabelac |
| Version: | 7.5 | Keywords: | Reopened |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | lvm2-2.02.178-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2018-10-30 11:02:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description (Heinz Mauelshagen, 2018-05-02 15:19:34 UTC)
lvm2 upstream commit 4ebfd8e8eb68442efc334b35bc1f22eda3e4dd3d

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Verified.

```
# lvs -a -o lv_name,devices
  LV               Devices
  root             /dev/vda2(205)
  swap             /dev/vda2(0)
  raid1            raid1_rimage_0(0),raid1_rimage_1(0)
  [raid1_rimage_0] /dev/sda1(1)
  [raid1_rimage_1] /dev/sdb1(1)
  [raid1_rmeta_0]  /dev/sda1(0)
  [raid1_rmeta_1]  /dev/sdb1(0)

# echo "offline" > /sys/block/sda/device/state

# pvscan
  /dev/sda: open failed: No such device or address
  Error reading device /dev/sda1 at 0 length 4.
  Error reading device /dev/sda1 at 4096 length 4.
  PV /dev/sda1  VG vg  lvm2 [<29.99 GiB / 28.98 GiB free]
  PV /dev/sdb1  VG vg  lvm2 [<29.99 GiB / 28.98 GiB free]
  ....
  ....
  Total: 11 [306.97 GiB] / in use: 3 [66.97 GiB] / in no VG: 8 [<240.00 Gi

# vgreduce --removemissing -f vg
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  Couldn't find device with uuid xK7r9b-RZ0o-NBM8-6RH5-Pnwm-ovOF-FUNNYX.
  WARNING: Couldn't find all devices for LV vg/raid1_rimage_0 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV vg/raid1_rmeta_0 while checking used and assumed devices.
  Wrote out consistent volume group vg.

# vgextend vg /dev/sdc1
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  Volume group "vg" successfully extended

# lvconvert -y -m 1 vg/raid1 /dev/sdc1
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  Can't change number of mirrors of degraded vg/raid1.
  Please run "lvconvert --repair vg/raid1" first.
  WARNING: vg/raid1 already has image count of 2.
```
```
# lvconvert --repair vg/raid1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Faulty devices in vg/raid1 successfully replaced.

# lvs -a -o lv_name,devices
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sda: open failed: No such device or address
  /dev/sda1: open failed: No such device or address
  LV               Devices
  root             /dev/vda2(205)
  swap             /dev/vda2(0)
  raid1            raid1_rimage_0(0),raid1_rimage_1(0)
  [raid1_rimage_0] /dev/sdc1(1)
  [raid1_rimage_1] /dev/sdb1(1)
  [raid1_rmeta_0]  /dev/sdc1(0)
  [raid1_rmeta_1]  /dev/sdb1(0)

3.10.0-926.el7.x86_64
lvm2-2.02.180-1.el7   BUILT: Fri Jul 20 19:21:35 CEST 2018
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193
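The fix makes `lvconvert` refuse to change the image count of a degraded raid1 LV and point the user at `lvconvert --repair`, as the transcript shows. For scripts that must cope with older builds, the same guard can be approximated by inspecting the LV's health status first. The `needs_repair` helper below is a hypothetical sketch, not part of lvm2; it assumes the `lv_health_status` values (`partial`, `refresh needed`) reported by `lvs -o lv_health_status`.

```shell
# Hypothetical guard a wrapper script could apply before "lvconvert -m".
# The argument is the lv_health_status column, e.g. obtained with:
#   lvs --noheadings -o lv_health_status vg/raid1
# An empty field means healthy; "partial" or "refresh needed" means the
# LV is degraded and "lvconvert --repair" should run first.
needs_repair() {
    case "$1" in
        partial|"refresh needed") return 0 ;;  # degraded: repair first
        *)                        return 1 ;;  # healthy enough to convert
    esac
}

if needs_repair "partial"; then
    echo "Run 'lvconvert --repair vg/raid1' before changing image count."
fi
```

This mirrors the check the fixed lvm2 performs internally; the wrapper merely fails early instead of relying on the tool's (previously missing) error return.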