| Summary: | [vdsm] [storage] 2.2.6 - LogicalVolumesScanError - host goes to non-operational as it can't access its vg (filter issue) | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Haim <hateya> |
| Component: | vdsm22 | Assignee: | Eduardo Warszawski <ewarszaw> |
| Status: | CLOSED NEXTRELEASE | QA Contact: | Omri Hochman <ohochman> |
| Severity: | high | Docs Contact: | |
| Priority: | urgent | ||
| Version: | 5.6 | CC: | abaron, bazulay, cpelland, danken, dnaori, ewarszaw, iheim, mgoldboi, smizrahi, yeylon |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-01-28 19:26:48 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | |||
| Bug Blocks: | 670810 | ||
The volume group metadata is on volumes that are not included in the filter. Of this VG's physical volumes, only
/dev/mapper/1REDHAT_SCALE23
/dev/mapper/1REDHAT_SCALE24
are in the filter.
Thread-27818::DEBUG::2011-01-13 17:28:32,098::misc::96::irs::'/usr/bin/sudo -n /usr/sbin/vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] write_cache_state=0 filter = [ \\"a%/dev/mapper/1REDHAT_SCALE23|/dev/mapper/1REDHAT_SCALE24|/dev/mapper/1REDHAT_SCALE3|/dev/mapper/1REDHAT_SCALE4|/dev/mapper/1REDHAT_SCALE5|/dev/mapper/1REDHAT_SCALE6|/dev/mapper/1REDHAT_SCALE7%\\", \\"r%.*%\\" ] } backup { retain_min = 50 retain_days = 0 } " d6b96c6b-168d-48e5-918f-6e5a7035ce0b' (cwd None)
Thread-27818::WARNING::2011-01-13 17:28:32,279::misc::121::irs::FAILED: <err> = ' Volume group "d6b96c6b-168d-48e5-918f-6e5a7035ce0b" not found\n'; <rc> = 5
[root@nott-vds3 ~]# vgs d6b96c6b-168d-48e5-918f-6e5a7035ce0b -o +pv_name -v
Using volume group(s) on command line
Finding volume group "d6b96c6b-168d-48e5-918f-6e5a7035ce0b"
Found duplicate PV nwoav1f4fnb4NLWWr6ZL19065gb1BtSD: using /dev/sdcu not /dev/sdaw
VG Attr Ext #PV #LV #SN VSize VFree VG UUID PV
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE29
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE28
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE27
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE26
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE25
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE24
d6b96c6b-168d-48e5-918f-6e5a7035ce0b wz--n- 128.00M 7 6 0 83.12G 79.25G yIioBo-gr3x-bk5Z-zqlC-09BD-xZTJ-p7YU6a /dev/mpath/1REDHAT_SCALE23
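For comparison, a filter that accepts all seven of this VG's devices (names taken from the vgs output above, assuming the /dev/mapper/ names correspond to the /dev/mpath/ names listed there) would look roughly like the command below. This is only an illustration of what the vdsm-generated filter is missing, not a verified fix:

/usr/sbin/vgs --config " devices { preferred_names = [\"^/dev/mapper/\"] write_cache_state=0 filter = [ \"a%/dev/mapper/1REDHAT_SCALE23|/dev/mapper/1REDHAT_SCALE24|/dev/mapper/1REDHAT_SCALE25|/dev/mapper/1REDHAT_SCALE26|/dev/mapper/1REDHAT_SCALE27|/dev/mapper/1REDHAT_SCALE28|/dev/mapper/1REDHAT_SCALE29%\", \"r%.*%\" ] } backup { retain_min = 50 retain_days = 0 } " d6b96c6b-168d-48e5-918f-6e5a7035ce0b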
Should be fixed by http://git.engineering.redhat.com/?p=users/dkenigsb/vdsm.git;a=commitdiff;h=dfa7ec0348fd122109f065e148acaf9240dd781c

Patches committed to the RHEL-5 branch; moving to MODIFIED.

Closing all rhel-5.7 clones of rhev-2.2.6 bugs, as they have served their purpose.
Description of problem:

Issue: when testing version 2.2.6 with a storage configuration where each storage domain has 8 PVs, the HSM host goes to non-operational due to an LVM error caused by wrong vdsm filtering. The message is:

LogicalVolumesScanError: Logical volume scanning error: "vgname=d6b96c6b-168d-48e5-918f-6e5a7035ce0b, lvs=['metadata', 'leases', 'ids', 'inbox', 'outbox', 'master']

The problem is that at a certain point in time (happens every 10), during the refreshStoragePool command, the volume group is not found (Volume group "d6b96c6b-168d-48e5-918f-6e5a7035ce0b" not found) and the command fails; however, when issuing the vgs command, the output shows that the VG is there.

[root@nott-vds3 vdsm]# vgs
  Found duplicate PV nwoav1f4fnb4NLWWr6ZL19065gb1BtSD: using /dev/sdcu not /dev/sdaw
  Found duplicate PV nwoav1f4fnb4NLWWr6ZL19065gb1BtSD: using /dev/mpath/1REDHAT_SCALE7 not /dev/mpath/1REDHAT_SCALE29
  Found duplicate PV Pc7CgtOVvv6l1hjZUUUsUqT2Mn57D4Zr: using /dev/sddo not /dev/sddj
  Found duplicate PV Pc7CgtOVvv6l1hjZUUUsUqT2Mn57D4Zr: using /dev/sdcz not /dev/sddo
  Found duplicate PV Pc7CgtOVvv6l1hjZUUUsUqT2Mn57D4Zr: using /dev/sdde not /dev/sdcz
  VG                                   #PV #LV #SN Attr   VSize  VFree
  12c3cdfd-aef0-4752-b9ee-b58eea9014ae   8   6   0 wz--n- 95.00G 91.12G
  17d52d67-3188-4c5e-99ac-b34ecb5687f7   8   6   0 wz--n- 95.00G 91.12G
  d6b96c6b-168d-48e5-918f-6e5a7035ce0b   7   6   0 wz--n- 83.12G 79.25G
  f4e49402-1ede-4f66-9cbc-5ac255d243ff   7   6   0 wz--n- 83.12G 79.25G
  vg0

Reviewing this issue with Eduardo, it appears that the problem comes from a wrong vdsm filter.

Without the filter, all of the VG's logical volumes are found:

[root@nott-vds3 ~]# /usr/sbin/lvs --config " devices { preferred_names = [\"^/dev/mapper/\"] write_cache_state=0 } backup { retain_min = 50 retain_days = 0 } " -o name,attr --noheadings d6b96c6b-168d-48e5-918f-6e5a7035ce0b/metadata d6b96c6b-168d-48e5-918f-6e5a7035ce0b/leases d6b96c6b-168d-48e5-918f-6e5a7035ce0b/ids d6b96c6b-168d-48e5-918f-6e5a7035ce0b/inbox d6b96c6b-168d-48e5-918f-6e5a7035ce0b/outbox d6b96c6b-168d-48e5-918f-6e5a7035ce0b/master
  Found duplicate PV nwoav1f4fnb4NLWWr6ZL19065gb1BtSD: using /dev/sdcu not /dev/sdaw
  ids      -wi-a-
  inbox    -wi-a-
  leases   -wi-a-
  master   -wi-a-
  metadata -wi-a-
  outbox   -wi-a-

With the filter vdsm generates, the volume group is not found:

[root@nott-vds3 ~]# /usr/sbin/lvs --config " devices { preferred_names = [\"^/dev/mapper/\"] write_cache_state=0 filter = [ \"a%/dev/mapper/1REDHAT_SCALE23|/dev/mapper/1REDHAT_SCALE24|/dev/mapper/1REDHAT_SCALE3|/dev/mapper/1REDHAT_SCALE4|/dev/mapper/1REDHAT_SCALE5|/dev/mapper/1REDHAT_SCALE6|/dev/mapper/1REDHAT_SCALE7%\", \"r%.*%\" ] } backup { retain_min = 50 retain_days = 0 } " -o name,attr --noheadings d6b96c6b-168d-48e5-918f-6e5a7035ce0b/metadata d6b96c6b-168d-48e5-918f-6e5a7035ce0b/leases d6b96c6b-168d-48e5-918f-6e5a7035ce0b/ids d6b96c6b-168d-48e5-918f-6e5a7035ce0b/inbox d6b96c6b-168d-48e5-918f-6e5a7035ce0b/outbox d6b96c6b-168d-48e5-918f-6e5a7035ce0b/master
  Volume group "d6b96c6b-168d-48e5-918f-6e5a7035ce0b" not found
  Skipping volume group d6b96c6b-168d-48e5-918f-6e5a7035ce0b

Repro steps:
1) 2 hosts connected to an iSCSI storage domain
2) storage pool configuration:
   - 3 domains
   - each domain consists of 2 targets
   - both targets expose the same physical devices (2 paths)
   - each target is connected twice, to 2 different IP addresses
3) check the non-SPM host
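To see the mismatch directly on the non-SPM host, list which devices actually hold the domain VG and compare them with the accept pattern in the vdsm-generated filter. A diagnostic command along these lines (paths and VG name as in this report) should show it:

/usr/sbin/vgs --config " devices { preferred_names = [\"^/dev/mapper/\"] write_cache_state=0 } backup { retain_min = 50 retain_days = 0 } " -o vg_name,pv_name d6b96c6b-168d-48e5-918f-6e5a7035ce0b

If the PVs holding the VG metadata are absent from the filter's accept pattern ("a%...%"), the reject-all rule ("r%.*%") hides them and LVM reports the volume group as not found, as seen above.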