Bug 1261531
| Summary: | Extend of VG does not check if additional devices are already part of it | |||
|---|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Fred Rolland <frolland> | |
| Component: | vdsm | Assignee: | Fred Rolland <frolland> | |
| Status: | CLOSED ERRATA | QA Contact: | Elad <ebenahar> | |
| Severity: | high | Docs Contact: | ||
| Priority: | high | |||
| Version: | 3.5.3 | CC: | acanan, amureini, bazulay, ebenahar, gwatson, lsurette, nsoffer, rbalakri, Rhev-m-bugs, tnisan, ycui, yeylon, ykaul, ylavi | |
| Target Milestone: | ovirt-3.6.0-rc3 | Keywords: | ZStream | |
| Target Release: | 3.6.0 | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | Doc Type: | Bug Fix | ||
| Doc Text: | Story Points: | --- | ||
| Clone Of: | 1258632 | |||
| : | 1265907 (view as bug list) | Environment: | ||
| Last Closed: | 2016-03-09 19:45:15 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1258632 | |||
| Bug Blocks: | 1265907 | |||
|
Comment 9
Nir Soffer
2015-11-02 16:20:26 UTC
Thanks Nir. Based on these steps, an attempt to create a VG out of a PV right after a successful VG creation on this device fails:

```
[root@green-vdsb ~]# vdsClient -s 0 createVG 472a6f81-1665-4a8a-b3b4-8f100f491d25 360060160f4a0300024e704bdf781e511
ikRbn9-L7yg-n0uZ-LjCz-fMKU-N6NA-2owhyN
[root@green-vdsb ~]# vdsClient -s 0 createVG 472a6f81-1665-4a8a-b3b4-8f100f491d25 360060160f4a0300024e704bdf781e511
Failed to initialize physical device: ("['/dev/mapper/360060160f4a0300024e704bdf781e511']",)
```

Used vdsm-4.17.10-5.el7ev.noarch.

Elad, the fix is on extend VG, not create VG. Can you please try to extend a VG twice with the same device?

(In reply to Fred Rolland from comment #11)
> Elad, the fix is on extend VG not create VG.
> Can you please try to extend a VG twice with the same device?

Extending a storage domain with the same PV twice is not allowed:

```
[root@green-vdsb 00000001-0001-0001-0001-000000000004]# vdsClient -s 0 extendStorageDomain 992fa11e-d046-4911-a898-13a5db4f0457 00000001-0001-0001-0001-000000000004 360060160f4a0300056c5b11d0782e511
[root@green-vdsb 00000001-0001-0001-0001-000000000004]# vdsClient -s 0 extendStorageDomain 992fa11e-d046-4911-a898-13a5db4f0457 00000001-0001-0001-0001-000000000004 360060160f4a0300056c5b11d0782e511
Cannot extend Volume Group: "vgname=992fa11e-d046-4911-a898-13a5db4f0457, devname=['/dev/mapper/360060160f4a0300056c5b11d0782e511']"
```

Elad, can you add the error from the vdsm log? The vdsClient error is too generic, but the vdsm log should explain why the operation failed.

```
Thread-127::ERROR::2015-11-03 09:25:48,258::task::866::Storage.TaskManager.Task::(_setError) Task=`b1bf8280-175c-46aa-beeb-d5aeddf9d8bf`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 731, in extendStorageDomain
    pool.extendSD(sdUUID, dmDevs, force)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1962, in extendSD
    sdCache.produce(sdUUID).extend(devlist, force)
  File "/usr/share/vdsm/storage/blockSD.py", line 740, in extend
    lvm.extendVG(self.sdUUID, devlist, force)
  File "/usr/share/vdsm/storage/lvm.py", line 991, in extendVG
    raise se.VolumeGroupExtendError(vgName, pvs)
VolumeGroupExtendError: Cannot extend Volume Group: "vgname=992fa11e-d046-4911-a898-13a5db4f0457, devname=['/dev/mapper/360060160f4a0300056c5b11d0782e511']"
```
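The failures above come from attempting to extend a VG with a device that is already one of its PVs. A minimal sketch of the kind of pre-check this bug asks for, assuming the caller can obtain the VG's current PV list; the function names `find_new_devices` and `devices_to_extend` are illustrative, not vdsm's actual API:

```python
# Illustrative sketch only -- not vdsm's actual code. Assumes the caller
# already has the VG's current PV names (e.g. parsed from
# `pvs --noheadings -o pv_name,vg_name`).

def find_new_devices(vg_pvs, requested_devs):
    """Return only the requested devices that are not already PVs of the VG.

    vg_pvs: iterable of /dev/mapper/... paths already in the VG.
    requested_devs: list of devices the caller wants to add.
    """
    current = set(vg_pvs)
    return [dev for dev in requested_devs if dev not in current]


def devices_to_extend(vg_pvs, requested_devs):
    """Raise if every requested device is already part of the VG;
    otherwise return the subset that actually needs pvcreate/vgextend."""
    new_devs = find_new_devices(vg_pvs, requested_devs)
    if not new_devs:
        raise ValueError(
            "all requested devices are already part of the VG: %s"
            % requested_devs)
    return new_devs
```

With a check along these lines, repeating an extend with the same device can be rejected up front with an explicit "already part of the VG" message, instead of failing later inside pvcreate/vgextend with a generic error like the one in the traceback.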
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0362.html