Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and have "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue will be found under "Links", will have a small "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also appear in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 653138

Summary: [vdsm][iscsi]PhysDevInitializationError: Failed to initialize physical device - on create VG
Product: Red Hat Enterprise Linux 6
Reporter: Moran Goldboim <mgoldboi>
Component: vdsm
Assignee: Ayal Baron <abaron>
Status: CLOSED DUPLICATE
QA Contact: yeylon <yeylon>
Severity: high
Docs Contact:
Priority: low
Version: 6.1
CC: iheim, Rhev-m-bugs, srevivo, ykaul
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2010-11-30 20:24:34 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: vdsm log (flags: none)

Description Moran Goldboim 2010-11-14 16:34:58 UTC
Created attachment 460383 [details]
vdsm log

Description of problem:
When trying to create a VG on an iSCSI target on RHEL 6, the operation fails with the error below. The pvcreate command that fails under vdsm succeeds when run manually (without filters).

Server: RHEL 6
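For context on the "without filters" remark: the pvcreate invocation in the log below is wrapped in an LVM `--config` string that whitelists only the known /dev/mapper devices and rejects everything else. A minimal sketch of how such a filter string can be composed is shown here; the helper name `build_lvm_config` is illustrative and not vdsm's actual API, but the config shape and device names are taken from the log.

```python
# Illustrative sketch (not vdsm code): compose an LVM --config string that
# accepts only the listed /dev/mapper devices and rejects all other devices,
# matching the shape of the pvcreate command seen in the vdsm log.

def build_lvm_config(devices):
    """Return an LVM --config string whose filter whitelists `devices`."""
    # "a%...%" accepts any device path matching the alternation;
    # "r%.*%" then rejects everything else.
    accept = "|".join("/dev/mapper/%s" % d for d in devices)
    filt = '[ "a%%%s%%", "r%%.*%%" ]' % accept
    return (
        'devices { preferred_names = ["^/dev/mapper/"] '
        "ignore_suspended_devices=1 write_cache_state=0 "
        "filter = %s } "
        "global { wait_for_locks=1 } "
        "backup { retain_min = 50 retain_days = 0 }" % filt
    )

if __name__ == "__main__":
    cfg = build_lvm_config(["1IET_00010001", "1IET_00310001"])
    print(cfg)
```

Running pvcreate manually without passing such a `--config` filter leaves the default lvm.conf filter in effect, which is why the manual invocation can succeed while the filtered one reports "not found (or ignored by filtering)".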

Version-Release number of selected component (if applicable):
vdsm-4.9-23.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Add a storage domain to RHEV-M using iSCSI (data domain).
  
Actual results:


Expected results:


Additional info:
VDSM:
Thread-1435::DEBUG::2010-11-14 18:18:14,079::clientIF::207::Storage.Dispatcher.Protect::(wrapper) [10.35.104.7]
Thread-1435::DEBUG::2010-11-14 18:18:14,080::resourceManager::784::irs::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1435::DEBUG::2010-11-14 18:18:14,080::resourceManager::819::irs::(cancelAll) Owner.cancelAll requests {}
Thread-1435::INFO::2010-11-14 18:18:14,080::dispatcher::95::Storage.Dispatcher.Protect::(run) Run and protect: createVG, args: ( vgname=596cccdd-9183-4a86-a856-7de41a19f406 devlist=['1IET_00310001'])
Thread-1435::DEBUG::2010-11-14 18:18:14,081::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: moving from state init -> state preparing
Thread-1435::DEBUG::2010-11-14 18:18:14,081::misc::106::Storage.Misc.excCmd::(execCmd) '/usr/bin/sudo -n /sbin/dmsetup ls --target multipath' (cwd None)
Thread-1435::DEBUG::2010-11-14 18:18:14,102::misc::126::Storage.Misc.excCmd::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-1435::DEBUG::2010-11-14 18:18:14,104::misc::106::Storage.Misc.excCmd::(execCmd) '/usr/bin/sudo -n /sbin/lvm pvcreate --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 filter = [ \\"a%/dev/mapper/1IET_00010001|/dev/mapper/1IET_00020001|/dev/mapper/1IET_00030001|/dev/mapper/1IET_00080001|/dev/mapper/1IET_00310001|/dev/mapper/2010000304833505c00002a004cd17700%\\", \\"r%.*%\\" ] }  global {  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --metadatasize 128m /dev/mapper/1IET_00310001' (cwd None)
Thread-1435::DEBUG::2010-11-14 18:18:14,127::misc::126::Storage.Misc.excCmd::(execCmd) FAILED: <err> = '  Device /dev/mapper/1IET_00310001 not found (or ignored by filtering).\n'; <rc> = 5
Thread-1435::ERROR::2010-11-14 18:18:14,128::task::855::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 863, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/hsm.py", line 761, in public_createVG
    lvm.createVG(vgname, devices)
  File "/usr/share/vdsm/storage/lvm.py", line 855, in createVG
    _lvminfo.initpv(pvs[0], withmetadata=True)
  File "/usr/share/vdsm/storage/lvm.py", line 607, in initpv
    raise se.PhysDevInitializationError(device)
PhysDevInitializationError: Failed to initialize physical device: ('/dev/mapper/1IET_00310001',)
Thread-1435::DEBUG::2010-11-14 18:18:14,129::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: Task._run: d7388173-a77e-4c6b-97e5-2463a6bc8592 ('596cccdd-9183-4a86-a856-7de41a19f406', ['1IET_00310001']) {} failed - stopping task
Thread-1435::DEBUG::2010-11-14 18:18:14,129::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: stopping in state preparing (force False)
Thread-1435::DEBUG::2010-11-14 18:18:14,130::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: ref 1 aborting True
Thread-1435::INFO::2010-11-14 18:18:14,130::task::1167::TaskManager.Task::(prepare) aborting: Task is aborted: 'Failed to initialize physical device' - code 601
Thread-1435::DEBUG::2010-11-14 18:18:14,130::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: Prepare: aborted: Failed to initialize physical device
Thread-1435::DEBUG::2010-11-14 18:18:14,131::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: ref 0 aborting True
Thread-1435::DEBUG::2010-11-14 18:18:14,131::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: Task._doAbort: force False
Thread-1435::DEBUG::2010-11-14 18:18:14,132::resourceManager::819::irs::(cancelAll) Owner.cancelAll requests {}
Thread-1435::DEBUG::2010-11-14 18:18:14,132::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: moving from state preparing -> state aborting
Thread-1435::DEBUG::2010-11-14 18:18:14,133::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: _aborting: recover policy none
Thread-1435::DEBUG::2010-11-14 18:18:14,133::task::491::TaskManager.Task::(_debug) Task d7388173-a77e-4c6b-97e5-2463a6bc8592: moving from state aborting -> state failed
Thread-1435::DEBUG::2010-11-14 18:18:14,133::resourceManager::784::irs::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1435::DEBUG::2010-11-14 18:18:14,134::resourceManager::819::irs::(cancelAll) Owner.cancelAll requests {}
Thread-1435::ERROR::2010-11-14 18:18:14,134::dispatcher::104::Storage.Dispatcher.Protect::(run) {'status': {'message': "Failed to initialize physical device: ('/dev/mapper/1IET_00310001',)", 'code': 601}, 'args': [('/dev/mapper/1IET_00310001',)]}
Thread-1435::ERROR::2010-11-14 18:18:14,135::dispatcher::106::Storage.Dispatcher.Protect::(run) Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 97, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1174, in prepare
    raise self.error
PhysDevInitializationError: Failed to initialize physical device: ('/dev/mapper/1IET_00310001',)

multipath -ll:
[root@gold-vdsd ~]# multipath -ll
1IET_00010001 dm-3 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 17:0:0:1 sde 8:64 active ready  running
1IET_00020001 dm-10 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 14:0:0:1 sdd 8:48 active ready  running
1IET_00080001 dm-19 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 15:0:0:1 sdc 8:32 active ready  running
1IET_00030001 dm-17 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 10:0:0:1 sdb 8:16 active ready  running
2010000304833505c00002a004cd17700 dm-18 SUN,SOLARIS
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 12:0:0:0 sdf 8:80 failed faulty running
1IET_00310001 dm-20 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 16:0:0:1 sdg 8:96 active ready  running
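When comparing the multipath output above against the device list vdsm passed to the LVM filter, one way to cross-check is to extract the map names (the first token of each top-level line) from `multipath -ll` output. A small sketch, with a hypothetical helper name (`multipath_names`) and a sample trimmed from the output above:

```python
import re

# Hypothetical helper (not vdsm code): pull the multipath map names out of
# `multipath -ll` output so they can be compared against the /dev/mapper
# device list used in the LVM filter.
def multipath_names(output):
    names = []
    for line in output.splitlines():
        # Map header lines start at column 0 with the WWID/alias,
        # e.g. "1IET_00310001 dm-20 IET,VIRTUAL-DISK".
        m = re.match(r"^(\S+)\s+dm-\d+\s", line)
        if m:
            names.append(m.group(1))
    return names

sample = """\
1IET_00010001 dm-3 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 17:0:0:1 sde 8:64 active ready  running
1IET_00310001 dm-20 IET,VIRTUAL-DISK
size=15G features='0' hwhandler='0' wp=rw
"""

print(multipath_names(sample))  # ['1IET_00010001', '1IET_00310001']
```

Note that 1IET_00310001 does appear as an active multipath map (dm-20), even though the filtered pvcreate reported it "not found (or ignored by filtering)".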

Comment 2 Ayal Baron 2010-11-30 20:24:34 UTC

*** This bug has been marked as a duplicate of bug 642575 ***