Bug 678886 - [VDSM][Storage] iSCSI domain validation failed with 'unknown device'.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Eduardo Warszawski
QA Contact: yeylon@redhat.com
URL:
Whiteboard: Storage
Depends On:
Blocks:
 
Reported: 2011-02-20 16:01 UTC by David Naori
Modified: 2016-04-18 06:38 UTC
CC List: 10 users

Fixed In Version: vdsm-4.9-52.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-19 15:28:02 UTC
Target Upstream Version:


Attachments
full vdsm-log (3.17 MB, application/octet-stream)
2011-02-20 16:01 UTC, David Naori

Description David Naori 2011-02-20 16:01:53 UTC
Created attachment 479777 [details]
full vdsm-log

Description of problem: When an iSCSI storage domain contains an inaccessible PV, the PV is marked as 'unknown device', the VG is marked as partial, and getStorageDomainInfo fails with the error below:

Thread-9478::ERROR::2011-02-20 16:46:28,354::task::854::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 862, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1303, in public_getStorageDomainInfo
    self.validateSdUUID(sdUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 113, in validateSdUUID
    SDF.produce(sdUUID=sdUUID).validate()
  File "/usr/share/vdsm/storage/blockSD.py", line 600, in validate
    svdsm.testReadDevices(devices)
  File "<string>", line 2, in testReadDevices
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 2] No such file or directory: 'unknown device'
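
For context, a minimal Python sketch of the failure mode (illustrative only, not vdsm's actual code; the names testReadDevices/validate are borrowed from the traceback above): LVM reports the lost PV with the literal placeholder 'unknown device', and passing that string to a read test makes os.open() fail with ENOENT instead of yielding a proper storage-domain error.

import os

LVM_UNKNOWN_DEVICE = 'unknown device'  # literal placeholder LVM prints for a lost PV

def test_read_devices(devices):
    # Analogous to svdsm.testReadDevices(): prove each device path is readable.
    for dev in devices:
        fd = os.open(dev, os.O_RDONLY)  # raises OSError [Errno 2] for 'unknown device'
        try:
            os.read(fd, 512)
        finally:
            os.close(fd)

def validate(devices):
    # Hypothetical fix: treat the placeholder as a missing PV up front, so the
    # caller gets a meaningful "domain inaccessible" error instead of ENOENT.
    if LVM_UNKNOWN_DEVICE in devices:
        raise RuntimeError("Storage domain is partially or entirely inaccessible")
    test_read_devices(devices)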
 
Version-Release number of selected component (if applicable):

-vdsm-4.9-49.el6.x86_64

How reproducible:
-always

Steps to Reproduce:
1. Add an iSCSI storage domain with 2 or more targets.
2. Block the connection to one target (a sketch for simulating this follows).
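
One way to simulate step 2 (an assumption; the report does not say how the connection was blocked) is to drop traffic to the portal of one target on the default iSCSI port 3260, e.g. via iptables from Python:

import subprocess

TARGET_IP = '192.0.2.10'  # hypothetical portal address of the target to block

def block_target(ip=TARGET_IP):
    # Drop outgoing traffic to the iSCSI portal; the PVs behind this
    # target then become unreachable and LVM marks them 'unknown device'.
    subprocess.check_call(['iptables', '-A', 'OUTPUT', '-d', ip,
                           '-p', 'tcp', '--dport', '3260', '-j', 'DROP'])

def unblock_target(ip=TARGET_IP):
    # Remove the rule added above.
    subprocess.check_call(['iptables', '-D', 'OUTPUT', '-d', ip,
                           '-p', 'tcp', '--dport', '3260', '-j', 'DROP'])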

Additional info:

-pvs output: 

  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00150001 8ebeceaa-e7ac-4606-8506-704c3fc805f8 lvm2 a-    11.81g  8.38g
  /dev/mapper/1Net-3-50G    3afc595e-3d0a-4fa7-8c22-c72688a1ff01 lvm2 a-    49.81g 37.38g
  /dev/mapper/1Net-4-50G    3afc595e-3d0a-4fa7-8c22-c72688a1ff01 lvm2 a-    49.94g 43.94g
  /dev/sda2                 vg0                                  lvm2 a-   136.24g     0
  unknown device            8ebeceaa-e7ac-4606-8506-704c3fc805f8 lvm2 a-    11.94g 11.94g
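
The 'unknown device' row is the lost PV of VG 8ebeceaa-e7ac-4606-8506-704c3fc805f8. A small sketch (the pvs options are standard LVM; the parsing itself is illustrative) for spotting VGs with missing PVs:

import subprocess

def vgs_with_missing_pvs():
    # Map each VG to its PV names and keep only the VGs that contain the
    # 'unknown device' placeholder, i.e. a PV LVM can no longer see.
    out = subprocess.check_output(
        ['pvs', '--noheadings', '--separator', '|', '-o', 'pv_name,vg_name'])
    vgs = {}
    for line in out.decode().splitlines():
        pv, vg = [field.strip() for field in line.split('|')]
        vgs.setdefault(vg, []).append(pv)
    return dict((vg, pvs) for vg, pvs in vgs.items() if 'unknown device' in pvs)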

Comment 3 Igor Lvovsky 2011-02-20 16:39:51 UTC
We need to check this for several scenarios:

1. for an attached domain
2. for an active domain
3. for the last (active) domain

I suspect that for active domains getRepoStats will return the same wrong error.

Comment 6 David Naori 2011-03-09 10:16:42 UTC
Verified on vdsm-4.9-52.el6

vdsClient -s 0 getStorageDomainInfo b9750b78-f531-4161-ac3e-fe1805297861

Domain is either partially accessible or entirely inaccessible: ('b9750b78-f531-4161-ac3e-fe1805297861: [\'  /dev/mapper/1LIBVIRT3: read failed after 0 of 4096 at 107374116864: Input/output error\', \'  /dev/mapper/1LIBVIRT3: read failed after 0 of 4096 at 107374174208: Input/output error\', \'  /dev/mapper/1LIBVIRT3: read failed after 0 of 4096 at 0: Input/output error\', \'  /dev/mapper/1LIBVIRT3: read failed after 0 of 4096 at 4096: Input/output error\', "  Couldn\'t find device with uuid kFcRY0-PUkK-0BWo-HGfc-En60-G4Mf-d12jAR.", \'  The volume group is missing 1 physical volumes.\']',)

