Bug 870364 - Cannot add NFS storage in RHEVM
Summary: Cannot add NFS storage in RHEVM
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Eduardo Warszawski
QA Contact: Haim
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2012-10-26 09:15 UTC by Mike Cao
Modified: 2015-11-23 03:36 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-11-01 11:51:25 UTC
Target Upstream Version:
Embargoed:


Attachments
vdsm log (747.62 KB, text/plain)
2012-10-26 09:17 UTC, Mike Cao

Description Mike Cao 2012-10-26 09:15:21 UTC
Description of problem:
Cannot add NFS storage in RHEVM.

Version-Release number of selected component (if applicable):
vdsm-4.9.6-39.0.el6_3.x86_64
vdsm-python-4.9.6-39.0.el6_3.x86_64
vdsm-cli-4.9.6-39.0.el6_3.noarch
RHEVM 3.1.0-21.el6ev
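
A standard RPM query (not part of the original report) confirms which vdsm build a host is actually running, which matters once specific builds are compared later in this thread:
rpm -qa | grep vdsm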

How reproducible:
100%

Steps to Reproduce:
1. Install RHEVM.
2. Install RHEL 6.3.
3. Add the RHEL 6.3 host to RHEVM.
4. Add NFS storage for the host.
  
Actual results:
Adding NFS storage always fails.

Expected results:
NFS storage can be added successfully

Additional info:
1. Adding local storage does not trigger this issue.
2. NFS configuration:
# cat /etc/exports
/var/images *(rw,all_squash,async,anonuid=36,anongid=36)
/var/isos *(rw,all_squash,async,anonuid=36,anongid=36)
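
A quick manual check (not from the original report) that the export is mountable and writable by UID/GID 36 (vdsm:kvm); the mount point is a placeholder, and the server address is the one that appears in the vdsm logs later in this thread:
mkdir -p /mnt/nfstest
mount -t nfs -o vers=3 10.66.9.101:/var/images /mnt/nfstest   # force NFSv3
su - vdsm -s /bin/sh -c 'touch /mnt/nfstest/write-test'       # vdsm is UID 36 on a RHEV host
rm -f /mnt/nfstest/write-test
umount /mnt/nfstest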

Comment 1 Mike Cao 2012-10-26 09:17:25 UTC
Created attachment 633753 [details]
vdsm log

Comment 3 Allon Mureinik 2012-10-30 08:09:23 UTC
Relevant exception:
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2189, in createStorageDomain
    typeSpecificArg, storageType, domVersion)
  File "/usr/share/vdsm/storage/nfsSD.py", line 93, in create
    fsd = cls(os.path.join(mntPoint, sdUUID))
  File "/usr/share/vdsm/storage/fileSD.py", line 113, in __init__
    sdUUID = metadata[sd.DMDK_SDUUID]
  File "/usr/share/vdsm/storage/persistentDict.py", line 85, in __getitem__
    return dec(self._dict[key])
  File "/usr/share/vdsm/storage/persistentDict.py", line 193, in __getitem__
    with self._accessWrapper():
  File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
    return self.gen.next()
  File "/usr/share/vdsm/storage/persistentDict.py", line 147, in _accessWrapper
    self.refresh()
  File "/usr/share/vdsm/storage/persistentDict.py", line 224, in refresh
    lines = self._metaRW.readlines()
  File "/usr/share/vdsm/storage/fileSD.py", line 82, in readlines
    return misc.stripNewLines(self._oop.directReadLines(self._metafile))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 272, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 190, in callCrabRPCFunction
    raise err
RuntimeError: maximum recursion depth exceeded

Comment 4 Allon Mureinik 2012-10-30 10:59:46 UTC
Reproduced in build 39; does not reproduce in build 40.

Reassigning to Eduardo since he mentioned wanting to look into this, but IMHO this bug can be closed.

Comment 5 Allon Mureinik 2012-10-30 17:15:01 UTC
Closed after consulting with Haim.

Comment 6 Mike Cao 2012-10-31 05:40:44 UTC
Hi, Allon

I can still reproduce this on build 40.

Could you tell me how you added storage successfully?

Mike

Additional info :
Logs:

Thread-234::DEBUG::2012-10-31 13:34:58,135::persistentDict::185::Storage.PersistentDict::(__init__) Created a persistant dict with FileMetadataRW backend
Thread-234::DEBUG::2012-10-31 13:34:58,136::persistentDict::226::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[]
Thread-234::WARNING::2012-10-31 13:34:58,136::persistentDict::248::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is
Thread-234::DEBUG::2012-10-31 13:34:58,137::persistentDict::162::Storage.PersistentDict::(transaction) Starting transaction
Thread-234::DEBUG::2012-10-31 13:34:58,137::persistentDict::168::Storage.PersistentDict::(transaction) Flushing changes
Thread-234::DEBUG::2012-10-31 13:34:58,138::persistentDict::287::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=images', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.66.9.101:/var/images', 'ROLE=Regular', 'SDUUID=99b91dd8-3d0e-4c11-a0ac-a918471046a8', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=a50fc55b773945386451a77e97b8ddc52cfc4c08']
Thread-234::DEBUG::2012-10-31 13:34:58,139::persistentDict::170::Storage.PersistentDict::(transaction) Finished transaction
Thread-234::DEBUG::2012-10-31 13:34:58,140::fileSD::107::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/10.66.9.101:_var_images/99b91dd8-3d0e-4c11-a0ac-a918471046a8
Thread-234::DEBUG::2012-10-31 13:34:58,141::persistentDict::185::Storage.PersistentDict::(__init__) Created a persistant dict with FileMetadataRW backend
Thread-234::WARNING::2012-10-31 13:34:58,152::remoteFileHandler::185::Storage.CrabRPCProxy::(callCrabRPCFunction) Problem with handler, treating as timeout
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 177, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 146, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.6/contextlib.py", line 83, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 133, in _poll
    raise Timeout()
Timeout
Thread-234::ERROR::2012-10-31 13:34:58,153::task::853::TaskManager.Task::(_setError) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2221, in createStorageDomain
    typeSpecificArg, storageType, domVersion)
  File "/usr/share/vdsm/storage/nfsSD.py", line 93, in create
    fsd = cls(os.path.join(mntPoint, sdUUID))
  File "/usr/share/vdsm/storage/fileSD.py", line 113, in __init__
    sdUUID = metadata[sd.DMDK_SDUUID]
  File "/usr/share/vdsm/storage/persistentDict.py", line 85, in __getitem__
    return dec(self._dict[key])
  File "/usr/share/vdsm/storage/persistentDict.py", line 193, in __getitem__
    with self._accessWrapper():
  File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
    return self.gen.next()
  File "/usr/share/vdsm/storage/persistentDict.py", line 147, in _accessWrapper
    self.refresh()
  File "/usr/share/vdsm/storage/persistentDict.py", line 224, in refresh
    lines = self._metaRW.readlines()
  File "/usr/share/vdsm/storage/fileSD.py", line 82, in readlines
    return misc.stripNewLines(self._oop.directReadLines(self._metafile))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 282, in callCrabRPCFunction
    raise Timeout("Operation stuck on remote handler")
Timeout: Operation stuck on remote handler
Thread-234::DEBUG::2012-10-31 13:34:58,154::task::872::TaskManager.Task::(_run) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::Task._run: ed054f8c-b557-4440-b03f-2b5692120fe4 (1, '99b91dd8-3d0e-4c11-a0ac-a918471046a8', 'images', '10.66.9.101:/var/images', 1, '0') {} failed - stopping task
Thread-234::DEBUG::2012-10-31 13:34:58,154::task::1199::TaskManager.Task::(stop) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::stopping in state preparing (force False)
Thread-234::DEBUG::2012-10-31 13:34:58,154::task::978::TaskManager.Task::(_decref) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::ref 1 aborting True
Thread-234::INFO::2012-10-31 13:34:58,155::task::1157::TaskManager.Task::(prepare) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::aborting: Task is aborted: u'Operation stuck on remote handler' - code 100
Thread-234::DEBUG::2012-10-31 13:34:58,156::task::1162::TaskManager.Task::(prepare) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::Prepare: aborted: Operation stuck on remote handler
Thread-234::DEBUG::2012-10-31 13:34:58,156::task::978::TaskManager.Task::(_decref) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::ref 0 aborting True
Thread-234::DEBUG::2012-10-31 13:34:58,156::task::913::TaskManager.Task::(_doAbort) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::Task._doAbort: force False
Thread-234::DEBUG::2012-10-31 13:34:58,157::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-234::DEBUG::2012-10-31 13:34:58,157::task::588::TaskManager.Task::(_updateState) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::moving from state preparing -> state aborting
Thread-234::DEBUG::2012-10-31 13:34:58,157::task::537::TaskManager.Task::(__state_aborting) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::_aborting: recover policy none
Thread-234::DEBUG::2012-10-31 13:34:58,158::task::588::TaskManager.Task::(_updateState) Task=`ed054f8c-b557-4440-b03f-2b5692120fe4`::moving from state aborting -> state failed
Thread-234::DEBUG::2012-10-31 13:34:58,158::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-234::DEBUG::2012-10-31 13:34:58,159::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-234::ERROR::2012-10-31 13:34:58,159::dispatcher::69::Storage.Dispatcher.Protect::(run) Operation stuck on remote handler
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1164, in prepare
    raise self.error
Timeout: Operation stuck on remote handler
Thread-240::DEBUG::2012-10-31 13:34:58,249::BindingXMLRPC::171::vds::(wrapper) [10.66.10.115]
Thread-240::DEBUG::2012-10-31 13:34:58,250::task::588::TaskManager.Task::(_updateState) Task=`a0e33782-caba-4e2b-ac13-94cff900bc44`::moving from state init -> state preparing
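
To isolate this failure in a full log, the task UUID from the traceback above can be grepped out of the standard vdsm log (a generic suggestion, not part of the original comment):
grep 'ed054f8c-b557-4440-b03f-2b5692120fe4' /var/log/vdsm/vdsm.log   # task UUID taken from the lines above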

Comment 7 Liron Aravot 2012-10-31 09:30:18 UTC
Verified as working on build 39, so it seems something is wrong with your NFS configuration. Please try with NFS v3 and provide your NFS configuration. Thanks.
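
A sketch of one common way to force NFSv3 on a RHEL 6 NFS server (not quoted from this thread; adjust to your own setup): stop nfsd from serving v4, then restart NFS.
RPCNFSDARGS="-N 4"    # add this line to /etc/sysconfig/nfs on the server: do not serve NFSv4
service nfs restart   # run on the NFS server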

Comment 8 Mike Cao 2012-10-31 09:46:08 UTC
I am still running the NFS server on the RHEL 6.3 host:

# rpm -qa|grep nfs
nfs-utils-1.2.3-30.el6.x86_64
nfs-utils-lib-1.1.5-5.el6.x86_64
[root@localhost ~]# cat /etc/exports 
/var/images *(rw,all_squash,async,anonuid=36,anongid=36)
/var/isos *(rw,all_squash,async,anonuid=36,anongid=36)

[root@localhost var]# ll -lZ /var/
drwxrwxr-x. vdsm kvm  unconfined_u:object_r:var_t:s0   images
drwxr-xr-x. vdsm kvm  unconfined_u:object_r:var_t:s0   isos
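
For reference, the commonly documented baseline for RHEV NFS exports (not stated in this thread) is ownership by UID/GID 36 (vdsm:kvm) with mode 0755, which the listing above largely matches; it can be (re)applied with:
chown -R 36:36 /var/images /var/isos   # vdsm:kvm
chmod 0755 /var/images /var/isos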

Comment 9 Daniel Paikov 2012-11-01 11:51:25 UTC
According to my tests, this is a configuration issue and not a VDSM bug.

