Created attachment 1064878 [details]
Add iso domain fails

# rpm -q vdsm
vdsm-4.17.2-1.el7ev.noarch

The goal is to add an ISO domain hosted on a remote NFS server (Storage type = NFS).

To reproduce the bug, one deliberately wrong step is required first: an attempt to add the ISO domain with Domain function == DATA.

1. Set Domain function == DATA (this is deliberately wrong).
2. Fill in the correct Export path.
3. Press OK. It fails, as expected. Up to this point everything is OK, and the storage list is empty.
4. Now try to add the ISO domain again, this time with the correct Domain function == ISO.

It fails with:

  Error while executing action: Cannot add Storage Connection. Storage connection already exists.

The storage list remains empty. See the attached screenshot.
Seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=1020812
Andrei, are you sure you were using oVirt 3.6? Can you please attach the engine and vdsm logs? I was trying to reproduce this with my ISO domain and it does not seem to reproduce.
I can reproduce it on rhevm-3.6.0-0.12.master.el6.noarch
Created attachment 1066494 [details] Engine log
Created attachment 1066495 [details] host log
Andrei, I think that the logs from the host and from the engine are not synchronized. It looks like you are trying to add a new domain, but the domain is already in use (see "Storage domain is not empty" in the logs). Also, I didn't see any indication of "Cannot add Storage Connection. Storage connection already exists.". Do you have the full logs that contain this exception, and also more specific reproduce steps that I can try on my environment (since it appears to work on my env)?
This is the bug I am talking about. To reproduce it, it is only necessary to run one simple step: import the ISO domain as a DATA type. Further attempts to add this ISO domain as an ISO type will then fail. See the attached screenshots. Have you tried to repeat this simple step?
Created attachment 1066564 [details] oengine add ISO domain
Created attachment 1066565 [details] oengine add ISO domain fails
I can see a backtrace with an exception from the host:

Thread-98585::ERROR::2015-08-24 17:45:56,805::hsm::2538::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2534, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 256, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount: /rhev/data-center/mnt/10.34.73.3:_nfs_iso: mountpoint not found\n')

Is this okay? Why is it being ignored? It is in the attached log from the host.
(In reply to Andrei Stepanov from comment #10)
> I can see a backtrace with an exception from the host:
>
> Thread-98585::ERROR::2015-08-24 17:45:56,805::hsm::2538::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2534, in disconnectStorageServer
>     conObj.disconnect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
>     return self._mountCon.disconnect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
>     self._mount.umount(True, True)
>   File "/usr/share/vdsm/storage/mount.py", line 256, in umount
>     return self._runcmd(cmd, timeout)
>   File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
>     raise MountError(rc, ";".join((out, err)))
> MountError: (32, ';umount: /rhev/data-center/mnt/10.34.73.3:_nfs_iso: mountpoint not found\n')
>
> Is this okay? Why is it being ignored? It is in the attached log from the host.

It is not related to the bug you have described; you can open a separate bug for that exception. The exception simply means that the engine tries to clean up the connection twice. It happens twice because the Storage Domain doesn't exist on the storage: first the engine connects to the Storage Domain to fetch its info and then disconnects from it, and then, when the import of the Storage Domain eventually fails, the engine tries to disconnect from it again.

> Have you tried to repeat this simple step?

As I wrote before, I did try to reproduce this as you described (this is the second time now) and it does not reproduce on my env. The reproduce steps are:
1. Try to import an empty NFS Storage Domain (the operation fails)
2. Try to add an ISO Storage Domain with the same path
Result: the operation succeeded.
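The double-cleanup flow described above can be sketched as follows. This is a minimal illustration, not vdsm's or the engine's actual code: the class `NfsConnection`, the function `import_domain`, and the `LookupError` are invented for the example; only the rc 32 / "mountpoint not found" behavior of the second umount is taken from the attached log.

```python
class MountError(Exception):
    """Mirrors the (rc, output) error raised by vdsm's mount helper."""

class NfsConnection:
    """Hypothetical stand-in for a storage server connection."""
    def __init__(self, path):
        self.path = path
        self.mounted = False

    def connect(self):
        self.mounted = True        # mount the NFS export

    def disconnect(self):
        if not self.mounted:
            # umount of a mountpoint that is already gone -> rc 32
            raise MountError(32, "umount: %s: mountpoint not found" % self.path)
        self.mounted = False       # umount succeeds

def import_domain(conn):
    """Engine-side flow: connect, fail to find the domain, clean up."""
    conn.connect()
    try:
        # the domain does not exist on the storage, so fetching its info fails
        raise LookupError("Storage domain does not exist on the storage")
    finally:
        conn.disconnect()          # first cleanup: umount succeeds

conn = NfsConnection("/rhev/data-center/mnt/10.34.73.3:_nfs_iso")
try:
    import_domain(conn)
except LookupError:
    # the import failed, so the engine issues a second, redundant disconnect
    try:
        conn.disconnect()
    except MountError as e:
        print("second disconnect failed with rc", e.args[0])  # prints rc 32
```

The second `disconnect()` fails exactly like the traceback in comment #10: the mountpoint was already removed by the first cleanup, so the redundant umount is harmless noise rather than the cause of the "Storage connection already exists" error.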
Also, based on the images you have attached to the bug, this is not the issue of existing Storage connections that you described when opening the bug. It looks like your ISO path is not empty. Can you please try again with a new, clean path, and attach the full engine and VDSM logs if it does reproduce?
I cannot reproduce