Bug 1494531 - removing datacenter keeps NFS mounted
Summary: removing datacenter keeps NFS mounted
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.3.5
Target Release: 4.3.5.1
Assignee: shani
QA Contact: Shir Fishbain
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-22 12:44 UTC by Sandro Bonazzola
Modified: 2019-07-30 14:08 UTC (History)
6 users

Fixed In Version: ovirt-engine-4.3.5.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-30 14:08:17 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.3+


Attachments
log collector report (9.19 MB, application/x-xz)
2017-09-25 11:40 UTC, Sandro Bonazzola
vdsm logs (684.57 KB, application/x-xz)
2019-05-07 13:49 UTC, Sandro Bonazzola
master engine log may 29th attempt (20.52 KB, application/x-xz)
2019-05-29 09:26 UTC, Sandro Bonazzola
master vdsm with debug log may 29th attempt (64.46 KB, application/x-xz)
2019-05-29 09:27 UTC, Sandro Bonazzola


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 100698 0 master MERGED core: Call disconnectStorage while detaching a master SD 2019-06-17 08:17:32 UTC
oVirt gerrit 100862 0 ovirt-engine-4.3 MERGED core: Call disconnectStorage while detaching a master SD 2019-06-17 10:07:14 UTC
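For orientation: the two merged patches above are in the Java ovirt-engine code. The short Python sketch below only illustrates the intent stated in the patch subject ("call disconnectStorage while detaching a master SD"); the function names are invented stand-ins for clarity and are not real engine or VDSM APIs.

# Hypothetical illustration of the fix's intent; names are made up.
def deactivate_storage_domain(domain):
    print("deactivating %s (move to maintenance)" % domain)

def detach_storage_domain(domain):
    print("detaching %s from the data center" % domain)

def disconnect_storage_server(domain):
    # The step skipped for the master SD before the fix, which left the
    # NFS export mounted under /rhev/data-center/mnt/ on the host.
    print("disconnecting storage server backing %s (unmounts it on the host)" % domain)

def detach_master_sd(domain):
    deactivate_storage_domain(domain)
    detach_storage_domain(domain)
    disconnect_storage_server(domain)  # added by the fix

detach_master_sd("mididell.home:/export/data")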

Description Sandro Bonazzola 2017-09-22 12:44:15 UTC
Description of problem:
Engine running on bare metal, attached single host, attached NFS data domain.
After detaching the storage domain, removing it, moving the host to maintenance, and removing the host, the data domain is still mounted on the host.

Version-Release number of selected component (if applicable):
vdsm-4.20.3-81.git81e8845.el7.centos.x86_64
ovirt-engine-4.2.0-0.0.master.20170920172148.git8366a0b.el7.centos.noarch

How reproducible:
tried only once

Actual results:
The domain is still mounted after removing everything from the engine.

Expected results:
The domain should no longer be mounted on the host after it has been removed.

Comment 1 Allon Mureinik 2017-09-24 14:39:57 UTC
Can you share the logs please?

Comment 2 Sandro Bonazzola 2017-09-25 11:39:16 UTC
Reproduced today, attaching log collector logs.

Comment 3 Sandro Bonazzola 2017-09-25 11:40:08 UTC
Created attachment 1330485 [details]
log collector report

Comment 5 Sandro Bonazzola 2019-01-28 09:34:38 UTC
This bug has not been marked as a blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 7 Sandro Bonazzola 2019-05-07 13:43:30 UTC
I can still reproduce this:

engine: ovirt4.home (ovirt-engine 4.3.3.7)
node: node0.home (oVirt Node 4.3.3.1)
storage: mididell.home (CentOS 7)


[root@node0 ~]# mount |grep nfs
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
mididell.home:/export/data on /rhev/data-center/mnt/mididell.home:_export_data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.113,local_lock=none,addr=192.168.1.107)
mididell.home:/export/iso on /rhev/data-center/mnt/mididell.home:_export_iso type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.113,local_lock=none,addr=192.168.1.107)

On engine:
Storage -> Storage Domains -> iso
Maintenance

[root@node0 ~]# mount |grep nfs
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
mididell.home:/export/data on /rhev/data-center/mnt/mididell.home:_export_data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.113,local_lock=none,addr=192.168.1.107)

On engine:
Storage -> Storage Domains -> iso
Detach

On engine:
Storage -> Storage Domains -> data
Maintenance

[root@node0 ~]# mount |grep nfs
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

On engine:
Storage -> Storage Domains -> data
Detach
Storage -> Storage Domains -> iso
Remove
Storage -> Storage Domains -> data
Remove

[root@node0 ~]# mount |grep nfs
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
mididell.home:/export/data on /rhev/data-center/mnt/mididell.home:_export_data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.113,local_lock=none,addr=192.168.1.107)
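For convenience, the leftover-mount check above (mount | grep nfs) can also be scripted. This is just a minimal standard-library Python sketch reading /proc/mounts; it is not part of vdsm or ovirt-engine.

# Lists NFS mounts left under oVirt's mount root, equivalent to the
# `mount | grep nfs` checks above. Standard library only; not part of vdsm.
def leftover_ovirt_nfs_mounts(mounts_file="/proc/mounts"):
    leftovers = []
    with open(mounts_file) as f:
        for line in f:
            source, target, fstype = line.split()[:3]
            if fstype.startswith("nfs") and target.startswith("/rhev/data-center/mnt/"):
                leftovers.append((source, target))
    return leftovers

for source, target in leftover_ovirt_nfs_mounts():
    print("still mounted: %s on %s" % (source, target))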

Comment 8 Sandro Bonazzola 2019-05-07 13:48:39 UTC
In vdsm.log I see:

2019-05-07 13:40:23,496+0000 INFO  (jsonrpc/7) [vdsm.api] START disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'tpgt': u'1', u'id': u'f644fb6a-9b12-41b0-a32c-a94ae2463ddf', u'connection': u'mididell.home:/export/data', u'iqn': u'', u'user': u'', u'ipv6_enabled': u'false', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.1.112,45300, flow_id=6619b8aa, task_id=3a8e39bb-b675-4018-88f8-8579fd5594cd (api:48)
2019-05-07 13:40:23,496+0000 INFO  (jsonrpc/7) [storage.Mount] unmounting /rhev/data-center/mnt/mididell.home:_export_data (mount:212)
2019-05-07 13:40:23,502+0000 ERROR (jsonrpc/7) [storage.HSM] Could not disconnect from storageServer (hsm:2502)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2498, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 437, in disconnect
    return self._mountCon.disconnect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 202, in disconnect
    self._mount.umount(True, True)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 214, in umount
    umount(self.fs_file, force=force, lazy=lazy, freeloop=freeloop)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in umount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
MountError: (32, ';umount: /rhev/data-center/mnt/mididell.home:_export_data: mountpoint not found\n')
2019-05-07 13:40:23,619+0000 INFO  (jsonrpc/7) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 477, 'id': u'f644fb6a-9b12-41b0-a32c-a94ae2463ddf'}]} from=::ffff:192.168.1.112,45300, flow_id=6619b8aa, task_id=3a8e39bb-b675-4018-88f8-8579fd5594cd (api:54)
2019-05-07 13:40:23,620+0000 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.13 seconds (__init__:312)
2019-05-07 13:40:25,674+0000 INFO  (periodic/3) [vdsm.api] START repoStats(domains=()) from=internal, task_id=6a0ca582-74bf-4196-9e3b-71ade70f156e (api:48)
2019-05-07 13:40:25,674+0000 INFO  (periodic/3) [vdsm.api] FINISH repoStats return={} from=internal, task_id=6a0ca582-74bf-4196-9e3b-71ade70f156e (api:54)


Attaching vdsm logs.
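For context on the traceback above: VDSM's mount wrapper runs umount through supervdsm and raises MountError with umount's exit code and stderr when it fails; here umount exited with 32 because the path was apparently no longer mounted at that moment ("mountpoint not found"). A simplified sketch of that shape, not the actual vdsm implementation:

import subprocess

class MountError(Exception):
    pass

# Simplified sketch of how a failed umount surfaces as MountError(rc, err),
# matching the (32, 'umount: ... mountpoint not found') seen above.
def umount(fs_file, force=False, lazy=False):
    cmd = ["umount"]
    if force:
        cmd.append("-f")
    if lazy:
        cmd.append("-l")
    cmd.append(fs_file)
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise MountError(proc.returncode, proc.stderr)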

Comment 9 Sandro Bonazzola 2019-05-07 13:49:27 UTC
Created attachment 1565213 [details]
vdsm logs

Comment 10 shani 2019-05-16 13:30:11 UTC
Hi Sandro,
Can you please supply the engine.log?
Also, can you please re-generate the VDSM log in DEBUG mode?

Comment 11 Sandro Bonazzola 2019-05-29 09:25:25 UTC
Reproduced on master:
ovirt4.home: CentOS-7-x86_64-Minimal-1810.iso + yum update + ovirt-release-master - ovirt-engine-4.4.0-0.0.master.20190527202653.git066bed3.el7
host1.home: CentOS-7-x86_64-Minimal-1810.iso + yum update + ovirt-release-master - vdsm-4.40.0-277.giteef7917.el7.x86_64
storage: mididell.home (CentOS 7)

Added host1 to the engine

On host1:
[root@host1 ~]# vdsm-client Host setLogLevel level=DEBUG
true

- attached data domain
- attached iso domain

[root@host1 ~]# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
mididell.home:/export/data on /rhev/data-center/mnt/mididell.home:_export_data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.120,local_lock=none,addr=192.168.1.107)
mididell.home:/export/iso on /rhev/data-center/mnt/mididell.home:_export_iso type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.120,local_lock=none,addr=192.168.1.107)

On engine:
Storage -> Storage Domains -> iso
Maintenance

# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
mididell.home:/export/data on /rhev/data-center/mnt/mididell.home:_export_data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.120,local_lock=none,addr=192.168.1.107)


On engine:
Storage -> Storage Domains -> iso
Detach

On engine:
Storage -> Storage Domains -> data
Maintenance
(Done twice: the first attempt was blocked because the ISO domain was still locked.)

# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)


On engine:
Storage -> Storage Domains -> data
Detach
Storage -> Storage Domains -> iso
Remove
Storage -> Storage Domains -> data
Remove


# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
mididell.home:/export/data on /rhev/data-center/mnt/mididell.home:_export_data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.120,local_lock=none,addr=192.168.1.107)

Attaching debug vdsm and engine logs.

Comment 12 Sandro Bonazzola 2019-05-29 09:26:15 UTC
Created attachment 1574660 [details]
master engine log may 29th attempt

Comment 13 Sandro Bonazzola 2019-05-29 09:27:03 UTC
Created attachment 1574661 [details]
master vdsm with debug log may 29th attempt

Comment 15 Avihai 2019-07-07 12:48:05 UTC
Hi Shani,
The initial description is very simple, but the rest of the bug describes a different, more complex scenario.

Can you please provide a clear reproduction scenario?

Comment 16 Avihai 2019-07-07 13:18:05 UTC
Verified on 4.3.5.2-0.1.el7.

Verification scenario:

1) Create a DC with two SDs: ISO + data (NFS).
2) Move the ISO SD to maintenance and detach it.
3) Move the data SD to maintenance and detach it.

The mount is already removed when the SD is moved to maintenance.
After both SDs are removed, no NFS mounts exist on the host, as expected.

[root@storage-ge8-vdsm3 ~]# mount |grep nfs
[root@storage-ge8-vdsm3 ~]#

Comment 17 Sandro Bonazzola 2019-07-30 14:08:17 UTC
This bug is included in the oVirt 4.3.5 release, published on July 30th 2019.

Since the problem described in this bug report should be resolved in the oVirt 4.3.5 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

