Created attachment 1719995
Added logs and gif
Description of problem:
When trying to put the only host in a data center into maintenance, and the host is a Power 9, the deactivation fails and the host becomes non-responsive for a few seconds.
Version-Release number of selected component (if applicable):
RHV 4.4.3-7 engine (x86 Intel engine, ovirt-engine 18.104.22.168-0.5.el8ev) and host (vdsm.ppc64le 4.40.32-1.el8ev), both with RHEL 8.3 installed.
Steps to Reproduce:
1. Create a 4.5 data center and a Power 9 (ppc64) cluster with 4.5 compatibility level.
2. Add the Power 9 host and create a new NFS storage domain.
3. Activate the storage domain.
4. Put the NFS storage domain into maintenance (an SDK sketch of these steps follows below).
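For reference, the same deactivation can be driven through the oVirt Python SDK (ovirt-engine-sdk-python). This is a minimal sketch only; the engine URL, credentials, and the data center / storage domain names (p9-dc, nfs-sd) are placeholders, not values from this environment:

# Minimal reproduction sketch using ovirt-engine-sdk-python (ovirtsdk4).
# Engine URL, credentials, and object names below are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Look up the data center and the NFS storage domain attached to it.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=p9-dc')[0]
attached_sds_service = dcs_service.data_center_service(dc.id).storage_domains_service()
sd = next(s for s in attached_sds_service.list() if s.name == 'nfs-sd')

# Step 4: request deactivation (move the domain to maintenance).
attached_sds_service.storage_domain_service(sd.id).deactivate()

connection.close()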
Actual results:
1. Created a 4.5 data center and a Power 9 (ppc64) cluster with 4.5 compatibility level.
2. Added the Power 9 host and created a new NFS storage domain.
3. Activated the storage domain.
4. The NFS storage domain failed to deactivate; the Power 9 host became non-responsive for about 5 seconds and then came back up.
Expected results:
Steps 1-3 behaved as expected.
4. The NFS storage domain should deactivate and be put into maintenance.
The host should never become non-responsive.
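To confirm the expected behavior, one could poll the attached storage domain until it reports Maintenance status. A short sketch, using the same placeholder URL, credentials, and names as above; the 5-minute timeout is arbitrary:

# Sketch: verify the domain actually reaches Maintenance after deactivate().
# Placeholder URL/credentials/names as in the sketch above; timeout is arbitrary.
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=p9-dc')[0]
attached_sds_service = dcs_service.data_center_service(dc.id).storage_domains_service()
sd = next(s for s in attached_sds_service.list() if s.name == 'nfs-sd')
sd_service = attached_sds_service.storage_domain_service(sd.id)

deadline = time.time() + 300  # wait up to 5 minutes (arbitrary)
while time.time() < deadline:
    if sd_service.get().status == types.StorageDomainStatus.MAINTENANCE:
        print('Storage domain reached maintenance, as expected.')
        break
    time.sleep(5)
else:
    print('Timed out: storage domain did not reach maintenance.')
connection.close()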
Tamir, can you try to reproduce it with the same NFS storage but with a non-PPC cluster?
I tested it using an x86 cluster and couldn't reproduce the issue.
This is related only to PPC.
Tamir, I see that you closed bug 1886424, and they both look to me like they stem from the same cause, most likely an environmental issue with the NFS server. Can you please retest?
Hi Tal, I have reproduced it.
It still happens with the same host but a different NFS server, following the same steps as before.
This bug/RFE hasn't gotten enough attention so far and is now flagged as pending close.
Please review whether it is still relevant and provide additional details/justification/patches
if you believe it should get more attention for the next oVirt release.
This bug didn't get any attention in a long time, and it's not planned in the foreseeable future. The oVirt development team has no plans to work on it.
Please feel free to reopen if you have a plan for how to contribute this feature/bug fix.