Description of problem:
When set up to manage the firewall on the hosts, RHEV-M configures iptables to block port 111 as well as the port used by rpc.statd. This disrupts important push notifications between NFS storage servers and the RHEV hosts that have to do with NFSv3 file locking status.

How reproducible:
Easily

Steps to Reproduce:
1. Set up RHEV-M to manage the firewall on the hosts.
2. Set up some hosts.
3. Use NFSv3 storage domains.
4. Inspect iptables on one of the hosts.

Actual results:
iptables is configured on the hosts to block port 111 as well as the port used by rpc.statd.

Expected results:
If using NFS storage, iptables should be configured with port 111 open, as well as an additional port for rpc.statd. In addition, rpc.statd should be configured to start on a fixed port number instead of a dynamic one.
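To illustrate the expected configuration (a sketch only, not taken from the actual fix; the port number 662 is an arbitrary example), rpc.statd can be pinned to a fixed port via /etc/sysconfig/nfs on the host, and iptables would then need that port and port 111 open:

  # /etc/sysconfig/nfs - pin rpc.statd to a fixed port (example value)
  STATD_PORT=662

  # additional iptables rules needed on the host (sketch)
  -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
  -A INPUT -p udp -m udp --dport 111 -j ACCEPT
  -A INPUT -p tcp -m tcp --dport 662 -j ACCEPT
  -A INPUT -p udp -m udp --dport 662 -j ACCEPT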
Hi Allon,

Can you share more information on this issue? Does it mean that NFS is entirely broken now? Only NFSv3? I am also curious to know why this is raised only now.

Thanks,
Nir
(In reply to Nir Yechiel from comment #1)
> Does it mean that NFS is entirely broken now? Only NFSv3?

NFSv4 doesn't need rpcbind, but this hardly diminishes the issue - NFSv3 is the common use case.

> I am also curious to know why this is raised only now.

It was raised as soon as it was encountered. Why wasn't it encountered earlier? I have no clue. Either this is a recent regression in RHEV and/or the platform, or the QE matrix is simply lacking.
Allon,

Please explain why this is a blocker. We do not use NFS locks (and as far as I recall, we never have). In addition, no functional problem occurs on a regular environment; it (the redundant rpc.statd process ... a bug was opened on RHEL as well) happens only on ART, where QE restarts rpcbind independently.

In addition, this is not a Networking bug but rather a Storage one (and personally I don't think this is a bug at all).

AFAIK, we only configure port 111 on Gluster nodes, and we have never done that for regular nodes - see IPTablesConfig and IPTablesConfigForGluster in vdc_options.
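For reference, the current values of these options can be inspected on the engine machine with engine-config (a sketch; the exact output depends on the installed version):

  # engine-config -g IPTablesConfig
  # engine-config -g IPTablesConfigForGluster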
(In reply to Barak from comment #3)
> Allon,
>
> Please explain why this is a blocker.
> We do not use NFS locks (and as far as I recall, we never have).

That's my default for z-streams, but you're right, this should be an exception, not a blocker.

> In addition, no functional problem occurs on a regular environment; it (the
> redundant rpc.statd process ... a bug was opened on RHEL as well) happens
> only on ART, where QE restarts rpcbind independently.
>
> In addition, this is not a Networking bug but rather a Storage one (and
> personally I don't think this is a bug at all).
>
> AFAIK, we only configure port 111 on Gluster nodes, and we have never done
> that for regular nodes - see IPTablesConfig and IPTablesConfigForGluster in
> vdc_options.
Hi Tal,

Can you explain what 'installing a new host from a virt mode RHEV only' means?

Cheers,
Julie
Of course. When you install RHEV/oVirt via the installer, you have three application modes: Gluster, Virt & Both. If you installed the application without Gluster support (i.e. Virt mode), the host deploy of a newly added host would not have opened port 111, whereas in the Gluster/Both application modes it would have. This bug fix makes sure that the port is opened in Virt mode as well.
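As a quick sanity check after deploying a host in Virt mode (a sketch, not part of the verification steps recorded here), you can list the generated rules on the host and look for port 111:

  # iptables-save | grep -w 111

With the fix, ACCEPT rules for TCP and UDP destination port 111 should show up; without it, they would be missing.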
Thanks Tal.
Tested using ovirt-engine-3.6.0-0.0.master.20150412172306.git55ba764
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE