Bug 1177624 - RHEV-M managed firewall blocks NFS rpc.statd notifications
Summary: RHEV-M managed firewall blocks NFS rpc.statd notifications
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Tal Nisan
QA Contact: lkuchlan
URL:
Whiteboard:
Depends On:
Blocks: 1192014
 
Reported: 2014-12-29 13:47 UTC by Barak Korren
Modified: 2016-03-10 11:58 UTC
CC List: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When installing a new host in Virt mode (without Gluster support), port 111 was not opened for TCP and UDP, which blocked rpc.statd. With this update, the required ports are opened in the firewall.
Clone Of:
Cloned to: 1192014
Environment:
Last Closed: 2016-03-10 10:33:51 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
ylavi: Triaged+




Links
oVirt gerrit 37602 (master, MERGED): host-deploy: Configure firewall to allow rpc.statd port

Description Barak Korren 2014-12-29 13:47:05 UTC
Description of problem:
When set up to manage the firewall on the hosts, RHEV-M configures iptables to block port 111 as well as the port used by rpc.statd. This disrupts important push notifications between NFS storage servers and the RHEV hosts that have to do with NFSv3 file locking status.

How reproducible:
Easily

Steps to Reproduce:
1. Set up RHEV-M to manage the firewall on the hosts.
2. Set up some hosts.
3. Use NFSv3 storage domains.
4. Inspect iptables on one of the hosts (see the inspection sketch below).
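
A minimal inspection sketch, assuming root shell access on the host; rpcinfo and iptables-save are standard tools on RHEL hosts:

  # Show which port rpc.statd (the 'status' RPC service) registered with rpcbind
  rpcinfo -p localhost | grep status

  # Check whether port 111 and the statd port appear in the engine-managed ACCEPT rules
  iptables-save | grep -- '--dport 111'
  iptables -L INPUT -n --line-numbers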

Actual results:
iptables is configured on the hosts to block port 111 as well as the port used by rpc.statd

Expected results:
If using NFS storage, iptables should be configured with port 111 open as well as an additional port for rpc.statd. In addition, rpc.statd should be configured to start up on a fixed port number instead of a dynamic one.
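
A rough sketch of what the expected configuration implies, assuming an iptables-based setup on a RHEL-family host; the statd port 662 below is only an illustrative example, not a value mandated by this bug or the eventual fix:

  # /etc/sysconfig/nfs - pin rpc.statd to a fixed port (example value)
  STATD_PORT=662

  # iptables rules the engine-managed firewall would then need to include
  -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
  -A INPUT -p udp -m udp --dport 111 -j ACCEPT
  -A INPUT -p tcp -m tcp --dport 662 -j ACCEPT
  -A INPUT -p udp -m udp --dport 662 -j ACCEPT

With statd pinned, the rules stay valid across host reboots; without pinning, the statd port changes on every restart and cannot be opened statically.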

Comment 1 Nir Yechiel 2014-12-30 09:08:02 UTC
Hi Allon,

Can you share more information on this issue? Does it mean that NFS is entirely broken now? Only NFSv3? I am also curious to know why this was only raised now.

Thanks,
Nir

Comment 2 Allon Mureinik 2014-12-31 12:01:28 UTC
(In reply to Nir Yechiel from comment #1)

> Does it mean that NFS is
> entirely broken now? Only NFSv3? 
NFSv4 doesn't need rpcbind, but this hardly diminishes the issue - NFSv3 is the common use case.

> I am also curious to know why this was
> only raised now.
It was raised as soon as it was encountered.
Why wasn't it encountered earlier? I have no clue.
Either this is a recent regression in RHEV and/or the platform, or the QE matrix is simply lacking.

Comment 3 Barak 2015-01-04 17:16:37 UTC
Allon,

Please explain why this is a blocker?
We do not use NFS locks (and as far as I recall we never have).


In addition, no functional problem occurs in a regular environment; the issue (a redundant rpc.statd process ... a bug was opened on RHEL as well) happens only in ART, where QE restart rpcbind independently.

In addition, this is not a Networking bug but rather a Storage one (and personally I don't think this is a bug).

AFAIK we only configure port 111 on Gluster nodes and have never done that for regular nodes.

see IPTablesConfig and IPTablesConfigForGluster in vdc_options.
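
For reference, those templates can be dumped on the engine machine with engine-config, assuming the standard tool is installed (output layout may differ between versions):

  # Print the iptables rule templates that host deploy pushes to hosts
  engine-config -g IPTablesConfig
  engine-config -g IPTablesConfigForGluster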

Comment 4 Allon Mureinik 2015-01-06 16:34:13 UTC
(In reply to Barak from comment #3)
> Allon,
> 
> Please explain why this is a blocker?
> We do not use NFS locks (and as far as I recall we never have).
That's my default for z-streams, but you're right, this should be an exception, not a blocker.

> In addition, no functional problem occurs in a regular environment; the issue
> (a redundant rpc.statd process ... a bug was opened on RHEL as well) happens
> only in ART, where QE restart rpcbind independently.
> 
> In addition, this is not a Networking bug but rather a Storage one (and
> personally I don't think this is a bug).
> 
> AFAIK we only configure port 111 on Gluster nodes and have never done that
> for regular nodes.
> 
> see IPTablesConfig and IPTablesConfigForGluster in vdc_options.

Comment 6 Julie 2015-03-20 05:59:49 UTC
Hi Tal,
    Can you explain what 'installing a new host from a virt mode RHEV only' means? 

Cheers,
Julie

Comment 7 Tal Nisan 2015-03-22 13:23:43 UTC
Of course. When you install RHEV/oVirt via the installer, you choose one of three application modes: Gluster, Virt, or both.
If you installed the application without Gluster support (i.e. Virt mode), host deploy would not open port 111 when a new host was added, whereas in the Gluster/Both application modes it would. This bug fix makes sure the port is opened in Virt mode as well.
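
A quick way to verify this on a newly added Virt-mode host, assuming root shell access on the host and network access from the NFS server (<host-address> is a placeholder):

  # On the host: the ACCEPT rule for rpcbind should now be present
  iptables-save | grep -- '--dport 111'

  # From the NFS server: rpcbind on the host should be reachable
  rpcinfo -p <host-address>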

Comment 8 Julie 2015-03-22 23:27:01 UTC
Thanks Tal.

Comment 9 lkuchlan 2015-04-20 14:16:39 UTC
Tested using ovirt-engine-3.6.0-0.0.master.20150412172306.git55ba764

Comment 10 Allon Mureinik 2016-03-10 10:33:51 UTC
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE

