Bug 1126583 - Unable to connect to instances via VNC, nova vnc settings on compute are incorrect.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ga
Sub Component: Installer
Assignee: Jason Guiditta
QA Contact: Toni Freger
URL:
Whiteboard:
Depends On: 1126332
Blocks:
 
Reported: 2014-08-04 20:28 UTC by Mike Burns
Modified: 2014-08-21 18:08 UTC (History)
9 users

Fixed In Version: openstack-foreman-installer-2.0.19-1.el6ost
Doc Type: Bug Fix
Doc Text:
Clone Of: 1126332
Environment:
Last Closed: 2014-08-21 18:08:10 UTC




Links:
Red Hat Product Errata RHBA-2014:1090 (normal, SHIPPED_LIVE): Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory, last updated 2014-08-22 15:28:08 UTC

Comment 1 Mike Orazi 2014-08-04 21:44:08 UTC
Can we get a limit/range of ports to open for VNC access?  Would 5, 10, or 20 be a reasonable limit?

Comment 2 Lars Kellogg-Stedman 2014-08-04 23:08:36 UTC
I suspect that none of those is a reasonable limit (it seems likely that someone may want to spawn more than 20 instances on a given compute host).  If we're limiting access to the controller(s), then I'm not sure we need to tune this too tightly, e.g. it may be sufficient to do this:

    -A INPUT -p tcp --dport 5900:65535 -s x.x.x.x -j ACCEPT

(where x.x.x.x is the address of a controller, with a rule for each controller)

And that would cover us in all situations.

If people think that's "too open" (and provide supporting documentation), maybe:

    -A INPUT -p tcp --dport 5900:5999 -j ACCEPT

Which gets us to 100 instances/host, which is...bigger?  Maybe better than 20?
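
The "rule for each controller" idea above could be sketched as a small generator; a minimal sketch, assuming the controller management addresses are known (the IPs below are hypothetical placeholders, not taken from this deployment):

```shell
gen_vnc_rules() {
    # $@ = controller management-network IPs (placeholders in the example call)
    for ip in "$@"; do
        # One ACCEPT rule per controller, mirroring the single-source
        # rule suggested above: full VNC port range, limited by source.
        echo "-A INPUT -p tcp --dport 5900:65535 -s ${ip} -j ACCEPT"
    done
}

# Example with made-up addresses:
gen_vnc_rules 192.0.2.10 192.0.2.11
```

This only prints the rules; feeding them to iptables-restore or a config management tool is left to the installer.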

Comment 3 Jason Guiditta 2014-08-05 15:37:51 UTC
Just to clarify (think I am being dense here): This firewall rule belongs on the compute node, and opens the port range only to the controller's openstack public network?

Comment 4 Lars Kellogg-Stedman 2014-08-05 15:55:30 UTC
The first part is correct.  This is about modifying the compute host firewall.

We would be opening access to the controller's *management* network address, which is where connections to the compute node vnc servers would originate.

Comment 5 Jason Guiditta 2014-08-05 18:16:10 UTC
(In reply to Lars Kellogg-Stedman from comment #2)
> I suspect that none of those is a reasonable limit (it seems likely that
> someone may want to spawn more than 20 instances on a given compute host). 
> If we're limiting access to the controller(s), then I'm not sure we need to
> tune this too tightly, e.g. it may be sufficient to do this:
> 
>     -A INPUT -p tcp --dport 5900:65535 -s x.x.x.x -j ACCEPT
> 
> (where x.x.x.x is the address of a controller, with a rule for each
> controller)
> 
> And that would cover us in all situations.
> 
> If people think that's "too open" (and provide supporting documentation),
> maybe:
> 
>     -A INPUT -p tcp --dport 5900:5999 -j ACCEPT

So, I just did a deployment with nova networking, and it turns out this ^^ is exactly what we are already setting, and iptables -S shows:
-A INPUT -p tcp -m multiport --dports 5900:5999 -m comment --comment "001 nova compute incoming" -j ACCEPT

Now, I can certainly bump this up to the first range you suggest, and add the controller as the source.  One concern there, though, is how this would impact HA, since the user will access via the VIP, which haproxy will translate into one of however many nodes are on the back end.  I suppose staypuft could populate this with a list, once I figure out the syntax puppet-firewall needs for multiple sources.

> 
> Which gets us to 100 instances/host, which is...bigger?  Maybe better than
> 20?

I am testing now with:

vncserver_listen => '0.0.0.0',

and 

vncserver_proxyclient_address => '0.0.0.0',
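
Those Puppet parameters map onto the [DEFAULT] section of the compute node's nova.conf; a sketch of the resulting fragment (0.0.0.0 for vncserver_proxyclient_address is the test value from this comment; in practice it would normally be the compute host's management IP):

```ini
[DEFAULT]
# Bind the instance VNC servers to all interfaces so the controller's
# proxy can reach them over the management network.
vncserver_listen = 0.0.0.0
# Address the noVNC proxy uses to connect back to this compute node;
# 0.0.0.0 here is only the value under test in this comment.
vncserver_proxyclient_address = 0.0.0.0
```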

Comment 7 Lars Kellogg-Stedman 2014-08-06 12:54:45 UTC
It turns out the first setup I looked at -- with the firewall issue -- was a packstack deployment.  So, never mind.

I think we can CLOSE NOTABUG this bz.  We still need the other one (1126332) on fixing nova.conf.

Comment 8 Jason Guiditta 2014-08-11 21:54:21 UTC
Patch posted to properly configure nova on the compute nodes; the firewall VNC ports are already open (refinement may be needed later):
https://github.com/redhat-openstack/astapor/pull/346

Comment 9 Jason Guiditta 2014-08-11 23:29:16 UTC
Merged

Comment 11 Omri Hochman 2014-08-21 08:02:08 UTC
Verified with: 
ruby193-rubygem-staypuft-0.2.5-1.el6ost.noarch
openstack-foreman-installer-2.0.21-1.el6ost.noarch
 
According to Bz #1126332: the link created when attempting to open the VNC console points to an internal network IP such as 192.168.0.6; this should be replaced with the external network IP to be able to open the console.

The iptables problem mentioned in this bug is fixed.  I changed the VNC link to use the external IP and was then able to open the VNC console -- iptables was a non-issue.
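
For reference, the setting behind the console link changed by hand above is the controller's novncproxy_base_url; an illustrative fragment (the external address below is a placeholder, not taken from this bug):

```ini
[DEFAULT]
# The host in this URL is what the console link hands to the browser,
# so it must be an externally reachable address, not an internal one
# like 192.168.0.6.  203.0.113.5 is a placeholder.
novncproxy_base_url = http://203.0.113.5:6080/vnc_auto.html
```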

Comment 12 errata-xmlrpc 2014-08-21 18:08:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html

