Bug 1518093 - Jumphost for machines in the cage
Summary: Jumphost for machines in the cage
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: M. Scherer
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1518062
 
Reported: 2017-11-28 08:19 UTC by Nigel Babu
Modified: 2020-03-12 12:58 UTC
CC: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-03-12 12:58:18 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nigel Babu 2017-11-28 08:19:19 UTC
We need a jump host for machines in the cage that can be given to the larger developer community for debugging test failures, among other things.

Comment 1 M. Scherer 2017-11-29 11:33:42 UTC
So, I would like to get a bit more information on what we plan. Right now, the builders are on the internal network, so I need to verify whether there is any risk in giving shell access to folks who could then jump onto the internal network.

I am also wondering if we should have some automation for wiping the builder and pulling the SSH keys from GitHub.
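For the key-pulling part, GitHub publishes each user's SSH public keys at https://github.com/<username>.keys, and Ansible's authorized_key module accepts such a URL directly. A minimal sketch, with hypothetical variable names:

    # Sketch: authorize a contributor's GitHub keys for an account on a builder.
    # builder_user and github_user are assumed variables, not existing inventory.
    - name: Pull a contributor's public keys from GitHub
      ansible.posix.authorized_key:
        user: "{{ builder_user }}"
        key: "https://github.com/{{ github_user }}.keys"
        state: present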

And if we want a bastion, we need some kind of user accounts on it, which means fixing the FreeIPA setup.

Comment 2 Nigel Babu 2018-10-08 03:08:50 UTC
What I'd like to do is an on-demand setup: an Ansible playbook that will set up temporary bastion access into the internal network and to a particular builder. This will be needed quite rarely, but when it is needed, we currently have no way to provide it.

So some automation, but not a lot, is good enough for right now.
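As a rough illustration of the scale involved, a minimal playbook along those lines might look like the following sketch; the host group, account names, and variables are hypothetical, not existing inventory:

    ---
    # Sketch: grant a developer temporary access to a bastion host.
    - name: Grant temporary bastion access
      hosts: bastion
      become: true
      vars:
        dev_user: contributor     # hypothetical temporary account name
        github_user: contributor  # GitHub account whose public keys to trust
      tasks:
        - name: Create a temporary account on the bastion
          ansible.builtin.user:
            name: "{{ dev_user }}"
            shell: /bin/bash
            state: present

        - name: Authorize the developer's GitHub SSH keys
          ansible.posix.authorized_key:
            user: "{{ dev_user }}"
            key: "https://github.com/{{ github_user }}.keys"
            state: present

Re-running the same plays with state: absent would revoke the access once the debugging session is over.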

Comment 3 M. Scherer 2018-10-08 13:32:25 UTC
Then people could jump to the rest of the LAN. I would frankly deny that request for now; otherwise we would need to make a lot of changes to the networking setup.

Comment 4 M. Scherer 2018-10-08 14:20:48 UTC
OK, so let me try to figure out how I would attack this if I were given a server with root access.

That's purely theoretical, because I think no one will pull this off. Assuming someone is root on the builder, even with the LAN locked down, a malicious actor could wreak havoc with IP/MAC spoofing on the internal LAN, resulting in a potential MITM (for example, if someone steals the IP of the internal squid/unbound).

By itself, that shouldn't cause much trouble, but someone doing a MITM on git.gluster.org could inject code into the build, resulting in the compromise of more internal builders. Even that wouldn't amount to much, however.

We have the issue of keepalived not encrypting the VRRP password (https://louwrentius.com/configuring-attacking-and-securing-vrrp-on-linux.html), which could open up more ways to do a MITM (this time at the firewall level), which isn't great either.
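For context on that weakness: keepalived's classic PASS authentication puts the password in cleartext in every VRRP advertisement, so anyone who can sniff the LAN segment can read it and forge advertisements to take over the virtual IP. An illustrative keepalived.conf excerpt (all values are placeholders):

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        authentication {
            auth_type PASS      # password travels in cleartext on the wire
            auth_pass s3cret    # placeholder; readable by anyone sniffing VRRP
        }
        virtual_ipaddress {
            192.0.2.1           # placeholder virtual IP
        }
    }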

So that could result in a MITM either on the squid/unbound side, on the proxy side, or on the firewall.

We also have the issue of using gluster for the proxy internally, which may need some care since we just found a ton of issues last month (https://access.redhat.com/errata/RHSA-2018:2608), so I would like to do more hardening on this side.

I am also unsure about the auth we are using, because if a user can use a MITM to become part of the gluster cluster, that would permit stealing the Let's Encrypt certs, and then the same attacker could decode the traffic on the proxy side. But that's a bit far-fetched, and there isn't any auth or anything worth stealing anyway.

So, nothing urgent comes to mind (the MITM is kinda bad, but I think nothing critical would happen, just DoS/disruption), and maybe I am just too cautious, but I am a bit uneasy for now.

I wonder if this couldn't be used for that:
https://github.com/gravitational/teleport

I guess adding logging/auditing would likely help a lot to deter an attacker.
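For reference, Teleport acts as an SSH gateway that records sessions and centralizes audit logs. A minimal sketch of a teleport.yaml along those lines (hostnames and values are placeholders, not a tested configuration):

    teleport:
      nodename: bastion.example.org   # placeholder hostname
    auth_service:
      enabled: yes
      session_recording: node         # record what each user does on the node
    proxy_service:
      enabled: yes
    ssh_service:
      enabled: yes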

Comment 5 Worker Ant 2020-03-12 12:58:18 UTC
This bug has been moved to https://github.com/gluster/project-infrastructure/issues/40 and will be tracked there from now on. Visit the GitHub issue URL for further details.

