We need a jump host for machines in the cage that can be given to the larger developer community for debugging test failures, among other things.
So I'd like to get a bit more information on what we plan. Right now the builders are on the internal network, so I need to verify whether there is any risk in giving shell access to folks who could then jump onto the internal network.
I am also wondering if we should have some automation around wiping the builder and pulling users' SSH keys from GitHub.
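For the key-pulling part: GitHub serves each user's public keys in plain text at `https://github.com/<user>.keys`, one key per line. A minimal sketch of fetching and filtering that output could look like the following; the allowed key types and the split into a pure `filter_keys` helper are my assumptions, not an existing implementation.

```python
# Sketch: pull a user's public SSH keys from GitHub's <user>.keys endpoint
# and keep only lines that look like supported public keys. ALLOWED_TYPES
# is an illustrative whitelist, adjust to local policy.
from urllib.request import urlopen

ALLOWED_TYPES = ("ssh-ed25519", "ssh-rsa", "ecdsa-sha2-nistp256")

def filter_keys(raw: str) -> list[str]:
    """Keep only lines that start with an allowed key type."""
    keys = []
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith(ALLOWED_TYPES):
            keys.append(line)
    return keys

def fetch_github_keys(user: str) -> list[str]:
    """Fetch https://github.com/<user>.keys and filter it."""
    with urlopen(f"https://github.com/{user}.keys") as resp:
        return filter_keys(resp.read().decode())
```

The filtered list could then be written to the temporary account's `authorized_keys` by whatever automation provisions the bastion.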
And if we want a bastion, we will need some kind of user accounts on it, which means fixing the FreeIPA setup.
What I'd like to do is an on-demand setup: an Ansible playbook that will set up temporary bastion access into the internal network, scoped to a particular builder. This will be needed quite rarely, but right now we have no way to provide it when it is.
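A rough sketch of what such a playbook could look like; the host and variable names (`bastion`, `dev_user`, `github_user`, `builder_host`) are placeholders I made up, and the exact tasks would depend on how the bastion is built:

```yaml
# Hypothetical on-demand access playbook (names and vars are assumptions).
- hosts: bastion
  become: true
  tasks:
    - name: Create a temporary account for the developer
      user:
        name: "{{ dev_user }}"
        shell: /bin/bash

    - name: Authorize the developer's GitHub SSH keys
      authorized_key:
        user: "{{ dev_user }}"
        key: "https://github.com/{{ github_user }}.keys"

    - name: Only allow jumping to the one builder being debugged
      blockinfile:
        path: /etc/ssh/sshd_config
        block: |
          Match User {{ dev_user }}
              PermitOpen {{ builder_host }}:22
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```

Tearing access back down would be a second play (or the same one with a `state: absent` flag) that removes the account and the `sshd_config` block.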
So some automation, but not a lot, is good enough for right now.
From there, people could jump to the rest of the LAN. I would frankly deny that request for now, or we would need to make a lot of changes to the networking setup.
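One way to enforce that without reworking the whole network would be a default-drop forward policy on the bastion itself, allowing SSH only to the builder under debug. A hypothetical nftables fragment (10.0.0.42 is a placeholder for that builder's address):

```
# Hypothetical /etc/nftables.conf fragment for the bastion:
# drop forwarded traffic except SSH to the one builder under debug.
table inet bastion {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        ip daddr 10.0.0.42 tcp dport 22 accept
    }
}
```

This keeps the "no jumping to the rest of the LAN" rule on the box we control rather than relying on per-user SSH configuration alone.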
OK, so let me try to figure out how I would attack this if I were given a server with root access.
That's purely theoretical, because I think no one will pull this off. Assuming someone is root on the builder, even with the LAN locked down, a malicious user could wreak havoc with IP/MAC spoofing on the internal LAN, resulting in potential MITM (e.g. if someone steals the IP of the internal squid/unbound).
By itself that shouldn't cause much trouble, but someone doing a MITM on git.gluster.org could inject code into the build, thereby compromising more internal builders. Even then, it wouldn't get an attacker very far.
We also have the issue of keepalived not encrypting the VRRP password (https://louwrentius.com/configuring-attacking-and-securing-vrrp-on-linux.html), which opens up more ways to do a MITM (this time at the firewall level), which isn't great either.
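Since VRRP authentication goes over the wire in clear text, one mitigation (assuming our keepalived version supports it, and with placeholder addresses) is to stop multicasting the announcements and send them unicast between the two firewalls only, plus filtering IP protocol 112 from everything else:

```
# Hypothetical keepalived.conf fragment: use unicast peers so VRRP
# announcements only travel between the two firewalls, instead of
# relying on the (cleartext) auth_pass. Addresses are placeholders.
vrrp_instance FW_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    unicast_src_ip 192.0.2.10
    unicast_peer {
        192.0.2.11
    }
    virtual_ipaddress {
        192.0.2.1/24
    }
}
```

That doesn't make VRRP itself secure, but it shrinks the set of hosts that can even see or inject the announcements.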
So that could result in a MITM on the squid/unbound side, on the proxy side, or at the firewall level.
We also have the issue of using Gluster for the proxy internally, which may need some care since we just found a ton of issues last month (https://access.redhat.com/errata/RHSA-2018:2608), so I would like to do more hardening on this side.
I am also unsure about the auth we are using, because if an attacker could use a MITM to join the Gluster cluster, that would let them steal the Let's Encrypt certs, and then decode the traffic on the proxy side. But that's a bit far-fetched, and there isn't any auth or anything worth stealing there anyway.
So, nothing urgent comes to mind (the MITM is kinda bad, but I think nothing critical would happen, just DoS/disruption). Maybe I am just too cautious, but I am a bit uneasy for now.
I wonder if this couldn't be used for that, since I guess adding logging/audit would likely help a lot to deter an attacker.
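For the audit side, a few auditd rules on the builder would at least make a malicious session leave a trail. A hypothetical fragment (the `-k` key names are arbitrary labels I chose):

```
# Hypothetical /etc/audit/rules.d/builder.rules fragment:
# log command executions by real users and watch credential/SSH config files.
-a always,exit -F arch=b64 -S execve -F auid>=1000 -k builder-exec
-w /etc/passwd -p wa -k identity
-w /etc/ssh/sshd_config -p wa -k sshd-config
-w /root/.ssh -p wa -k root-ssh
```

Shipping those logs off the builder (which the attacker controls, by assumption) to a central host would be needed for them to actually be trustworthy.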
This bug has been moved to https://github.com/gluster/project-infrastructure/issues/40 and will be tracked there from now on. Visit the GitHub issue URL for further details.