Bug 1498151 - Move download server to the community cage
Summary: Move download server to the community cage
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: M. Scherer
QA Contact:
Depends On:
Reported: 2017-10-03 14:46 UTC by M. Scherer
Modified: 2019-02-04 02:12 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2019-02-04 02:12:40 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Description M. Scherer 2017-10-03 14:46:52 UTC
Description of problem:

Today, yet another rowhammer-style attack paper went out: https://arxiv.org/pdf/1710.00551.pdf (it links to the various earlier papers).

While this is not a new attack, and I guess a rather complex one to mount, we should mitigate the risk by moving the download server and the ansible deployment into the cage. I have heard of people using rowhammer to flip bits to bypass PAM verification (no paper or conference talk has been published yet, as far as I know, so I was not able to evaluate its practicality).

While Rackspace uses ECC (or so I hope; that's what lshw reports), which mitigates the attack down to a denial of service, I would sleep better at night if we moved these 2 servers out of Rackspace and into the cage, in case improvements to the attack do get published.
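For reference, a quick way to check what a host reports about ECC. Both real commands need root and their output fields vary by vendor, so treat this as a sketch; the `sample` string below is a hypothetical dmidecode excerpt used only to demonstrate the check:

```shell
# Real-world checks (need root; output varies by vendor):
#   lshw -class memory | grep -i ecc
#   dmidecode --type memory | grep -i 'Error Correction Type'
#
# Demonstrate the parse on a hypothetical dmidecode excerpt:
sample='Error Correction Type: Multi-bit ECC'
if printf '%s\n' "$sample" | grep -qi 'ecc'; then
  echo "ECC memory reported"
else
  echo "No ECC reported (rowhammer bit flips would go uncorrected)"
fi
```

With ECC, a single flipped bit is corrected (and a multi-bit flip detected), which is why the attack degrades to denial of service rather than silent corruption.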

The rest of the VMs are not as critical as these 2, even if the FreeIPA server should also be moved.

I have already been moving the salt-master for some weeks; I just need to finish the move.

Comment 1 Amye Scavarda 2017-10-03 17:41:44 UTC
The website redirect issues need to take higher priority than this. That's causing an existing user impact; this can come after that ticket is resolved.
https://bugzilla.redhat.com/show_bug.cgi?id=1490994 is the ticket.

Comment 2 Karsten Wade 2017-10-03 18:47:02 UTC
I think these can be worked in parallel. Let's test Michael's solution for the redirect issue. As for who does it and how, unfortunately we don't yet have rules of engagement in place around the WordPress instance.

I'll start an email to discuss admin on the WP instance.

In the other ticket we can resolve who tests what, where, and get that whole thing taken care of.

For the security concerns, it seems prudent to take care of them ASAP, especially since Michael's involvement in the other bug is lightweight: providing us regexp solutions to test. Would you be OK with working these in parallel?

Comment 3 Amye Scavarda 2017-10-03 19:07:59 UTC
From what I see from the initial comments, I think this is something that could be looked at later this week. 

Let's get the current redirects sorted out first, and then we can move to moving the download server.

Comment 4 M. Scherer 2017-10-19 13:06:02 UTC
So the move of salt-master is done; I am fixing the last few things and will then take the old VM offline (and open bugs for the issues I saw).

Comment 5 M. Scherer 2017-10-19 15:32:50 UTC
So, the article I mentioned in the first comment is out now:


Comment 6 M. Scherer 2017-10-19 16:28:42 UTC
I also asked Rackspace about KSM usage.

Comment 7 M. Scherer 2017-10-25 12:23:08 UTC
So Rackspace does not use KSM, according to support; and according to an ex-Rackspace person I reached, they did test it and it was not good enough in practice.
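Context on why KSM matters here: kernel same-page merging deduplicates identical pages across VMs, which is what lets an attacker in one guest land their page next to (or merged with) a victim's, enabling cross-tenant rowhammer and memory-deduplication side channels. On a Linux host you control, its state is a one-line sysfs read; a sketch, assuming the standard sysfs path (the file only exists when KSM is compiled into the kernel):

```shell
# /sys/kernel/mm/ksm/run: 0 = disabled, 1 = running,
# 2 = stopped but merged pages are kept.
ksm_run=/sys/kernel/mm/ksm/run
if [ -r "$ksm_run" ]; then
  echo "KSM run state: $(cat "$ksm_run")"
else
  echo "KSM not available on this kernel"
fi
```

On a host serving untrusted guests, a run state of 0 is the safe answer.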

So I just need to move the download server now, but that's less urgent (until the next rowhammer-like issue)

Comment 8 Nigel Babu 2018-10-08 03:14:15 UTC
Updating the bug so it covers just the download server. What's the plan here? We need to move off Rackspace before the shutdown (that's when the free tier ends).

Do we plan to move the download server into the cage, or do we want to put it on another cloud vendor?

Comment 9 M. Scherer 2018-10-08 13:19:03 UTC
By which shutdown do you mean the "Christmas shutdown"? I thought we had already ended the free tier a while ago?

Comment 10 Amye Scavarda 2018-10-08 16:23:09 UTC
Correct on 'Christmas Shutdown'. 
We've been in a grace period with Rackspace, but we need to turn everything off in that space by December 21st; the grace period itself expires at the end of December.

Comment 11 Nigel Babu 2019-02-04 02:12:40 UTC
This is now complete.
