Bug 1368339 - Access to slave{24,27 and one other centos-slave}.cloud.gluster.org
Summary: Access to slave{24,27 and one other centos-slave}.cloud.gluster.org
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-19 06:20 UTC by Kaushal
Modified: 2016-08-23 07:48 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-08-23 07:48:23 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
kshlm rsa ssh key (394 bytes, text/plain)
2016-08-19 06:20 UTC, Kaushal

Description Kaushal 2016-08-19 06:20:40 UTC
Created attachment 1192041 [details]
kshlm rsa ssh key

I need temporary access to slave24, slave27, and any other centos-slave to look around. The machines need not be stopped or removed from the build pool.

The two mentioned slaves have been failing all centos6-regression jobs due to the failure of a particular test (./tests/bugs/cli/bug-1320388.t). The test fails for a different reason on each machine, but it has passed on other slaves.

The failures are TLS-related, and I want to check whether the environment on these two machines differs from that of another slave.
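
A minimal sketch of the kind of environment comparison meant here, assuming the jenkins account and the usual GlusterFS TLS file locations (/etc/ssl/glusterfs.*); the actual commands run were not recorded in this bug:

    # Hypothetical check of TLS-relevant state on the two failing slaves.
    for host in slave24.cloud.gluster.org slave27.cloud.gluster.org; do
        echo "== $host =="
        ssh jenkins@"$host" '
            openssl version                          # TLS library in use
            rpm -q openssl 2>/dev/null               # exact package build
            ls -l /etc/ssl/glusterfs.* 2>/dev/null   # cert/key/CA used by GlusterFS TLS, if present
        '
    done

Comparing this output against a slave where the test passes would show whether the machines differ in library versions or in leftover certificate files.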

My SSH key is attached.

Comment 1 Nigel Babu 2016-08-19 06:27:16 UTC
I've taken the machines out of the pool anyway.

You have slave24, slave27, and slave32. Log in as jenkins@. Let me know in the bug when you're done with them.
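
For reference, with the attached key saved locally, access would look something like this (the key path is an assumption):

    ssh -i ~/.ssh/kshlm_rsa jenkins@slave24.cloud.gluster.org   # likewise for slave27 and slave32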

Comment 2 Kaushal 2016-08-19 07:01:08 UTC
I'm done with the machines. I haven't changed anything, so you can add them back to the pool directly. Thanks.

Comment 3 Nigel Babu 2016-08-23 06:18:44 UTC
slave32 is back in the pool. I think I saw the review request for the fix somewhere. I'll hold off on returning the other two machines to the pool until the bug is fixed.

Comment 4 Kaushal 2016-08-23 06:27:45 UTC
The fix has been merged into master [1]. The backports to release-3.7 and release-3.8 are under review [2] and [3].

[1] https://review.gluster.org/15202
[2] https://review.gluster.org/15227
[3] https://review.gluster.org/15228

Comment 5 Nigel Babu 2016-08-23 07:48:23 UTC
Excellent, I've put the machines back into the pool.

