Bug 1368339

Summary: Access to slave{24,27 and one other centos-slave}.cloud.gluster.org
Product: [Community] GlusterFS
Component: project-infrastructure
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Kaushal <kaushal>
Assignee: bugs <bugs>
CC: bugs, gluster-infra, nigelb
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2016-08-23 07:48:23 UTC
Attachments: kshlm rsa ssh key

Description Kaushal 2016-08-19 06:20:40 UTC
Created attachment 1192041 [details]
kshlm rsa ssh key

I need temporary access to slave24, slave27, and any one other centos-slave to look around. The machines need not be stopped or removed from the build pool.

The two slaves mentioned have been failing all centos6-regression jobs because of a particular test (./tests/bugs/cli/bug-1320388.t). The test fails for a different reason on each machine, yet the same test passes on other slaves.

The failures are TLS-related, and I want to check whether the environment on these two machines differs from that of one other (passing) slave; the sketch below shows the kind of comparison I have in mind.
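
A minimal sketch (Python 3), assuming SSH access as jenkins@ to the slaves and that any difference shows up in the OpenSSL/NSS/CA packages or the kernel; the hostnames in the usage comment and the exact check list are illustrative assumptions, not anything already verified on these machines:

#!/usr/bin/env python3
"""Collect TLS-relevant details from build slaves so the output can be
diffed between a failing and a passing machine (illustrative sketch)."""
import subprocess
import sys

# Commands whose output is likely to differ if the TLS stack differs.
CHECKS = [
    "openssl version",
    "rpm -q openssl nss ca-certificates",
    "uname -r",
]

def collect(host):
    """Run each check on `host` over SSH and return the labelled output."""
    parts = []
    for cmd in CHECKS:
        result = subprocess.run(
            ["ssh", "jenkins@" + host, cmd],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
            universal_newlines=True,
        )
        parts.append("### " + cmd + "\n" + result.stdout.strip())
    return "\n".join(parts)

if __name__ == "__main__":
    # e.g. ./tls-env.py slave24.cloud.gluster.org slave32.cloud.gluster.org
    for host in sys.argv[1:]:
        print("==== " + host + " ====")
        print(collect(host))

Diffing the output from a failing slave against a passing one should narrow down whether the difference is in the installed TLS packages or elsewhere in the environment.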

My SSH key is attached.

Comment 1 Nigel Babu 2016-08-19 06:27:16 UTC
I've taken the machines out of the pool anyway.

You have slave24, slave27, and slave32. Log in as jenkins@. Let me know in the bug when you're done with them.

Comment 2 Kaushal 2016-08-19 07:01:08 UTC
I'm done with the machines. I haven't changed anything, so you can add them back to the pool directly. Thanks.

Comment 3 Nigel Babu 2016-08-23 06:18:44 UTC
slave32 is back in the pool. I think I saw the review request for the fix somewhere. I'll hold off on returning the other two machines to the pool until the bug is fixed.

Comment 4 Kaushal 2016-08-23 06:27:45 UTC
The fix has been merged into master [1]. The backports to release-3.7 and release-3.8 are under review [2] and [3].

[1] https://review.gluster.org/15202
[2] https://review.gluster.org/15227
[3] https://review.gluster.org/15228

Comment 5 Nigel Babu 2016-08-23 07:48:23 UTC
Excellent, I've put the machines back into the pool.