Bug 1350631

Summary: Some machines don't have /opt/qa updated

Product: [Community] GlusterFS
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Nigel Babu <nigelb>
Assignee: bugs <bugs>
CC: bugs, gluster-infra, mscherer
Keywords: Triaged
Whiteboard: Triaged
Target Milestone: ---
Target Release: ---
Last Closed: 2016-09-14 14:10:20 UTC
Type: Bug

Description Nigel Babu 2016-06-28 01:01:09 UTC
On slave24 and slave25, /opt/qa has not been updated since May 2. Can we look into what's causing that?

Comment 1 M. Scherer 2016-06-28 08:56:30 UTC
I suspect the two servers were down during the last run of ansible-playbook, but the playbook is also supposed to run every night. I will dig into the logs.
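
If they simply missed a run, re-running the playbook against just these two hosts should confirm that; a minimal sketch, assuming a conventional inventory file and playbook name (the actual names in the gluster infra repo may differ):

# Hypothetical invocation; inventory file and playbook name are assumptions
ansible-playbook -i hosts site.yml --limit 'slave24.cloud.gluster.org,slave25.cloud.gluster.org'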

Comment 2 M. Scherer 2016-06-28 08:58:57 UTC
So on slave24:

fatal: [slave24.cloud.gluster.org]: FAILED! => {"changed": false, "cmd": "/usr/bin/git ls-remote origin -h refs/heads/master", "failed": true, "msg": "", "rc": 128, "stderr": "", "stdout": "", "stdout_lines": []}

I'll test slave25 now.
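
To see the actual error behind that rc=128, the same command can be replayed by hand on the slave; a sketch using git's standard tracing variable:

# Re-run the exact check ansible performs, with tracing, and capture the exit code
cd /opt/qa
GIT_TRACE=1 /usr/bin/git ls-remote origin -h refs/heads/master
echo "exit code: $?"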

Comment 3 M. Scherer 2016-06-28 10:16:57 UTC
So slave25 is the same, but not slave29. I haven't found any relevant difference yet. It also works when run from my laptop and from Nigel's (the ansible playbook, that is). I strongly suspect an issue on the salt bus.
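
If the salt bus is the suspect, a quick connectivity check from the master would narrow it down; a sketch, assuming the slaves are managed as salt minions:

# Only checks master<->minion connectivity, not the git failure itself
salt 'slave2*.cloud.gluster.org' test.ping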

Comment 4 Nigel Babu 2016-08-02 15:57:58 UTC
I just noticed this on slave22 as well. Running git pull had no effect on the folder, so I tried to rm -r /opt/qa and do a fresh git clone. The rm worked; the fresh clone did not.

[root@slave22 opt]# git clone https://github.com/gluster/glusterfs-patch-acceptance-tests.git qa
Cloning into 'qa'...
[root@slave22 opt]# ls -l
total 0
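
Since the clone exits without printing any error, the HTTPS layer can be made to talk; a sketch using git's standard curl tracing variable on the same URL:

# Surface whatever the https transport is swallowing
GIT_CURL_VERBOSE=1 git clone https://github.com/gluster/glusterfs-patch-acceptance-tests.git qa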

Comment 5 Nigel Babu 2016-08-02 16:08:08 UTC
[root@slave22 qa]# GIT_TRACE=1 git pull
trace: exec: 'git-pull'
trace: run_command: 'git-pull'
trace: built-in: git 'rev-parse' '--git-dir'
trace: built-in: git 'rev-parse' '--is-bare-repository'
trace: built-in: git 'rev-parse' '--show-toplevel'
trace: built-in: git 'ls-files' '-u'
trace: built-in: git 'symbolic-ref' '-q' 'HEAD'
trace: built-in: git 'config' '--bool' 'branch.master.rebase'
trace: built-in: git 'config' '--bool' 'pull.rebase'
trace: built-in: git 'rev-parse' '-q' '--verify' 'HEAD'
trace: built-in: git 'fetch' '--update-head-ok'
trace: run_command: 'git-remote-https' 'origin' 'https://github.com/gluster/glusterfs-patch-acceptance-tests.git'
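
The trace stops at git-remote-https, which points at the HTTPS transport (libcurl/NSS) rather than git itself. One way to confirm is to bypass git entirely and fetch the smart-HTTP ref advertisement with curl; a sketch:

# Same endpoint git-remote-https hits first; -v shows the TLS handshake
curl -v 'https://github.com/gluster/glusterfs-patch-acceptance-tests.git/info/refs?service=git-upload-pack' > /dev/null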

Comment 6 Nigel Babu 2016-08-08 05:25:18 UTC
This just gets weirder and weirder. I can clone from Gerrit successfully; it's only GitHub that has an issue.
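
That pattern suggests a TLS negotiation problem specific to github.com rather than a general network issue. Comparing handshakes would show it; a sketch, with review.gluster.org assumed as the Gerrit endpoint:

# Show negotiated protocol and cipher for each remote
echo | openssl s_client -connect github.com:443 2>/dev/null | grep -E 'Protocol|Cipher'
echo | openssl s_client -connect review.gluster.org:443 2>/dev/null | grep -E 'Protocol|Cipher'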

Comment 7 Nigel Babu 2016-08-08 05:34:52 UTC
Ha, I just realized what's going on. We're running into this known issue: https://access.redhat.com/solutions/2313911. Seemingly the fix was never applied to this machine.
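
If the linked solution boils down to a client-side crypto-stack update, as GitHub-only HTTPS failures on older builders usually do, verifying and applying it might look like this; the exact package set is an assumption, the linked solution is authoritative:

# Check what git's https helper currently links against
rpm -q git curl libcurl nss
# Assumed remedy: update the TLS stack, then retry the clone that failed silently
yum update -y nss curl libcurl
git clone https://github.com/gluster/glusterfs-patch-acceptance-tests.git /opt/qa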

Comment 8 Nigel Babu 2016-09-14 14:10:20 UTC
Closing this now that we have the root cause.