Bug 1259915 - Cannot install one node without having the install go through and install a master
Status: CLOSED CURRENTRELEASE
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Samuel Munilla
QA Contact: Gaoyun Pei
Depends On: 1278245
Blocks: 1267746
Reported: 2015-09-03 15:11 EDT by Ryan Howe
Modified: 2015-11-20 10:42 EST
CC List: 7 users

Doc Type: Bug Fix
Last Closed: 2015-11-20 10:42:20 EST
Type: Bug

Attachments: None
Description Ryan Howe 2015-09-03 15:11:33 EDT
Description of problem:

We want the ability to install a single node by itself, without running the installer through the master. The install fails at TASK: [Unarchive the tarball on the node].


Version-Release number of selected component (if applicable): v3.0.1 


How reproducible:
100%

Steps to Reproduce:
1. Run the Ansible install with only a node configured and defined in the inventory (no master); a sketch of such an inventory is below.
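
A minimal sketch of such an inventory, with no master defined (the group names follow the openshift-ansible conventions; the ssh user and deployment_type value are illustrative assumptions, and node3.example.com is the host from the error output below):

  # inventory with only a node host defined (no [masters] group)
  [OSEv3:children]
  nodes

  [OSEv3:vars]
  ansible_ssh_user=root
  deployment_type=enterprise   # illustrative; use the value matching your variant

  [nodes]
  node3.example.com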

Actual results:
Fails

Expected results:
Install and configure the node, then move on to the next steps:

https://docs.openshift.com/enterprise/3.0/admin_guide/master_node_configuration.html#creating-new-configuration-files
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1259913

Then start the node:
https://docs.openshift.com/enterprise/3.0/admin_guide/master_node_configuration.html#launching-servers-using-configuration-files
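
For reference, the manual path those docs describe is roughly the following (a hedged sketch; the exact flags, paths, and certificate options may differ by release, and the output directory name is a placeholder):

  # on the master: generate a configuration (and certificates) for the new node
  oadm create-node-config --node-dir=/etc/openshift/generated-node3 \
      --node=node3.example.com --hostnames=node3.example.com,172.17.28.10

  # copy the generated directory to /etc/openshift/node on the node, then start it there
  openshift start node --config=/etc/openshift/node/node-config.yaml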

Additional info:

Documentation on this is lacking too. 

Install Error: 

TASK: [Unarchive the tarball on the node] *************************************
<172.17.28.10> ESTABLISH CONNECTION FOR USER: root
<172.17.28.10> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 172.17.28.10 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1441305864.64-102905425420716 && echo $HOME/.ansible/tmp/ansible-tmp-1441305864.64-102905425420716'
EXEC previous known host file not found for 172.17.28.10
<172.17.28.10> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 172.17.28.10 /bin/sh -c 'rc=flag; [ -r /etc/openshift/node ] || rc=2; [ -f /etc/openshift/node ] || rc=1; [ -d /etc/openshift/node ] && rc=3; python -V 2>/dev/null || rc=4; [ x"$rc" != "xflag" ] && echo "${rc} "/etc/openshift/node && exit 0; (python -c '"'"'import hashlib; BLOCKSIZE = 65536; hasher = hashlib.sha1(); afile = open("'"'"'/etc/openshift/node'"'"'", "rb") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())'"'"' 2>/dev/null) || (python -c '"'"'import sha; BLOCKSIZE = 65536; hasher = sha.sha(); afile = open("'"'"'/etc/openshift/node'"'"'", "rb") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())'"'"' 2>/dev/null) || (echo '"'"'0 '"'"'/etc/openshift/node)'
EXEC previous known host file not found for 172.17.28.10
<172.17.28.10> PUT /tmp/openshift-ansible-P6LJPHp/node-node3.example.com.tgz TO /root/.ansible/tmp/ansible-tmp-1441305864.64-102905425420716/source
fatal: [node3.example.com] => file or module does not exist: /tmp/openshift-ansible-P6LJPHp/node-node3.example.com.tgz

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/config.retry

localhost                  : ok=6    changed=0    unreachable=0    failed=0
node3.example.com           : ok=8    changed=0    unreachable=1    failed=0
Comment 2 Brenton Leanhardt 2015-09-09 11:54:12 EDT
I'm assuming what this is asking for is the ability to add an additional node to an environment that has already been installed.  We're tracking that here: https://trello.com/c/IEkIu7n2/40-8-oo-install-support-for-adding-additional-masters-and-nodes-scale-up.
Comment 8 Gaoyun Pei 2015-11-13 05:04:50 EST
Marking this bug as VERIFIED since the main function is working well. Tested with atomic-openshift-utils-3.0.12-1.git.0.4c09c5b.el7aos.noarch.

We can now add new nodes to a pre-existing environment with the atomic-openshift-installer command.

1) Add nodes interactively
  1. Run 'atomic-openshift-installer install'
  2. Enter the existing master and node host information, following the prompts
  3. The installer detects the roles of the existing hosts and asks whether to add new nodes or run a fresh install
  4. Choose to add additional nodes and enter the host information for the new nodes
  5. Confirm the collected facts, then installation of the new nodes starts
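
For example, the interactive flow starts with the single command below; the prompts themselves are not reproduced here, and the default config path mentioned in the comment is an assumption about this installer version:

  # interactive run; the answers are recorded in the installer config file
  # (by default ~/.config/openshift/installer.cfg.yml unless -c is given)
  atomic-openshift-installer install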

2) Add nodes with an installer.cfg.yml file
  1. Prepare a config YAML file that contains the existing hosts and the nodes to be installed (a sketch follows this list).
  2. Run 'atomic-openshift-installer -c installer.cfg.yml install'
  3. The installer detects the roles of the existing hosts and of the nodes to be installed, confirms the collected facts, and starts installation of the new nodes
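
A rough sketch of the shape of such a file (the field names and the master entry are illustrative assumptions about the installer's config format, not taken from this bug; node3.example.com / 172.17.28.10 reuse the host from the log above):

  # installer.cfg.yml -- illustrative only; check the exact schema for your atomic-openshift-utils version
  version: v1
  variant: openshift-enterprise
  variant_version: "3.1"          # illustrative
  ansible_ssh_user: root
  hosts:
  - ip: 172.17.28.5               # existing master (placeholder address)
    hostname: master.example.com  # placeholder name
    master: true
    node: true
  - ip: 172.17.28.10              # new node to be added
    hostname: node3.example.com
    node: true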

3) Add nodes with an installer.cfg.yml file in unattended mode
  1. Prepare a config YAML file that contains the existing hosts and the nodes to be installed; it must include all of the information needed
  2. Run 'atomic-openshift-installer -u -c installer.cfg.yml install'
  3. The installer detects the roles of the existing hosts and of the nodes to be installed and starts installation of the new nodes without prompting
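
Putting it together, the unattended scale-up is a single non-interactive run against the prepared file (the command from step 2 above):

  # -u: unattended (no prompts), -c: path to the prepared config file
  atomic-openshift-installer -u -c installer.cfg.yml install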

There are still issues in certain edge cases; the following bugs were filed to track them.
https://bugzilla.redhat.com/show_bug.cgi?id=1279374
https://bugzilla.redhat.com/show_bug.cgi?id=1281717
