Description of problem:
The customer is using a local yum repository (instead of the CDN or an ISO installation). The cluster was recently upgraded from 1.3.3 to 2.0. The rhscon-agent installation fails on this node.

Version-Release number of selected component (if applicable):
rhscon-core-0.0.45-1.el7scon.x86_64
ceph-base-10.2.2-38.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install a RHCS 1.3.3 cluster.
2. Create a local yum repository with the required ISOs of RHCS, RHSC, etc. and serve it over http to the storage nodes, e.g.:
# cat /etc/yum.repos.d/mon.repo
[mon]
name = mon_repo
baseurl = http://dhcp8-25.gsslab.pnq.redhat.com/yum/rhceph2.0/MON/
enabled = 1
gpgcheck = 0
3. Upgrade the cluster to 2.0.
4. Do the steps required for taking over the existing cluster with ansible.
5. Install a Red Hat Storage Console server.
6. From the storage monitor/OSD nodes, try to install the console agent using:
# curl dhcp8-242.gsslab.pnq.redhat.com:8181/setup/agent/ | bash

Actual results:
Agent installation doesn't succeed and no host requests are received on the console.

Expected results:
Agent installation should succeed.

Additional info:
The error messages seen on the Storage Console node in /var/log/messages:
~~~~~~~~~~~
Dec 11 20:33:29 dhcp8-242 ceph-installer-gunicorn: 2016-12-11 20:33:29,810 INFO [ceph_installer.controllers.agent][MainThread] defining "dhcp8-242.gsslab.pnq.redhat.com" as the master host for the minion configuration
Dec 11 20:33:29 dhcp8-242 ceph-installer-gunicorn: 2016-12-11 20:33:29,870 INFO [ceph_installer.util][MainThread] Setting redhat_storage to False
Dec 11 20:33:29 dhcp8-242 ceph-installer-gunicorn: 2016-12-11 20:33:29,871 INFO [ceph_installer.util][MainThread] Setting redhat_use_cdn to True
Dec 11 20:33:30 dhcp8-242 ceph-installer-gunicorn: /usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:573: SAWarning: Unicode type received non-unicode bind param value.
Dec 11 20:33:30 dhcp8-242 ceph-installer-gunicorn: param.append(processors[key](compiled_params[key]))
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,391: INFO/MainProcess] Received task: ceph_installer.tasks.call_ansible[2335c3cb-a766-4971-af90-931d45c5816a]
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,420: WARNING/Worker-1] "CEPH_ANSIBLE_PATH" environment variable is not defined
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:32,478 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,478: INFO/Worker-1] BEGIN (implicit)
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:32,490 INFO sqlalchemy.engine.base.Engine SELECT tasks.id AS tasks_id, tasks.identifier AS tasks_identifier, tasks.endpoint AS tasks_endpoint, tasks.user_agent AS tasks_user_agent, tasks.request AS tasks_request, tasks.http_method AS tasks_http_method, tasks.command AS tasks_command, tasks.stderr AS tasks_stderr, tasks.stdout AS tasks_stdout, tasks.started AS tasks_started, tasks.ended AS tasks_ended, tasks.succeeded AS tasks_succeeded, tasks.exit_code AS tasks_exit_code
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: FROM tasks
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: WHERE tasks.identifier = ?
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: LIMIT ? OFFSET ?
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,490: INFO/Worker-1] SELECT tasks.id AS tasks_id, tasks.identifier AS tasks_identifier, tasks.endpoint AS tasks_endpoint, tasks.user_agent AS tasks_user_agent, tasks.request AS tasks_request, tasks.http_method AS tasks_http_method, tasks.command AS tasks_command, tasks.stderr AS tasks_stderr, tasks.stdout AS tasks_stdout, tasks.started AS tasks_started, tasks.ended AS tasks_ended, tasks.succeeded AS tasks_succeeded, tasks.exit_code AS tasks_exit_code
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: FROM tasks
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: WHERE tasks.identifier = ?
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: LIMIT ? OFFSET ?
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:32,490 INFO sqlalchemy.engine.base.Engine ('3f489b14-c8f1-421d-a03f-31a6c170cb36', 1, 0)
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,490: INFO/Worker-1] ('3f489b14-c8f1-421d-a03f-31a6c170cb36', 1, 0)
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:32,535 INFO sqlalchemy.engine.base.Engine UPDATE tasks SET command=?, started=? WHERE tasks.id = ?
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,535: INFO/Worker-1] UPDATE tasks SET command=?, started=? WHERE tasks.id = ?
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:32,536 INFO sqlalchemy.engine.base.Engine ('/bin/ansible-playbook -v -u ceph-installer /usr/share/ceph-ansible/site.yml.sample -i /tmp/3f489b14-c8f1-421d-a03f-31a6c170cb36_Z6dooq --extra-vars {"agent_master_host": "dhcp8-242.gsslab.pnq.redhat.com", "ceph_stable": true, "fetch_directory": "/var/lib/ceph-installer/fetch"}', '2016-12-11 20:33:32.534415', 15)
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,536: INFO/Worker-1] ('/bin/ansible-playbook -v -u ceph-installer /usr/share/ceph-ansible/site.yml.sample -i /tmp/3f489b14-c8f1-421d-a03f-31a6c170cb36_Z6dooq --extra-vars {"agent_master_host": "dhcp8-242.gsslab.pnq.redhat.com", "ceph_stable": true, "fetch_directory": "/var/lib/ceph-installer/fetch"}', '2016-12-11 20:33:32.534415', 15)
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:32,541 INFO sqlalchemy.engine.base.Engine COMMIT
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,541: INFO/Worker-1] COMMIT
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,553: WARNING/Worker-1] "CEPH_ANSIBLE_PATH" environment variable is not defined
Dec 11 20:33:32 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:32,554: INFO/Worker-1] Running command: /bin/ansible-playbook -v -u ceph-installer /usr/share/ceph-ansible/site.yml.sample -i /tmp/3f489b14-c8f1-421d-a03f-31a6c170cb36_Z6dooq --extra-vars {"agent_master_host": "dhcp8-242.gsslab.pnq.redhat.com", "ceph_stable": true, "fetch_directory": "/var/lib/ceph-installer/fetch"}
Dec 11 20:33:38 dhcp8-242 dhclient[1029]: DHCPREQUEST on eth0 to 10.74.132.66 port 67 (xid=0x56c09e37)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:41,295 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,295: INFO/Worker-1] BEGIN (implicit)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:41,317 INFO sqlalchemy.engine.base.Engine SELECT tasks.id AS tasks_id, tasks.identifier AS tasks_identifier, tasks.endpoint AS tasks_endpoint, tasks.user_agent AS tasks_user_agent, tasks.request AS tasks_request, tasks.http_method AS tasks_http_method, tasks.command AS tasks_command, tasks.started AS tasks_started
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: FROM tasks
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: WHERE tasks.id = ?
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,317: INFO/Worker-1] SELECT tasks.id AS tasks_id, tasks.identifier AS tasks_identifier, tasks.endpoint AS tasks_endpoint, tasks.user_agent AS tasks_user_agent, tasks.request AS tasks_request, tasks.http_method AS tasks_http_method, tasks.command AS tasks_command, tasks.started AS tasks_started
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: FROM tasks
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: WHERE tasks.id = ?
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:41,317 INFO sqlalchemy.engine.base.Engine (15,)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,317: INFO/Worker-1] (15,)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:41,343 INFO sqlalchemy.engine.base.Engine UPDATE tasks SET stderr=?, stdout=?, ended=?, succeeded=?, exit_code=? WHERE tasks.id = ?
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,343: INFO/Worker-1] UPDATE tasks SET stderr=?, stdout=?, ended=?, succeeded=?, exit_code=? WHERE tasks.id = ?
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:41,347 INFO sqlalchemy.engine.base.Engine (u'', u'\nPLAY [mons] ******************************************************************* \nskipping: no hosts matched\n\nPLAY [agents] ***************************************************************** \n\nGATHERING FACTS *************************************************************** \nok: [10.65.8.200]\n\nTASK: [ceph-agent | determine if node is registered with subscription-manager.] *** \nfailed: [10.65.8.200] => {"changed": false, "cmd": ["subscription-manager", "identity"], "delta": "0:00:03.032178", "end": "2016-12-11 20:40:42.427633", "rc": 1, "start": "2016-12-11 20:40:39.395455", "stdout_lines": [], "warnings": []}\nstderr: This system is not yet registered. Try \'subscription-manager register --help\' for more information.\n\nFATAL: all hosts have already failed -- aborting\n\nPLAY RECAP ******************************************************************** \n to retry, use: --limit @/var/lib/ceph-installer/site.sample.retry\n\n10.65.8.200 : ok=1 changed=0 unreachable=0 failed=1 \n\n', '2016-12-11 20:33:41.252696', 0, 2, 15)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,347: INFO/Worker-1] (u'', u'\nPLAY [mons] ******************************************************************* \nskipping: no hosts matched\n\nPLAY [agents] ***************************************************************** \n\nGATHERING FACTS *************************************************************** \nok: [10.65.8.200]\n\nTASK: [ceph-agent | determine if node is registered with subscription-manager.] *** \nfailed: [10.65.8.200] => {"changed": false, "cmd": ["subscription-manager", "identity"], "delta": "0:00:03.032178", "end": "2016-12-11 20:40:42.427633", "rc": 1, "start": "2016-12-11 20:40:39.395455", "stdout_lines": [], "warnings": []}\nstderr: This system is not yet registered. Try \'subscription-manager register --help\' for more information.\n\nFATAL: all hosts have already failed -- aborting\n\nPLAY RECAP ******************************************************************** \n to retry, use: --limit @/var/lib/ceph-installer/site.sample.retry\n\n10.65.8.200 : ok=1 changed=0 unreachable=0 failed=1 \n\n', '2016-12-11 20:33:41.252696', 0, 2, 15)
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: 2016-12-11 20:33:41,353 INFO sqlalchemy.engine.base.Engine COMMIT
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,353: INFO/Worker-1] COMMIT
Dec 11 20:33:41 dhcp8-242 ceph-installer-celery: [2016-12-11 20:33:41,370: INFO/MainProcess] Task ceph_installer.tasks.call_ansible[2335c3cb-a766-4971-af90-931d45c5816a] succeeded in 8.96450262307s: None
~~~~~~~~~~~
Changed the assignee to Alfredo as it's related to the ceph-installer.
To use repositories that already exist in the distro, this must be enabled:

ceph_origin: distro

in addition to:

ceph_stable_rh_storage_cdn_install: false

Please provide the full output when trying this again if it doesn't work.
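For illustration only, here is a minimal sketch of how these two variables could be put into ceph-ansible's group variables. The two variable names come from this comment; the group_vars path and file name are assumptions and may differ between ceph-ansible versions:

    # Sketch only: append the overrides to ceph-ansible's group variables so
    # that playbooks use the repositories already configured on the nodes
    # (e.g. a local yum repo) instead of the CDN.
    cat >> /usr/share/ceph-ansible/group_vars/all <<'EOF'
    # Use repositories that already exist in the distro / local yum repo
    ceph_origin: distro
    # Do not attempt a CDN-based Red Hat Ceph Storage install
    ceph_stable_rh_storage_cdn_install: false
    EOF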
From the script used to set up the agent, it is not possible to configure the ceph-installer to use anything other than the CDN. The API endpoint doesn't allow anything different either. The workaround would be to use ceph-ansible directly, set `ceph_origin: distro`, and set the master (where the console server lives), like: agent_master_host: "master_host.example.com"
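As a rough sketch of that workaround (not an official procedure), the invocation could mirror the command the installer itself runs, as seen in the log above. The inventory file and hostnames below are placeholders:

    # Sketch only: run ceph-ansible directly against the agent nodes,
    # overriding the CDN behaviour. Inventory path and hostnames are
    # placeholders for the actual storage nodes and console server.
    cat > /tmp/agents-inventory <<'EOF'
    [agents]
    mon-node1.example.com
    osd-node1.example.com
    EOF

    ansible-playbook -v -u ceph-installer /usr/share/ceph-ansible/site.yml.sample \
      -i /tmp/agents-inventory \
      --extra-vars '{"agent_master_host": "master_host.example.com",
                     "ceph_origin": "distro",
                     "ceph_stable_rh_storage_cdn_install": false}'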
Upstream pull request: https://github.com/ceph/ceph-ansible/pull/1184
This fix is required for Dell EMC's JetStream 7.0 release of OSP-10.
This is also the case with nodes installed with RHCS 2.X which have never been upgraded.
When will ceph-ansible-2.1.3-1.el7scon land in rhel-7-server-rhscon-2-installer-rpms?
I think this should be acknowledged by Alfredo or someone from the ceph-installer team.
So, to repeat the question that Alan asked in comment 26: when will ceph-ansible-2.1.3-1.el7scon land in rhel-7-server-rhscon-2-installer-rpms? This is the rpm that we need for our installation of the Storage Console, and we need it in rhel-7-server-rhscon-2-installer-rpms.
(In reply to Kurt Hey from comment #37)
> So, the question that Alan asked in comment 26, when will
> ceph-ansible-2.1.3-1.el7scon land in rhel-7-server-rhscon-2-installer-rpms?
> This is the rpm that we need per our installation of the Storage Console. We
> need it in rhel-7-server-rhscon-2-installer-rpms

Hi Kurt, this fix is currently targeted for a late Feb 2017 update for Ceph, but it still needs to go through testing.
Tested on the console server with:
ceph-ansible-2.1.9-1.el7scon.noarch
ceph-installer-1.2.2-1.el7scon.noarch
rhscon-ceph-0.0.43-1.el7scon.x86_64
rhscon-core-0.0.45-1.el7scon.x86_64
rhscon-core-selinux-0.0.45-1.el7scon.noarch
rhscon-ui-0.0.60-1.el7scon.noarch

and on the storage nodes with:
ceph-base-10.2.5-26.el7cp.x86_64
ceph-common-10.2.5-26.el7cp.x86_64
ceph-deploy-1.5.36-1.el7cp.noarch
ceph-mon-10.2.5-26.el7cp.x86_64
ceph-selinux-10.2.5-26.el7cp.x86_64
libcephfs1-10.2.5-26.el7cp.x86_64
python-cephfs-10.2.5-26.el7cp.x86_64
rhscon-agent-0.0.19-1.el7scon.noarch
rhscon-core-selinux-0.0.45-1.el7scon.noarch

None of the nodes are registered to the CDN and only local repositories were used:

$ subscription-manager repos
This system has no repositories available through subscriptions.

I installed a Ceph 1.3 cluster with ceph-deploy, then upgraded it to the latest Ceph 2.x, and then imported it into Console 2. All of this passed --> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:0515