Bug 1403576
Summary: | Agent installation fails in ceph nodes upgraded from 1.3 to 2.0 and connected to local yum repository | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Storage Console | Reporter: | Riyas Abdulrasak <rnalakka> |
Component: | agent | Assignee: | Alfredo Deza <adeza> |
Status: | CLOSED ERRATA | QA Contact: | Martin Kudlej <mkudlej> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | ||
Version: | 2 | CC: | adeza, agunn, alan_bishop, arkady_kanevsky, aschoen, cdevine, christopher_dearborn, dahorak, dcain, edonnell, gmeno, japplewh, John_walsh, kdreyer, kurt_hey, mkudlej, morazi, nthomas, rajini.karthik, randy_perryman, rghatvis, sankarshan, sds-qe-bugs, smerrow, sreichar, tcole, vsarmila, vumrao |
Target Milestone: | --- | ||
Target Release: | 2 | ||
Hardware: | All | ||
OS: | All | ||
Whiteboard: | |||
Fixed In Version: | ceph-ansible-2.1.3-1.el7scon | Doc Type: | Bug Fix |
Doc Text: | Previously, the Red Hat Console Agent setup performed by the ceph-ansible utility only supported installations using the Content Delivery Network (CDN). Installations with an ISO file or a local Yum repository failed. With this update, all installation methods are successful. | |
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2017-03-14 15:51:33 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1356451, 1395340 |
Description (Riyas Abdulrasak, 2016-12-11 15:44:55 UTC)
Changed the assignee to Alfredo as it is related to the ceph-installer.

To use repositories that already exist in the distro, this must be enabled: `ceph_origin: distro`, in addition to: `ceph_stable_rh_storage_cdn_install: false`. Please provide the full output when trying this again if it doesn't work.

From the script that sets up the agent, it is not possible to configure the ceph-installer to use anything other than the CDN. The API endpoint doesn't allow anything different either. The workaround would be to use ceph-ansible directly, set `ceph_origin: distro`, and set the master (where the console server lives), like: `agent_master_host: "master_host.example.com"` (a sketch of these settings follows at the end of this report).

Upstream pull request: https://github.com/ceph/ceph-ansible/pull/1184

This fix is required for Dell EMC's JetStream 7.0 release of OSP-10.

This is also the case with nodes installed with RHCS 2.X which have never been upgraded.

When will ceph-ansible-2.1.3-1.el7scon land in rhel-7-server-rhscon-2-installer-rpms?

I think this should be acknowledged by Alfredo or someone from the ceph-installer team.

So, the question that Alan asked in comment 26: when will ceph-ansible-2.1.3-1.el7scon land in rhel-7-server-rhscon-2-installer-rpms? This is the RPM that we need for our installation of the Storage Console. We need it in rhel-7-server-rhscon-2-installer-rpms.

(In reply to Kurt Hey from comment #37)
> So, the question that Alan asked in comment 26: when will
> ceph-ansible-2.1.3-1.el7scon land in rhel-7-server-rhscon-2-installer-rpms?
> This is the RPM that we need for our installation of the Storage Console.
> We need it in rhel-7-server-rhscon-2-installer-rpms.

Hi Kurt, this fix is currently targeted for a late February 2017 update for Ceph, but it still needs to go through testing.

Tested on the console server with:
ceph-ansible-2.1.9-1.el7scon.noarch
ceph-installer-1.2.2-1.el7scon.noarch
rhscon-ceph-0.0.43-1.el7scon.x86_64
rhscon-core-0.0.45-1.el7scon.x86_64
rhscon-core-selinux-0.0.45-1.el7scon.noarch
rhscon-ui-0.0.60-1.el7scon.noarch

and on the storage nodes with:
ceph-base-10.2.5-26.el7cp.x86_64
ceph-common-10.2.5-26.el7cp.x86_64
ceph-deploy-1.5.36-1.el7cp.noarch
ceph-mon-10.2.5-26.el7cp.x86_64
ceph-selinux-10.2.5-26.el7cp.x86_64
libcephfs1-10.2.5-26.el7cp.x86_64
python-cephfs-10.2.5-26.el7cp.x86_64
rhscon-agent-0.0.19-1.el7scon.noarch
rhscon-core-selinux-0.0.45-1.el7scon.noarch

None of the nodes are registered to the CDN and only local repositories were used:

$ subscription-manager repos
This system has no repositories available through subscriptions.

I've installed a Ceph 1.3 cluster with ceph-deploy, upgraded it to the latest Ceph 2.x, and then imported it into Console 2. It has passed, so --> VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:0515
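A minimal sketch of the workaround described in the comments above, assuming the variables are placed in a ceph-ansible `group_vars/all.yml`; the file location and the hostname are illustrative, and only the three variable names come from this report:

```yaml
# Illustrative group_vars/all.yml fragment for running ceph-ansible directly
# so the agent setup uses repositories already configured on the nodes
# (distro / local yum) instead of the CDN.
ceph_origin: distro                            # install from repos that already exist on the node
ceph_stable_rh_storage_cdn_install: false      # skip the CDN-based install path
agent_master_host: "master_host.example.com"   # Console server host (example value from the comments)
```

Running ceph-ansible directly with these variables, rather than through the ceph-installer API, is the workaround the comments describe until the fixed ceph-ansible package is available in rhel-7-server-rhscon-2-installer-rpms.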