Bug 1539858
| Summary: | Cinder backup service fails to deploy in a container under pacemaker (HA) | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Alan Bishop <abishop> |
| Component: | puppet-tripleo | Assignee: | Alan Bishop <abishop> |
| Status: | CLOSED ERRATA | QA Contact: | Tzach Shefi <tshefi> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 12.0 (Pike) | CC: | cschwede, jjoyce, jschluet, pgrist, samccann, slinaber, tvignaud |
| Target Milestone: | z2 | Keywords: | Triaged, ZStream |
| Target Release: | 12.0 (Pike) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | puppet-tripleo-7.4.8-1.el7ost | Doc Type: | Bug Fix |
| Doc Text: | The Puppet code that creates the cinder-volume and cinder-backup pacemaker bundles created two different Puppet resources with the same resource name. Puppet does not allow duplicate resource names, so creating the cinder-backup bundle failed with an error and the cinder-backup service never started. The Puppet code was updated to assign unique names to all of the cinder-volume and cinder-backup Puppet resources. The cinder-backup pacemaker bundle is now created, which in turn allows the cinder-backup service to work. (A sketch of the failure mode follows this table.) | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-03-28 17:28:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
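The failure described in the Doc Text is Puppet's standard duplicate-declaration error. Below is a minimal sketch of that failure mode and of the fix pattern (unique, per-service resource titles). The `exec` resources and titles here are hypothetical stand-ins for illustration, not the actual puppet-tripleo code.

```puppet
# Hypothetical illustration: two resources of the same type cannot share a
# title. If both bundles declared this, catalog compilation would fail:
#
#   exec { 'wait-for-pcs-settle': command => '/usr/bin/true' }  # cinder-volume
#   exec { 'wait-for-pcs-settle': command => '/usr/bin/true' }  # cinder-backup
#
#   Error: Duplicate declaration: Exec[wait-for-pcs-settle] is already
#   declared; cannot redeclare

# Fix pattern: embed the service name in each title so the cinder-volume
# and cinder-backup code paths declare distinct resources.
['cinder-volume', 'cinder-backup'].each |$service| {
  exec { "${service}-wait-for-pcs-settle":
    command => '/usr/bin/true',
  }
}
```

Deriving each title from the service name is the same pattern the fix applies to the real cinder-volume and cinder-backup resources.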
Description
Alan Bishop 2018-01-29 18:20:55 UTC
Stuck on the non-HA clone of this BZ; once I get that needinfo/issue resolved I'll try this one: https://bugzilla.redhat.com/show_bug.cgi?id=1539090#c5

Verified on: puppet-tripleo-7.4.8-4.el7ost.noarch

Deployed OSP 12 HA with containerized Cinder:

```
(undercloud) [stack@undercloud-0 ~]$ cat containerized-cinder.yaml
resource_registry:
  OS::TripleO::Services::CinderApi: /usr/share/openstack-tripleo-heat-templates/docker/services/cinder-api.yaml
  OS::TripleO::Services::CinderScheduler: /usr/share/openstack-tripleo-heat-templates/docker/services/cinder-scheduler.yaml
  OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/docker/services/pacemaker/cinder-backup.yaml
  OS::TripleO::Services::CinderVolume: /usr/share/openstack-tripleo-heat-templates/docker/services/pacemaker/cinder-volume.yaml
  OS::TripleO::Services::Iscsid: /usr/share/openstack-tripleo-heat-templates/docker/services/iscsid.yaml
```

The cinder containers are running on the controllers:

```
[root@controller-0 ~]# docker ps | grep cinder
bbfd8693c6a6  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-volume:pcmklatest        "/bin/bash /usr/lo..."  2 hours ago  Up 2 hours (healthy)  openstack-cinder-volume-docker-0
cba77126042b  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1         "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_api_cron
07f01f5d9146  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-scheduler:2018-03-10.1   "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_scheduler
d191c3716dc2  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1         "kolla_start"           2 hours ago  Up 2 hours            cinder_api

[root@controller-1 ~]# docker ps | grep cinder
369d7124a4ce  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-backup:pcmklatest        "/bin/bash /usr/lo..."  2 hours ago  Up 2 hours (healthy)  openstack-cinder-backup-docker-0
9ba678a3a4fa  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1         "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_api_cron
d2d8891a7578  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-scheduler:2018-03-10.1   "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_scheduler
ac8bc4a09483  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1         "kolla_start"           2 hours ago  Up 2 hours            cinder_api
```

On one of the controllers:

```
$ pcs resource show
...
Docker container: openstack-cinder-volume [docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-volume:pcmklatest]
  openstack-cinder-volume-docker-0  (ocf::heartbeat:docker):  Started controller-0
Docker container: openstack-cinder-backup [docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-backup:pcmklatest]
  openstack-cinder-backup-docker-0  (ocf::heartbeat:docker):  Started controller-1
```
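For reference, a custom environment file like containerized-cinder.yaml above is passed to the overcloud deployment with `-e`. A minimal sketch of such an invocation (the exact options and any additional environment files depend on the deployment):

```
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud deploy --templates \
    -e containerized-cinder.yaml
```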
Test volume and backup create:

```
$ cinder create 2
$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9 | available | -    | 2    | -           | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

$ cinder backup-create ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | 7e821ebf-1e2b-4889-91c9-8bae2ea93635 |
| name      | None                                 |
| volume_id | ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9 |
+-----------+--------------------------------------+

$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| ID                                   | Volume ID                            | Status    | Name | Size | Object Count | Container     |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| 7e821ebf-1e2b-4889-91c9-8bae2ea93635 | ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9 | available | -    | 2    | 42           | volumebackups |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
```

Both volume and backup are available on an HW OSP 12 dockerized Cinder deployment.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0607