Bug 1516947
| Field | Value |
| --- | --- |
| Summary | [UPDATES] Failed to setup heat output: sudo: a password is required |
| Product | [Red Hat Storage] Red Hat Ceph Storage |
| Component | Ceph-Ansible |
| Version | 2.4 |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | urgent |
| Reporter | Yurii Prokulevych <yprokule> |
| Assignee | Sébastien Han <shan> |
| QA Contact | Yogev Rabl <yrabl> |
| Docs Contact | Aron Gunn <agunn> |
| CC | adeza, agunn, aschoen, augol, ceph-eng-bugs, ceph-qe-bugs, gfidente, gkadam, gmeno, hnallurv, kdreyer, lbezdick, mbultel, nthomas, sankarshan, sasha, seb, shan, yrabl |
| Target Milestone | rc |
| Target Release | 2.5 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | RHEL: ceph-ansible-3.0.25-1.el7cp; Ubuntu: ceph-ansible_3.0.25-2redhat1 |
| Doc Type | Known Issue |
| Type | Bug |
| Clones | 1528431, 1536068 (view as bug list) |
| Bug Depends On | 1528431 |
| Bug Blocks | 1536068 |
| Last Closed | 2018-02-21 19:46:24 UTC |

Doc Text:

.Failing to set up the Heat output for the Red Hat OpenStack Platform
The `ceph-ansible` utility requires that the user running `ceph-ansible` has passwordless `sudo` privileges; otherwise, an attempt to run a `ceph-ansible` playbook fails. To work around this issue, make sure that the user running `ceph-ansible` has passwordless `sudo` access configured.
Description (Yurii Prokulevych, 2017-11-23 16:04:15 UTC)
Giulio, could you please have a look at the SOS report?

One workaround for this is to add the user running ansible to the sudoers file.

done, feel free to re-arrange

Failed to update the overcloud with the latest ceph container image with an error in one of the OSDs:

```
fatal: [192.168.24.11]: FAILED! => {"changed": false, "cmd": ["docker", "run", "--rm",
"--entrypoint", "/usr/bin/ceph",
"brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-2-rhel-7-docker-candidate-81064-20180205070134",
"--version"], "delta": "0:00:00.662242", "end": "2018-02-06 03:13:19.996086",
"msg": "non-zero return code", "rc": 127, "start": "2018-02-06 03:13:19.333844",
"stderr": "container_linux.go:247: starting container process caused
\"process_linux.go:258: applying cgroup configuration for process caused
\\\"open /sys/fs/cgroup/pids/system.slice/docker-5f32487f85449859a1d51d2cb12ff2336ffdeeec8876ea7132ce438830f51147.scope/cgroup.procs:
no such file or directory\\\"\"\n/usr/bin/docker-current: Error response from daemon:
invalid header field value \"oci runtime error: container_linux.go:247: starting
container process caused \\\"process_linux.go:258: applying cgroup configuration for
process caused \\\\\\\"open /sys/fs/cgroup/pids/system.slice/docker-5f32487f85449859a1d51d2cb12ff2336ffdeeec8876ea7132ce438830f51147.scope/cgroup.procs:
no such file or directory\\\\\\\"\\\"\\n\".",
"stderr_lines": ["container_linux.go:247: starting container process caused
\"process_linux.go:258: applying cgroup configuration for process caused
\\\"open /sys/fs/cgroup/pids/system.slice/docker-5f32487f85449859a1d51d2cb12ff2336ffdeeec8876ea7132ce438830f51147.scope/cgroup.procs:
no such file or directory\\\"\"", "/usr/bin/docker-current: Error response from daemon:
invalid header field value \"oci runtime error: container_linux.go:247: starting
container process caused \\\"process_linux.go:258: applying cgroup configuration for
process caused \\\\\\\"open /sys/fs/cgroup/pids/system.slice/docker-5f32487f85449859a1d51d2cb12ff2336ffdeeec8876ea7132ce438830f51147.scope/cgroup.procs:
no such file or directory\\\\\\\"\\\"\\n\"."],
"stdout": "", "stdout_lines": []}
```

This looks like a Docker error to me. Nothing related to the container image.

Yogev, can you investigate this error further? It looks like the Docker engine is having an issue. Let us know if there is something we can help you with. For now, I believe you're hitting an issue that is unrelated to the original bug. What is your plan? Are you going to test on another env? Thanks

The controller Ceph image was updated, but the Ceph storage nodes (the OSDs) were not updated:

```
[heat-admin@ceph-0 ~]$ sudo docker ps
CONTAINER ID  IMAGE                                                             COMMAND           CREATED            STATUS            PORTS  NAMES
a618c3ca7c97  docker-registry.engineering.redhat.com/ceph/rhceph-2-rhel7:2.4-4  "/entrypoint.sh"  About an hour ago  Up About an hour         ceph-osd-ceph-0-vdb
2826a5b4a576  192.168.24.1:8787/rhosp12/openstack-cron:2018-01-24.2             "kolla_start"     12 hours ago       Up About an hour         logrotate_crond

[heat-admin@controller-2 ~]$ sudo docker ps | grep ceph
ce5f796336e8  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-2-rhel-7-docker-candidate-81064-20180205070134  "/entrypoint.sh"  8 minutes ago  Up 8 minutes  ceph-mon-controller-2
```

The version is: ceph-ansible-3.0.23-1.el7cp.noarch

Yogev, this looks like a different issue. Which test are you running? Why do you expect the image to change? Anyway, can you provide the playbook logs? Ideally, an env with the error as well. Thanks in advance.

leseb, the environment is being preserved for you

Thanks, let me know when it's available and send me details so I can login. Thanks

The environment is ready, available.
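As an aside, the fatal task earlier in this report is simply ceph-ansible probing the container image's Ceph version. That probe can be re-run by hand to separate a Docker engine fault (as seen here, rc 127 with a cgroup error) from a bad image. A sketch, using the image reference taken from the log:

```shell
# Re-run the version probe the playbook task executed. If the Docker daemon
# is healthy this prints the ceph version baked into the image; an rc of 127
# with a cgroup error (as in the log above) points at the engine, not the image.
docker run --rm --entrypoint /usr/bin/ceph \
    brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-2-rhel-7-docker-candidate-81064-20180205070134 \
    --version

# Compare against the image each node is actually running:
sudo docker ps --format '{{.Names}}\t{{.Image}}' | grep ceph
```

These commands assume access to the same brew-pulp registry used in the test environment and a running Docker daemon on the node.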
Its details have been provided on IRC.

This is solved in https://bugzilla.redhat.com/show_bug.cgi?id=1526513

Verified on ceph-ansible-3.0.25-1.el7cp.noarch

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0340
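For reference, the workaround described in the Doc Text (passwordless `sudo` for the user running `ceph-ansible`) amounts to a sudoers drop-in along these lines. This is a sketch, assuming the deployment user is `heat-admin` as in the logs above; the file name is illustrative:

```
# /etc/sudoers.d/heat-admin -- edit with "visudo -f" to get syntax checking
heat-admin ALL=(ALL) NOPASSWD: ALL
```

Once in place, `sudo -n true` run as that user should exit 0 without prompting, and the playbook's privilege escalation no longer fails with "sudo: a password is required".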