Bug 1408848
Summary: | [RFE] Alert user to set $CEPH_CONF var if cluster name is non-default | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Storage Console | Reporter: | Kyle Squizzato <ksquizza> |
Component: | ceph-ansible | Assignee: | Sébastien Han <shan> |
Status: | CLOSED ERRATA | QA Contact: | Vidushi Mishra <vimishra> |
Severity: | medium | Docs Contact: | Erin Donnelly <edonnell> |
Priority: | medium | ||
Version: | 2 | CC: | adeza, aschoen, ceph-eng-bugs, edonnell, gmeno, hnallurv, kdreyer, nthomas, sankarshan, seb |
Target Milestone: | --- | Keywords: | FutureFeature |
Target Release: | 2 | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | ceph-ansible-2.2.1-1.el7scon | Doc Type: | Enhancement |
Doc Text: |
.Use CEPH_ARGS to ensure all commands work for clusters with unique names
In Red Hat Ceph Storage, the `cluster` variable in `group_vars/all` determines the name of the cluster. Changing the default value to something else means that all command-line calls must change as well. For example, if the cluster name is `foo`, then `ceph health` becomes `ceph --cluster foo health`.
An easier way to handle this is to use the `CEPH_ARGS` environment variable. In this case, run `export CEPH_ARGS="--cluster foo"`. With that set, you can run all command-line calls normally.
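A minimal sketch of the workaround described above, assuming a cluster named `foo` (the cluster name is illustrative; the `ceph` invocations are shown as comments since they require a running cluster):

```shell
# Without CEPH_ARGS, a non-default cluster name must be passed on every call:
#   ceph --cluster foo health

# Exporting CEPH_ARGS makes the extra flag implicit for subsequent commands
# in this shell session:
export CEPH_ARGS="--cluster foo"

# Now plain invocations behave as if --cluster foo had been given:
#   ceph health
echo "CEPH_ARGS=$CEPH_ARGS"
```

To make this persistent for a given user, the same `export` line can be added to that user's shell profile.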
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2017-06-19 13:16:21 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1437916 |
Description
Kyle Squizzato
2016-12-27 17:26:42 UTC
(In reply to Kyle Squizzato from comment #0)
> Additional info:
> While it'd be nice to set the env variable for users automatically, we'd
> probably only be able to set it for the root/ceph users, and future users
> that are created would all need the variable added to their shell
> profiles in order to function.

It's worth noting we could use /etc/profile here as well, but I'm a bigger fan of the "let the user handle it" approach. Even so, what you're saying is right: this is difficult to achieve in Ansible. We cannot write an "info" message at the end of the play, only during the execution, which means the "info" message would be drowned out by all the other messages. The only thing we can do is write a doc section about this and put a comment in the code right after "cluster" in group_vars/all.yml. What do you think, Kyle?

This should be in the next release after the 2.1 series.

Added doc text, let me know if that works for you.

LGTM :)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496
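A sketch of what the proposed comment next to the `cluster` variable in `group_vars/all.yml` could look like (the wording and placement are illustrative assumptions, not the actual ceph-ansible source):

```yaml
## Note: if you change the cluster name from the default "ceph",
## plain CLI calls such as "ceph health" will no longer work.
## Either pass --cluster <name> on every command, or export
## CEPH_ARGS="--cluster <name>" in your shell profile.
cluster: ceph
```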