Bug 1367817
| Summary: | Help for vdsClient for glusterVolumeHealInfo has unreadable formatting | | |
|---|---|---|---|
| Product: | [oVirt] vdsm | Reporter: | Lukas Svaty <lsvaty> |
| Component: | Documentation | Assignee: | Ramesh N <rnachimu> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | RamaKasturi <knarra> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.18.10 | CC: | bugs, knarra |
| Target Milestone: | ovirt-4.1.0-beta | Flags: | rule-engine: ovirt-4.1+, rule-engine: planning_ack+, rnachimu: devel_ack+, rule-engine: testing_ack+ |
| Target Release: | 4.19.2 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | vdsm-gluster-4.19.1-24 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-02-01 14:38:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Lukas Svaty 2016-08-17 14:38:12 UTC
Verified, and it works fine with build vdsm-gluster-4.19.2-2.el7ev.noarch.
Executed the command `vdsClient | grep -A 150 glusterVolumeHealInfo`, and the output is no longer displayed one character per line:
```
[root@rhsqa-grafton4 ~]# vdsClient | grep -A 150 glusterVolumeHealInfo
glusterVolumeHealInfo
    [volumeName=<volume_name>]
    <volume_name> is existing volume name
    lists self-heal info for the gluster volume
glusterVolumeProfileInfo
    volumeName=<volume_name> [nfs={yes|no}]
    <volume_name> is existing volume name
    get gluster volume profile info
glusterVolumeProfileStart
    volumeName=<volume_name>
    <volume_name> is existing volume name
    start gluster volume profile
glusterVolumeProfileStop
    volumeName=<volume_name>
    <volume_name> is existing volume name
    stop gluster volume profile
glusterVolumeRebalanceStart
    volumeName=<volume_name> [rebalanceType=fix-layout] [force={yes|no}]
    <volume_name> is existing volume name
    start volume rebalance
glusterVolumeRebalanceStatus
    volumeName=<volume_name>
    <volume_name> is existing volume name
    get volume rebalance status
glusterVolumeRebalanceStop
    volumeName=<volume_name> [force={yes|no}]
    <volume_name> is existing volume name
    stop volume rebalance
glusterVolumeRemoveBrickCommit
    volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
    <volume_name> is existing volume name
    <brick[,brick, ...]> is existing brick(s)
    commit volume remove bricks
glusterVolumeRemoveBrickForce
    volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
    <volume_name> is existing volume name
    <brick[,brick, ...]> is existing brick(s)
    force volume remove bricks
glusterVolumeRemoveBrickStart
    volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
    <volume_name> is existing volume name
    <brick[,brick, ...]> is existing brick(s)
    start volume remove bricks
glusterVolumeRemoveBrickStatus
    volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
    <volume_name> is existing volume name
    <brick[,brick, ...]> is existing brick(s)
    get volume remove bricks status
glusterVolumeRemoveBrickStop
    volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
    <volume_name> is existing volume name
    <brick[,brick, ...]> is existing brick(s)
    stop volume remove bricks
glusterVolumeReplaceBrickCommitForce
    volumeName=<volume_name> existingBrick=<existing_brick> newBrick=<new_brick>
    <volume_name> is existing volume name
    <existing_brick> is existing brick
    <new_brick> is new brick
    commit volume replace brick
glusterVolumeReset
    volumeName=<volume_name> [option=<option>] [force={yes|no}]
    <volume_name> is existing volume name
    reset gluster volume or volume option
glusterVolumeSet
    volumeName=<volume_name> option=<option> value=<value>
    <volume_name> is existing volume name
    <option> is volume option
    <value> is value to volume option
    set gluster volume option
glusterVolumeSetOptionsList
    list gluster volume set options
glusterVolumeSnapshotConfigList
    volumeName=<volume_name>
    get gluster volume snapshot configuration
glusterVolumeSnapshotConfigSet
    volumeName=<volume_name>optionName=<option_name>optionValue=<option_value>
    Set gluster snapshot configuration at volume leval
glusterVolumeSnapshotCreate
    volumeName=<volume_name> snapName=<snap_name> [snapDescription=<description of snapshot>] [force={yes|no}]
    create gluster volume snapshot
glusterVolumeSnapshotDeleteAll
    volumeName=<volume name>
    delete all snapshots for given volume
glusterVolumeSnapshotList
    [volumeName=<volume_name>]
    snapshot list for given volume
glusterVolumeStart
    volumeName=<volume_name> [force={yes|no}]
    <volume_name> is existing volume name
    start gluster volume
glusterVolumeStatsInfoGet
    volumeName=<volume name>
    Returns total, free and used space(bytes) of gluster volume
glusterVolumeStatus
    volumeName=<volume_name> [brick=<existing_brick>] [option={detail | clients | mem}]
    <volume_name> is existing volume name
    option=detail gives brick detailed status
    option=clients gives clients status
    option=mem gives memory status
    get volume status of given volume with its all brick or specified brick
glusterVolumeStop
    volumeName=<volume_name> [force={yes|no}]
    <volume_name> is existing volume name
    stop gluster volume
glusterVolumesList
    [volumeName=<volume_name>]
    [remoteServer=<remote_server]
    <volume_name> is existing volume name <remote_server> is a remote host name
    list all or given gluster volume details
hibernate
    <vmId> <hiberVolHandle>
    Hibernates the desktop
hostdevChangeNumvfs
    <device_name>, <numvfs>
    Change number of virtual functions for given physical function.
hostdevHotplug
    <vmId> <hostdevspec>
    Hotplug hostdevto existing VM
    hostdevspec specification of the device
hostdevHotunplug
    <vmId> <hostdevspec>
    Hotplug hostdevto existing VM
    names names of the devices
hostdevListByCaps
    [<caps>]
    Get available devices on host with given capability. Leave caps empty to list all devices.
hostdevReattach
    <device_name>
    Reattach device back to a host.
hotplugDisk
    <vmId> <drivespec>
    Hotplug disk to existing VM
    drivespec parameters list: r=required, o=optional
    r   iface:<ide|virtio> - Unique identification of the existing VM.
    r   index:<int> - disk index unique per interface virtio|ide
    r   [pool:UUID,domain:UUID,image:UUID,volume:UUID]|[GUID:guid]|[UUID:uuid]
    r   format: cow|raw
    r   readonly: True|False - default is False
    r   propagateErrors: off|on - default is off
    o   bootOrder: <int> - global boot order across all bootable devices
    o   shared: exclusive|shared|none
    o   optional: True|False
hotplugMemory
    <vmId> <memDeviceSpec>
    Hotplug memory to a running VM NUMA node
    memDeviceSpec parameters list: r=required, o=optional
    r   size: memory size to plug in mb.
    r   node: guest NUMA node id to plug into
hotplugNic
```
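If a regression of this kind needs to be caught again, a quick automated check is possible. The sketch below is not part of the original verification; it assumes `vdsClient` is on PATH and prints its verb help to stdout, and it simply flags the one-character-per-line symptom this bug describes:

```python
#!/usr/bin/env python
# Hypothetical helper, not part of vdsm: detect vdsClient help output that
# has degenerated into one character per line (the symptom of this bug).
# Assumes vdsClient is on PATH and prints its verb help to stdout.
import subprocess

out = subprocess.run(["vdsClient"], capture_output=True, text=True).stdout
lines = [line.strip() for line in out.splitlines() if line.strip()]
broken = sum(1 for line in lines if len(line) == 1)

if lines and broken > len(lines) // 2:
    print("FAIL: %d of %d help lines are single characters" % (broken, len(lines)))
else:
    print("OK: help formatting looks readable")
```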