Description of problem:
Not able to force-remove a host from rhsc-shell.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Log in to rhsc-shell by running the command "rhsc-shell".
2. Connect with:
   connect --url https://10.70.35.155/api --user admin@internal --password redhat --insecure
3. Add a host:
   add host --name TestHost1 --address 10.70.35.1 --root_password redhat --cluster-name TestCluster
4. Add a volume with bricks on that host:
   add glustervolume --cluster-identifier TestCluster --name vol1 --volume_type DISTRIBUTE --bricks-brick "brick.server_id=35cfcaa5-1b1e-4be7-b87d-5a018ea98d98,brick.brick_dir=/home/brickInfo/b10" --bricks-brick "brick.server_id=35cfcaa5-1b1e-4be7-b87d-5a018ea98d98,brick.brick_dir=/home/brickInfo/b20"
5. Start the volume:
   action glustervolume <volname|id> start --cluster-identifier <clusterName|id>
6. Move the host to maintenance:
   action host <hostname|id> deactivate
7. Force-remove the host:
   remove host TestHost1 --force

Actual results:
error:
  status: 400
  reason: Bad Request
  detail: var action remove, var type host, vds cannot remove host having gluster volume

Expected results:
The server should be removed, along with the volumes that have bricks residing on it.

Additional info:
Force removal of the host works properly from the console.
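For reference, the reproduction steps above can be gathered into one sequence. A minimal sketch: the commands would be entered interactively at the rhsc-shell prompt against a live setup, so this script only assembles and prints the sequence (IP, credentials, UUIDs, and brick paths are the ones from this report):

```shell
#!/usr/bin/env bash
# Reproduction sequence for the force-remove failure. These are rhsc-shell
# commands; a live RHSC instance is required to actually run them, so this
# script just prints them for reference.
steps=(
  'connect --url https://10.70.35.155/api --user admin@internal --password redhat --insecure'
  'add host --name TestHost1 --address 10.70.35.1 --root_password redhat --cluster-name TestCluster'
  'add glustervolume --cluster-identifier TestCluster --name vol1 --volume_type DISTRIBUTE --bricks-brick "brick.server_id=35cfcaa5-1b1e-4be7-b87d-5a018ea98d98,brick.brick_dir=/home/brickInfo/b10" --bricks-brick "brick.server_id=35cfcaa5-1b1e-4be7-b87d-5a018ea98d98,brick.brick_dir=/home/brickInfo/b20"'
  'action glustervolume vol1 start --cluster-identifier TestCluster'
  'action host TestHost1 deactivate'
  'remove host TestHost1 --force'
)
for s in "${steps[@]}"; do
  printf '%s\n' "$s"
done
```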
Created attachment 714522 [details]
Attaching engine, vdsm, and glusterfs logs
The correct syntax is:
remove host TestHost1 --force true

Closing as not a bug, since it works with the above command.
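The difference between the failing and working invocations is that the --force flag takes an explicit boolean value. A minimal sketch (it only assembles and prints the corrected command string, since running it needs a live rhsc-shell session; TestHost1 is the host name from this report):

```shell
# Corrected force-remove invocation: --force requires an explicit "true",
# unlike the bare "--force" used in the reproduction steps above.
HOST_NAME="TestHost1"
CMD="remove host ${HOST_NAME} --force true"
printf '%s\n' "$CMD"
```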