Description of problem:
Created 3 MDS servers. After a while I tried to create another MDS server, but forgot to change the name and reused an existing one. No error was reported.

Version-Release number of selected component (if applicable):
ceph version 10.2.1-1.el7cp
ceph-mds-10.2.1-1.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create 3 MDS servers:
ceph-deploy mds create 10.70.x.x:mds0
ceph-deploy mds create 10.70.x.x:mds1
ceph-deploy mds create 10.70.x.x:mds2

2. All 3 came up:
ps -ef | grep mds
ceph 87399 1 0 15:21 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id mds0 --setuser ceph --setgroup ceph
ceph 87531 1 0 15:21 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id mds1 --setuser ceph --setgroup ceph
ceph 87893 1 0 15:53 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id mds2 --setuser ceph --setgroup ceph
root 88341 4041 0 16:32 pts/1 00:00:00 grep --color=auto mds

[root@cephqe3 ceph-ansible]# ceph mds stat
e7:, 3 up:standby

3. Re-run the same command with the name of an existing MDS:
ceph-deploy mds create 10.70.x.x:mds2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy mds create 10.70.x.x:mds2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ]  username        : None
[ceph_deploy.cli][INFO ]  verbose         : False
[ceph_deploy.cli][INFO ]  overwrite_conf  : False
[ceph_deploy.cli][INFO ]  subcommand      : create
[ceph_deploy.cli][INFO ]  quiet           : False
[ceph_deploy.cli][INFO ]  cd_conf         : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2f2ec6b950>
[ceph_deploy.cli][INFO ]  cluster         : ceph
[ceph_deploy.cli][INFO ]  func            : <function mds at 0x7f2f2ec482a8>
[ceph_deploy.cli][INFO ]  ceph_conf       : None
[ceph_deploy.cli][INFO ]  mds             : [('10.70.x.x', 'mds2')]
[ceph_deploy.cli][INFO ]  default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts 10.70.x.x:mds2
[10.70.x.x][DEBUG ] connected to host: 10.70.x.x
[10.70.x.x][DEBUG ] detect platform information from remote host
[10.70.x.x][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to 10.70.x.x
[10.70.x.x][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[10.70.x.x][DEBUG ] create path if it doesn't exist
[10.70.x.x][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mds2 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mds2/keyring
[10.70.x.x][INFO ] Running command: systemctl enable ceph-mds@mds2
[10.70.x.x][INFO ] Running command: systemctl start ceph-mds@mds2
[10.70.x.x][INFO ] Running command: systemctl enable ceph.target

Actual results:
Re-running the create command for an existing MDS name gives no error message, and mds2 keeps the same PID it had before the command was re-run:
ceph 87893 1 0 15:53 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id mds2 --setuser ceph --setgroup ceph

Expected results:
1. It should throw an error when an MDS server is created with a name that is already in use.
2. If it succeeds, it should run as a new process.

Additional info:
NA
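A minimal pre-check sketch for anyone who wants to avoid reusing an id. This is a hypothetical workaround run by hand, not part of ceph-deploy, and it assumes an admin keyring is available on the node where the commands run:

# check whether the auth entity or a running daemon already exists
# before re-running "ceph-deploy mds create" (hypothetical manual check)
ceph auth get mds.mds2 && echo "mds.mds2 already exists; choose another id"
systemctl is-active ceph-mds@mds2 && echo "ceph-mds@mds2 is already running"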
This isn't a bug. Monitor commands are deliberately idempotent, and creating an MDS is a monitor command. ceph-deploy could do its own existence check, but that would be an RFE against that project, and I'm not sure it would be a good idea.
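To illustrate the point, a minimal sketch of what "idempotent" means here, using the same auth command ceph-deploy ran in the log above (caps copied from that log; the repetition is the point):

ceph auth get-or-create mds.mds2 osd 'allow rwx' mds 'allow' mon 'allow profile mds'
# re-running the identical command returns the existing key with exit status 0
# instead of failing (it only errors if the requested caps differ)
ceph auth get-or-create mds.mds2 osd 'allow rwx' mds 'allow' mon 'allow profile mds'
# likewise, starting an already-active systemd unit is a no-op,
# which is why mds2 kept the same PID after the second ceph-deploy run
systemctl start ceph-mds@mds2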