For a Ceph Monitor service running in a container (cephadm orchestrator deployment).


1. Log in to any node in the Ceph cluster.

2. Use "ceph -s" command to find which monitor is out of quorum.

3. Remove that monitor. $hostname is the hostname of the monitor that is out of quorum.

ceph orch daemon rm mon.$hostname --force

4. Check whether the monitor is back in quorum. If it is not, continue with the following steps.

5. Add the monitor container service.

ceph orch daemon add mon $hostname

6. Add the monitor back to the monitor map. $ip is the public IP address of the host.

ceph mon add $hostname $ip

7. Check that the monitor is working and back in quorum.
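A minimal check sketch for steps 2, 4, and 7; output details vary by Ceph release, and the --daemon-type filter assumes a cephadm-managed cluster:

# Cluster summary; the "mon:" line lists the quorum members and flags any
# monitor that is out of quorum
ceph -s

# Quorum membership only
ceph quorum_status --format json-pretty | grep -A5 quorum_names

# Confirm the containerized mon daemon itself is running
ceph orch ps --daemon-type mon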


For a Ceph Monitor service running as a plain systemd daemon (non-containerized deployment).


Use SSH to log in to the monitor node and run the following commands.


# Stop the monitor
systemctl stop ceph-mon@$(hostname -s)

# Remove the stale monitor data directory (ceph-<mon id>; on this example
# node the monitor ID is the short hostname, mars400-168-6f1a)
rm -rf /var/lib/ceph/mon/ceph-$(hostname -s)

# Get the current monitor map from the cluster
ceph mon getmap -o /tmp/monmap
     got monmap epoch 4
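Optionally, confirm the fetched map lists the surviving monitors (monmaptool ships with Ceph):

# Print the monitors recorded in the fetched map
monmaptool --print /tmp/monmap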
# Get the monitor keyring
ceph auth get mon. -o /tmp/mon.keyring

# Change ownership so the ceph user can read the map and keyring
chown -R ceph:ceph /tmp/monmap /tmp/mon.keyring

# Make the monitor data directory
sudo -u ceph mkdir -p /var/lib/ceph/mon/ceph-$(hostname -s)

# Rebuild the monitor store from the monmap and keyring
sudo -u ceph ceph-mon --id $(hostname -s) --mkfs --keyring /tmp/mon.keyring --monmap /tmp/monmap
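A quick sanity check before starting: the new directory should contain at least a keyring and a store.db (exact contents vary by Ceph release):

# List the freshly built monitor store
ls -l /var/lib/ceph/mon/ceph-$(hostname -s)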

# Start the monitor
systemctl start ceph-mon@$(hostname -s)

If the start fails with an error like the following, reset the unit's failed state and try again:

    Job for ceph-mon@mars400-168-6f1a.service failed because a fatal signal was delivered to the control process.

    See "systemctl status ceph-mon@mars400-168-6f1a.service" and "journalctl -xe" for details.

# Reset the failed unit state
systemctl reset-failed ceph-mon@$(hostname -s).service

# Restart the monitor
systemctl start ceph-mon@$(hostname -s)
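Once the monitor starts cleanly, confirm it rejoined quorum; enabling the unit is only needed if it was not already enabled:

# Verify quorum membership from any cluster node
ceph mon stat

# Make sure the monitor comes back after a reboot
systemctl enable ceph-mon@$(hostname -s)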

Alternatively, remove the monitor from the cluster and redeploy it with ceph-deploy:


# Stop the monitor and remove its data directory
systemctl stop ceph-mon@$(hostname -s)
rm -rf /var/lib/ceph/mon/ceph-$(hostname -s)

# Remove the monitor from the monitor map
ceph mon rm $(hostname -s)

# Re-create the monitor with ceph-deploy (run from the admin node;
# 192.168.3.168 is this example node's public IP, and mars400-168-6f1a
# its short hostname, as seen in the error output above)
ceph-deploy mon add mars400-168-6f1a --address 192.168.3.168
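As with the container path, finish by confirming the monitor rejoined quorum:

# Final quorum check from any cluster node
ceph mon stat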