Sometimes an OSD daemon cannot start because of a BlueStore data error.

Here is a procedure to repair the OSD's BlueStore metadata and object data.



In a containerized (cephadm) environment, follow the procedure from step 1.

If the OSD runs as a plain (non-containerized) systemd daemon, skip to step 4.


1. SSH to the OSD node.

2. Find the container image


ceph config dump | grep container_image
global        basic     container_image                                 192.168.3.220:5000/ceph@sha256:5efbea0095c0905b3069b9165967b0652c1c0c411a1f59f1b21b91e8a6ade679  * 

img=192.168.3.220:5000/ceph@sha256:5efbea0095c0905b3069b9165967b0652c1c0c411a1f59f1b21b91e8a6ade679

osd.$id is the OSD daemon name. Replace $id with your OSD ID in the commands below.
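For example, with a hypothetical OSD ID of 3, the shell variable used in the remaining steps would be set as:

id=3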


3. Enter the container shell


cephadm --image $img shell --name osd.$id

or 

cephadm --image $img enter --name osd.$id


4. Run fsck and repair on BlueStore

These tools need exclusive access to the OSD's data, so make sure the OSD daemon is not running (here it is assumed to be down already).

sudo -u ceph ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-$id
sudo -u ceph ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-$id
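If the regular fsck is clean but data errors persist, BlueStore also supports a deep fsck that reads back and verifies object data; the exact form of the deep flag can vary by release, so confirm it with ceph-bluestore-tool --help before running:

sudo -u ceph ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-$id --deep 1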


5. Repair the object store

sudo -u ceph ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id/ --op fsck
sudo -u ceph ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id/ --op repair
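ceph-objectstore-tool can also be used for read-only inspection; for example, listing the placement groups stored on this OSD is a quick way to confirm the store is readable after the repair:

sudo -u ceph ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id/ --op list-pgs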


6. Reset the failed state and restart the daemon

systemctl reset-failed
systemctl restart ceph-osd@$id
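The ceph-osd@ unit name applies to non-containerized deployments. On a cephadm host the OSD runs under a unit named after the cluster fsid (ceph-<fsid>@osd.<id>), so it is often easier to restart it through the orchestrator; a sketch, assuming the orchestrator module is available:

ceph orch daemon restart osd.$id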


7. Check the OSD daemon status

systemctl status ceph-osd@$id
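Once the daemon is active, you can confirm from any node with admin access that the OSD is back up and in:

ceph osd tree | grep -w "osd.$id"
ceph -s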

If the OSD still cannot boot, you have to remove the OSD and recreate it.
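A minimal sketch of that removal, assuming a cephadm-managed cluster and that the data on this OSD has already been recovered or is redundant elsewhere; --replace keeps the OSD ID reserved so the orchestrator can recreate it once the device is replaced or wiped (replace <host> and <device> with your values):

ceph osd out $id
ceph orch osd rm $id --replace
ceph orch device zap <host> <device> --force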