Find slow Ceph OSDs

Find the OSD in the cluster that’s bringing the whole party down.

grep 'slow request' /var/log/ceph/ceph.log | awk '{print $3}' | sort | uniq -c | sort -n
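
Once a particular OSD keeps showing up, it is worth cross-checking against the cluster’s own latency counters. Assuming a reasonably recent Ceph release, this prints per-OSD commit/apply latency so you can confirm the suspect:

ceph osd perf          # per-OSD commit/apply latency in ms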

IOs blocked when bringing a new Ceph node online

Just a reminder folks: when you bring a new Ceph node online with a bunch of new (or existing) OSDs, you might see it blocking IOs.
This can show up as “Requests are blocked > 32 sec” in the cluster health, or simply as the VMs accessing the cluster stalling and/or reporting their own IO issues.
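
To see which OSDs the blocked requests are stacking up behind (not just the warning line in the status output), the health detail is usually enough:

ceph -s                # overall status, shows the blocked-requests warning
ceph health detail     # should name the OSDs with blocked/slow requests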

FIREWALL!!
Don’t forget to turn off/configure the firewall on the new Ceph node.
Ceph on the new node will start and appear normal, but no IO requests from the virtual hosts will be coming in (check iftop).
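
As a rough sketch, assuming the new node runs firewalld and the cluster sits on the default Ceph ports (adjust to whatever yours actually uses), opening things up looks something like this:

firewall-cmd --permanent --add-port=6789/tcp        # MON (only if this node also runs one)
firewall-cmd --permanent --add-port=6800-7300/tcp   # OSDs / MGR
firewall-cmd --reload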

While you are at it, check that SELinux is configured as it should be too (off?).
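
A quick way to check (and, for testing, relax) SELinux on a RHEL/CentOS style node:

getenforce             # Enforcing / Permissive / Disabled
setenforce 0           # drop to permissive until the next reboot
# to make it stick, set SELINUX=permissive (or disabled) in /etc/selinux/config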

Mikrotik BGP

To show what routes you are receiving from a particular peer on Mikrotik:

ip route print detail where received-from={PeerName}

OR

ip route print detail where bgp-as-path=123456
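
If you just want to know how many prefixes a peer is sending rather than the full detail dump, the same filter with count-only should work (RouterOS v6 syntax assumed, same {PeerName} placeholder as above):

ip route print count-only where received-from={PeerName}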