Ceph commands

Get status of monitors

ceph quorum_status --format json-pretty

Set the weight of a disk lower so it gets less IO, or use it to scale a disk in/out of a cluster

ceph osd reweight osd.30 .01 --cluster mycluster

Move an OSD to a node in the crush map, useful if your OSD comes up not attached to a node

ceph osd crush move osd.27 host=adleast-node04 --cluster mycluster

Ensure this OSD is never a primary – requires the mon osd allow primary affinity setting on the mon

ceph osd primary-affinity osd.27 0 --cluster mycluster

How many PGs is a given OSD primary for? (osd.0 in this example – change the "\[0," pattern for other OSD IDs)

ceph pg dump | grep active+clean | egrep "\[0," | wc -l
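The pattern works because pg dump prints each PG's acting set in brackets with the primary OSD listed first. A sketch of the counting logic on simplified, fabricated dump lines (real output has many more columns):

```shell
# Simplified stand-in for `ceph pg dump` output: acting set in brackets,
# primary OSD first (fabricated lines, not real dump format)
printf '%s\n' \
  '1.0 active+clean [0,3,5]' \
  '1.1 active+clean [3,0,5]' \
  '1.2 active+clean [0,7,2]' \
  | grep active+clean | egrep "\[0," | wc -l
# prints 2: osd.0 is primary for two of the three PGs
```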

Activate an existing bluestore disk

ceph-volume lvm activate --bluestore 34 2c9b2862-1db6-4863-935d-32b7289fee7d

Find all bluestore disk ID’s and UUID’s

(Used to generate the above command)

lvs -o lv_tags | awk -vFS=, /ceph.osd_fsid/'{ OSD_ID=gensub(".*ceph.osd_id=([0-9]+),.*", "\\1", ""); OSD_FSID=gensub(".*ceph.osd_fsid=([a-z0-9-]+),.*", "\\1", ""); print OSD_ID,OSD_FSID }' | sort -n | uniq
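Note that gensub is a GNU awk extension, so the one-liner needs gawk. The same tag parsing can be sketched with POSIX sed on a single fabricated lv_tags value (illustrative, not from a real cluster):

```shell
# Fabricated lv_tags value mirroring the format ceph-volume writes
line='ceph.osd_fsid=2c9b2862-1db6-4863-935d-32b7289fee7d,ceph.osd_id=34,ceph.type=block'
osd_id=$(printf '%s\n' "$line" | sed -n 's/.*ceph\.osd_id=\([0-9]*\).*/\1/p')
osd_fsid=$(printf '%s\n' "$line" | sed -n 's/.*ceph\.osd_fsid=\([a-z0-9-]*\),.*/\1/p')
echo "$osd_id $osd_fsid"
# prints: 34 2c9b2862-1db6-4863-935d-32b7289fee7d
```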

Dealing with Spectre and Meltdown using Ansible

If we have to deal with this thing, let's do it in an intelligent way.

I’m using Ansible across my infrastructure to manage most things, so I cut a playbook to detect the Spectre and Meltdown fixes on CentOS 7 using the articles from CyberCiti.biz.



- hosts: all
  become: yes
  become_user: root

  tasks:
    - name: "Check kernel for Meltdown patches"
      shell: "rpm -q --changelog kernel | egrep 'CVE-2017-5754' | wc -l"
      ignore_errors: true
      register: meltdown_patch_count

    - name: "Meltdown result"
      debug: var=meltdown_patch_count.stdout

    - name: "Meltdown fix"
      debug:
        msg: "Installing Meltdown fix"
      when: meltdown_patch_count.stdout == "0"

    - name: "Check kernel for Spectre patches"
      shell: "rpm -q --changelog kernel | egrep 'CVE-2017-5715|CVE-2017-5753|CVE-2017-5754' | wc -l"
      ignore_errors: true
      register: spectre_patch_count

    - name: "Spectre result"
      debug: var=spectre_patch_count.stdout
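Each check simply counts kernel changelog entries that mention the relevant CVEs; a patched kernel gives a non-zero count. The counting logic can be exercised on fabricated changelog lines (illustrative text, not a real rpm changelog):

```shell
# Simulated `rpm -q --changelog kernel` excerpt (fabricated entries)
printf '%s\n' \
  '- [x86] spec_ctrl: add IBRS support (CVE-2017-5715)' \
  '- [x86] pti: enable kernel page table isolation (CVE-2017-5754)' \
  '- [fs] unrelated bugfix' \
  | egrep 'CVE-2017-5715|CVE-2017-5753|CVE-2017-5754' | wc -l
# prints 2: two entries reference Spectre/Meltdown CVEs, so the kernel counts as patched
```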