OpenStack Nova – Host filters for Ceph and local ephemeral storage

OpenStack: use ephemeral and persistent root storage for different hypervisors. Original article: https://ceph.io/en/news/blog/2014/openstack-use-ephemeral-and-persistent-root-storage-for-different-hypervisors/ Computes with a Ceph image backend and computes with a local image backend. At some point, you might want to build hypervisors that use their local storage for virtual machine root disks. Using local storage will help you maximize your IOs and will reduce […]
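The usual way to steer flavors at one backend or the other is Nova's AggregateInstanceExtraSpecsFilter. A minimal sketch, assuming that filter is enabled in nova.conf and using illustrative aggregate, host, and flavor names:

# Enable the filter in nova.conf (option lives under [filter_scheduler] on newer releases)
# enabled_filters = AggregateInstanceExtraSpecsFilter,ComputeFilter,...

# Aggregate for Ceph-backed hypervisors
openstack aggregate create ceph-hosts
openstack aggregate set --property storage=ceph ceph-hosts
openstack aggregate add host ceph-hosts compute1

# Aggregate for local-storage hypervisors
openstack aggregate create local-hosts
openstack aggregate set --property storage=local local-hosts
openstack aggregate add host local-hosts compute2

# Flavors carry matching extra specs so the scheduler lands instances on the right hosts
openstack flavor set --property aggregate_instance_extra_specs:storage=ceph m1.ceph
openstack flavor set --property aggregate_instance_extra_specs:storage=local m1.local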


Functional Ceph RadosGW docker container and config

This is just a basic example of getting RadosGW to run in Docker with Keystone auth support. It runs the S3 and Swift APIs out of the box, so you can use tools like s3cmd or 'openstack container list' with the configs below.

Command to run inside the Docker container:

radosgw -d -f --cluster ceph --name […]
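For reference, the Keystone wiring on the RGW side lives in ceph.conf. A sketch, assuming a client.rgw.gateway section; the endpoint and credentials are placeholders:

[client.rgw.gateway]
rgw frontends = civetweb port=8080            # or beast on newer releases
rgw keystone url = http://keystone.example.com:5000
rgw keystone api version = 3
rgw keystone admin user = swift
rgw keystone admin password = SECRET
rgw keystone admin project = service
rgw keystone admin domain = default
rgw s3 auth use keystone = true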


Ceph – Edit running crushmap

Edit the crushmap in a running cluster, then re-inject it – be careful!

ceph osd getcrushmap -o crushmap
crushtool -d crushmap -o crushmap.txt
vim crushmap.txt
crushtool -c crushmap.txt -o crushmap1
ceph osd setcrushmap -i crushmap1
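For orientation, the decompiled crushmap.txt contains plain-text buckets and rules you can edit directly. An illustrative rule (names and ids are examples, not from the post; the device-class syntax needs Luminous or later):

rule replicated_ssd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}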


Manually add a ceph-mon

mkdir /tmp/cephmon
ceph auth get mon. -o /tmp/cephmon/mon.key
ceph mon getmap -o /tmp/cephmon/monmap
sudo ceph-mon -i mon-hostname --mkfs --monmap /tmp/cephmon/monmap --keyring /tmp/cephmon/mon.key
ceph-mon -i mon-hostname --public-addr 10.50.1.71
ps ax | grep ceph-mon
kill {mon-pid}
chown ceph:ceph /var/lib/ceph/mon/ -R
systemctl enable ceph-mon@mon-hostname.service
systemctl start ceph-mon@mon-hostname.service
systemctl status ceph-mon@mon-hostname.service
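A quick way to confirm the new mon has joined quorum afterwards (standard Ceph CLI):

ceph mon stat
ceph quorum_status --format json-pretty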


Mounting a CephFS share using ceph-fuse

I always forget how to mount a subfolder of a CephFS filesystem:

sudo ceph-fuse -m 192.168.2.9:6789 -r /subfolder /folder/on/my/machine

Note: the leading / after -r is important. Some great info here on restricting access to specific folders and more notes on mounting – https://manjusri.ucsc.edu/2017/09/25/ceph-fuse/
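On the restriction side, newer releases can mint a client key that is only valid for that subfolder. A sketch, assuming a filesystem named cephfs and an illustrative client id:

# Create a key limited to /subfolder (Luminous and later)
ceph fs authorize cephfs client.example /subfolder rw

# Mount with that identity
sudo ceph-fuse -m 192.168.2.9:6789 --id example -r /subfolder /folder/on/my/machine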


Create Bluestore OSD backed by SSD

Don't take my word for it on the WAL sizing – check http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/

This script will create a spare 20G logical volume to use as the WAL for a second spinner later if you need it:

export SSD=sdc
export SPINNER=sda
vgcreate ceph-ssd-0 /dev/$SSD
vgcreate ceph-hdd-0 /dev/$SPINNER
lvcreate --size 20G -n block-0 ceph-hdd-0
lvcreate -l 100%FREE […]
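The excerpt cuts off before the OSD is actually created; for reference, logical volumes like these are typically handed to ceph-volume along these lines (a sketch, not the original script's continuation – the LV names here are hypothetical):

# Bluestore OSD with data on the spinner and the WAL on the SSD
ceph-volume lvm create --bluestore \
    --data ceph-hdd-0/data-0 \
    --block.wal ceph-ssd-0/wal-0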


Ceph Nautilus – “Required devices (block and data) not present for bluestore”

When using the new ceph-volume scan and activate commands on Ceph Nautilus after an upgrade from Luminous, I was getting the following message:

[root@ceph2 ~]# ceph-volume simple activate --all
--> activating OSD specified in /etc/ceph/osd/37-11af5440-dadf-40e3-8924-2bbad3ee5b58.json
Running command: /bin/ln -snf /dev/sdh2 /var/lib/ceph/osd/ceph-37/block
Running command: /bin/chown -R ceph:ceph /dev/sdh2
Running command: /bin/systemctl enable ceph-volume@simple-37-11af5440-dadf-40e3-8924-2bbad3ee5b58
Running command: /bin/ln […]
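For context, those JSON files under /etc/ceph/osd/ are produced by the scan step, so regenerating them is the first thing to try when activate complains about missing devices (standard ceph-volume usage; the device name is illustrative):

# Re-scan the OSD's data partition to rewrite its metadata JSON
ceph-volume simple scan /dev/sdh1

# Then retry activation
ceph-volume simple activate --all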


Ceph scrubbing performance

Original article here – http://sudomakeinstall.com/linux-systems/ceph-scrubbing Ceph's default IO priority and class for behind-the-scenes disk operations treat them as required rather than best-effort. Those of us who actually utilize our storage for services that require performance will quickly find that deep scrubbing grinds even the most powerful systems to a halt. Below are […]
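The excerpt is truncated, but the knobs that usually matter here are the scrub scheduling and throttling options. A sketch of a ceph.conf [osd] section (real option names; the values are illustrative, and availability varies by release):

[osd]
osd max scrubs = 1            # concurrent scrubs per OSD
osd scrub begin hour = 1      # confine scrubs to an off-peak window
osd scrub end hour = 7
osd scrub sleep = 0.1         # pause between scrub chunks to yield IO to clients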
