Generate OTP keys in Linux – Extracting FreeOTP keys

apt install oathtool
oathtool --totp -b -d 6 KY3OUPMUYWCKS53F
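
To keep a current code on screen, you can wrap the same call in watch (the secret here is the sample key from above; substitute your own base32 secret):

# Refresh the 6-digit TOTP code every 5 seconds.
watch -n 5 oathtool --totp -b -d 6 KY3OUPMUYWCKS53F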

Linux: TOTP Password Generator

Enable USB debugging – https://www.kingoapp.com/root-tutorials/how-to-enable-usb-debugging-mode-on-android.htm
Back up the FreeOTP app – adb backup -f ~/freeotp.ab -noapk org.fedorahosted.freeotp
Decompress the backup (the first 24 bytes are the Android backup header) – dd if=freeotp.ab bs=1 skip=24 | python3 -c "import zlib,sys;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))" | tar -xvf -
Decode the keys – https://github.com/philipsharp/FreeOTPDecoder
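
For reference, the decode step amounts to converting FreeOTP's stored secret to base32. FreeOTP keeps its tokens as JSON in a shared-prefs XML inside the unpacked backup (the path and the byte array below are illustrative assumptions; the linked decoder handles the real format):

# Tokens typically land in apps/org.fedorahosted.freeotp/sp/tokens.xml.
# Each token's "secret" is a JSON array of signed bytes; map it to unsigned
# bytes and base32-encode it so oathtool can consume it:
python3 -c 'import base64,json;raw=bytes(b & 0xff for b in json.loads("[21,-19,57,-86,69,-123,-87,119,101,-59]"));print(base64.b32encode(raw).decode())'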

OpenStack – Manually edit a VM

Find the host the VM is running on and the instance ID (use the console view to get the instance ID).
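
If you have admin credentials, the OpenStack CLI reports both at once (MY_VM is a placeholder; the columns are the standard Nova extended attributes):

openstack server show MY_VM -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:instance_name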

cp /etc/libvirt/qemu/instance-0000030a.xml .
edit instance-0000030a.xml to be what you need it to be

While the VM is running (warning: virsh destroy hard-stops the VM, equivalent to pulling the power):

virsh destroy instance-0000030a
virsh undefine instance-0000030a
virsh define instance-0000030a.xml
virsh start instance-0000030a
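
After the restart it is worth confirming that libvirt picked up the change (the grep pattern is only an example; search for whatever you edited). Note that Nova regenerates this XML on a hard reboot or migration, so edits made this way are temporary:

virsh dumpxml instance-0000030a | grep -i memory
virsh list --all | grep instance-0000030a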

Ceph scrubbing performance

Original article here – http://sudomakeinstall.com/linux-systems/ceph-scrubbing

Ceph's default IO priority and class for behind-the-scenes disk operations treats that work as required rather than best effort. Those of us who actually use our storage for performance-sensitive services will quickly find that a deep scrub grinds even the most powerful systems to a halt.

Below are the settings to run the scrub at the lowest possible priority. This REQUIRES CFQ as the scheduler for the spinning disks; without CFQ you cannot prioritize IO. Since only one service uses these disks, CFQ performance will be comparable to deadline and noop.

Show the current scheduler

for file in /sys/block/sd*; do
echo ${file}
cat ${file}/queue/scheduler
echo ""
done
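
The active scheduler is the one shown in brackets, e.g.:

/sys/block/sda
noop deadline [cfq]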

Set all disks to CFQ

for file in /sys/block/sd*; do
echo cfq > ${file}/queue/scheduler
cat ${file}/queue/scheduler
echo ""
done
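
Note that an echo into /sys does not survive a reboot. One way to make the setting persistent is a udev rule (a minimal sketch; the filename is arbitrary, and you may want to match only your Ceph data disks):

cat > /etc/udev/rules.d/60-scheduler.rules <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="cfq"
EOF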

Inject the new settings into the existing OSDs:
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'
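
You can verify on an OSD node that the values took effect through the admin socket (the socket path below is the default; adjust the OSD id):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep ioprio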

Edit ceph.conf on your storage nodes so the priority is set automatically when the OSD daemons start. The options belong under the [osd] section:

[osd]
# Reduce the impact of scrubbing.
osd_disk_thread_ioprio_class = "idle"
osd_disk_thread_ioprio_priority = 7

You can go a step further and apply Red Hat's tuned optimizations for your system's characteristics:
tuned-adm profile latency-performance
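
Confirm the active profile afterwards:

tuned-adm active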
This information is drawn from multiple sources:

Reference documentation – http://dachary.org/?p=3268

Disable scrubbing in real time to determine its impact on your running cluster – http://dachary.org/?p=3157

A detailed analysis of the scrubbing IO impact – http://blog.simon.leinen.ch/2015/02/ceph-deep-scrubbing-impact.html

OSD Configuration Reference – http://ceph.com/docs/master/rados/configuration/osd-config-ref/

Red Hat system tuning – https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned_adm.html