MySQL backup to S3 script

Script

#!/bin/bash

# Set the environment variable so the read builtin splits fields on ",".
export IFS=","

NOW=$(date +"%Y_%m_%d_%H_%M")

DATABASES_CONFIG_FILE="/home/bfnadmin/databases.csv"

S3_ENDPOINT="https://my-s3-storage.com"
S3_BUCKET="s3://sql-backups"

TEMP_BACKUP_DIR="backups"

mkdir -p "$TEMP_BACKUP_DIR"

while read HOST USERNAME PASSWORD DB_SRV DB_NAME
do
echo "[$DB_SRV - $DB_NAME]"

BACKUP_FILE="$TEMP_BACKUP_DIR/$DB_SRV-$DB_NAME-$NOW.sql.gz"

mysqldump --single-transaction --quick --lock-tables=false \
-h "$HOST" \
-u "$USERNAME" \
-p"$PASSWORD" \
"$DB_NAME" | gzip > "$BACKUP_FILE"

echo "Uploading backup to S3 storage - $DB_SRV-$DB_NAME-$NOW.sql.gz"

aws --endpoint-url="$S3_ENDPOINT" s3 cp "$BACKUP_FILE" "$S3_BUCKET"

rm "$BACKUP_FILE"

echo -e "\n\n"

done < "$DATABASES_CONFIG_FILE"

~/.aws/credentials

[default]
aws_access_key_id=xxx
aws_secret_access_key=yyy

databases.csv

ip_Address,sql_user,sql_password,Host_Display_name,DB_Name
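A made-up example row in that field order, plus a quick offline check that the IFS="," parsing does what the script assumes (all values below are hypothetical):

```shell
# Hypothetical databases.csv row (HOST,USERNAME,PASSWORD,DB_SRV,DB_NAME):
#   10.0.0.5,backupuser,s3cret,db01,wordpress
# The same comma-separated read the script performs, against that sample row:
echo "10.0.0.5,backupuser,s3cret,db01,wordpress" |
while IFS="," read HOST USERNAME PASSWORD DB_SRV DB_NAME; do
  echo "$DB_SRV-$DB_NAME"   # prints: db01-wordpress (the backup filename prefix)
done
```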

tcpdump commands

Capture HTTP GET or POST requests on port 5000

tcpdump -i eth0 -s 0 -A 'tcp dst port 5000 and tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504F5354'
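The magic numbers in that filter are just the first four payload bytes of each request line as ASCII; they can be reproduced with od (nothing beyond POSIX od assumed):

```shell
# "GET " (note the trailing space) and "POST" as the hex the filter matches
printf 'GET ' | od -A n -t x1   # 47 45 54 20 -> 0x47455420
printf 'POST' | od -A n -t x1   # 50 4f 53 54 -> 0x504F5354
```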

Exchange Autodiscover with HTTP redirect method

Server 2012 R2
Create a new website on the CAS called “autodiscover-redirect”
Create a virtual directory called “autodiscover”
Click the virtual directory, then open “HTTP Redirect”
Enter the URL of the Exchange server, e.g. “https://contoso.com/autodiscover/autodiscover.xml” (must be HTTPS)
Ensure both “Redirect all requests to exact destination” and “Only redirect requests to content in this directory” are ticked
Test using either https://testconnectivity.microsoft.com/ or a mobile / Outlook client.
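If the redirect site is wired up correctly, a plain-HTTP request for autodiscover.xml should answer with a 302 pointing at the HTTPS URL; a sketch of the exchange, assuming the redirect host is autodiscover.contoso.com (hostnames taken from the example above):

```
GET /autodiscover/autodiscover.xml HTTP/1.1
Host: autodiscover.contoso.com

HTTP/1.1 302 Redirect
Location: https://contoso.com/autodiscover/autodiscover.xml
```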

https://www.mysysadmintips.com/windows/servers/503-configure-exchange-autodiscover-with-multiple-smtp-namespaces

Useful gnocchi commands

gnocchi resource update -a customerID:"customer1" 729389e687504dbf8b2c4aead14a607c --type my_consumable_resource

gnocchi resource show 4f42c5e60813488ea166206c5bfc10cd --type my_consumable_resource

gnocchi resource search project_id=729389e687504dbf8b2c4aead14a607c

Script to enable fast-diff on an entire pool of images and rebuild the object-map

This script enables the requisite features on all RBD images in a pool so that rbd du returns a result quickly, instead of having to calculate the size every time.

rbd ls -p backup1 | while read line; do
  echo "$line"
  rbd feature enable backup1/"$line" object-map fast-diff exclusive-lock
  rbd object-map rebuild backup1/"$line"
  rbd snap ls backup1/"$line" | while read snap; do
    snapname=$(echo "$snap" | awk '{print $2}')
    if [ "$snapname" != "NAME" ]
    then
      echo "$line@$snapname"
      rbd object-map rebuild backup1/"$line@$snapname"
    fi
  done
done
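The inner loop's parsing (second column of rbd snap ls, skipping the header row) can be sanity-checked without a cluster; the sample output below is made up:

```shell
# Simulated `rbd snap ls` output: a header row, then one snapshot row
printf 'SNAPID NAME    SIZE\n    14 nightly 10 GiB\n' |
while read snap; do
  snapname=$(echo "$snap" | awk '{print $2}')
  if [ "$snapname" != "NAME" ]; then
    echo "would rebuild: backup1/img1@$snapname"
  fi
done
# prints: would rebuild: backup1/img1@nightly
```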

Configure MikroTik VPN using Radius and NPS on Windows AD

Configure NPS on a Domain controller:
(Based on Windows Server 2019)
Install NPS Role
open NPS admin console
Select “RADIUS server for Dial-Up or VPN Connections” and click “Configure VPN or Dial-Up”
Select “VPN Connections” and click Next
Click “Add” and fill in details as required (IP must be the IP of the router)
Take note of the Shared Secret
Click next on the rest of the screens (add groups as required)

Note: Before users can authenticate via RADIUS, “Allow Access” must be selected on the “Dial-in” tab for the user in AD, as “Control Access through NPS Network Policy” does not work, at least on Windows Server 2016 and above.

On the MikroTik:
Click “Radius” then “+”
Complete the following:
Service: ppp
Domain: domain
Address: IP of NPS Server
Secret: Password defined while setting up NPS
Src Address: The IP of the router interface (must match the IP specified while setting up NPS)

Add the following rule in the firewall:
chain: input, Action: Accept, Protocol: TCP, Dst. Port: 1723
chain: input, Action: Accept, Protocol: 47 (gre)
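The same setup can be done from the RouterOS terminal instead of WinBox; a sketch, with NPS_IP, SECRET and ROUTER_IP standing in for your values:

```
/radius add service=ppp domain=domain address=NPS_IP secret=SECRET src-address=ROUTER_IP
/ip firewall filter add chain=input action=accept protocol=tcp dst-port=1723
/ip firewall filter add chain=input action=accept protocol=gre
```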

Source:
https://mivilisnet.wordpress.com/2018/10/01/how-to-integrate-your-mikrotik-router-with-windows-ad/

Create Bluestore OSD backed by SSD

Don’t take my word for it on the WAL/DB sizing – check http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/

This script also creates a spare 20G logical volume (ssd-1) to use as the block.db for a second spinner later if you need it.

export SSD=sdc
export SPINNER=sda

vgcreate ceph-ssd-0 /dev/$SSD
vgcreate ceph-hdd-0 /dev/$SPINNER

lvcreate --size 20G -n block-0 ceph-hdd-0
lvcreate -l 100%FREE -n block-1 ceph-hdd-0

lvcreate --size 20G -n ssd-0 ceph-ssd-0
lvcreate --size 20G -n ssd-1 ceph-ssd-0
lvcreate -l 100%FREE -n ssd-2 ceph-ssd-0

ceph-volume lvm create --bluestore --data ceph-hdd-0/block-1 --block.db ceph-ssd-0/ssd-0
ceph-volume lvm create --bluestore --data ceph-ssd-0/ssd-2
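To sanity-check what was created, lvs can list the layout; going by the lvcreate calls above you should see block-0 and block-1 in ceph-hdd-0, and ssd-0 through ssd-2 in ceph-ssd-0:

```
lvs -o lv_name,vg_name,lv_size ceph-hdd-0 ceph-ssd-0
```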

KVM/QEMU/OpenStack – Manage a live migration

Cancel a migration:

virsh qemu-monitor-command {VMNAME} --pretty '{"execute":"migrate_cancel"}'

Allow virsh more downtime (if it can’t keep up with RAM utilization):

virsh migrate-setmaxdowntime VMNAME 2500
Check migration status

virsh domjobinfo instance-000002ac
Job type: Unbounded
Operation: Outgoing migration
Time elapsed: 1307956 ms
Data processed: 118.662 GiB
Data remaining: 9.203 MiB
Data total: 8.005 GiB
Memory processed: 118.662 GiB
Memory remaining: 9.203 MiB
Memory total: 8.005 GiB
Memory bandwidth: 41.294 MiB/s
Dirty rate: 35040 pages/s
Page size: 4096 bytes
Iteration: 197
Constant pages: 1751031
Normal pages: 31041965
Normal data: 118.416 GiB
Expected downtime: 3314 ms
Setup time: 70 ms
https://www.redhat.com/archives/libvirt-users/2014-January/msg00007.html
https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/abort-live-migration.html
https://www.server24.eu/private-cloud/complete-live-migration-vms-high-load/
OpenStack Rocky – Keystone – Requesting a token scoped to a different project

Because yet again I found the documentation on the OpenStack site to be wrong, here is what I have FINALLY managed to determine is the correct request to get a token issued to an admin user on another project. In my case I need this to create volume backups, because the backup API does not let you provide a project ID when creating a backup.

POST http://KeystoneIP:5000/v3/auth/tokens

{
  "auth": {
    "scope": {
      "project": {
        "domain": {
          "name": "Default"
        },
        "name": "OtherProjectName"
      }
    },
    "identity": {
      "password": {
        "user": {
          "domain": {
            "name": "Default"
          },
          "password": "password",
          "name": "admin"
        }
      },
      "methods": [
        "password"
      ]
    }
  }
}
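With the JSON above saved to a file (token-request.json is just a placeholder name), the request can be fired with curl; the scoped token comes back in the X-Subject-Token response header:

```
curl -si -H "Content-Type: application/json" \
  -d @token-request.json \
  http://KeystoneIP:5000/v3/auth/tokens | grep -i '^X-Subject-Token'
```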

Bonding in active-backup using linux bridges on Ubuntu 18

Because this was WAAYY harder to find decent doco on than I had ever expected, here is what worked for me.

I deleted the netplan config file at /etc/netplan/01-netcfg.yaml (you may also need to install the ifupdown package for /etc/network/interfaces to be used, since netplan is the default on Ubuntu 18):

rm /etc/netplan/01-netcfg.yaml

Ensure that ‘bonding’ appears in /etc/modules (it’s not there by default)

echo bonding >> /etc/modules

 

Here is /etc/network/interfaces

source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback

allow-hotplug ens3f0
iface ens3f0 inet manual
bond-master bond0

allow-hotplug ens3f1
iface ens3f1 inet manual
bond-master bond0

allow-hotplug bond0
iface bond0 inet static
address 172.16.103.12/24
gateway 172.16.103.254
mtu 9000

bond-mode active-backup
bond-miimon 100
bond-slaves none
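Once the bond is up (ifup bond0 or a reboot), the kernel's view of it confirms the mode and which slave is currently active:

```
grep -E 'Bonding Mode|Currently Active Slave|MII Status' /proc/net/bonding/bond0
```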