Configure MikroTik VPN using RADIUS and NPS on Windows AD

Configure NPS on a Domain controller:
(Based on Windows Server 2019)
Install NPS Role
Open the NPS admin console
Select “RADIUS server for Dial-Up or VPN Connections” and click “Configure VPN or Dial-Up”
Select “VPN Connections” and click Next
Click “Add” and fill in the details as required (the IP must be the IP of the router)
Take note of the Shared Secret
Click Next on the rest of the screens (add groups as required)

Note: Before users can authenticate using RADIUS, “Allow Access” must be selected on the “Dial-in” tab for the user in AD, as “Control access through NPS Network Policy” does not work, at least on Windows Server 2016 and above.

On the MikroTik:
Click “Radius” then “+”
Complete the following:
Service: ppp
Domain: domain
Address: IP of NPS Server
Secret: Password defined while setting up NPS
Src Address: The IP of the router’s interface (must match the IP specified while setting up NPS)

Add the following rules in the firewall:
chain: input, Action: Accept, Protocol: TCP, Dst. Port: 1723
chain: input, Action: Accept, Protocol: 47 (gre)
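
For reference, a rough RouterOS terminal equivalent of the steps above – a sketch only, with the addresses and shared secret as placeholders:

/radius add service=ppp domain=domain address=<NPS server IP> secret=<shared secret> src-address=<router IP given to NPS>
/ip firewall filter add chain=input action=accept protocol=tcp dst-port=1723 comment="PPTP control"
/ip firewall filter add chain=input action=accept protocol=gre comment="PPTP GRE"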

Source:
https://mivilisnet.wordpress.com/2018/10/01/how-to-integrate-your-mikrotik-router-with-windows-ad/

Create Bluestore OSD backed by SSD

Don’t take my word for it on the WAL sizing – check http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/

This script will create a spare 20G logical volume to use as the WAL for a second spinner later if you need it.

export SSD=sdc
export SPINNER=sda

# one volume group per device
vgcreate ceph-ssd-0 /dev/$SSD
vgcreate ceph-hdd-0 /dev/$SPINNER

# carve up the spinner – block-1 becomes the OSD data below
lvcreate --size 20G -n block-0 ceph-hdd-0
lvcreate -l 100%FREE -n block-1 ceph-hdd-0

# carve up the SSD – ssd-0 is this spinner's DB/WAL, ssd-1 is the spare 20G
# for a second spinner, ssd-2 becomes an SSD-only OSD
lvcreate --size 20G -n ssd-0 ceph-ssd-0
lvcreate --size 20G -n ssd-1 ceph-ssd-0
lvcreate -l 100%FREE -n ssd-2 ceph-ssd-0

# spinner OSD with its DB/WAL on the SSD, plus a pure-SSD OSD
ceph-volume lvm create --bluestore --data ceph-hdd-0/block-1 --block.db ceph-ssd-0/ssd-0
ceph-volume lvm create --bluestore --data ceph-ssd-0/ssd-2
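
When the second spinner turns up, the spare ssd-1 LV can back its DB/WAL the same way. A rough sketch only – the device name sdb and the VG name ceph-hdd-1 are placeholders:

export SPINNER2=sdb

vgcreate ceph-hdd-1 /dev/$SPINNER2
lvcreate -l 100%FREE -n block-0 ceph-hdd-1

ceph-volume lvm create --bluestore --data ceph-hdd-1/block-0 --block.db ceph-ssd-0/ssd-1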

KVM/QEMU/OpenStack – Manage a live migration

Cancel an in-progress migration:

virsh qemu-monitor-command {VMNAME} --pretty '{"execute":"migrate_cancel"}'

Allow virsh more downtime (if it can’t keep up with the rate RAM is being dirtied):

virsh migrate-setmaxdowntime VMNAME 2500


Check migration status

virsh domjobinfo instance-000002ac
Job type: Unbounded
Operation: Outgoing migration
Time elapsed: 1307956 ms
Data processed: 118.662 GiB
Data remaining: 9.203 MiB
Data total: 8.005 GiB
Memory processed: 118.662 GiB
Memory remaining: 9.203 MiB
Memory total: 8.005 GiB
Memory bandwidth: 41.294 MiB/s
Dirty rate: 35040 pages/s
Page size: 4096 bytes
Iteration: 197
Constant pages: 1751031
Normal pages: 31041965
Normal data: 118.416 GiB
Expected downtime: 3314 ms
Setup time: 70 ms
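
To keep an eye on a long-running migration, the same command can be wrapped in watch – a minimal sketch using the instance name from the output above:

watch -n 5 "virsh domjobinfo instance-000002ac | grep -E 'remaining|Dirty rate|downtime'"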


https://www.redhat.com/archives/libvirt-users/2014-January/msg00007.html
https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/abort-live-migration.html


https://www.server24.eu/private-cloud/complete-live-migration-vms-high-load/


OpenStack Rocky – Keystone – Requesting a token scoped to a different project

Because yet again I found the documentation on the OpenStack site to be wrong, here is what I have FINALLY managed to determine is the correct request to get a token issued to an admin user scoped to another project. In my case I need this to create volume backups, because the backup API does not allow you to provide a project ID when creating a backup.

POST the following body to http://KeystoneIP:5000/v3/auth/tokens

{
    "auth": {
        "scope": {
            "project": {
                "domain": {
                    "name": "Default"
                },
                "name": "OtherProjectName"
            }
        },
        "identity": {
            "password": {
                "user": {
                    "domain": {
                        "name": "Default"
                    },
                    "password": "password",
                    "name": "admin"
                }
            },
            "methods": [
                "password"
            ]
        }
    }
}
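
A minimal curl sketch for sending that request – assuming the JSON body above is saved as token-request.json; the scoped token comes back in the X-Subject-Token response header:

curl -si -H "Content-Type: application/json" \
  -d @token-request.json \
  http://KeystoneIP:5000/v3/auth/tokens | grep -i '^X-Subject-Token'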

Bonding in active-backup using linux bridges on Ubuntu 18

Because it was WAAYY more difficult than I had ever expected to find any decent doco on this, here is what worked for me.

I deleted the netplan config file at /etc/netplan/01-netcfg.yaml

rm /etc/netplan/01-netcfg.yaml

Ensure that ‘bonding’ appears in /etc/modules (it’s not there by default)

echo bonding >> /etc/modules


Here is /etc/network/interfaces

source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

allow-hotplug ens3f0
iface ens3f0 inet manual
    bond-master bond0

allow-hotplug ens3f1
iface ens3f1 inet manual
    bond-master bond0

allow-hotplug bond0
iface bond0 inet static
    address 172.16.103.12/24
    gateway 172.16.103.254
    mtu 9000
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none
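
To actually apply this on Ubuntu 18.04, /etc/network/interfaces is parsed by ifupdown, not netplan, so that package needs to be present. A rough sketch of the remaining steps (assuming a stock install where ifupdown is not yet installed):

apt install ifupdown
modprobe bonding              # or reboot so /etc/modules takes effect
ifup ens3f0 ens3f1 bond0
cat /proc/net/bonding/bond0   # confirm the active slave and link status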


Ceph Nautilus – “Required devices (block and data) not present for bluestore”

When using the new ceph-volume simple scan and activate commands on Ceph Nautilus after an upgrade from Luminous, I was getting the following message:

[root@ceph2 ~]# ceph-volume simple activate --all
--> activating OSD specified in /etc/ceph/osd/37-11af5440-dadf-40e3-8924-2bbad3ee5b58.json
Running command: /bin/ln -snf /dev/sdh2 /var/lib/ceph/osd/ceph-37/block
Running command: /bin/chown -R ceph:ceph /dev/sdh2
Running command: /bin/systemctl enable ceph-volume@simple-37-11af5440-dadf-40e3-8924-2bbad3ee5b58
Running command: /bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
--> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
Running command: /bin/systemctl enable --runtime ceph-osd@37
Running command: /bin/systemctl start ceph-osd@37
--> Successfully activated OSD 37 with FSID 11af5440-dadf-40e3-8924-2bbad3ee5b58
--> activating OSD specified in /etc/ceph/osd/11-8c5b0218-4d32-404f-b06b-f6e90906ab7d.json
--> Required devices (block and data) not present for bluestore
--> bluestore devices found: [u'data']
--> RuntimeError: Unable to activate bluestore OSD due to missing devices

You can see one volume activated while the other didn’t.
It turns out this is because one volume was configured for bluestore and the other wasn’t, and there is some sort of a bug in the ceph-volume scan command: when it writes out the /etc/ceph/osd/{OSDID}-GUID.json files it omits the “type”: “filestore” line for any non-bluestore disks, but ceph-volume simple activate assumes a volume is bluestore unless the json file says otherwise.
The quick and easy fix was to add the line “type”: “filestore”, to the json files for any non-bluestore disks and run ceph-volume simple activate --all again.
Time permitting I’ll hunt down the bug in the scan command and submit a pull request if it hasn’t already been done.
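
A quick sketch of that fix, patching the json file from the log above with python3 and re-running the activation (adjust the file name for your own OSDs):

python3 -c "import json; p='/etc/ceph/osd/11-8c5b0218-4d32-404f-b06b-f6e90906ab7d.json'; d=json.load(open(p)); d['type']='filestore'; json.dump(d, open(p,'w'), indent=4)"
ceph-volume simple activate --all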

Metadata service in DHCP namespace

Some gold info
http://kimizhang.com/metadata-service-in-dhcp-namespace/

What’s inside an APT package?

sudo dpkg --listfiles docker-ce

Generate OTP keys in Linux – Extracting FreeOTP keys

apt install oathtool
oathtool --totp -b -d 6 KY3OUPMUYWCKS53F

Linux: TOTP Password Generator

https://github.com/philipsharp/FreeOTPDecoder
Enable USB debugging – https://www.kingoapp.com/root-tutorials/how-to-enable-usb-debugging-mode-on-android.htm
Backup the FreeOTP app – adb backup -f ~/freeotp.ab -noapk org.fedorahosted.freeotp
Decompress the backup – dd if=freeotp.ab bs=1 skip=24 | python -c "import zlib,sys;sys.stdout.write(zlib.decompress(sys.stdin.read()))" | tar -xvf -
Decode the keys – https://github.com/philipsharp/FreeOTPDecoder
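
Note the decompress one-liner above is Python 2; if python on your machine is Python 3, the same pipeline needs the binary stdin/stdout handles – a minimal equivalent:

dd if=freeotp.ab bs=1 skip=24 | python3 -c "import zlib,sys;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))" | tar -xvf -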

OpenStack – Manually edit a VM

Find the host the VM is running on and the instance ID (use the console view to get the instance ID).

cp /etc/libvirt/qemu/instance-0000030a.xml .
edit instance-0000030a.xml to be what you need it to be

While the VM is running (warning: this will forcibly power off the VM):

virsh destroy instance-0000030a        # hard power-off the running VM
virsh undefine instance-0000030a       # drop the old definition
virsh define instance-0000030a.xml     # load the edited XML
virsh start instance-0000030a          # boot it again