Working with the OpenStack metadata service when using OVN

Metadata agent

Running the agent

neutron-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/neutron_ovn_metadata_agent.ini

Configure neutron_ovn_metadata_agent.ini.j2 on the compute node(s)

[ovn]
ovn_nb_connection=tcp:{{OVN Controller IP}}:6641
ovn_sb_connection=tcp:{{OVN Controller IP}}:6642
ovn_metadata_enabled = true

Configure neutron.conf on the Neutron server

[ovn]
ovn_metadata_enabled = true
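The {{OVN Controller IP}} value in the template above is a placeholder filled in at deploy time. A minimal sketch of rendering it, assuming a plain string substitution rather than the full Jinja2/Ansible pipeline (the template text and IP below are illustrative):

```python
# Render the {{OVN Controller IP}} placeholder in the agent config
# template. Plain replace is enough for a single placeholder; a real
# deployment would use Jinja2 or Ansible templating.
template = """[ovn]
ovn_nb_connection=tcp:{{OVN Controller IP}}:6641
ovn_sb_connection=tcp:{{OVN Controller IP}}:6642
ovn_metadata_enabled = true
"""

def render(text, controller_ip):
    return text.replace("{{OVN Controller IP}}", controller_ip)

config = render(template, "192.0.2.10")  # illustrative controller IP
```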


Reading

https://docs.openstack.org/networking-ovn/latest/admin/refarch/refarch.html – For a nice diagram on how the bits fit together

https://man7.org/linux/man-pages/man7/ovn-architecture.7.html – Some more in depth technical secrets hidden in this doc

https://patchwork.ozlabs.org/project/openvswitch/patch/1493118328-21311-1-git-send-email-dalvarez@redhat.com/

Specifically the example of local ports

- One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
- Two hypervisors: HV1 and HV2
- p1 will be in HV1 (OVS port with external-id:iface-id="p1")
- p2 will be in HV2 (OVS port with external-id:iface-id="p2")
- lp will be in both (OVS port with external-id:iface-id="lp")
- p1 should be able to reach p2 and vice versa
- lp on HV1 should be able to reach p1 but not p2
- lp on HV2 should be able to reach p2 but not p1


ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 p1
ovn-nbctl lsp-add sw0 p2
ovn-nbctl lsp-add sw0 lp
ovn-nbctl lsp-set-addresses p1 "00:00:00:aa:bb:10 10.0.1.10"
ovn-nbctl lsp-set-addresses p2 "00:00:00:aa:bb:20 10.0.1.20"
ovn-nbctl lsp-set-addresses lp "00:00:00:aa:bb:30 10.0.1.30"
ovn-nbctl lsp-set-type lp localport
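The expected reachability above boils down to one rule: a localport never sends or receives across a tunnel, so it only reaches ports present on its own hypervisor. A toy model of that rule (helper and data layout are illustrative, not OVN code):

```python
# Sketch of the localport reachability rule from the sw0 example.
# port_location maps each port to the hypervisor(s) hosting it.
port_location = {
    "p1": {"HV1"},
    "p2": {"HV2"},
    "lp": {"HV1", "HV2"},  # localport instantiated on both hypervisors
}

LOCALPORTS = {"lp"}

def can_reach(src, src_hv, dst):
    """True if traffic from src (running on src_hv) may reach dst."""
    if src in LOCALPORTS or dst in LOCALPORTS:
        # Localport traffic never crosses a tunnel: both endpoints
        # must be present on the same hypervisor.
        return src_hv in port_location[dst]
    return True  # regular ports on the same logical switch
```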

add_phys_port() {
    name=$1
    mac=$2
    ip=$3
    mask=$4
    gw=$5
    iface_id=$6
    sudo ip netns add $name
    sudo ovs-vsctl add-port br-int $name -- set interface $name type=internal
    sudo ip link set $name netns $name
    sudo ip netns exec $name ip link set $name address $mac
    sudo ip netns exec $name ip addr add $ip/$mask dev $name
    sudo ip netns exec $name ip link set $name up
    sudo ip netns exec $name ip route add default via $gw
    sudo ovs-vsctl set Interface $name external_ids:iface-id=$iface_id
}

# Add p1 to HV1, p2 to HV2 and localport to both

# HV1
add_phys_port p1 00:00:00:aa:bb:10 10.0.1.10 24 10.0.1.1 p1
add_phys_port lp 00:00:00:aa:bb:30 10.0.1.30 24 10.0.1.1 lp

$ sudo ip netns exec p1 ping -c 2 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.
64 bytes from 10.0.1.20: icmp_seq=1 ttl=64 time=0.738 ms
64 bytes from 10.0.1.20: icmp_seq=2 ttl=64 time=0.502 ms

--- 10.0.1.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.502/0.620/0.738/0.118 ms

$ sudo ip netns exec lp ping -c 2 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=64 time=0.187 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=64 time=0.032 ms

--- 10.0.1.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.032/0.109/0.187/0.078 ms


$ sudo ip netns exec lp ping -c 2 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.

--- 10.0.1.20 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms


$ sudo ovs-ofctl dump-flows br-int | grep table=32
cookie=0x0, duration=141.939s, table=32, n_packets=2, n_bytes=196,
idle_age=123, priority=150,reg14=0x3,reg15=0x2,metadata=0x7 actions=drop
cookie=0x0, duration=141.939s, table=32, n_packets=2, n_bytes=196,
idle_age=129, priority=100,reg15=0x2,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],set_field:0x2->tun_metadata0,move:NXM_NX_REG14[0..14]->NXM_NX_TUN_METADATA0[16..30],output:59
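In the dump above, the priority-150 flow drops traffic sourced from the localport (reg14=0x3) toward the remote port before the priority-100 flow can output it to the tunnel. A rough way to pull the match fields and actions out of such a line (quick sketch, not a full ofctl grammar):

```python
# Rough parser for one "ovs-ofctl dump-flows" line, as printed above
# (the line is joined back onto one line here).
flow = ("cookie=0x0, duration=141.939s, table=32, n_packets=2, n_bytes=196, "
        "idle_age=123, priority=150,reg14=0x3,reg15=0x2,metadata=0x7 actions=drop")

def parse_flow(line):
    """Split a dump-flows line into its match fields and its actions."""
    match, actions = line.split(" actions=", 1)
    # The last comma-space-separated chunk holds priority plus matches.
    fields = dict(
        kv.split("=", 1) for kv in match.split(", ")[-1].split(",") if "=" in kv
    )
    return fields, actions

fields, actions = parse_flow(flow)
```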



# On HV2

add_phys_port p2 00:00:00:aa:bb:20 10.0.1.20 24 10.0.1.1 p2
add_phys_port lp 00:00:00:aa:bb:30 10.0.1.30 24 10.0.1.1 lp

$ sudo ip netns exec p2 ping -c 2 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=64 time=0.673 ms

--- 10.0.1.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.673/0.741/0.810/0.073 ms

$ sudo ip netns exec lp ping -c 2 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.
64 bytes from 10.0.1.20: icmp_seq=1 ttl=64 time=0.357 ms
64 bytes from 10.0.1.20: icmp_seq=2 ttl=64 time=0.062 ms

--- 10.0.1.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.062/0.209/0.357/0.148 ms

$ sudo ip netns exec lp ping -c 2 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.

--- 10.0.1.10 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

$ sudo ovs-ofctl dump-flows br-int | grep table=32
cookie=0x0, duration=24.169s, table=32, n_packets=2, n_bytes=196,
idle_age=12, priority=150,reg14=0x3,reg15=0x1,metadata=0x7 actions=drop
cookie=0x0, duration=24.169s, table=32, n_packets=2, n_bytes=196,
idle_age=14, priority=100,reg15=0x1,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],set_field:0x1->tun_metadata0,move:NXM_NX_REG14[0..14]->NXM_NX_TUN_METADATA0[16..30],output:40

Run cryptominer while the screen is locked

dbus-monitor --session "type=signal,interface=org.gnome.ScreenSaver" |
while read MSG; do
    LOCK_STAT=$(echo $MSG | grep boolean | awk '{print $2}')
    if [[ "$LOCK_STAT" == "true" ]]; then
        echo "was locked"
        killall ethdcrminer64
        screen -d -m /home/user/Downloads/Claymore/ethdcrminer64 -epool exp-us.dwarfpool.com:8018 -ewal 0xaddresshere/m3 -epsw x -allpools 1 -gser 2 -allcoins exp
    else
        echo "was un-locked"
        killall ethdcrminer64
    fi
done
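The awk in the loop just grabs the word after `boolean` from the signal body that dbus-monitor prints for the ActiveChanged signal. The same extraction in Python (sample lines are illustrative):

```python
# Extract the lock state from a dbus-monitor output line such as
# "   boolean true", which follows org.gnome.ScreenSaver's
# ActiveChanged signal.
def lock_state(msg):
    """Return 'true'/'false' if msg carries a boolean value, else None."""
    words = msg.split()
    if "boolean" in words:
        return words[words.index("boolean") + 1]
    return None
```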

OpenStack Nova – InstanceNotFound – Modify the HardReboot function to recreate the domain from XML if it doesn't exist

I've not finished writing this article… If you are interested in discussing this further, feel free to hit me up: cory@hawkless.id.au

So in the process of managing an OpenStack cluster where you have pet VMs that are not treated like cattle (read about pets vs cattle here; TL;DR – pets = important VMs that can't die; cattle = VMs you can easily kill and replace with a freshly built one), you will almost certainly come across a situation where a VM, for whatever reason, gets stuck in an error state.
So the VM exists in the database, you can see it in the console, but when you go to power it on it just fails.
You check the host that it's assigned to and you find an error like this: '..InstanceNotFound..' UGH!? WHY?
I'm still not sure WHY OpenStack gets itself into this tangle, and until now I've not been sure how to get it out either. The only solution has been to delete the instance definition and create it from scratch, which can be a giant pain in the ass.


2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server [req-4e2f492f-e9a4-4bcf-82d9-34746b93a074 f2800cf724264988aab44aa21bf1dae4 40c2b46fb47c4ee7ac076b259c4e0814 - default default] Exception during message handling: InstanceNotFound: Instance ec88fe77-ce03-4cfb-a377-deef13cde2cc could not be found.
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in wrapped
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self.force_reraise()
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 67, in wrapped
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 186, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server "Error: %s", e, instance=instance)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self.force_reraise()
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 156, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 976, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 202, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4222, in resize_instance
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self._revert_allocation(context, instance, migration)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self.force_reraise()
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4219, in resize_instance
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server instance_type, clean_shutdown)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4257, in _resize_instance
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server timeout, retry_interval)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 8032, in migrate_disk_and_power_off
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server disk_info = self._get_instance_disk_info(instance, block_device_info)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 7809, in _get_instance_disk_info
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server guest = self._host.get_guest(instance)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 526, in get_guest
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return libvirt_guest.Guest(self._get_domain(instance))
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 546, in _get_domain
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server raise exception.InstanceNotFound(instance_id=instance.uuid)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server InstanceNotFound: Instance ec88fe77-ce03-4cfb-a377-deef13cde2cc could not be found.
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server
2020-05-25 15:19:15.655 79840 INFO nova.compute.resource_tracker [req-4e2f492f-e9a4-4bcf-82d9-34746b93a074 f2800cf724264988aab44aa21bf1dae4
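The fix I'm exploring is to make the reboot path tolerate a missing domain: where _get_domain raises InstanceNotFound, re-define the domain from a saved copy of its XML instead of blowing up. A stubbed sketch of the idea (FakeHost, define_from_xml and get_or_recreate_domain are illustrative stand-ins; the real change would live in nova/virt/libvirt and call libvirt's defineXML):

```python
class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""

class FakeHost:
    """Illustrative stub for nova.virt.libvirt.host.Host."""
    def __init__(self):
        self.domains = {}

    def get_domain(self, uuid):
        # Mirrors host._get_domain(), which raises when libvirt has
        # no definition for the instance (the traceback above).
        if uuid not in self.domains:
            raise InstanceNotFound(uuid)
        return self.domains[uuid]

    def define_from_xml(self, uuid, xml):
        # Stand-in for conn.defineXML(xml) on the libvirt connection.
        self.domains[uuid] = xml
        return self.domains[uuid]

def get_or_recreate_domain(host, uuid, backup_xml):
    """Return the domain; re-define it from saved XML if libvirt lost it."""
    try:
        return host.get_domain(uuid)
    except InstanceNotFound:
        return host.define_from_xml(uuid, backup_xml)
```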

Bulk add IPs to Netbox IPAM and attach to an interface

I need to add a /30 IP to every interface on a switch to enable host-based routing in a virtualization cluster.
I'm lazy and didn't want to key them all in by hand, so I wrote this script in Python 3.

It uses a starting interface ID (assuming that all of the interfaces on your switch are in numerical order) and a starting IP.

It loops through each interface ID until it reaches the number defined in the while loop, which should be set to the last interface ID of your device.

It adds 4 to the last octet of the IP address (assuming you are working with /30s).


import requests
import json



url="http://192.168.60.20:8000/api/ipam/ip-addresses/"
headers = {'Content-type': 'application/json', 'Authorization': 'Token 0123456789abcdef0123456789abcdef01234567'}

lastDigit=201
interfaceID=37
while interfaceID<47:
    data={
        "address": "10.72.72."+str(lastDigit)+"/30",
        "status": "active",
        "interface": interfaceID
    }

    response=requests.post(url,data=json.dumps(data), headers=headers)
    print(data)
    print(response.status_code)
    interfaceID+=1
    lastDigit+=4
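The "add 4 to the last octet" trick only works while the final octet stays below 256; Python's ipaddress module can do the same walk overflow-safely. A sketch using the same starting values as the script (helper name is illustrative):

```python
import ipaddress

# Walk successive /30s starting at 10.72.72.201, one per interface ID,
# mirroring the lastDigit += 4 loop above but safe across octet
# boundaries.
def addresses_for_interfaces(start_ip, first_id, last_id):
    ip = ipaddress.ip_address(start_ip)
    result = {}
    for iface_id in range(first_id, last_id):
        result[iface_id] = f"{ip}/30"
        ip += 4  # jump to the next /30 block
    return result

mapping = addresses_for_interfaces("10.72.72.201", 37, 47)
```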

Swap space notes

What's using swap space

for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | sort -k 2 -n -r | less
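The one-liner above pulls the Name: and VmSwap: fields out of each /proc/&lt;pid&gt;/status file and sorts by swap use. The same field extraction in Python, against an illustrative status snippet:

```python
# Parse the Name: and VmSwap: fields from /proc/<pid>/status content,
# as the awk one-liner above does. sample_status is illustrative.
sample_status = "Name:\tfirefox\nPid:\t1234\nVmSwap:\t  20480 kB\n"

def name_and_swap(status_text):
    """Return the Name and VmSwap fields from a status file's text."""
    info = {}
    for line in status_text.splitlines():
        key, _, value = line.partition(":")
        if key in ("Name", "VmSwap"):
            info[key] = value.strip()
    return info
```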

Who is eating all of my RAM?

ps aux --sort=-%mem | head

Where are my swap files

 cat /proc/swaps

How to add more swap space

1. Create an empty file:
This file will contain the virtual memory contents, so make the file big enough for your needs. This one creates a 1 GB file, which means +1 GB of swap space for your system:

dd if=/dev/zero of=/media/fasthdd/swapfile.img bs=1024 count=1M

If you want to make a 3 GB file, change the count value to count=3M. See man dd for more information.
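As a sanity check on the dd arithmetic, the resulting file size is simply bs × count (helper name is illustrative):

```python
# bs=1024 with count=1M means 1024 * 1048576 bytes = 1 GiB.
def dd_size_bytes(bs, count):
    """Size in bytes of a file written by dd with these bs/count values."""
    return bs * count

one_gib = dd_size_bytes(1024, 1024 * 1024)      # count=1M
three_gib = dd_size_bytes(1024, 3 * 1024 * 1024)  # count=3M
```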

2. Bake the swap file:
The following command makes a “swap filesystem” inside your fresh swap file.

mkswap /media/fasthdd/swapfile.img

3. Bring it up on boot:
To make sure your new swap space is activated at boot, add it to the filesystem configuration file /etc/fstab. Add it at the end of the file; this is recommended because other filesystems (at least the one that contains the swap file) must be mounted in read-write mode before we can access any files.

# Add this line to /etc/fstab
/media/fasthdd/swapfile.img swap swap sw 0 0

4. Activate:
You can either reboot your computer or activate the new swap file by hand with the following command:

swapon /media/fasthdd/swapfile.img

Original articles

https://askubuntu.com/questions/178712/how-to-increase-swap-space
https://www.cyberciti.biz/faq/linux-which-process-is-using-swap/

EasyRSA – Make a certificate and copy to ansible staging dir

I use this script on my CA server to create a certificate for each new server we provision. This allows our internal PKI to function.

This script creates a certificate, then copies it to the Ansible server where it can be deployed to the destination host.

Obviously you’ll need to take the necessary precautions around key security

ISSUE_NAME=$1.domain.local

cd /home/admin/EasyRSA-3.0.5/
/home/admin/EasyRSA-3.0.5/easyrsa build-server-full $ISSUE_NAME nopass
ssh edpk-ansible..local 'mkdir -p /home/admin/ansible/files/'$1'/'
scp /home/admin/EasyRSA-3.0.5/pki/issued/$ISSUE_NAME.crt edpk-ansible..local:/home/admin/ansible/files/$1/$1.crt
scp /home/admin/EasyRSA-3.0.5/pki/private/$ISSUE_NAME.key edpk-ansible..local:/home/admin/ansible/files/$1/$1.key
cd ~

Ubuntu interfaces file examples

Example 1 – Includes some static routes and manually specified IPs

auto lo
iface lo inet static
address 103.90.59.9/32

auto ens3
iface ens3 inet static
address 172.2.1.17
network 172.2.1.0
netmask 255.255.254.0
up route add -net 172.2.0.0 netmask 255.255.0.0 gw 172.2.1.1

iface ens3 inet6 static
address 2405:cc:ee:110::7
netmask 64
autoconf 0
accept_ra 0
gateway 2405:cc:ee:110:ff:ff

auto ens4
iface ens4 inet static
address 172.23.2.12
network 172.23.2.0
netmask 255.255.255.0
gateway 172.23.2.254

source /etc/network/interfaces.d/*.cfg
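The network/netmask lines are easy to get wrong by hand; Python's ipaddress module can derive the network from the address and mask. Note that 172.2.1.17 with netmask 255.255.254.0 actually sits in 172.2.0.0/23, so the network line in example 1 arguably should read 172.2.0.0:

```python
import ipaddress

# Derive the network for the ens3 stanza in example 1 from its
# address and netmask.
iface = ipaddress.ip_interface("172.2.1.17/255.255.254.0")
network = str(iface.network)          # the enclosing /23 network
netmask = str(iface.network.netmask)  # dotted-quad form of the mask
```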