Converting a Generation 2 Hyper-V VM to boot in KVM/OpenStack

Bringing a Hyper-V Gen 2 VM into OpenStack

Convert the VHD to raw and write it straight into Ceph (-f vpc is the older .vhd format; for a .vhdx file use -f vhdx):
qemu-img convert -f vpc -O raw AC-TS01-C.VHD rbd:volumes/AC-TS01.raw
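
It's worth a quick sanity check that the image actually landed in the pool; both of these are standard invocations, using the same pool/image name as above:

qemu-img info rbd:volumes/AC-TS01.raw
rbd info volumes/AC-TS01.raw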

Attach the old (existing) disk and a new blank disk to a Linux box.
Install Clonezilla from apt.
On the new disk, create a new partition. (Make it an MBR partition, not GPT; the Gen 2 VM was UEFI/GPT, and switching to MBR is key to being able to boot under KVM's default BIOS.)
fdisk /dev/sdX
n        (new partition)
p        (primary)
(accept the defaults for partition number and first/last sector)
w        (write the table and quit)
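
If you'd rather not drive fdisk interactively, sfdisk can do the same thing in one shot (my sketch, not part of the original procedure; /dev/sdX is the new blank disk and type 7 is NTFS):

echo ',,7' | sudo sfdisk /dev/sdX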
Clone the Windows volume with Clonezilla (just the big volume; ignore the small piss-ant recovery volumes).
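
If you'd rather skip the Clonezilla UI, a raw partition copy does the same job. A sketch, assuming the Windows volume is /dev/sdX2 on the old disk and the new MBR partition /dev/sdY1 is at least as large:

sudo dd if=/dev/sdX2 of=/dev/sdY1 bs=4M status=progress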
You might then need to mount this volume on a Windows box to check the NTFS partition is OK. If it isn't showing a drive letter: I had some success resizing the partition with EaseUS Partition Master, which presumably re-wrote the NTFS partition headers, and then the disk appeared in Windows.
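
ntfsfix from the ntfs-3g package may achieve the same cleanup from Linux without needing a Windows box; that's an assumption on my part, since it only repairs common NTFS inconsistencies and resets the journal:

sudo ntfsfix /dev/sdY1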
THEN you need to boot the OS. It'll fail, because the boot records still point at the old UEFI/GPT layout; fix them as follows.

Attach a Windows Server ISO and boot into the recovery environment. (It needs to be the recovery environment for the matching OS version; I tried using a 2016 DVD to recover 2012 R2 and it didn't work.)
Run:

bcdboot C:\windows
or, writing the boot files to C: for all firmware types:
bcdboot C:\windows /s c: /f ALL

BOOTREC /FIXMBR

BOOTREC /FIXBOOT

Then reboot and all should be good.
If you haven't pre-installed the virtio drivers, you may need to boot the disk on a SATA bus first, install the virtio drivers, then switch the bus back to virtio.
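
If the disk was imported as a Glance image, one way to get the SATA bus for that first boot is the standard hw_disk_bus image property (the image name here is an example; instances pick the property up when they are booted):

openstack image set --property hw_disk_bus=sata ac-ts01-image
openstack image set --property hw_disk_bus=virtio ac-ts01-image   # once the virtio drivers are in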

Fix sluggish mouse cursor

I find the cursor speed is sometimes too slow on my Ubuntu machine with a Logitech MX Ergo trackball.

For reasons I don't recall, I wanted to set the speed with a script instead of the settings application:


#!/bin/bash
# Crank libinput pointer acceleration to maximum (the range is -1 to 1) for the MX Ergo
xinput set-prop "pointer:Logitech MX Ergo" "libinput Accel Speed" 1
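
If the set-prop line complains, the device name probably doesn't match yours; these stock xinput commands show what X calls the device and which properties it exposes:

xinput list --name-only
xinput list-props "pointer:Logitech MX Ergo"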

Running CollectD as a container

Dockerfile

FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y --no-install-recommends collectd
RUN apt-get install -y python-pip
RUN pip install collectd-gnocchi
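
Build the image from the directory containing the Dockerfile (the collectd-gnocchi tag is just my choice of name, reused in the run command below):

docker build -t collectd-gnocchi .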

Running the container

docker run -it --net=host --privileged \
    -v $(pwd)/collectd.conf:/etc/collectd/collectd.conf \
    collectd-gnocchi collectd -C /etc/collectd/collectd.conf -f

Vagrant: Error while activating network: Call to virNetworkCreate failed: internal error: Network is already in use by interface

"Network is already in use by interface" threw me for a minute.

The issue was that an IP address specified in the Vagrantfile was already in use by the host I was on, so vagrant-libvirt tried to bring up a network that collided with the existing br0 bridge. I changed the IPs in the Vagrantfile and the problem was solved.


==> worker1: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'central'
machine. Please handle this error then try again:

Error while activating network: Call to virNetworkCreate failed: internal error: Network is already in use by interface br0.

An error occurred while executing the action on the 'worker1'
machine. Please handle this error then try again:

Error while activating network: Call to virNetworkCreate failed: internal error: Network is already in use by interface br0.

An error occurred while executing the action on the 'worker2'
machine. Please handle this error then try again:
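
Before picking new IPs for the Vagrantfile, it's worth seeing what the host already has in use; these stock libvirt/iproute2 commands list the defined networks and the addresses on the clashing bridge:

virsh net-list --all
ip -br addr show br0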

Working with the OpenStack metadata service when using OVN

Metadata agent

Running the agent

neutron-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/neutron_ovn_metadata_agent.ini

Configure neutron_ovn_metadata_agent.ini on the compute node(s)

[ovn]
ovn_nb_connection=tcp:{{OVN Controller IP}}:6641
ovn_sb_connection=tcp:{{OVN Controller IP}}:6642
ovn_metadata_enabled = true

Configure neutron.conf on the Neutron server

[ovn]
ovn_metadata_enabled = true
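
A quick way to check it's all working (assuming a default setup): the agent creates one ovnmeta-<network-uuid> namespace on each compute node that hosts a port on that network, and instances should then be able to reach the usual metadata address.

# On the compute node
ip netns | grep ovnmeta

# From inside an instance
curl http://169.254.169.254/openstack/latest/meta_data.json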


Reading

https://docs.openstack.org/networking-ovn/latest/admin/refarch/refarch.html – For a nice diagram on how the bits fit together

https://man7.org/linux/man-pages/man7/ovn-architecture.7.html – Some more in depth technical secrets hidden in this doc

https://patchwork.ozlabs.org/project/openvswitch/patch/1493118328-21311-1-git-send-email-dalvarez@redhat.com/

Specifically, the example of localports below:

- One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
- Two hypervisors: HV1 and HV2
- p1 will be in HV1 (OVS port with external-id:iface-id="p1")
- p2 will be in HV2 (OVS port with external-id:iface-id="p2")
- lp will be in both (OVS port with external-id:iface-id="lp")
- p1 should be able to reach p2 and vice versa
- lp on HV1 should be able to reach p1 but not p2
- lp on HV2 should be able to reach p2 but not p1


ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 p1
ovn-nbctl lsp-add sw0 p2
ovn-nbctl lsp-add sw0 lp
ovn-nbctl lsp-set-addresses p1 "00:00:00:aa:bb:10 10.0.1.10"
ovn-nbctl lsp-set-addresses p2 "00:00:00:aa:bb:20 10.0.1.20"
ovn-nbctl lsp-set-addresses lp "00:00:00:aa:bb:30 10.0.1.30"
ovn-nbctl lsp-set-type lp localport

add_phys_port() {
    name=$1
    mac=$2
    ip=$3
    mask=$4
    gw=$5
    iface_id=$6
    sudo ip netns add $name
    sudo ovs-vsctl add-port br-int $name -- set interface $name type=internal
    sudo ip link set $name netns $name
    sudo ip netns exec $name ip link set $name address $mac
    sudo ip netns exec $name ip addr add $ip/$mask dev $name
    sudo ip netns exec $name ip link set $name up
    sudo ip netns exec $name ip route add default via $gw
    sudo ovs-vsctl set Interface $name external_ids:iface-id=$iface_id
}

# Add p1 to HV1, p2 to HV2 and localport to both

# HV1
add_phys_port p1 00:00:00:aa:bb:10 10.0.1.10 24 10.0.1.1 p1
add_phys_port lp 00:00:00:aa:bb:30 10.0.1.30 24 10.0.1.1 lp

$ sudo ip netns exec p1 ping -c 2 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.
64 bytes from 10.0.1.20: icmp_seq=1 ttl=64 time=0.738 ms
64 bytes from 10.0.1.20: icmp_seq=2 ttl=64 time=0.502 ms

--- 10.0.1.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.502/0.620/0.738/0.118 ms

$ sudo ip netns exec lp ping -c 2 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=64 time=0.187 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=64 time=0.032 ms

--- 10.0.1.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.032/0.109/0.187/0.078 ms


$ sudo ip netns exec lp ping -c 2 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.

--- 10.0.1.20 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms


$ sudo ovs-ofctl dump-flows br-int | grep table=32
cookie=0x0, duration=141.939s, table=32, n_packets=2, n_bytes=196,
idle_age=123, priority=150,reg14=0x3,reg15=0x2,metadata=0x7 actions=drop
cookie=0x0, duration=141.939s, table=32, n_packets=2, n_bytes=196,
idle_age=129, priority=100,reg15=0x2,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],set_field:0x2->tun_metadata0,move:NXM_NX_REG14[0..14]->NXM_NX_TUN_METADATA0[16..30],output:59



# On HV2

add_phys_port p2 00:00:00:aa:bb:20 10.0.1.20 24 10.0.1.1 p2
add_phys_port lp 00:00:00:aa:bb:30 10.0.1.30 24 10.0.1.1 lp

$ sudo ip netns exec p2 ping -c 2 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=64 time=0.673 ms

--- 10.0.1.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.673/0.741/0.810/0.073 ms

$ sudo ip netns exec lp ping -c 2 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.
64 bytes from 10.0.1.20: icmp_seq=1 ttl=64 time=0.357 ms
64 bytes from 10.0.1.20: icmp_seq=2 ttl=64 time=0.062 ms

--- 10.0.1.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.062/0.209/0.357/0.148 ms

$ sudo ip netns exec lp ping -c 2 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.

--- 10.0.1.10 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

$ sudo ovs-ofctl dump-flows br-int | grep table=32
cookie=0x0, duration=24.169s, table=32, n_packets=2, n_bytes=196,
idle_age=12, priority=150,reg14=0x3,reg15=0x1,metadata=0x7 actions=drop
cookie=0x0, duration=24.169s, table=32, n_packets=2, n_bytes=196,
idle_age=14, priority=100,reg15=0x1,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],set_field:0x1->tun_metadata0,move:NXM_NX_REG14[0..14]->NXM_NX_TUN_METADATA0[16..30],output:40

Run cryptominer while the screen is locked

dbus-monitor --session "type=signal,interface=org.gnome.ScreenSaver" |
while read MSG; do
    # Each lock/unlock signal carries a boolean; pull it out of the message
    LOCK_STAT=$(echo "$MSG" | grep boolean | awk '{print $2}')
    if [[ "$LOCK_STAT" == "true" ]]; then
        echo "was locked"
        killall ethdcrminer64
        # Start the miner in a detached screen session
        screen -d -m /home/user/Downloads/Claymore/ethdcrminer64 -epool exp-us.dwarfpool.com:8018 -ewal 0xaddresshere/m3 -epsw x -allpools 1 -gser 2 -allcoins exp
    else
        echo "was un-locked"
        killall ethdcrminer64
    fi
done
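
Save it as something like lock-miner.sh (my name for it) and leave it running in the background of your desktop session:

chmod +x lock-miner.sh
nohup ./lock-miner.sh >/tmp/lock-miner.log 2>&1 &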

OpenStack Nova – InstanceNotFound – Modify the HardReboot function to recreate the domain from XML if it doesn't exist

I've not finished writing this article… If you are interested in discussing this further, feel free to hit me up: cory@hawkless.id.au

So in the process of managing an OpenStack cluster where you have Pet VMs that are not treated like Cattle (read about Pets vs Cattle; TL;DR – Pets = important VMs that can't die, Cattle = VMs you can easily kill and replace with freshly built ones), you will almost certainly come across a situation where a VM, for whatever reason, gets stuck in an error state.
So the VM exists in the database, you can see it in the console, but when you go to power it on it just fails.
You check the host that it's assigned to and you find an error like '..InstanceNotFound..' UGH!? WHY?
I'm still not sure WHY OpenStack gets itself into this tangle, and up until now I've not been sure how to get it out either. The only solution has been to delete the instance definition and create it from scratch, which can be a giant pain in the ass.
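
The manual workaround, assuming you still have a copy of the instance's libvirt domain XML (it has to come from a backup, since libvirt removes /etc/libvirt/qemu/<domain>.xml when the domain is undefined), is to redefine the domain on the assigned compute node so libvirt can find it again. The instance name below is a made-up placeholder:

virsh define /path/to/backup/instance-0000xxxx.xml
virsh list --all | grep instance-0000xxxx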


2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server [req-4e2f492f-e9a4-4bcf-82d9-34746b93a074 f2800cf724264988aab44aa21bf1dae4 40c2b46fb47c4ee7ac076b259c4e0814 - default default] Exception during message handling: InstanceNotFound: Instance ec88fe77-ce03-4cfb-a377-deef13cde2cc could not be found.
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in wrapped
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self.force_reraise()
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 67, in wrapped
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 186, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server "Error: %s", e, instance=instance)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self.force_reraise()
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 156, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 976, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 202, in decorated_function
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4222, in resize_instance
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self._revert_allocation(context, instance, migration)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server self.force_reraise()
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4219, in resize_instance
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server instance_type, clean_shutdown)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4257, in _resize_instance
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server timeout, retry_interval)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 8032, in migrate_disk_and_power_off
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server disk_info = self._get_instance_disk_info(instance, block_device_info)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 7809, in _get_instance_disk_info
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server guest = self._host.get_guest(instance)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 526, in get_guest
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server return libvirt_guest.Guest(self._get_domain(instance))
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 546, in _get_domain
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server raise exception.InstanceNotFound(instance_id=instance.uuid)
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server InstanceNotFound: Instance ec88fe77-ce03-4cfb-a377-deef13cde2cc could not be found.
2020-05-25 15:19:11.240 79840 ERROR oslo_messaging.rpc.server
2020-05-25 15:19:15.655 79840 INFO nova.compute.resource_tracker [req-4e2f492f-e9a4-4bcf-82d9-34746b93a074 f2800cf724264988aab44aa21bf1dae4