A quick and dirty script to back up OpenStack config files

You will of course need to run ssh-copy-id root@hostname for each machine you want to connect to before running this script.
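Something like this pushes your key to every host in one hit (the server list matches the one used in the script below):

for s in vm-os-ks01 vm-os-glance01 vm-os-dash01 vm-os-net01 vm-os-net02 vm-os-cinder01 vm-os-radosgw01
do
    ssh-copy-id root@$s
done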

 

#!/bin/bash
# Config directories to grab and the servers to grab them from
osdirs=( "/etc/nova" "/etc/neutron" "/etc/cinder" "/etc/glance" "/etc/keystone" "/etc/httpd" )
servers=( "vm-os-ks01" "vm-os-glance01" "vm-os-dash01" "vm-os-net01" "vm-os-net02" "vm-os-cinder01" "vm-os-radosgw01" )

for s in "${servers[@]}"
do
    for d in "${osdirs[@]}"
    do
        echo "Server $s Dir $d"
        # make sure the destination parent exists, otherwise scp -r fails on the first run
        mkdir -p "/root/backups/${s}$(dirname "$d")"
        scp -r "root@${s}:${d}" "/root/backups/${s}${d}"
    done
done

Linux – get server hardware information

Get the model number of a server; tested on HP DL380s and DL180s

 

[root@cephosd4 ~]# dmidecode | grep "System Information" -A20
System Information
        Manufacturer: HP
        Product Name: ProLiant DL180 Gen9
        Version: Not Specified
        Serial Number: xxxxxxx
        UUID: xxx-3335-5541-xxx-313930303734
        Wake-up Type: Power Switch
        SKU Number: 778453-B21
        Family: ProLiant
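If you only want a single field, dmidecode can also be queried for it directly:

[root@cephosd4 ~]# dmidecode -s system-product-name
ProLiant DL180 Gen9
[root@cephosd4 ~]# dmidecode -s system-serial-number
xxxxxxx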

OpenBGPD on pfSense

Here is a working config from a multi-site MPLS VPN connection managed by AAPT.

We terminated the connection in the data center with pfSense and OpenBGPD.

The default route for all remote sites is via the data center, hence the network 0.0.0.0/0 announcement in the config below.

 

AS 64512
fib-update yes
listen on 10.252.0.18
router-id 10.252.0.18

network 0.0.0.0/0
network 192.168.30.0/24

neighbor 10.252.0.17 {
        descr "TPG"
        remote-as 2764
        announce all
        local-address 10.252.0.18
}

deny from any
deny to any
allow from 10.252.0.17
allow to 10.252.0.17
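To check the session from the pfSense shell you can use bgpctl (assuming the OpenBGPD package puts it in the path):

bgpctl show summary
bgpctl show neighbor 10.252.0.17
bgpctl show rib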

hpacucli on Linux

Original article here – http://www.thegeekstuff.com/2014/07/hpacucli-examples/

Using hpacucli to manage RAID

Create a single-disk RAID0 (how I use Ceph on my HP DL180s)

hpacucli ctrl slot=2 create type=ld drives=1I:1:8 raid=0
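If you want one RAID0 logical drive per disk across the whole shelf, a quick loop saves typing – the 1I:1:1 to 1I:1:12 bay numbering here is an assumption, so check hpacucli ctrl slot=2 physicaldrive all show for your actual drive IDs first:

for bay in $(seq 1 12)
do
    hpacucli ctrl slot=2 create type=ld drives=1I:1:${bay} raid=0
done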

 

Show all logical volumes

[root@management ~]# hpacucli controller slot=0 logicaldrive all show

Smart Array P410i in Slot 0 (Embedded)

   array A

      logicaldrive 1 (136.4 GB, RAID 1, OK)

   array B

      logicaldrive 2 (1.4 TB, RAID 5, Recovering, 52% complete)

 

Script to E-Mail in case of RAID failure

#!/bin/bash
###
# If something is wrong with the HP Smart Array disks this script sends an error email
###
MAIL=notifications@domain.com.au
HPACUCLI=$(which hpacucli)
HPACUCLI_TMP=/tmp/hpacucli.log

# uname26 is used because hpacucli misbehaves on newer kernels
if [ $(/usr/sbin/uname26 "$HPACUCLI" controller slot=2 physicaldrive all show | grep -i -e 'Fail\|Rebuil\|err\|prob' | wc -l) -gt 0 ]
then
    echo failure
    msg="RAID Controller Errors"
    logger -p syslog.error -t RAID "$msg"
    echo "Hostname: $HOSTNAME" >> "$HPACUCLI_TMP"
    /usr/sbin/uname26 "$HPACUCLI" controller slot=2 physicaldrive all show >> "$HPACUCLI_TMP"
    mail -s "$HOSTNAME [ERROR] - $msg" -r RaidError@domain.com.au "$MAIL" < "$HPACUCLI_TMP"
    rm -f "$HPACUCLI_TMP"
#else
#    echo "Everything Good"
fi
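To run the check regularly, something like this in /etc/cron.d works – the script path is just an assumption:

# check the Smart Array every 30 minutes
*/30 * * * * root /root/scripts/raid-check.sh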

Ceph – Generate a new key

When using Puppet to roll out Ceph you may want to generate new keys for the admin user, etc.

 

[root@lib-cephxx ~]# ceph-authtool --gen-print-key
AQBTuxlXlpMTNBAAykh5+4vHHnTcjhq4FWFw8g==
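If you want the new key written straight into a keyring file instead of just printed, ceph-authtool can do that as well – the path and caps here are only an example:

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'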

 

Example config for a Squid reverse proxy publishing OWA on Exchange 2010 or 2016

/etc/squid/squid.conf

 

 

visible_hostname mail.domain.com
redirect_rewrites_host_header off
cache_mem 32 MB
maximum_object_size_in_memory 128 KB
#logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
cache_mgr nomail_address_given
forwarded_for transparent
ssl_unclean_shutdown on

#This line is to fix the 2mb connection limit
client_persistent_connections off

https_port 443 accel cert=/etc/squid/wildcard.crt key=/etc/squid/wildcard.key defaultsite=mail.domain.com options=NO_SSLv2,NO_SSLv3,CIPHER_SERVER_PREFERENCE dhparams=/etc/squid/dhparams.pem cipher=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4
#Was
#https_port 443 accel cert=/etc/squid/wildcard.crt key=/etc/squid/wildcard.key defaultsite=mail.domain.com options=NO_SSLv3:NO_SSLv2

#OWA
#This line previously had sslversion=3, which had something to do with the 2mb limit
cache_peer 192.168.30.147 parent 443 0 proxy-only no-query no-digest front-end-https=on originserver login=PASS ssl sslflags=DONT_VERIFY_PEER connection-auth=on name=ExchangeCAS
acl site_OWA dstdomain mail.domain.com autodiscover.domain.com
cache_peer_access ExchangeCAS allow site_OWA
http_access allow site_OWA
#miss_access allow site_OWA

#TSG
cache_peer 192.168.30.133 parent 443 0 proxy-only no-query no-digest front-end-https=on originserver login=PASS ssl sslflags=DONT_VERIFY_PEER connection-auth=on name=TSGServer
acl site_TSG dstdomain tsg.domain.com
cache_peer_access TSGServer allow site_TSG
http_access allow site_TSG
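A quick way to test each published site through the proxy is curl (the hostnames are the placeholders used in the config above; -k skips certificate verification):

curl -kI https://mail.domain.com/owa
curl -kI https://tsg.domain.com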

 

Fail2Ban

Unban an IP

fail2ban-client set sshd unbanip --iphere--

Show status of jails

[root@l1-adl3 ~]# fail2ban-client status
Status
|- Number of jail: 1
`- Jail list: sshd
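Passing a jail name to status shows the per-jail detail, including the currently banned IPs:

fail2ban-client status sshd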

 

Show the Fail2Ban log

cat /var/log/fail2ban.log

Cinder Ceph config

Puppet settings

cinder::controller::settings:
        DEFAULT:
                rpc_backend: "rabbit"
                my_ip: "%{::ipaddress_eth0}"
                verbose: "True"
                debug: "False"
                auth_strategy: "keystone"
                enabled_backends: "ceph_sas,ceph_sata"
                glance_host: "%{hiera('glance::virtualIP')}"
                volume_driver: "cinder.volume.drivers.rbd.RBDDriver"
                rbd_pool: "volumes"
                rbd_ceph_conf: "/etc/ceph/ceph.conf"
                rbd_flatten_volume_from_snapshot: "false"
                rbd_max_clone_depth: "5"
                rbd_store_chunk_size: "4"
                rados_connect_timeout: "-1"
                glance_api_version: "2"
                default_volume_type: "sas"
        ceph_sas:
                rbd_pool: "volumes"
                volume_driver: "cinder.volume.drivers.rbd.RBDDriver"
                rbd_ceph_conf: "/etc/ceph/ceph.conf"
                rbd_flatten_volume_from_snapshot: "false"
                rbd_max_clone_depth: "5"
                rbd_store_chunk_size: "4"
                rados_connect_timeout: "-1"
                glance_api_version: "2"
                rbd_user: "cinder"
                rbd_secret_uuid: "%{hiera('otherPasswords::cephUUID')}"
                volume_backend_name: "volumes"
        ceph_sata:
                rbd_pool: "volumes_sata"
                volume_backend_name: "volumes_sata"
                volume_driver: "cinder.volume.drivers.rbd.RBDDriver"
                rbd_ceph_conf: "/etc/ceph/ceph.conf"
                rbd_flatten_volume_from_snapshot: "false"
                rbd_max_clone_depth: "5"
                rbd_store_chunk_size: "4"
                rados_connect_timeout: "-1"
                glance_api_version: "2"
                rbd_user: "cinder"
                rbd_secret_uuid: "%{hiera('otherPasswords::cephUUID')}"

        database:
                connection: "mysql://cinder:%{hiera('databasePasswords::cinder')}@%{hiera('mysql::virtualIP')}/cinder"

        keystone_authtoken:
                auth_uri: "http://%{hiera('keystone::virtualIP')}:5000"
                auth_url: "http://%{hiera('keystone::virtualIP')}:35357"
                auth_plugin: "password"
                project_domain_id: "default"
                user_domain_id: "default"
                project_name: "service"
                username: "cinder"
                password: "%{hiera('servicePasswords::cinder')}"

        oslo_messaging_rabbit:
                rabbit_host:    "lib-ks01"
                rabbit_userid:  "openstack"
                rabbit_password: "%{hiera('servicePasswords::rabbit')}"
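For reference, the ceph_sas block above ends up in cinder.conf looking roughly like this (the secret UUID comes from hiera, so it is shown as a placeholder):

[ceph_sas]
volume_backend_name = volumes
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = <UUID from hiera>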

Create Ceph users

[root@lib-cephmon1 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQCNHLxW3mtbIRAAvZp7yzw2kYaidD2s8inIgw==
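The cinder user is created the same way – the caps below are an assumption based on the volumes and volumes_sata pools used in the config above:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes_sata, allow rx pool=images'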

Take the output of that command and dump it into the matching keyring file – for the cinder user that is /etc/ceph/ceph.client.cinder.keyring

vim /etc/ceph/ceph.client.cinder.keyring

Paste the client.cinder entry, e.g.

[client.cinder]
        key = AQCNHLxW3mtbIRAAvZp7yzw2kYaidD2s8inIgw==

Set permission on keyring file

chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

Then create the volume types in the DB to match the config

cinder type-create regular
cinder type-key regular set volume_backend_name=volumes

cinder type-create sata
cinder type-key sata set volume_backend_name=volumes_sata

cinder type-list

 

Create the pools in Ceph (this needs to be run on a server with admin rights to the Ceph cluster; the Cinder user doesn't have this level of access)

ceph osd pool create volumes_sata 200
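The SAS pool is created the same way if it does not already exist – the 200 is the placement group count and should be sized to suit your cluster:

ceph osd pool create volumes 200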

Cinder volume service error when opening Ceph

When configuring Cinder for a Ceph backend I got this:

 

2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager [req-07be81aa-8417-4367-a19d-cace7e6ab7a0 - - - - -] Failed to initialize driver.
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager Traceback (most recent call last):
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in init_host
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     self.driver.check_for_setup_error()
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     return f(*args, **kwargs)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 299, in check_for_setup_error
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     with RADOSClient(self):
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 251, in __init__
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     self.cluster, self.ioctx = driver._connect_to_rados(pool)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 841, in _wrapper
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     return r.call(f, *args, **kwargs)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     return attempt.get(self._wrap_exception)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 325, in _connect_to_rados
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     conffile=self.configuration.rbd_ceph_conf)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/rados.py", line 253, in __init__
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     self.conf_read_file(conffile)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/rados.py", line 302, in conf_read_file
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     raise make_ex(ret, "error calling conf_read_file")
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager Error: error calling conf_read_file: errno EACCES
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager

 

A quick and dirty solution was to chmod 777 /etc/cinder
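A less blunt option – assuming the EACCES is simply the cinder user being unable to read its config directory or the ceph.conf it points at – is to open up only what is needed, for example:

chmod 755 /etc/cinder
chmod 644 /etc/ceph/ceph.conf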