Example config for a Squid reverse proxy publishing OWA on Exchange 2010 or 2016

/etc/squid/squid.conf

visible_hostname mail.domain.com
redirect_rewrites_host_header off
cache_mem 32 MB
maximum_object_size_in_memory 128 KB
#logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
cache_mgr nomail_address_given
forwarded_for transparent
ssl_unclean_shutdown on

#This line works around the 2 MB upload limit
client_persistent_connections off

https_port 443 accel cert=/etc/squid/wildcard.crt key=/etc/squid/wildcard.key defaultsite=mail.domain.com options=NO_SSLv2,NO_SSLv3,CIPHER_SERVER_PREFERENCE dhparams=/etc/squid/dhparams.pem cipher=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4
#Was
#https_port 443 accel cert=/etc/squid/wildcard.crt key=/etc/squid/wildcard.key defaultsite=mail.domain.com options=NO_SSLv3:NO_SSLv2

#OWA
#This cache_peer line previously had sslversion=3; that had something to do with the 2 MB limit
cache_peer 192.168.30.147 parent 443 0 proxy-only no-query no-digest front-end-https=on originserver login=PASS ssl sslflags=DONT_VERIFY_PEER connection-auth=on name=ExchangeCAS
acl site_OWA dstdomain mail.domain.com autodiscover.domain.com
cache_peer_access ExchangeCAS allow site_OWA
http_access allow site_OWA
#miss_access allow site_OWA

#TSG
cache_peer 192.168.30.133 parent 443 0 proxy-only no-query no-digest front-end-https=on originserver login=PASS ssl sslflags=DONT_VERIFY_PEER connection-auth=on name=TSGServer
acl site_TSG dstdomain tsg.domain.com
cache_peer_access TSGServer allow site_TSG
http_access allow site_TSG
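
After editing squid.conf the config can be sanity-checked and reloaded without a full restart (standard squid commands, not part of the original notes):

squid -k parse         # check squid.conf for syntax errors
squid -k reconfigure   # reload the running squid with the new config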

Fail2Ban

Unban an IP

fail2ban-client set sshd unbanip --iphere--

Show status of jails

[root@l1-adl3 ~]# fail2ban-client status
Status
|- Number of jail: 1
`- Jail list: sshd
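
To see the details of a single jail, including the currently banned IPs, pass the jail name to status (standard fail2ban-client usage):

fail2ban-client status sshd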

Show the Fail2Ban log

cat /var/log/fail2ban.log

Cinder Ceph config

Puppet settings

cinder::controller::settings:
        DEFAULT:
                rpc_backend: "rabbit"
                my_ip: "%{::ipaddress_eth0}"
                verbose: "True"
                debug: "False"
                auth_strategy: "keystone"
                enabled_backends: "ceph_sas,ceph_sata"
                glance_host: "%{hiera('glance::virtualIP')}"
                volume_driver: "cinder.volume.drivers.rbd.RBDDriver"
                rbd_pool: "volumes"
                rbd_ceph_conf: "/etc/ceph/ceph.conf"
                rbd_flatten_volume_from_snapshot: "false"
                rbd_max_clone_depth: "5"
                rbd_store_chunk_size: "4"
                rados_connect_timeout: "-1"
                glance_api_version: "2"
                default_volume_type: "sas"
        ceph_sas:
                rbd_pool: "volumes"
                volume_driver: "cinder.volume.drivers.rbd.RBDDriver"
                rbd_ceph_conf: "/etc/ceph/ceph.conf"
                rbd_flatten_volume_from_snapshot: "false"
                rbd_max_clone_depth: "5"
                rbd_store_chunk_size: "4"
                rados_connect_timeout: "-1"
                glance_api_version: "2"
                rbd_user: "cinder"
                rbd_secret_uuid: "%{hiera('otherPasswords::cephUUID')}"
                volume_backend_name: "volumes"
        ceph_sata:
                rbd_pool: "volumes_sata"
                volume_backend_name: "volumes_sata"
                volume_driver: "cinder.volume.drivers.rbd.RBDDriver"
                rbd_ceph_conf: "/etc/ceph/ceph.conf"
                rbd_flatten_volume_from_snapshot: "false"
                rbd_max_clone_depth: "5"
                rbd_store_chunk_size: "4"
                rados_connect_timeout: "-1"
                glance_api_version: "2"
                rbd_user: "cinder"
                rbd_secret_uuid: "%{hiera('otherPasswords::cephUUID')}"

        database:
                connection: "mysql://cinder:%{hiera('databasePasswords::cinder')}@%{hiera('mysql::virtualIP')}/cinder"

        keystone_authtoken:
                auth_uri: "http://%{hiera('keystone::virtualIP')}:5000"
                auth_url: "http://%{hiera('keystone::virtualIP')}:35357"
                auth_plugin: "password"
                project_domain_id: "default"
                user_domain_id: "default"
                project_name: "service"
                username: "cinder"
                password: "%{hiera('servicePasswords::cinder')}"

        oslo_messaging_rabbit:
                rabbit_host:    "lib-ks01"
                rabbit_userid:  "openstack"
                rabbit_password: "%{hiera('servicePasswords::rabbit')}"
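
For reference, the ceph_sas block above ends up as an INI section in /etc/cinder/cinder.conf that looks roughly like this (a sketch only – the exact rendering depends on the Puppet module in use):

[ceph_sas]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = <value of otherPasswords::cephUUID>
volume_backend_name = volumes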

Create Ceph users

[root@lib-cephmon1 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes_sata, allow rx pool=images'
[client.cinder]
        key = AQCNHLxW3mtbIRAAvZp7yzw2kYaidD2s8inIgw==

Take the output of that command and dump it into /etc/ceph/ceph.client.cinder.keyring

vim /etc/ceph/ceph.client.cinder.keyring

Paste

[client.cinder]
        key = AQCNHLxW3mtbIRAAvZp7yzw2kYaidD2s8inIgw==

Set permission on keyring file

chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
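
The rbd_secret_uuid referenced in the Puppet settings above also has to exist as a libvirt secret on the compute nodes before volumes can be attached. A sketch of that step, lifted from the standard Ceph/OpenStack docs (the UUID below is a placeholder – use the value stored as otherPasswords::cephUUID):

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(ceph auth get-key client.cinder)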

Then create the volume types in the DB to match the config

cinder type-create regular
cinder type-key regular set volume_backend_name=volumes

cinder type-create sata
cinder type-key sata set volume_backend_name=volumes_sata

cinder type-list

Create the pools in Ceph (this needs to be run on a server with admin rights to the Ceph cluster; the cinder user doesn't have this level of access)

ceph osd pool create volumes_sata 200
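
To confirm the pool exists afterwards (standard Ceph commands, not part of the original notes):

ceph osd lspools
ceph df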

Cinder volume service error when opening Ceph

When configuring Cinder for a Ceph backend I got this:

2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager [req-07be81aa-8417-4367-a19d-cace7e6ab7a0 - - - - -] Failed to initialize driver.
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager Traceback (most recent call last):
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in init_host
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     self.driver.check_for_setup_error()
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     return f(*args, **kwargs)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 299, in check_for_setup_error
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     with RADOSClient(self):
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 251, in __init__
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     self.cluster, self.ioctx = driver._connect_to_rados(pool)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 841, in _wrapper
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     return r.call(f, *args, **kwargs)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     return attempt.get(self._wrap_exception)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 325, in _connect_to_rados
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     conffile=self.configuration.rbd_ceph_conf)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/rados.py", line 253, in __init__
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     self.conf_read_file(conffile)
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/rados.py", line 302, in conf_read_file
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager     raise make_ex(ret, "error calling conf_read_file")
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager Error: error calling conf_read_file: errno EACCES
2016-02-11 16:02:58.939 13065 ERROR cinder.volume.manager

A quick and dirty solution was to chmod 777 /etc/cinder
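
A less drastic alternative (a sketch, assuming the default file locations) is to work out which file the cinder user actually can't read and fix that specific permission instead of opening up a whole directory:

sudo -u cinder cat /etc/ceph/ceph.conf                      # reproduce the EACCES as the cinder user
chmod 644 /etc/ceph/ceph.conf
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
chmod 600 /etc/ceph/ceph.client.cinder.keyring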

Ceph disk not online after a reboot

After a reboot of my Ceph test VMs a few OSDs didn't come online.
When running ceph osd tree I saw:

[root@lib-cephosd1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.29984 root default
-2 0.09995     host lib-cephosd1
 1 0.00999         osd.1              up  1.00000          1.00000
 0 0.00999         osd.0              up  1.00000          1.00000
 2 0.00999         osd.2              up  1.00000          1.00000
 3 0.00999         osd.3              up  1.00000          1.00000
 4 0.00999         osd.4              up  1.00000          1.00000
 5 0.00999         osd.5              up  1.00000          1.00000
 6 0.00999         osd.6              up  1.00000          1.00000
 7 0.00999         osd.7              up  1.00000          1.00000
 8 0.00999         osd.8              up  1.00000          1.00000
 9 0.00999         osd.9              up  1.00000          1.00000
-3 0.09995     host lib-cephosd2
10 0.00999         osd.10             up  1.00000          1.00000
12 0.00999         osd.12             up  1.00000          1.00000
14 0.00999         osd.14             up  1.00000          1.00000
16 0.00999         osd.16             up  1.00000          1.00000
18 0.00999         osd.18             up  1.00000          1.00000
20 0.00999         osd.20             up  1.00000          1.00000
22 0.00999         osd.22             up  1.00000          1.00000
24 0.00999         osd.24             up  1.00000          1.00000
26 0.00999         osd.26             up  1.00000          1.00000
28 0.00999         osd.28             up  1.00000          1.00000
-4 0.09995     host lib-cephosd3
11 0.00999         osd.11             up  1.00000          1.00000
13 0.00999         osd.13           down        0          1.00000
15 0.00999         osd.15             up  1.00000          1.00000
17 0.00999         osd.17             up  1.00000          1.00000
19 0.00999         osd.19             up  1.00000          1.00000
21 0.00999         osd.21             up  1.00000          1.00000
23 0.00999         osd.23             up  1.00000          1.00000
25 0.00999         osd.25             up  1.00000          1.00000
27 0.00999         osd.27             up  1.00000          1.00000
29 0.00999         osd.29             up  1.00000          1.00000

We can see osd.13 is down on the node lib-cephosd3

On the node lib-cephosd3 I ran service ceph restart osd.13

But I was greeted with

[root@lib-cephosd3 ~]#  service ceph restart osd.13
/etc/init.d/ceph: osd.13 not found (/etc/ceph/ceph.conf defines osd.11 osd.15 osd.17 osd.19 osd.21 osd.23 osd.25 osd.27 osd.29 , /var/lib/ceph defines osd.11 osd.15 osd.17 osd.19 osd.21 osd.23 osd.25 osd.27 osd.29)

Hmm.

Running ceph-disk list shows the disk as being there

[root@lib-cephosd3 ~]# ceph-disk list
WARNING:ceph-disk:Old blkid does not support ID_PART_ENTRY_* fields, trying sgdisk; may not correctly identify ceph volumes with dmcrypt
/dev/sda other, unknown
/dev/sdb other, unknown
/dev/vda :
 /dev/vda1 other, xfs, mounted on /boot
 /dev/vda2 other, LVM2_member
/dev/vdb :
 /dev/vdb1 ceph data, active, cluster ceph, osd.11, journal /dev/vdb2
 /dev/vdb2 ceph journal, for /dev/vdb1
/dev/vdc :
 /dev/vdc1 ceph data, prepared, cluster ceph, osd.13, journal /dev/vdc2
 /dev/vdc2 ceph journal, for /dev/vdc1
/dev/vdd :
 /dev/vdd1 ceph data, active, cluster ceph, osd.15, journal /dev/vdd2
 /dev/vdd2 ceph journal, for /dev/vdd1
/dev/vde :
 /dev/vde1 ceph data, active, cluster ceph, osd.17, journal /dev/vde2
 /dev/vde2 ceph journal, for /dev/vde1
/dev/vdf :
 /dev/vdf1 ceph data, active, cluster ceph, osd.19, journal /dev/vdf2
 /dev/vdf2 ceph journal, for /dev/vdf1
/dev/vdg :
 /dev/vdg1 ceph data, active, cluster ceph, osd.21, journal /dev/vdg2
 /dev/vdg2 ceph journal, for /dev/vdg1
/dev/vdh :
 /dev/vdh1 ceph data, active, cluster ceph, osd.23, journal /dev/vdh2
 /dev/vdh2 ceph journal, for /dev/vdh1
/dev/vdi :
 /dev/vdi1 ceph data, active, cluster ceph, osd.25, journal /dev/vdi2
 /dev/vdi2 ceph journal, for /dev/vdi1
/dev/vdj :
 /dev/vdj1 ceph data, active, cluster ceph, osd.27, journal /dev/vdj2
 /dev/vdj2 ceph journal, for /dev/vdj1
/dev/vdk :
 /dev/vdk1 ceph data, active, cluster ceph, osd.29, journal /dev/vdk2
 /dev/vdk2 ceph journal, for /dev/vdk1

Interestingly, mount shows the disk as not being mounted?

[root@lib-cephosd3 ~]# mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=920412k,nr_inodes=230103,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/centos-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=33,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
/dev/vda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/vdh1 on /var/lib/ceph/osd/ceph-23 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdj1 on /var/lib/ceph/osd/ceph-27 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdi1 on /var/lib/ceph/osd/ceph-25 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdb1 on /var/lib/ceph/osd/ceph-11 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdd1 on /var/lib/ceph/osd/ceph-15 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdk1 on /var/lib/ceph/osd/ceph-29 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdf1 on /var/lib/ceph/osd/ceph-19 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vde1 on /var/lib/ceph/osd/ceph-17 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/vdg1 on /var/lib/ceph/osd/ceph-21 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

What if I manually mount it?

mount /dev/vdc1 /var/lib/ceph/osd/ceph-13

cd /var/lib/ceph/osd/ceph-13/

[root@lib-cephosd3 ceph-13]# ll
total 44
-rw-r--r--.  1 root root  502 Feb  9 11:30 activate.monmap
-rw-r--r--.  1 root root    3 Feb  9 11:30 active
-rw-r--r--.  1 root root   37 Feb  9 11:30 ceph_fsid
drwxr-xr-x. 77 root root 1216 Feb  9 12:30 current
-rw-r--r--.  1 root root   37 Feb  9 11:30 fsid
lrwxrwxrwx.  1 root root   58 Feb  9 11:30 journal -> /dev/disk/by-partuuid/644f4d32-d440-4a6c-9fee-0a2187ac2eaf
-rw-r--r--.  1 root root   37 Feb  9 11:30 journal_uuid
-rw-------.  1 root root   57 Feb  9 11:30 keyring
-rw-r--r--.  1 root root   21 Feb  9 11:30 magic
-rw-r--r--.  1 root root    6 Feb  9 11:30 ready
-rw-r--r--.  1 root root    4 Feb  9 11:30 store_version
-rw-r--r--.  1 root root   53 Feb  9 11:30 superblock
-rw-r--r--.  1 root root    0 Feb  9 11:34 sysvinit
-rw-r--r--.  1 root root    3 Feb  9 11:30 whoami

Looks OK?

Can I start that OSD now?

[root@lib-cephosd3 osd]#  service ceph restart osd.13
=== osd.13 ===
=== osd.13 ===
Stopping Ceph osd.13 on lib-cephosd3...done
=== osd.13 ===
create-or-move updated item name 'osd.13' weight 0.01 at location {host=lib-cephosd3,root=default} to crush map
Starting Ceph osd.13 on lib-cephosd3...
Running as unit run-12509.service.
[root@lib-cephosd3 osd]# ceph osd tree
ID WEIGHT  TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.29984 root default
-2 0.09995     host lib-cephosd1
 1 0.00999         osd.1              up  1.00000          1.00000
 0 0.00999         osd.0              up  1.00000          1.00000
 2 0.00999         osd.2              up  1.00000          1.00000
 3 0.00999         osd.3              up  1.00000          1.00000
 4 0.00999         osd.4              up  1.00000          1.00000
 5 0.00999         osd.5              up  1.00000          1.00000
 6 0.00999         osd.6              up  1.00000          1.00000
 7 0.00999         osd.7              up  1.00000          1.00000
 8 0.00999         osd.8              up  1.00000          1.00000
 9 0.00999         osd.9              up  1.00000          1.00000
-3 0.09995     host lib-cephosd2
10 0.00999         osd.10             up  1.00000          1.00000
12 0.00999         osd.12             up  1.00000          1.00000
14 0.00999         osd.14             up  1.00000          1.00000
16 0.00999         osd.16             up  1.00000          1.00000
18 0.00999         osd.18             up  1.00000          1.00000
20 0.00999         osd.20             up  1.00000          1.00000
22 0.00999         osd.22             up  1.00000          1.00000
24 0.00999         osd.24             up  1.00000          1.00000
26 0.00999         osd.26             up  1.00000          1.00000
28 0.00999         osd.28             up  1.00000          1.00000
-4 0.09995     host lib-cephosd3
11 0.00999         osd.11             up  1.00000          1.00000
13 0.00999         osd.13             up  1.00000          1.00000
15 0.00999         osd.15             up  1.00000          1.00000
17 0.00999         osd.17             up  1.00000          1.00000
19 0.00999         osd.19             up  1.00000          1.00000
21 0.00999         osd.21             up  1.00000          1.00000
23 0.00999         osd.23             up  1.00000          1.00000
25 0.00999         osd.25             up  1.00000          1.00000
27 0.00999         osd.27             up  1.00000          1.00000
29 0.00999         osd.29             up  1.00000          1.00000
[root@lib-cephosd3 osd]#

[root@lib-cephosd1 ~]# ceph -w
    cluster 46ded320-ec09-40bc-a6c4-0a8ad3341035
     health HEALTH_OK
     monmap e2: 3 mons at {lib-cephmon1=172.18.0.51:6789/0,lib-cephmon2=172.18.0.52:6789/0,lib-cephmon3=172.18.0.53:6789/0}
            election epoch 26, quorum 0,1,2 lib-cephmon1,lib-cephmon2,lib-cephmon3
     osdmap e289: 30 osds: 30 up, 30 in
      pgmap v763: 576 pgs, 5 pools, 0 bytes data, 0 objects
            1235 MB used, 448 GB / 449 GB avail
                 576 active+clean

2016-02-11 13:53:39.539715 mon.0 [INF] pgmap v763: 576 pgs: 576 active+clean; 0 bytes data, 1235 MB used, 448 GB / 449 GB avail

That worked 🙂
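
In hindsight, since ceph-disk list showed the partition as "prepared" rather than "active", the same result could probably have been had in one step (untested on this cluster):

ceph-disk activate /dev/vdc1    # mounts the data partition and starts the OSD it contains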

Errors running Glance [Liberty] on CentOS 7

When trying to install/run Glance on CentOS 7 I came across the following error:

ImportError: No module named oslo_policy

This was resolved by installing the Python oslo.policy module by running

pip install oslo.policy

You may or may not need to install pip (roughly Python's equivalent of yum):

yum install python-pip

That's all wonderful, but now I get this!

AttributeError: 'module' object has no attribute 'PY2'

Research suggests that it’s the python ‘six’ package that is out of date

pip install --upgrade six

Solved!
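
To check which version of six is actually being picked up before and after the upgrade (a quick sanity check, not from the original notes):

python -c "import six; print(six.__version__)"
pip show six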

Handy Ceph Commands

Show all disks on the OSD node you are logged into:
ceph-disk list

Example output

[root@cephosd1 ~]# ceph-disk list
WARNING:ceph-disk:Old blkid does not support ID_PART_ENTRY_* fields, trying sgdisk; may not correctly identify ceph volumes with dmcrypt
/dev/sda :
 /dev/sda1 other, xfs, mounted on /boot
 /dev/sda2 other, LVM2_member
/dev/sdb other, unknown
/dev/sdc other, unknown
/dev/sdd :
 /dev/sdd1 ceph data, active, cluster ceph, osd.28, journal /dev/sdd2
 /dev/sdd2 ceph journal, for /dev/sdd1
/dev/sde :
 /dev/sde1 ceph data, active, cluster ceph, osd.26, journal /dev/sde2
 /dev/sde2 ceph journal, for /dev/sde1
/dev/sdf :
 /dev/sdf1 ceph data, active, cluster ceph, osd.22, journal /dev/sdf2
 /dev/sdf2 ceph journal, for /dev/sdf1
/dev/sdg :
 /dev/sdg1 ceph data, active, cluster ceph, osd.27, journal /dev/sdg2
 /dev/sdg2 ceph journal, for /dev/sdg1
/dev/sdh :
 /dev/sdh1 ceph data, active, cluster ceph, osd.25, journal /dev/sdh2
 /dev/sdh2 ceph journal, for /dev/sdh1
/dev/sdi :
 /dev/sdi1 ceph data, active, cluster ceph, osd.23, journal /dev/sdi2
 /dev/sdi2 ceph journal, for /dev/sdi1
/dev/sdj :
 /dev/sdj1 ceph data, active, cluster ceph, osd.21, journal /dev/sdj2
 /dev/sdj2 ceph journal, for /dev/sdj1
/dev/sdk :
 /dev/sdk1 ceph data, active, cluster ceph, osd.20, journal /dev/sdk2
 /dev/sdk2 ceph journal, for /dev/sdk1
/dev/sdl :
 /dev/sdl1 ceph data, active, cluster ceph, osd.19, journal /dev/sdl2
 /dev/sdl2 ceph journal, for /dev/sdl1
/dev/sdm :
 /dev/sdm1 ceph data, active, cluster ceph, osd.24, journal /dev/sdm2
 /dev/sdm2 ceph journal, for /dev/sdm1
[root@cephosd1 ~]#
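
A few other status commands that go well with it (standard Ceph CLI, nothing specific to this cluster):

ceph -s              # one-shot cluster status: health, mons, OSDs, PGs
ceph health detail   # expands on any HEALTH_WARN/HEALTH_ERR conditions
ceph osd tree        # OSDs grouped by host with up/down state
ceph df              # global and per-pool space usage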

OpenStack Liberty – Neutron config file issue

All the doco I could lay my hands on for configuring OpenStack Liberty (which, at the time of writing, is not a lot!) shows the config file for the OpenVSwitch agent to be located at “/etc/neutron/plugin.ini”, which is a symlink to “/etc/neutron/plugins/ml2/ml2_conf.ini”.

I configured /etc/neutron/plugins/ml2/ml2_conf.ini as per the config I needed, but it just didn't work.
Upon running ps ax | grep open I saw:

3324 ? Ss 0:10 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir  /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent --log-file /var/log/neutron/openvswitch-agent.log

This tells me that the openvswitch agent isn't reading from any of the config files I expected. When I manually killed the process and ran the command in the SSH console (probably less than ideal) with the adjusted config file location, it worked!

So a tweak of /usr/lib/systemd/system/neutron-openvswitch-agent.service, changing line 9 to read

ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent --log-file /var/log/neutron/openvswitch-agent.log

and then restarting the service gave me the results I was looking for.
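
Editing the unit file under /usr/lib/systemd/system directly does work, but package updates will overwrite it. A cleaner way (a sketch of the standard systemd drop-in approach, not what I originally did) is:

mkdir -p /etc/systemd/system/neutron-openvswitch-agent.service.d
cat > /etc/systemd/system/neutron-openvswitch-agent.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent --log-file /var/log/neutron/openvswitch-agent.log
EOF
systemctl daemon-reload
systemctl restart neutron-openvswitch-agent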

Recreate DFS Database

I use DFS a lot for work; sometimes it just gets grumpy and needs to be kicked in the guts.

Deleting the database isn't easy, because Windows keeps a pretty tight grip on anything in System Volume Information.

Use this script to move the DFSR database folder aside and have the service create a fresh one.

Hint – look for DFS database events in the Application log, not the DFS Replication log

set DFSR_DB_DRIVE=C:
 
REM Removes the broken DFSR configuration database from the system
net stop dfsr
%DFSR_DB_DRIVE%
 
icacls "%DFSR_DB_DRIVE%\System Volume Information" /grant "Domain Admins":F
cd "%DFSR_DB_DRIVE%\System Volume Information"
move DFSR %DFSR_DB_DRIVE%\DFSR_backup
 
cd ..
icacls "%DFSR_DB_DRIVE%\System Volume Information" /remove:g "Domain Admins"
net start dfsr
dfsrdiag PollAD /Member:%userdomain%\%computername%
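
Once the service has rebuilt the database, replication state can be checked from the same elevated prompt (standard dfsrdiag usage, not part of the original script):

dfsrdiag ReplicationState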

Original Article – http://sheeponline.net/recreate-dfsr-database.html

Fixed NIC names (eth*) on RHEL and CentOS 7

http://tristan.terpelle.be/2015/06/fixed-nic-names-eth-on-rhel-and-centos-7.html
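
In short (the standard approach, summarised from memory rather than from the linked article): disable predictable interface naming on the kernel command line, rebuild the GRUB config, then rename the ifcfg files to match.

# Append to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   net.ifnames=0 biosdevname=0
grub2-mkconfig -o /boot/grub2/grub.cfg
# Rename /etc/sysconfig/network-scripts/ifcfg-<oldname> to ifcfg-eth0
# (and update the DEVICE=/NAME= lines inside), then reboot.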