D-bus signaling performance

While working on lvm-dubstep, the question was posed whether D-bus could handle the number of changes that can occur in a short period of time, especially PropertiesChanged signals when a large number of logical volumes or physical volumes are present on the system (e.g. 120K PVs and 10K+ LVs).  To test this I put together a simple server and client which tries to send an arbitrary number of signals as fast as it possibly can.  I settled on 10K signals because during early testing I ran into time-out exceptions when trying to send more in a row.  Initial testing was done using the dbus-python library, and even though the numbers seemed sufficient, people asked about sd-bus, and sd-bus utilizing kdbus, so the experiment was expanded to include those as well.  Source code for the testing is available here.


Test configuration

  • One client listening for signals.
  • Python tests were run on a Fedora 22 VM.
  • Sd-bus was run on a similarly configured VM running Fedora 23 alpha utilizing systemd 222-2.  Kdbus was built as a kernel module from the latest systemd/kdbus repo.  F23 kernel version: 4.2.0-0.rc8.git0.1.fc23.x86_64.
  • I tried running all the tests on the F23 VM, but the python tests were failing horribly with journal entries: Sep 01 10:53:14 sdbus systemd-bus-proxyd[663]: Dropped messages due to queue overflow of local peer (pid: 2615 uid: 0)


The results:

  • Python utilizing the dbus-python library was able to send and receive 10K signals in a row, varying payload size from 32 bytes to 128K, without any issues.  As I mentioned before, if I try to send more than that in a row, especially at larger payload sizes, I do get time-outs on the send.
  • With sd-bus without kdbus I was only able to send payloads up to 512 bytes before the server would error out with: Failed to emit signal: No buffer space available.
  • With sd-bus using kdbus the test completes, seemingly without error, but the number of signals received is not as expected and appears to vary with payload size.

Messages/second is the total number of messages divided by the total time to receive them.



MiB/second is the messages per second multiplied by the payload size, converted to MiB.
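Both rate metrics are simple to compute; here is a minimal sketch (the helper name is mine, not from the test code):

```python
def throughput(total_msgs, total_secs, payload_bytes):
    """Messages/second and MiB/second for one signal test run."""
    msgs_per_sec = total_msgs / total_secs
    mib_per_sec = msgs_per_sec * payload_bytes / (1024 * 1024)
    return msgs_per_sec, mib_per_sec
```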



Average time delta is the time difference from when the signal was placed on the bus until it was read by the signal handler.  This shows quite a bit of buffering for the python test case at larger payload sizes.



Percentage of signals that were received by the signal handler.  As you can see, once the payload size exceeds 2048 bytes, kdbus appears to silently discard signals: nothing appeared in the kernel output and the return codes were all good in user space.



  • The C implementation using sd-bus without kdbus performs slightly worse than dbus-python, which surprised me.
  • kdbus is by far the best performing with payloads < 2048 bytes.
  • Signals are by definition sent without a response.  Having a reliable way for the sender to know that a signal was not delivered seems pretty important, so either kdbus has a bug in that no error is returned to the transmitter, or there is a bug in my test code.
  • The code likely has bugs and/or does something sub-optimal; please submit a pull request and I will incorporate the changes and re-run the tests.

libtool library versioning ( -version-info ‘current[:revision[:age]]’ )

The GNU libtool manual states:

The following explanation may help to understand the above rules a bit better: consider that there are three possible kinds of reactions from users of your library to changes in a shared library:

  1. Programs using the previous version may use the new version as drop-in replacement, and programs using the new version can also work with the previous one. In other words, no recompiling nor relinking is needed. In this case, bump revision only, don’t touch current nor age.
  2. Programs using the previous version may use the new version as drop-in replacement, but programs using the new version may use APIs not present in the previous one. In other words, a program linking against the new version may fail with “unresolved symbols” if linking against the old version at runtime: set revision to 0, bump current and age.
  3. Programs may need to be changed, recompiled, relinked in order to use the new version. Bump current, set revision and age to 0.

This was confusing to me until I played around with the -version-info value and looked at the library output on my linux development system.

-version-info 0:0:0 => libfoo.so.0.0.0
-version-info 1:0:1 => libfoo.so.0.1.0

-version-info 1:0:0 => libfoo.so.1.0.0
-version-info 2:0:1 => libfoo.so.1.1.0
-version-info 2:1:1 => libfoo.so.1.1.1
-version-info 3:0:2 => libfoo.so.1.2.0

-version-info 4:0:0 => libfoo.so.4.0.0

The current:revision:age is an abstraction which allows libtool to do the correct thing for the platform you are building and deploying on. This wasn’t immediately evident when reading the documentation.
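On Linux the mapping works out to major = current − age, minor = age, patch = revision. A small sketch of that rule (the function name is mine):

```python
def linux_soname(lib, version_info):
    """Map a libtool -version-info 'current:revision:age' triple to
    the Linux shared-object file name libNAME.so.major.minor.patch."""
    current, revision, age = (int(x) for x in version_info.split(':'))
    if age > current:
        raise ValueError('age must be <= current')
    # On Linux: major = current - age, minor = age, patch = revision
    return '%s.so.%d.%d.%d' % (lib, current - age, age, revision)

print(linux_soname('libfoo', '3:0:2'))  # → libfoo.so.1.2.0
```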


Using google authenticator with OpenBSD SSH logins

For an introduction on two factor authentication please see: http://en.wikipedia.org/wiki/Two-step_verification

NOTE: If you are working on a remote system, make sure you keep a terminal with root access open until you have tested that you can indeed authenticate!

Steps tested on OpenBSD 5.4, used tools on EL6 client to generate QR code. I’m mainly documenting this here so I can remember how to do this again.

1. Install the login_oath package, e.g.

# pkg_add -v http://mirror.planetunix.net/pub/OpenBSD/5.4/packages/i386/login_oath-0.8p1.tgz

Make sure to look over the readme: /usr/local/share/doc/pkg-readmes/login_oath-version

2. Add a new login class by editing /etc/login.conf, and change user(s) to use it.

# The TOTPPW login class requires both TOTP and passwd; the user must
# supply the password in the format OTP/password, e.g. 816721/axlotl.


Change the user's login class, e.g.

# usermod -L totppw username

3. Generate a random key in the user's home directory

$ openssl rand -hex 20 > ~/.totp-key

Note: Make sure your home directory and this file are not world readable! Otherwise you will be prevented from logging in.

4. Convert hex string key to base32

The documentation shows Perl; I have supplied equivalent code in Python.

import binascii
import base64
import sys

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print('Syntax: %s <hex key>' % sys.argv[0])
        sys.exit(1)

    # Decode the hex string to raw bytes, then re-encode as base32
    print(base64.b32encode(binascii.unhexlify(sys.argv[1])).decode('ascii'))

5. Generate a QR code so you don’t have to type a big random sequence into your smart phone. Free web-based generators exist, but only use one you trust.

$ qrencode -o ~/.totp-key.png "otpauth://totp/?secret=BASE 32 SECRET&issuer=Your name, etc."

More info: http://code.google.com/p/google-authenticator/wiki/KeyUriFormat

6. Using Google Authenticator, create a new software key by scanning your QR code. If you don’t want to create a QR code, create a new entry, set it to time based, and enter the 64 characters. I’m guessing it may take a few tries to get it correct.

7. Test that Google Authenticator and oathtool --totp `cat ~/.totp-key` match. If they don’t, make sure the date and time on both the phone and the computer match.
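For the curious, the code both tools compute is just HMAC-SHA1 over a time-based counter (RFC 6238). A minimal sketch using only the Python standard library (the explicit counter argument is mine, so the function can be checked against the RFC 4226 test vectors):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(b32_secret, counter=None, interval=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32-encoded secret."""
    key = base64.b32decode(b32_secret.upper())
    if counter is None:
        # Number of `interval`-second steps since the Unix epoch
        counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack('>Q', counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226 section 5.3)
    offset = digest[-1] & 0x0F
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

If the phone and `oathtool --totp` disagree, this kind of function makes it easy to see whether the secret or the clock is the problem.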

8. Using a separate terminal, ssh to the remote system with just the password; this should fail. Then try using OTP/password for your password and you should be authenticated.

Buffer bloat mitigation with OpenBSD pf

For an introduction to buffer bloat read more here http://en.wikipedia.org/wiki/Bufferbloat .

My home network utilizes OpenBSD and the built-in packet filter (pf). I use cable for broadband internet and found that if I tried to upload a large file my internet connection became very unstable, with high amounts of latency. After utilizing tools such as http://netalyzr.icsi.berkeley.edu/blog/ it became obvious that I was suffering from buffer bloat.

After doing some searching I came across the altq support in pf, and tried some configuration changes to reduce the buffer bloat.

I added a queue for my external network card with the following:

altq on $ext_if bandwidth 1Mb hfsc queue { bb }
queue bb bandwidth 100% qlimit 9 hfsc ( default )

and the corresponding rule to tag outgoing traffic for that interface to this queue

pass out on $ext_if keep state queue( bb )

My home connection is advertised as 5Mb down, 1Mb up. In testing I get about 1.1Mb up, so I set up my outgoing queue to limit outgoing traffic to 1Mb. A typical setting would be 97% of the measured maximum. One of the most important values in the queue setup is the queue limit (qlimit), the number of buffered packets. Mine is currently set at 9. This is how I determined that value.

Maximum upstream bandwidth in packets per second is upstream bandwidth in bytes per second divided by the size of a packet. In my case that is 1,000,000 bits / 8 = 125,000 bytes per second, divided by a 1460 byte packet, which yields about 85 packets per second. So if I set my queue limit to 85 I should see about 1 second of latency when the queue is full. I like my latency low, so I divided by 10 to target roughly 100ms of latency under full upstream use, giving 8.5, which I rounded up to 9.
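The arithmetic above as a reusable sketch (the function name and defaults are mine):

```python
def pf_qlimit(upstream_bps, packet_bytes=1460, target_latency_s=0.1):
    """Estimate a pf qlimit: packets/second the uplink can drain,
    scaled by the worst-case latency you are willing to tolerate."""
    pkts_per_sec = (upstream_bps / 8.0) / packet_bytes
    return max(1, round(pkts_per_sec * target_latency_s))

print(pf_qlimit(1_000_000))  # → 9
```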

So how does it work?

Average pings to slashdot.org in milliseconds

Idle connection:                 23.8   ( 0% packet loss)
Maxed upstream use:              95.1   ( 0% packet loss)
Maxed upstream without altq:   2083.4   (10% packet loss)

Quite the improvement!

Thanks to the information on https://calomel.org/pf_hfsc.html for helpful tips.

Orchestrating Your Storage: libStorageMgmt

NOTE: Updated 4/2/2015 to reflect new project links and updated command line syntax.


This paper discusses some of the advanced features that can be used in modern storage subsystems to improve IT work flows. Being able to manage storage, whether it be direct attached, storage area network (SAN), or networked file system, is vital. The ability to manage different vendor solutions consistently using the same tools opens a new range of storage related solutions. LibStorageMgmt meets this need.


Many of today’s storage subsystems have a range of features. Some examples include: create, delete, re-size, copy, and make space-efficient copies and mirrors for block storage. Networked file systems can offload copies of files or even entire file systems quickly while using little to no additional storage, or keep numerous read-only point-in-time copies of file system state. This allows users to quickly provision new virtual machines, take instantaneous copies of databases, and back up other files. For example, a user could quiesce a database, call to the array to make a point-in-time copy of the data, and then resume database operations within seconds. Then the user could take as much time as necessary to replicate the copy to a remote location or removable media. There are many other valuable features that are available through the array management interface. In fact, in many cases, it’s necessary to use this out-of-band management interface to enable the use of features that are available in-band, across the data interface.


To use these advanced features, users must install proprietary tools and libraries for each array vendor. This allows users to fully exploit their hardware, but at the cost of learning new command line and graphical user interfaces and programming to new application programming interfaces (APIs) for each vendor. Open-source solutions frequently cannot use proprietary libraries to manage storage because of incompatible licensing. In other cases, the open-source developer cannot redistribute the vendor libraries, so end users must manually install all of the required pieces themselves. The Storage Network Industry Association (SNIA) and the associated Storage Management Initiative Specification (SMI-S) have an ongoing effort to address this need with a well-defined and established storage standard. The standard is quite large, however, preventing administrators and developers from leveraging it easily, and with the scope and complexity of such a large standard it is difficult for vendors to implement it without variations in behavior. The SMI-S members’ focus is on being providers of the API rather than consumers of it, so the emphasis is from the array provider perspective. The SMI-S standard must define an API for each new feature, so the specification always trails vendor-defined APIs.

The LibStorageMgmt solution

The libStorageMgmt project’s goal is to provide an open-source, vendor-agnostic library and command line tool that give administrators and developers the ability to leverage storage subsystem features in a consistent and unified manner. When a developer chooses to use the library, their users benefit by being able to use any of the supported arrays, as well as future arrays as they are added. The library is licensed under the LGPL, which allows use of the library in open-source and commercial applications. The command-line interface (lsmcli) has been designed with scriptability in mind, with configurable output to ease parsing. The library API has language bindings for C and Python. The library architecture uses plug-ins for easy integration with different arrays. The plug-ins execute in their own address space, allowing a plug-in developer to choose whatever license is most appropriate for their specific requirements. The separate address space also provides fault isolation in the event of a plug-in crash, which is very helpful if the plug-in is provided in binary form only.

LibStorageMgmt currently has plug-in support for:

  • NetApp filer (ontap)
  • Linux LIO (targetd)
  • Nexentastor (nstor)
  • SMI-S (smispy) Note: feature support varies by provider
  • Array simulator (sim) Allows testing of client code/scripts without requiring an array

Support for additional arrays is in development and will be released as they become available.

Example: Live database backup

An administrator has a MySQL database that they would like to do a live “hot” backup to minimize disruption to end users. They also use NetApp filers for their storage, and would like to leverage the hardware features it provides for point-in-time space efficient copies. The database is located on an iSCSI logical disk provided by the filer. (These are referred to as volumes in libStorageMgmt.)

The overall flow of operations:

  • Craft a uniform resource identifier for the array (URI) for use with libStorageMgmt
  • Identify the appropriate disk and obtain its libStorageMgmt ID
  • Quiesce the database
  • Use libStorageMgmt to issue a command to the array to replicate the disk
  • Release the database to continue
  • Use libStorageMgmt to grant access to the newly created disk for an initiator so that it can be mounted and backed up

Crafting the URI

As the admin is using NetApp, they need to select the ontap plug-in by crafting a URI. The URI looks like “ontap+ssl://root@filer_host_name_or_ip/”. The beginning of the URI specifies the plug-in, with an optional indicator that the user would like to use SSL for communication. The user “root” is used for authentication, and the filer can be addressed by hostname or IP address. This example will be using the command line interface. We can either specify the URI on the command line with ‘-u’ or set the environment variable LSMCLI_URI to avoid typing it for every command. The password can be prompted for with “-P”, or supplied in the environment variable LSMCLI_PASSWORD.

Identify the disk to replicate

The administrator queries the array to identify the volume that the database is located on. To correctly identify the disk, the admin first looks at where the file system is mounted to find the UUID of the file system. Then they look in /dev/disk/by-id to identify the specific disk.

# lsblk -f | grep cd15fc03-749e-4d5b-9960-b3936ff25a62
sdb ext4 cd15fc03-749e-4d5b-9960-b3936ff25a62 /mnt/db

$ ls -gG /dev/disk/by-id/ | grep sdb
lrwxrwxrwx. 1 9 Apr 30 12:24 scsi-360a98000696457714a346c4f5851304f -> ../../sdb
lrwxrwxrwx. 1 9 Apr 30 12:24 wwn-0x60a98000696457714a346c4f5851304f -> ../../sdb

We can now use the SCSI disk id to identify the disk on the array.

$ lsmcli list --type volumes -t" " | grep 60a98000696457714a346c4f5851304f
idWqJ4lOXQ0O /vol/lsm_lun_container_lsm_test_aggr/tony_vol 60a98000696457714a346c4f5851304f 512 102400 OK 52428800 987654-32-0 e284bcf0-68e5-11e1-ad9b-000c29659817

This command displays all the available volumes on the array, outputting a number of fields for each volume. The fields are separated by a space (using -t" ") and are defined as: ID, name, vpd83, block size, #blocks, status, size in bytes, system ID and pool ID.

Definitions of each:

  • ID – Array unique identifier for the Volume (virtual disk)
  • Name – Human readable name
  • vpd83 – SCSI Inquiry data for page 0x83
  • block size – Number of bytes in each disk block (512 is common)
  • #blocks – Number of blocks on disk
  • status – Current status of disk
  • size bytes – Current size of disk in bytes
  • system ID – Unique identifier for this array
  • pool ID – Unique storage pool that virtual disk resides on

So, the array ID for the volume we are interested in is idWqJ4lOXQ0O.
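Lookups like this are easy to script. A sketch that pulls the volume ID out of the space-separated listing above (the function name is mine):

```python
def find_volume_id(lsmcli_output, vpd83):
    """Return the array volume ID whose vpd83 field matches, given the
    output of `lsmcli list --type volumes -t" "` (fields: ID, name,
    vpd83, block size, #blocks, status, size, system ID, pool ID)."""
    for line in lsmcli_output.splitlines():
        fields = line.split(' ')
        if len(fields) >= 3 and fields[2] == vpd83:
            return fields[0]
    return None
```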


Quiesce the database

Before issuing the replicate command, quiesce the database. For MySQL this can be done by establishing a connection, running “FLUSH TABLES WITH READ LOCK”, and leaving the connection open.


Replicate the disk

To replicate the disk the user can issue the command (just outputting result ID for brevity):

$ lsmcli volume-replicate --vol idWqJ4lOXQ0O --rep-type CLONE --name "db_copy" -t" " | awk '{print $1;}'


This command creates a clone (space-efficient copy) of a disk. The “--vol” argument specifies which volume ID to replicate, “--rep-type” is the type of replication to perform, and “--name” is the human-readable name of the copy. For more information about the available options, type “lsmcli --help” or see “man lsmcli”. The command line will return the details of the newly created disk; the output is identical to the information returned when listing volumes, as shown above. In this example we just grabbed the volume ID, as that is all we need to grant access to it in the following steps.


Release the database

Once this is done you can call “UNLOCK TABLES” or close the connection to the database.


Grant access to newly created disk

To access the newly created disk for backup we need to grant access to it for an initiator. There are two different ways to grant access to a volume for an initiator. Some arrays support groups of initiators, which are referred to as access groups. For other arrays you specify individual mappings from initiator to volume. To determine which mechanism the array supports, we take a look at the capabilities listed for the array.


To find out what capabilities an array has, we need to find the system ID:

$ lsmcli list --type systems
ID          | Name        | Status | Info
987654-32-0 | netappdevel | OK

Then issue the command to query the capabilities by passing the system id:

$ lsmcli --capabilities --sys 987654-32-0 | grep ACCESS_GROUP



The Ontap plug-in supports access groups. In this example, we know the initiator we want to use has iSCSI IQN iqn.1994-05.com.domain:01.89bd03. We will look up the access group that has the iSCSI IQN of interest in it.


List the access groups, looking for the IQN of interest to back up to.


$ lsmcli list --type ACCESS_GROUPS

ID                               | Name    | Initiator IDs                    | System ID
e11c718b99e26b1ca8b45f2df455c70b | fedora  | iqn.1994-05.com.domain:01.5d8644 | 987654-32-0
e11c718b99e26b1ca8b45f2df455c70b | fedora  | iqn.1994-05.com.domain:01.b7885f | 987654-32-0
0a9a917c8cf4183f4646534f5597eb02 | Tony_AG | iqn.1994-05.com.domain:01.89bd01 | 987654-32-0
0a9a917c8cf4183f4646534f5597eb02 | Tony_AG | iqn.1994-05.com.domain:01.89bd03 | 987654-32-0


The one we are interested in has ID 0a9a917c8cf4183f4646534f5597eb02. So at this point we can grant access for the new volume by issuing:


$ lsmcli volume-mask --ag 0a9a917c8cf4183f4646534f5597eb02 --volume idWqJ4qtb1f1


If the IQN of interest is not available it can be added to an existing access group or added to a new access group. An example of adding to an existing access group:


$ lsmcli access-group-add --ag 0a9a917c8cf4183f4646534f5597eb02 --init iqn.1994-05.com.domain:01.89bd04


To see what volumes are visible and accessible to an initiator we can issue:


$ lsmcli access-group-volumes --ag 0a9a917c8cf4183f4646534f5597eb02 -t" " -H
idWqJ4lOXQ0O /vol/lsm_lun_container_lsm_test_aggr/tony_vol 60a98000696457714a346c4f5851304f 512 102400 OK 50.00 MiB 987654-32-0 e284bcf0-68e5-11e1-ad9b-000c29659817
idWqJ4qtb1f1 /vol/lsm_lun_container_lsm_test_aggr/db_copy 60a98000696457714a34717462316631 512 102400 OK 50.00 MiB 987654-32-0 e284bcf0-68e5-11e1-ad9b-000c29659817


At this point you need to re-scan for targets on the host. Please check documentation appropriate for your distribution. Once the disk is visible to the host it can then be mounted and then backed up as usual.


This sequence of steps would be the same regardless of vendor; only the URI would be different. Other operations that are currently available for volumes include: delete, re-size, replicate a range of logical blocks, access group creation and modification, and a number of ways to interrogate relationships between initiators and volumes. This, coupled with a stable API, gives developers and administrators a consistent way to leverage these valuable features.


Having a consistent and reliable way to manage storage allows for the creation of new applications that can benefit from such features. Quickly provisioning a new virtual machine by replicating a disk template with very little additional disk space is one such example. Having an open source project that can be improved, developed, and molded by a community of users will ensure the best possible solution. LibStorageMgmt is looking for contributors in all areas (e.g. users, developers, reviewers, array documentation, testing).



Project: https://github.com/libstorage/libstoragemgmt/

Project documentation: http://libstorage.github.io/libstoragemgmt-doc/


Mailing lists:



IRC at #libStorageMgmt http://freenode.net

There is supposed to be hot water in the water heater, right?

A local contractor installed a new furnace and water heater for me on 1/21/2013. The install appeared to go well. The furnace is keeping our house warm and the water heater runs without making crazy noise and it is producing hot water. All is perfect in the world, well not quite…

While checking out the water heater (AO Smith GDHE-50) I noticed that the lower side connector was quite cold, and the brass drain was very cold too. My three other water heaters never exhibited anything like this when they had hot water in them. The valve and the side connectors were quite warm when the unit was at standby.

To quantify how much cold water was in the heater I did the following experiment. Immediately after the unit completed a heating cycle (120F, 8F differential) I turned the water heater off. I then closed the cold water valve to the water heater and opened a hot water faucet to allow air into the system. Then I systematically drained a gallon of water at a time from the water heater drain and took its temperature with a digital thermometer, repeating until I hit water that was 120F. My best guess going in was that there were at least 16 gallons of fairly cold water in the heater, as each vertical inch holds just under a gallon of water and the side connector is 16″ off the floor.

The water heater had 11 gallons of water below 60F. The results of this experiment indicate to me that something is wrong. With past water heaters, I was able to get very hot water instantly out of the drain.

OK, so what’s the problem as long as we have hot water coming out the top? Bacterial growth in the tank. See http://www.treehugger.com/green-food/is-it-safe-to-turn-down-your-water-heater-temperature.html

This looks like the ultimate petri dish.

Fool me once, shame on you, fool me twice …

American Water Heater Company, the maker of the water heater I installed, issued me a return authorization number for the water heater that would not run when installed per the instructions. I installed the new one (1/5) and this one works better (it will start), but still not great. It makes noise when starting, and the flame is quite yellow and poorly shaped. I have posted videos of the start-up and flame for technical support to look at.

This is the replacement unit starting : http://youtu.be/_Bb3IgamFdo

I contacted technical support again via email and sent them video footage of the poor flame. After a few days technical support contacted me again and FedEx’d me a smaller orifice to try, a #30 (the heater comes standard with a #29). I installed the smaller orifice and the unit ran very poorly: http://youtu.be/ml-uqXvCrp8

At this point I gave up; I contacted a local contractor and scheduled an install of a new furnace and water heater. I was done trying to make this water heater work.

The replacement water heater was returned to Lowe’s for a refund. Lowe’s was very helpful throughout this very frustrating experience.

I didn’t do anything wrong

Ben, a plumber from K&S, came by and spent 3 hours going over my install and inspecting the water heater. He found nothing wrong with my install. Whew! After talking to technical support, we got the unit running by removing the intake and restricting the exhaust. We had hot water, but the install wasn’t going to pass code as it deviated from the installation instructions.

Water heaters go virtually unnoticed, that is until they don’t work!