Friday, November 4, 2011

Using Google's CoreDumper library

The Google team has published a library called CoreDumper for generating process core dumps programmatically. This can be useful for post-mortem analysis in environments where core files are not (or cannot be) generated or saved by the system (ulimit restrictions, etc.).

At my work we've incorporated the CoreDumper library into our Production code, and are using it in conjunction with our exception handling to generate core files when conditions warrant.

One nice feature of this library is the ability to generate a compressed core file, thus significantly reducing disk space consumed by the core.

An example of using this feature follows.

#include <cerrno>
#include <cstring>
#include <iostream>
#include <limits>
#include <string>

#include "google/coredumper.h"

void generateCore( const std::string &p_filenameFullPath )
{
    // Be sure to use a mutex for concurrency (not shown).
    // Reasons are discussed here:

    // The following will create a core file using the specified filename,
    // compressed with the gzip compression algorithm.
    // Here we're not enforcing limits on the core file size, so if using C++,
    // pass the maximum value of size_t. In C, replace with SIZE_MAX.
    const int iResult =
        WriteCompressedCoreDump( p_filenameFullPath.c_str(),
                                 std::numeric_limits< size_t >::max(),
                                 COREDUMPER_GZIP_COMPRESSED,
                                 NULL );

    if( 0 == iResult )
    {
        // replace this call to std::cout with a call to your logging system.
        std::cout << "generated core: "
                  << p_filenameFullPath
                  << std::endl;
    }
    else
    {
        const unsigned int errLen = 128;
        char error[ errLen ] = { '\0' };
        strerror_r( errno, error, errLen );

        // replace this call to std::cerr with a call to your logging system.
        std::cerr << "failed to generate core: "
                  << error
                  << std::endl;
    }
}

Friday, July 23, 2010

VirtualBox WinXP iTunes High CPU Utilization

I recently deployed an instance of VirtualBox 3.2 on my home server (Ubuntu 8.04) with WinXP as a guest. My intent was to run an instance of iTunes within that VirtualBox instance to serve the mp3's residing on my fileserver (with the added benefit of Home Sharing & iTunesRemote/AirportExpress).

Everything is running great; however, I noticed that VirtualBox was using close to 100% CPU even when the XP guest was idle but running iTunes. Strangely enough, VirtualBox's CPU utilization would drop to reasonable levels (< 10%) when I quit iTunes but continued to run the XP guest.

To solve this, I first updated the XP guest to not use ACPI, as described here in the last post by kyboren. After making the modifications I shut down the XP guest and restarted VirtualBox. After the XP guest finished loading, the VirtualBox CPU load was reduced by roughly half, though it was still consuming nearly 50% CPU while the XP guest sat idle but running iTunes. Better, but still not ideal.

Next, I disabled Virtual Memory within the XP guest, and restarted only the XP guest. This time, VirtualBox is only using 15% of CPU while the XP guest sits idle but running iTunes. Much better; to me, 15% CPU utilization is an acceptable amount, and is far lower than the 95+% experienced earlier.

I suspect that it was the XP guest restart (and not the disabling of virtual memory) that brought VirtualBox's CPU use down to 15%. Upon starting VirtualBox fresh, I see that it again hits 50% CPU utilization with the XP guest idle but running iTunes. However, restarting the XP guest (via Start --> Turn Off Computer --> Restart) knocks the VirtualBox CPU utilization down to an acceptable 15%, again with the XP guest idle but running iTunes.

So for now my 'fix' is to start VirtualBox, let my XP guest fully boot, then restart my XP guest. Kludgy, but it meets my needs.

Tuesday, February 23, 2010

Convert long to byte array in C++ or C

This post explains how to convert an unsigned long int into a byte array in C or C++. It assumes that the unsigned long int datatype uses 4 bytes of internal storage; however, the examples can easily be adapted to other built-in datatypes (unsigned short int, unsigned long long int, etc.).

The code to convert a 4-byte unsigned long int into a 4-byte array:

unsigned long int longInt = 1234567890;
unsigned char byteArray[4];

// convert from an unsigned long int to a 4-byte array
byteArray[0] = (unsigned char)((longInt >> 24) & 0xFF);
byteArray[1] = (unsigned char)((longInt >> 16) & 0xFF);
byteArray[2] = (unsigned char)((longInt >> 8) & 0xFF);
byteArray[3] = (unsigned char)(longInt & 0xFF);

So what's happening in the above code? We're using a combination of bit shifting and bit masking to chop the unsigned long int into 4 pieces. Each of these pieces is a value small enough to be stored in the unsigned char array (remember, an unsigned char is 1 byte, capable of holding values 0-255).

The bit shifting "drops" the right-most bytes, and the bit masking zeroes out everything except the "new" right-most byte, leaving a value between 0 and 255.

Note that in the last line of code we didn't need to do any bit shifting; here we're converting the right-most byte of the unsigned long, and therefore don't want to throw it away.

An alternate solution would be to first apply the mask, and then shift:

byteArray[0] = (unsigned char)((longInt & 0xFF000000) >> 24);
byteArray[1] = (unsigned char)((longInt & 0x00FF0000) >> 16);
byteArray[2] = (unsigned char)((longInt & 0x0000FF00) >> 8);
byteArray[3] = (unsigned char)(longInt & 0x000000FF);

Next, let's convert the 4-byte array back into an unsigned long int:

unsigned long int anotherLongInt;

// cast each byte up to unsigned long int before shifting, so that
// byteArray[0] << 24 cannot overflow a (signed) int after integer promotion
anotherLongInt = ( ((unsigned long int)byteArray[0] << 24)
                 + ((unsigned long int)byteArray[1] << 16)
                 + ((unsigned long int)byteArray[2] << 8)
                 + ((unsigned long int)byteArray[3] ) );

Here we're taking each piece of the byte array, shifting its bits to the left, and adding the results. In essence, each value between 0 and 255 is padded on the right with a number of zero bits appropriate to its position, restoring the significance each byte had in the original value before the pieces are summed.

And an alternate solution to accomplish the same:

anotherLongInt = ((unsigned int) byteArray[0]) << 24;
anotherLongInt |= ((unsigned int) byteArray[1]) << 16;
anotherLongInt |= ((unsigned int) byteArray[2]) << 8;
anotherLongInt |= ((unsigned int) byteArray[3]);

And that's it!

Note that additional care is required when these operations appear in portable code. In that case you won't want to make assumptions about the size of the data types; instead, use fixed-width types such as uint32_t from <stdint.h> (or <cstdint> in C++), or derive the byte count from sizeof rather than hard-coding 4. Otherwise, the above should be fine if you have a homogeneous and controlled environment in which your code will run.