Wednesday, November 9, 2011

Hitachi Deskstar + RocketRAID 232x

Spent the morning tracking down a stupid RAID problem, so hopefully this helps someone...

I use RocketRAID 2320 cards as HBAs in a few Ubuntu servers. I use software RAID (mdadm), so no need for the RAID features.

The trick to using the rr232x as an HBA is to configure each drive as a separate single-disk JBOD "array" in the RocketRAID BIOS. In the servers I've set up before, I used eight Western Digital (Green) 1.5 TB and 2.0 TB drives, did the JBOD thing, and they appeared to the OS without issue.

My new server has twelve 3.0 TB Hitachi Deskstars (H3IK3000), and when I installed the rr232x driver, I saw no drives. The RAID card could see the drives, and the OS could see the RAID card, but the OS couldn't see the drives.

When the drives that had been attached to the RR2320 were plugged directly into the motherboard, I noticed that they didn't spin up. The OS saw them and tried to talk, but timed out and gave up. Identical drives not exposed to the RR2320 worked fine.

There turned out to be two problems:

  1. The staggered spin-up feature of the RR2320. It apparently isn't compatible with these Deskstars: when enabled, it flips a bit that tells the drives not to spin up at all. The solution is to enable-then-disable the staggered spin-up setting (Settings menu in the BIOS utility).
  2. The RR2320 just doesn't seem to like H3IK3000s. Solution: run the drives in "Legacy" mode rather than as JBOD arrays, a tidbit I picked up from here. To run a drive in Legacy mode, put an empty partition on it (plug it into the mobo, then "mklabel" and "mkpart" in parted); the RR2320 will then ignore the drive and pass it through to the OS.

Recap:
  1. Enable-then-disable staggered spin-up
  2. Plug each drive into the motherboard and fire up parted
    1. mklabel gpt
    2. mkpart (enter: null, xfs, 1, 3tb)
    3. set 1 raid on
  3. Plug drives back into the RR2320 -- should see them on boot
  4. mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sdb1 /dev/sdc1 ... /dev/sdm1
  5. mkfs.xfs /dev/md0
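
If you'd rather not walk through parted by hand for all twelve drives, a loop along these lines covers the parted steps in item 2 (just a sketch -- it assumes the drives show up as /dev/sdb through /dev/sdm, so double-check your device names first):

for d in /dev/sd{b..m}; do
    parted -s "$d" mklabel gpt
    parted -s "$d" mkpart primary 1MiB 100%
    parted -s "$d" set 1 raid on
done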

Friday, September 2, 2011

Evil C++ #4: An example of buffer overflows in C++ STL vectors

#include<iostream>
#include<vector>
#include<stdlib.h>
#include<string>
/** People sometimes get the false impression that because STL and iterators
* are fairly new additions to C++, they do sensible, commonplace things like
* bounds checking.
*
* This is sadly not the case, since preserving backwards-compatibility with C
* means preserving the loaded-gun-pointed-at-your-foot aspects, too.
*
* A semi-common related problem is to have doubles take on impossibly tiny
* values somewhere in your code. Tiny doubles are usually the result of
* reinterpreting the ghost of an int as a double -- see end.
*/
// YEAH!
class Awesome
{
public:
  int a;
  double b;
  std::string c;
  Awesome() : a(5), b(42.0), c("woot") { }
};

int main(int argc, char* argv[])
{
  // how many doubles to dereference?
  int n;
  if(argc > 1)
    n = atoi(argv[1]);
  else
    n = 10;

  // soil up the memory space
  std::vector<Awesome*> foo;
  for(unsigned i = 0; i < 10 * n; i++)
    foo.push_back(new Awesome());
  for(unsigned i = 0; i < 10 * n; i++)
    delete foo[i];

  // think iterators are smart? think again.
  std::vector<double> b(1);
  std::vector<double>::iterator it;

  // walk right off the end. a segfault is the best thing that could happen,
  // since at least we'd know something went wrong.
  for(it = b.begin(); it < b.end() + n; it++)
    std::cout << *it << " ";
  std::cout << "\n";

  // ints interpreted as doubles = tiny number
  int fooInt = 42;
  double *fooDouble = reinterpret_cast<double*>(&fooInt);
  std::cout << "fooInt = " << fooInt << std::endl
            << "fooDouble = " << *fooDouble << std::endl;
}
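
(Not in the original post: if you do want the library to catch this sort of thing at runtime, libstdc++ has a debug mode that adds bounds and iterator checking. Compiling the snippet above with it should make the bad iterator walk die loudly instead of printing garbage -- the file name below is made up.)

g++ -D_GLIBCXX_DEBUG -g -o stl_overflow stl_overflow.cpp
./stl_overflow 10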

Friday, August 26, 2011

Really Super Quick Start Guide to Setting Up SLURM

SLURM is the awesomely-named Simple Linux Utility for Resource Management written by the good people at LLNL. It's basically a smart task queuing system for clusters. My cluster has always run Sun Grid Engine, but it looks like SGE is more or less dead in the post-Oracle Sun software apocalypse. In light of this and since SGE recently looked at me the wrong way, I'm hoping to ditch it for SLURM. I like pop culture references and software that works.

The "Super Quick Start Guide" for LLNL SLURM has a lot of words, at least one of which is "make." If you're lazy like me, just do this:

0. Be using Ubuntu
1. Install: # apt-get install slurm-llnl
2. Create key for MUNGE authentication: /usr/sbin/create-munge-key
3a. Make config file: https://computing.llnl.gov/linux/slurm/configurator.html
3b. Put config file in: /etc/slurm-llnl/slurm.conf
4. Start master: # slurmctld
5. Start node: # slurmd
6. Test that fool: $ srun -N1 /bin/hostname

Bam.

(In my config file, I specified "localhost" as the master and the node. Probably a good place to start.)
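
For reference, a bare-bones slurm.conf for this localhost-only setup looks something like the following (a sketch -- the state/spool paths are guesses at the Ubuntu slurm-llnl defaults, so prefer whatever the configurator generates):

# /etc/slurm-llnl/slurm.conf
ControlMachine=localhost
AuthType=auth/munge
SlurmUser=slurm
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
NodeName=localhost CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=localhost Default=YES MaxTime=INFINITE State=UP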

Friday, June 17, 2011

Evil C++ #3: Weird automatic overloading of constructors

#include<iostream>
/** C++ does some strange stuff with constructors. Having default values makes
* sense, but they've added some weird, unintuitive shorthand that's just an
* accident waiting to happen:
*
* MyClass c = MyClass(5) <--> MyClass c = 5
*
* where in both cases 5 is taken as the first argument. In both cases, it will
* do whatever casting is needed and allowed.
*
* What weirds me out is that this syntax is specific to single-argument
* constructors. You could pull this off in Python (MyClass c = 1, 2, 3)...
*/
class Foo
{
public:
  Foo(int a=42, int b=21) { fA=a; fB=b; }
  int fA, fB;
};

int main(int argc, char* argv[])
{
  // 1. use the defaults
  Foo foo1;
  std::cout << "foo1: " << foo1.fA << " " << foo1.fB << "\n";

  // 2. call the constructor explicitly
  Foo foo2 = Foo(1,2);
  std::cout << "foo2: " << foo2.fA << " " << foo2.fB << "\n";

  // 3. call the constructor with 1 arg
  Foo foo3 = Foo(5);
  std::cout << "foo3: " << foo3.fA << " " << foo3.fB << "\n";

  // 4. c++ does the same thing as (3)
  Foo foo4 = 5;
  std::cout << "foo4: " << foo4.fA << " " << foo4.fB << "\n";

  // 5. c++ generates a copy constructor for us...
  Foo foo5 = foo4;
  std::cout << "foo5: " << foo5.fA << " " << foo5.fB << "\n";

  // ... which copies ...
  std::cout << "foo5 @ " << &foo5 << ", foo4 @ " << &foo4 << "\n";

  // which is different from pointer assignment:
  Foo* fooPtr1 = new Foo(3);
  Foo* fooPtr2 = fooPtr1;
  std::cout << "fooPtr1: " << fooPtr1->fA << " " << fooPtr1->fB << "\n";
  std::cout << "fooPtr2: " << fooPtr2->fA << " " << fooPtr2->fB << "\n";
  std::cout << "fooPtr1 @ " << fooPtr1 << ", fooPtr2 @ " << fooPtr2 << "\n";

  delete fooPtr1;
  return 0;
}
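
(Not in the original post, but for completeness: if you don't want the one-argument shorthand, marking the constructor "explicit" turns case 4 into a compile error. A minimal sketch:)

#include<iostream>

class Bar
{
public:
  explicit Bar(int a=42) { fA = a; }
  int fA;
};

int main()
{
  Bar bar1(5);      // fine: explicit construction
  // Bar bar2 = 5;  // error: no implicit int -> Bar conversion with "explicit"
  std::cout << "bar1: " << bar1.fA << "\n";
  return 0;
}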

Saturday, June 4, 2011

Evil C++ #2: Using GCC's -ftrapv flag to debug integer overflows

In C++, overflowing a signed integer type won't throw an exception -- it's undefined behavior, and weird numbers can quietly propagate through your program. GCC's -ftrapv flag has your back.

#include<iostream>
#include<signal.h>
#include<limits.h>
/** g++'s -ftrapv flag provides some protection against integer overflows. It
* is a little awkward to use, though. All it will do is call abort() on an
* overflow (raising SIGABRT) -- if you want to do anything smarter than just
* dying, you must provide a signal handler to deal with it.
*
* (You must compile with -ftrapv for this to work)
*/
// a simple signal handler. it must take the signal as an argument, per
// signal.h, whether we use it or not.
void handler(int /*signal*/)
{
  std::cout << "Overflow'd!" << std::endl;
}

int main()
{
  // when we get a SIGABRT, call handler
  signal(SIGABRT, &handler);

  // LONG_MAX is the largest long integer on this system, from limits.h
  long a = LONG_MAX;
  int b = 1;

  // with -ftrapv, this overflowing addition calls abort(), which raises the
  // SIGABRT we registered for above (the program still dies afterwards)
  long c = a + b;

  return 0;
}
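
To see the trap fire, compile with the flag and run it (assuming the snippet is saved as ftrapv.cpp); it should print "Overflow'd!" and then abort:

g++ -ftrapv -o ftrapv ftrapv.cpp
./ftrapv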

Thursday, June 2, 2011

Evil C++ #1: Brackets and "at" for accessing STL vector elements

This is the first in a series of code snippets that demonstrate C/C++ pitfalls.

(For a thorough explanation of the many ways C++ is out to get you, see Yossi Kreinin's excellent C++ FQA).

#include<iostream>
#include<vector>
/** There are two ways to access the ith element of an STL vector, the usual
* v[i] syntax or using v.at(i).
*
* The former doesn't check array boundaries, so something like v[v.size()+1]
* works and gives you whatever happens to be sitting in that memory location.
*
* v.at() has the same purpose, but actually throws an exception
* (std::out_of_range) when you're outside array boundaries. This may be
* helpful for debugging buffer overflows, a common source of C++ headaches.
*
* Both v[i] and v.at(i) are of constant complexity.
*/
int main()
{
  std::vector<int> a(5);

  // Accessing the (a.size()+1)th element with brackets returns junk
  std::cout << a[6] << std::endl;

  // Accessing the (a.size()+1)th element with at() throws an exception
  // (uncaught, it terminates the program right here)
  std::cout << a.at(6) << std::endl;

  return 0;
}
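
If you'd rather recover than crash, the exception is catchable; a quick sketch along the same lines as the example above:

#include<iostream>
#include<vector>
#include<stdexcept>

int main()
{
  std::vector<int> a(5);
  try
  {
    std::cout << a.at(6) << std::endl;
  }
  catch(const std::out_of_range& e)
  {
    // at() tells us exactly what went wrong instead of handing back junk
    std::cout << "caught: " << e.what() << std::endl;
  }
  return 0;
}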

Ignoring GCC warnings on a per-file basis

In most cases, ignoring GCC warnings is a Bad Idea. Treating warnings as errors results in better code.

However, sometimes we are forced to deal with other people's code. For instance, a project I work on relies on JsonCpp. We include it in our source tree so that users don't have to go get the JsonCpp source themselves in order to compile the thing.

Such dependencies can be a problem if you want really strict compiler options, since libraries will often be slightly incompatible with your particular standard (ANSI, C++0x, ...) or not be up to your lofty expectations. In my case, JsonCpp gives me a couple of warnings with GCC options -W, -Wall, -ansi, -pedantic. This means I can't compile my code with -Werror, which makes me sad. I certainly don't want to modify these external libraries.

Fortunately, recent GCC versions have added ways of selectively disabling warnings. If your problems are confined to headers, you can replace -I/path/to/headers with -isystem/path/to/headers and GCC will treat them as system headers, ignoring any warnings they generate.
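
For example (paths made up for illustration), something like this lets you keep -Werror:

g++ -W -Wall -ansi -pedantic -Werror -isystem external/jsoncpp/include -c mycode.cpp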

Another, less desirable, solution is to use pragmas. Headers can be marked as system headers by putting this at the top:

#pragma GCC system_header


If the problems lie in the source files themselves, neither of these tricks works. We can, however, add something like this to the top of the files causing the warnings:

#pragma GCC diagnostic ignored "-Wunused-parameter"
#pragma GCC diagnostic ignored "-Woverflow"


to disable specific warnings generated by that file.

To figure out the names of the warnings causing the problems, recompile with the -fdiagnostics-show-option option on the g++ line. This is especially useful in the case of default warnings (i.e. those which aren't optional) like -Woverflow since they are harder to find in the documentation.

This isn't a great solution, since it does require some modification of the libraries. However, you can easily generate a patch from your changes and apply it to any new library versions should you decide later to upgrade them. Hopefully someday GCC will include an "ignore warnings from this file or subdirectory" option, but until then... it works.
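
For example (directory names made up), keep a pristine copy around and do something like:

diff -ruN jsoncpp-pristine/ jsoncpp/ > quiet-jsoncpp-warnings.patch
# later, after dropping in a new JsonCpp release:
patch -p1 -d jsoncpp/ < quiet-jsoncpp-warnings.patch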

Saturday, April 30, 2011

SNO+ Explained

In the spirit of the comic book and/or children's story I wrote to explain the miniCLEAN dark matter experiment (http://deapclean.org/about/), I have attempted to summarize the SNO+ experiment on a single awesome page.

SNO+ is a multi-purpose particle physics experiment studying all things neutrino. Neutrinos are very light elementary particles. They come from the Sun, their antiparticles (antineutrinos) come from nuclear reactors, and these two things (neutrinos and antineutrinos) might in fact be the same thing.

It sounds like we know shamefully little about neutrinos, which is more or less true. Hence SNO+, which is studying all of the above to figure this stuff out.

[Image: SNO+ Explained -- the one-page summary]

Tuesday, March 29, 2011

Ubuntu 10.04 on a SunFire X4500

A while back, I inherited command of a small cluster with a monster of a disk: the Sun Microsystems SunFire X4500 storage server. Two dual-core AMD Opterons, 16 GB of ECC memory, and -- count 'em -- 48 hard disks on six LSI SATA controllers. X4500s sold with drives up to 1 TB; mine has 500 GB drives for a total of 24 TB of raw storage, less the two drives reserved for the OS.

This seemed like a pretty rad place to host my users' home directories and data, but serving ZFS over NFS turned out to be unusably slow due to some issue with Solaris/ZFS/NFS being too picky about synchronous IO. Later SunFire servers were sold with a solid state drive as a separate intent log (slog, i.e. journal) device. Writes committed to it are good enough for ZFS, and flushes to the SSD are virtually instantaneous.

I added a new PCI-X LSI SATA controller (since the X4500 has zero spare SATA ports) and an Intel X-25E SSD as a slog, and saw about an order of magnitude improvement in write performance. This was still worse than any of my ext4 Linux NFS servers, but usable.

On account of this poor performance and the cloud of FUD surrounding Sun/Oracle these days, the time has come for the X4500 to run Linux. And not one of the distributions it shipped with -- that would be boring.

My other NFS servers run Ubuntu 10 Server, so I chose 10.04 LTS as the new OS. Confident that it was possible since a guy on the internet did this once, I set out to Ubuntify the beast.

The X4500 wasn't feeling my external USB CD drive, so I started ILOM remote console redirection from the service processor's web interface dealie. CD and CD image redirection actually worked, and the X4500 is configured by default to boot to the redirected virtual CD drive. Ubuntu installer achieved.



This proceeded exactly as it would on a real computer, even using a monitor and USB keyboard plugged right into the X4500. The partitioner took forever to scan, but found all 48 disks plus my SSD which it thought was a swap partition. Annoyingly, the OS sees the 48 internal disks as if each were on its own controller.

Not wanting to go scorched-earth just yet, I had hoped to leave the two UFS-formatted Solaris root disks and the ZFS slog SSD untouched. So, I popped two new drives for Ubuntu into my external enclosure next to the SSD.

Unfortunately, there is some hardcoded magic in the internal SATA controller that only spins up two devices for the BIOS to see. So you *have* to put the boot partition on one of these. More on that later. Not knowing this, I proceeded with the install.


I set up software RAID-1 (mirroring) in the installer, then started the install.

This was going swimmingly until it failed to install GRUB with a horrible red error screen. This is what forum guy (n6mod) had warned us about.


I finished the install and rebooted (leaving the CD attached).

When I got back to the installer, I chose rescue mode and followed the prompts, then selected the root device (/dev/md0) as the environment for the shell. I installed smartmontools and used smartctl --all /dev/device-name to get the serial numbers of the Solaris disks. They turned out to be in physical slots 0 and 1... go figure.

I shut down, popped those guys out and replaced them with my own two disks (right out of the external enclosure). I rebooted again and returned to the recovery console, where I installed GRUB2 with apt-get install grub2. After telling it to install the boot stuff on my boot partition (now /dev/sdz1; it was /dev/sday1 when I partitioned it), I rebooted once again and it Just Worked. Bam.

Now, I had to handle the disks -- how to RAID these things up!? I cobbled them into seven 6-disk RAID5 arrays chained up into a single RAID0 array (i.e. RAID5+0). With this architecture, recovering from a single disk failure means reading the five other drives in its array, not all of them. I wrote a Python script to do the partitioning and md creation, since 48 is a big number. This hierarchy leaves a few unused drives, which I am ignoring for now but which could stand in if a device fails (if only hot spares could be shared... I guess these are warm spares??).
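
The script itself isn't reproduced here, but the layout boils down to something like this (a sketch -- the member device names are placeholders; /dev/md20 is the stripe device referred to below):

# seven 6-disk RAID5 arrays...
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
# ...(md2 through md7 likewise, from the next groups of six)...
# ...striped together into a single RAID0:
mdadm --create /dev/md20 --level=0 --raid-devices=7 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7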


I had planned to format this with ext4 like my other servers, but it turns out that while ext4 volumes can be up to 1 EB in size, e2fsprogs (including mkfs.ext4) only supports up to 16 TB.

After perusing some flame wars on Phoronix and the Wikipedia comparison of file systems for a bit and learning that all filesystems suck and can't be shrunk, I settled on XFS.

XFS performance is best if blocks are aligned with RAID chunks, which mdadm makes 64k by default. So I formatted /dev/md20 (the giant stripey thing) with:

mkfs.xfs -f -d su=65536,sw=7 /dev/md20

where su = RAID chunk size, sw = number of disks in RAID-0. I then mounted the volume and fired up bonnie++ for some benchmarking goodness. The results were pretty amazing:
Version 1.96        ------Sequential Output------  --Sequential Input-  --Random-
Concurrency 1       -Per Chr-  --Block--  -Rewrite- -Per Chr-  --Block--  --Seeks--
Machine      Size   K/sec %CP  K/sec %CP  K/sec %CP K/sec %CP  K/sec %CP   /sec %CP
vorton     31976M     847  94 387044  84 174342  69  1715  97 534121  99  578.6  63
Latency             24186us    32591us    62407us   10213us    50292us    48030us

Version 1.96        ------Sequential Create------  --------Random Create--------
vorton              -Create--  --Read---  -Delete-- -Create--  --Read---  -Delete--
             files   /sec %CP   /sec %CP   /sec %CP  /sec %CP   /sec %CP   /sec %CP
                16    2098  23  +++++ +++   1741  19  2186  24  +++++ +++   1978  23
Latency             53673us     2197us    56708us    36312us       56us    46485us

1.96,1.96,vorton,1,1301449025,31976M,,847,94,387044,84,174342,69,1715,97,534121,99,578.6,63,16,,,,,2098,23,+++++,+++,1741,19,2186,24,+++++,+++,1978,23,24186us,32591us,62407us,10213us,50292us,48030us,53673us,2197us,56708us,36312us,56us,46485us

Yup: sequential block writes at 387 MB/s and reads at 534 MB/s. Not too bad for a bunch of 7200 RPM drives.

To conclude: it works. Should you be so blessed/afflicted as to have a SunFire X4500 lying around, don't give up hope -- this is still a blazing fast disk server and can be migrated to Ubuntu Server in a matter of hours.

Basically, best. enclosure. ever. (Though this thing is cool too).

UPDATE: The X4500 has four Intel e1000 gigabit network ports. I'm using one for the interwebs, leaving the other three for the cluster. I used ifenslave to bond those three together (mode 6, adaptive load balancing) and used iperf to saturate the bond at 2.56 Gbit/s.

Pretty speedy, and you could probably get closer to the theoretical limit (3 Gbit/s) using bond mode 4 (802.3ad dynamic link aggregation).
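
For the curious, the bond amounts to something like this in /etc/network/interfaces (a sketch -- interface names and addresses are made up, and the option spellings should be checked against your ifenslave version):

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth1 eth2 eth3
    # mode 6 = balance-alb (adaptive load balancing)
    bond-mode balance-alb
    bond-miimon 100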