I recently received a question about SGI’s pandora after someone found my run-pandora.sh script in my hpc-admin-scripts repo. They were looking for a way to test a server with a fair bit of memory in a short amount of time. They’d tried Memtest86 and found it either incredibly slow when running single-threaded or too unstable when running on all cores. When they found my repo, they figured it’d be worth asking about pandora in the hopes it would be appropriate for their needs.
Recently, I’ve been thinking that I should write down some of my views on IT. I don’t believe in a black & white world, but in one that’s full of realities, tradeoffs, and compromises. I’ve worked with people who refuse to (or are unable to) recognize that and spend energy trying to dictate instead of collaborate, typically to their own detriment and the frustration of everyone around them. IT exists to support and enable an organization, and should not be an end unto itself.
So a while back I started looking at alternative VPS hosting providers. I was impressed by the service Linode provided, but started wondering if I’d get better bang for my buck going elsewhere. At the time, I was paying $20 / month for their smallest Xen VPS or $25 / month if I wanted their backup service. My hosting needs were modest, especially since I’d migrated just about everything from dynamic stuff with a DB backend to primarily serving static content. So I could really get away with something leaner. I shied away from the extremely cheap OpenVZ providers, and tried a couple of different KVM VPS providers before I found one that offered a balance of cost, reliability, and performance.
TL;DR Support for NUMA systems in torque/Moab breaks existing means of specifying shared memory jobs and limits scheduling flexibility in heterogeneous compute environments.
My experience is based on SUSE SLES 11 SP1/SP2 with a stock kernel, so YMMV if you’re running a newer mainline kernel without all the backports.
I tested on two Supermicro systems. The first has an LSI HBA card with 22x 2TB enterprise SATA drives (originally purchased to run OpenSolaris/ZFS). The second has an Adaptec hardware RAID controller with 36x 2TB enterprise SATA drives. Some of the data loss and stability issues I experienced may be attributed to later discovering that the “enterprise” drives used in the first system were less RAID-friendly than the manufacturer claimed, eventually leading them to replace ALL of my drives with a different model.
Btrfs was a preview in SLES SP1 and is “supported” in SP2 but with major restrictions if you wanted a supported configuration. Support in SP2 requires that you create btrfs filesystems using Yast and live with the limited options it allows. I’m guessing what you can do via Yast is the subset of features they tested enough to be willing to try and support. I tried using Yast to set up btrfs on one of our systems, but found their constraints too limiting given my use case and the organization I’d settled on in the SP1 days.
Update I should note I only partially sorted out getting this to actually work. Probably requires more tinkering and may be best to just go get an app that does the lock/unlock for you.
This procedure is roughly based on Lock Your Mac When Your iPhone Is Out of Range. I’d seen this in the past, but never got around to figuring out how to set it up. Since I’ve got an iPhone with good battery life, leaving Bluetooth on isn’t as scary as it was on my old Android phone, so I thought I’d give it a shot.
I’ve gone the extra step of figuring out how to retrieve your password from Keychain in order to do the unlock. The sample unlock AppleScript in that post suggests storing your Mac account password in plain text in the script (not so great) and offers that you can save your script as “Run-only” to obfuscate it. I tried that for kicks, and while the script itself is obfuscated, your plain text password is still there if you just cat the file.
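The Keychain lookup part can be sketched with macOS’s built-in security tool. This is just a rough illustration of the idea, not the script from that post; the Keychain item name “screen-unlock” and the environment-variable handoff are my own hypothetical plumbing:

```shell
#!/bin/bash
# Pull the password from the Keychain at runtime instead of hard-coding it.
# Assumes you've created a generic password item named "screen-unlock"
# (Keychain Access -> New Password Item); -w prints only the password.
PASS=$(security find-generic-password -a "$USER" -s "screen-unlock" -w)

# Hand it to AppleScript via the environment rather than embedding it in the
# script text, so nothing sensitive survives in the saved script file.
PASS="$PASS" osascript -e '
  tell application "System Events"
    keystroke (system attribute "PASS")
    keystroke return
  end tell'
```

The first time it runs, the security command will pop up a Keychain prompt; choosing “Always Allow” lets the script fetch the password unattended afterward.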
Yet another time of migration (blog-wise)… TextDrive -> Joyent -> TextDrive 2 My web hosting has been a little up in the air recently (see Slashdot: Joyent Drops Lifetime Account Holders). I paid a few hundred bucks several years back (2005) for “lifetime” web hosting at what I perceived to be a cool up-and-coming company (read: they claimed to be pushing to support a lot of the flashy new web tech that wasn’t well supported by most shared web hosting providers at the time).
So this upcoming week is the big annual supercomputing convention, SC10, down in New Orleans. Since I’m skipping out (anxiously waiting for the arrival of Little Miss Sunshine), I’ve got time to actually try and read through the slew of new product announcements and news coverage. So today I saw this quote on twitter from hpc_guru and just had to share:
“The cost of building the next generation of supercomputers is not the problem. The cost of running the machines is what concerns engineers.”
I guess I’ve been swimming in bad karma or something lately. First, I broke the nice set of Sennheiser HD515 cans I use at work leaving me with the craptacular closed Aiwas I’d used before seeing the light. Then, I broke the belt clip on my cellphone. Fortunately, superglue seems to have solved both problems. Then, I realized my Thinkpad wasn’t charging like it should. Everything indicated it was charging, but the battery meter kept getting lower.
I’ve been suffering from some bad computer mojo as of late. First our group fileserver tanked. Not that I really came into physical contact with it, but I talked to it over the network all the time. Then my brand spankin' new NAS drive (got a Buffalo LinkStation, basically an external hard drive enclosure with ethernet instead of USB/Firewire) had some sort of catastrophic firmware failure while I was using it and I was forced to RMA it.