
The Science of Sleep Presents: A Better Alarm Clock

Although I can't quite find a reference right now, take it from me that it's well documented that we have much more trouble waking up from deep sleep (a.k.a. slow wave sleep) than from light sleep or from REM sleep. You've probably experienced this often enough: sometimes you've had plenty of sleep, but you still feel hopelessly groggy when the alarm wakes you up, and other times you've only had three hours but you feel amazingly alert! And you're like, whuuu?

Well, at least some of the time, the reason is that you woke up between sleep cycles rather than during slow wave sleep. So, in 2002, I had the idea of an alarm clock that would monitor your sleep cycle, and would only wake you between cycles, never during slow wave. Since cycles are regular and last about 90 minutes, if you absolutely needed to be up at a particular time, the alarm would calculate whether there is enough time left for another full cycle, and if there wasn't, it would wake you early.
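That scheduling logic is simple enough to sketch. Here's a minimal toy version in Python, assuming a fixed 90-minute cycle measured from sleep onset (all names here are mine, not from any real product):

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # assumed average length of one sleep cycle

def wake_time(sleep_onset: datetime, deadline: datetime) -> datetime:
    """Latest between-cycle boundary at or before the hard deadline.

    If another full cycle doesn't fit before the deadline, this wakes
    you early, at the end of the last complete cycle.
    """
    full_cycles = (deadline - sleep_onset) // CYCLE  # whole cycles that fit
    return sleep_onset + full_cycles * CYCLE

# e.g. asleep at 23:00 with a 07:00 deadline: five full cycles fit,
# so the alarm fires at 06:30 rather than mid-cycle at 07:00
```

A real device would of course track the cycle from sensor data rather than assume a fixed 90 minutes, but the "round down to a cycle boundary" idea is the same.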

It was, of course, a brilliant idea. But, as they say, you snooze, you lose. By the time I got around to thinking about talking to someone about developing a product, it was 2005, maybe even 2006. A quick Google search turned up Axon Labs, a startup created in 2003 solely to develop just the kind of system I had envisioned.

Missed opportunity? Maybe. But I was more excited than disappointed, because it meant that the dream (I'm just on fire today!) of better waking was closer to reality than I could have imagined. I signed up for their newsletter to be kept up to date.

All this to say, finally, earlier this week I got an email from them, saying that they are going to come out with a limited release by the end of this year! Woot! The email just made my day, and it should make yours too. Head on over to the Axon Labs website to sign up for their updates. Alternatively, do subscribe to this blog (or bookmark it, if you live in that era)—you'll certainly be reading a review from me the moment I can get my hands on one.

(Well, ok: the next day.)

iPhone Jailbreaking

Last month, I jailbroke and carrier-unlocked my first-generation iPhone. I followed this iClarified tutorial. Near the end I got an "error 1600," which I was able to rectify by following the instructions found near the end of this MacRumors post.

No biggy. I can't say I wasn't a little nervous, but each time something went wrong, I just turned my phone off and on and it was back to where I'd started. After a few restarts, I became reassured that I wouldn't end up with a brick on my hands, so long as I was careful following the instructions.

After my eventual success, I became a staunch advocate of jailbreaking and unlocking (the latter currently only available on the first-gen iPhone, though the folks over at iPhone-dev appear to have finally broken through the 3G's defenses). For the coders among you, you get a cool terminal:


But, perhaps more useful, with the unlock, any SIM card from anywhere in the world will work. That's how I was able to use my iPhone with my Movistar prepaid SIM when I went to Spain. Of course, their data charges were abusive, and there's no 2.5G (aka EDGE) in Spain as far as I could gather, which substantially diminished the usefulness of the phone, but at least I got to keep my phone and iPod in a single device.

An additional advantage of jailbreaking, by the way, is the awesome PDANet, a tethering application that allows you, with minimal setup, to use your AT&T (or whatever) internet connection with your laptop—much nicer for typing email!

So, a couple of weeks ago, I'm telling my friend that he should unlock/jailbreak, and he tells me about his friend's experience with the jailbreak: his iPhone had problems and the nice folks at the Genius Bar told him that they couldn't help him because his phone had been jailbroken.

You can imagine me getting worried today as I notice my phone has a problem: I can't get any sound whatsoever from the built-in speaker—only through the headset. Turning it on and off and re-synching did nothing.

Well, I am happy to report that "undoing" the jailbreak is easy as pie: go to iTunes while your phone is connected to your computer, hit "restore," and whenever it asks you whether it should use a backup, say no. This will restore the iPhone to factory settings, getting rid of all your jailbroken apps and so on. The phone itself will actually remain jailbroken, but not in any way that an Apple Genius can easily determine. If you head to the Apple Store, you'll get the same service as all those non-jailbreakers!

Anyway, you might be hoping to hear how my trip to the Apple Store went, but it's not gonna happen this time. I ended up finding the solution to my problem in this Apple Support message board.

Eigenworms

In one of the best papers in computational biology that I have ever seen, Greg Stephens and colleagues have analysed the movements of nematode worms, and found that they can be decomposed into just four fundamental shapes: virtually any shape that the worm can take is a combination of these four shapes.

I'm surprised that no one picked up on it. I suppose the title seems innocuous enough: "Dimensionality and dynamics in the behavior of C. elegans." Had I read just that title, I suppose I too would have overlooked it.

Thankfully I subscribe to Faculty of 1000, the world's largest journal club. This paper was marked "Exceptional," the highest rating available, by Leonard Maler, of the University of Ottawa. I'll reprint the first sentence of his review because it sums up the paper (and what makes it so brilliant) so nicely:

In this intriguing paper, the apparent random wiggling of a worm (C. elegans) was decomposed into a small set of invariant "wiggles", whimsically termed "eigenworms" by the authors.

For those of you who haven't been keeping up with your linear algebra, "eigen" is German and roughly translates to "characteristic." The prefix is most commonly used when referring to the eigenvectors and eigenvalues of a matrix.

Let me break that down a bit further: a vector is a column of numbers, and how many of them there are is called the dimension of the vector; a matrix is a set of numbers arranged in rows and columns.

The authors of the paper began by describing a worm's shape as a vector of 100 angles: each worm is artificially divided into 101 segments and then its shape is automatically described by the 100 angles between adjacent segments. They gathered data on thousands of worms, producing thousands of 100-dimensional vectors that define the recorded shape of a particular worm. With all these vectors, one can ask: are any two angles correlated? For example, when angle #5 is 3 degrees to the left, does that tell us anything about angle #6? Of course it does, but we can make it mathematically precise: we can define a correlation matrix of 100 rows by 100 columns, in which an entry on the diagonal is always 1 (because an angle can always predict its own value with 100% accuracy), and the entry on row i and column j defines the correlation between angles i and j.
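To make that concrete, here's how such a correlation matrix could be computed with NumPy. The data here is random noise standing in for the real worm recordings, and the variable names are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the real data: one 100-angle shape vector per recorded frame
shapes = rng.standard_normal((5000, 100))

# 100x100 matrix; entry (i, j) is the correlation between angles i and j
C = np.corrcoef(shapes, rowvar=False)
# the diagonal is all 1s: every angle predicts itself perfectly
```

With real worm data, neighbouring angles would be strongly correlated (off-diagonal entries near 1), which is exactly the redundancy the eigen-analysis below exploits.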

Now, every square matrix (same number of rows and columns) has at least one eigenvector and a corresponding eigenvalue. An eigenvector is a vector which, when multiplied by the matrix (we won't get into matrix multiplication here), gives back the same vector scaled by a single number—its eigenvalue.
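You can verify that defining property numerically. Here's a quick check on a small symmetric matrix of my own choosing (not the worm data):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # small symmetric example matrix

# eigh is NumPy's eigendecomposition for symmetric matrices;
# it returns eigenvalues in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(M)

v, lam = eigenvectors[:, 0], eigenvalues[0]
# multiplying by M merely rescales the eigenvector by its eigenvalue
assert np.allclose(M @ v, lam * v)
```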

Now, n×n symmetric matrices, such as our correlation matrix mentioned above (let's call it C), can actually be decomposed into a product of three matrices: C = QAQ', where Q is a matrix whose columns are the n eigenvectors; A is a matrix of all 0s except for the n entries on the diagonal, which are the eigenvalues (in the same order as the eigenvectors in Q—usually sorted from largest to smallest); and Q' is the transpose of Q, that is, Q with its rows and columns interchanged.

Now, the magic happens: if some of the eigenvalues are very large compared to the rest, we can set the small eigenvalues to 0 (equivalently, discard their corresponding eigenvectors) and still get something close to the original C! That is, C ~= QBQ', where B is the same as A with the small eigenvalues set to 0.
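Here's that truncation in action on synthetic data I've constructed to have four dominant eigenvalues (mimicking the paper's situation, but not its actual numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
# build a 100x100 symmetric matrix dominated by 4 large eigenvalues
Q0, _ = np.linalg.qr(rng.standard_normal((100, 100)))  # random orthogonal basis
eigs = np.array([50.0, 30.0, 20.0, 10.0] + [0.1] * 96)  # 4 big, 96 tiny
C = Q0 @ np.diag(eigs) @ Q0.T

w, Q = np.linalg.eigh(C)           # eigenvalues w come back in ascending order
B = np.where(w >= w[-4], w, 0.0)   # zero out all but the 4 largest
C_approx = Q @ np.diag(B) @ Q.T    # the "QBQ'" reconstruction

# despite keeping only 4 of 100 eigenvalues, the reconstruction is close
rel_err = np.linalg.norm(C - C_approx) / np.linalg.norm(C)
```

The relative error comes out under a couple of percent: four numbers (plus their eigenvectors) capture nearly everything in the matrix.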

In the case of the Eigenworm paper, the authors found that just four of the 100 eigenvalues accounted for more than 90% of the variability of worm shapes! By extension, it's just four eigenvectors of C that account for this variability: these vectors define four worm shapes: eigenworms! Again: 90% of any worm's shape is defined by a combination of just four shapes. (Imagine the same thing for humans!)

The authors then exploit this property to predict worms' dynamics and movement based on the possible combinations of these shapes. They predict the role of each of the eigenworms in worm navigation: the first two correspond to a wave propagating along a worm's body, and thus contribute to forward motion. The third one corresponds to curvature, and therefore variations in the contribution of this eigenworm are responsible for the worm turning in its trajectory. Finally, the fourth eigenworm defines movement of the worm's head as it forages and navigates.

In a final coup de grâce in an already stellar paper, Stephens et al. test their predictions by defining how movement in the space defined by the eigenworms translates into worms' movement. This test they pass with flying colours, with the worms turning precisely when the third eigenworm would predict, and moving at the speed that the first and second eigenworms would predict.

I suppose one might think, big deal, so they can tell how a worm moves. But this is mathematics describing almost everything about a biological phenomenon, and all in the swift swoop of a single paper. It doesn't get any better than this.

Greg J. Stephens, Bethany Johnson-Kerner, William Bialek, William S. Ryu (2008). Dimensionality and Dynamics in the Behavior of C. elegans. PLoS Computational Biology, 4(4). DOI: 10.1371/journal.pcbi.1000028

Hi-def Music

Time for another great TED talk, this one more about art than science and policy.

Ever since I first tried out the Bose Triport headphones, and later the Shure E5c's, I've been convinced that we would all come to regret our current obsession with low-bitrate, high-compression audio files. 128kbps mp3 is often called "CD quality," which is a blatant lie. 128kbps AAC (.m4a files, as created by iTunes) comes closer but is still fairly heavily compressed. 256-320kbps AAC files match CD quality and should be the default setting in all CD ripping software. (Of course, that would require Apple to cut the number of songs they claim fit in an iPod by half, which is a big no-no.)

But in fact, even CD quality is nowhere near the limit of human perception. The end credits for the videogame Metal Gear Solid 2 feature a jazz piece titled Can't Say Goodbye to Yesterday that is fairly good, but really, nothing special. The recording, however, was in Dolby Digital 5.1 channel audio, and on my surround sound system it was a stunning experience—it places you in the middle of the band, surrounded by the instruments, with the singer right in front of you. Keeping all of our music in CD and CD-like formats is short-sighted. Neil Young recently lamented the dominance of CDs and mp3s in an era in which digital storage and powerful computers are increasingly cheap.

Which takes us to John Walker's TED talk. Think of all the extraordinary music performances of the 50s, 60s, and 70s, before digital recording technology existed. John Walker has analysed those performances from analog recordings and (for piano only, so far) recreated them on a computer-controlled grand piano (built by Yamaha), to record with whatever new and sophisticated recording equipment is available currently. He has decoupled the performance and the recording.

Now, of course, this technology is limited to piano, and likely will be for a long time. It's one thing to determine which piano keys were pressed from a recording, and another entirely to do the same for an entire orchestra, or, even worse, to reconstruct a singer's vocal cords. Playback in those cases will also require some new technologies that are not quite ready for prime time. But, for now, we can enjoy some timeless piano performances with arbitrarily good recordings.

Microsoft Silverlight

I have to say that despite the bad press Silverlight is getting at Wikipedia, I was pretty impressed using it in the NBC Olympics site. Four live feeds at once? Yes please. This is what digital television was supposed to bring us, but never did. More important, fast forward, rewind and skip were stunningly responsive, which is more than I can say for Flash-based video. Finally, over my decent but not world-class DSL connection, video quality was fantastic, even at full-screen.

Yeah, Silverlight uses proprietary software and eschews open standards. Like Facebook's closed platform and data policies, this bothers me. But like Facebook, Silverlight is simply ahead of the competition. Until the alternatives catch up, you can't blame consumers for sticking to the closed (but superior) platforms.

iPhone update quirks

Being an Über-Geek, I of course own an iPhone. This morning was therefore upgrade time for me, to firmware v2.0. Everything went pretty smoothly, save for this one nonsense error message:

Cancelling and retrying made the problem go away (why do computers always defy the most fundamental principle of physics?). You would think that -1.48GB would still evaluate to less than 12.06GB, so the backup should have been carried out anyway.

Design Patterns

I'm reading this amazing book on object-oriented design:

Now, it's not like I taught myself programming and am only now beginning my formal education. I effectively completed a minor in computer science at the University of Melbourne, taking a substantial number of CS subjects every semester for three years. But I feel like a total n00b going through this book. For all the wonderful theory I'd been taught about algorithm complexity, and all the disparate programming languages (Haskell, C, Prolog, Assembly, and others), I don't remember once hearing about object-oriented (OO) programming during my undergrad at UniMelb.

[Ed's Note: a fellow UniMelb alumnus told me it sounded like I was attacking the CS program here. That was not at all my intention: all the CS subjects I took were electives, so I cherry-picked whichever ones I wanted to do based on whether they sounded interesting, not whether they were core parts of the program. For all I know, Melbourne Uni's CS department might have the best OO programming course in the country. What's more, at least I was taught about modular, well-documented code with informative variable names, which is more than I can say for the majority of coders I have encountered. I merely wanted to illustrate that it's possible to be exposed to a wide range of Computer Science and yet not know about OO programming.]

Once I started writing my own code, however, I began using Marshall Cline's C++ FAQ Lite, a wonderful resource for C++ code tips. Cline focuses more on implementation than on good OO design, but every now and then his answer to an FAQ was puzzling without prior knowledge of OO programming. For example:

[6.9] Are virtual functions (dynamic binding) central to OO/C++?


Without virtual functions, C++ wouldn't be object-oriented. [...]

Reading these, I would think to myself, "Hmm, I should look into that," and be on my merry way with my (inflexible, error-prone) program. Eventually, I stumbled upon a question about learning OO design, in which Cline calls Design Patterns "must-read."

The purpose of this post is simply to agree wholeheartedly with Mr. Cline. Design Patterns is essentially a catalog of the most successful designs in OO programming—those that have stood the test of time and yielded the most flexible, adaptable, reusable code in the world. Anybody who thinks that they can come up with something better just off-the-cuff, from their own intuition, is most likely arrogant and delusional. I've only covered two patterns so far (Singleton and Composite, for the curious), and I am awed by their ingenuity but also by their simplicity. These patterns elegantly solve problems that every programmer faces at one point or another. What's more, people who have read the book can more effectively communicate about their designs by explicitly naming their components and how they interrelate.
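To give a taste, here is a minimal sketch of the Composite pattern in Python—my own toy example, not one from the book: leaves and containers share one interface, so client code can treat a single object and a whole tree of objects uniformly.

```python
class Graphic:
    """Common interface shared by leaves and composites."""
    def area(self) -> float:
        raise NotImplementedError

class Square(Graphic):
    """Leaf: a primitive object with no children."""
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

class Group(Graphic):
    """Composite: holds children but presents the same interface."""
    def __init__(self, *children: Graphic):
        self.children = list(children)
    def area(self) -> float:
        # delegate to children, whether they are leaves or nested groups
        return sum(child.area() for child in self.children)

# client code never needs to know if it holds one square or a whole tree
picture = Group(Square(2.0), Group(Square(1.0), Square(3.0)))
```

The elegance is that `Group.area` works for arbitrarily nested structures with no special cases—precisely the kind of flexibility the book's catalogued designs buy you.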

Bottom line: if you do any programming at all, buy and read this book.

You can thank me later.

A Fantastic Neuroscience Talk by Jeff Hawkins

I thought it fitting that the inaugural post on my blog be about a talk at a symposium, specifically the TED conference in Monterey, CA. TED stands for Technology, Entertainment, Design, its original focus (foci?), but it has expanded to just about anything that's innovative. Huge names participate in TED (Bill Clinton, Richard Branson, Brian Greene, to name a few), and attendance is limited to 1,000. The result is utterly fascinating. Many of the talks are free to watch online.

Jeff Hawkins (founder of Palm and the Redwood Neuroscience Institute) gave his vision of the past and future of brain science in the most entertaining fashion here.

[ted id=125]

Funny, funny stuff!

More to come! Do check out the TED site, an endless source of entertainment and illumination.