# Our environmental future

Another link post, to a worthwhile article by Veronique Greenwood for Aeon (emphases mine):

For much of the thousands of years of human existence, our species has treated the world more or less as an open system. […] the general faith was that there were, say, more whales somewhere […] more trees somewhere […]. Even today, in the face of imminent climate change, we continue to function as though there’s more atmosphere somewhere, ready to whisk off our waste to someplace else. It is time, though, to think of the world as a closed system. When you look at the resources involved in maintaining even a single member of a developed society, it’s hard to avoid the knowledge that this cannot continue. Last year, Tim De Chant, an American journalist who runs the blog Per Square Mile, made striking depictions of the space required if everyone in the world lived like the inhabitants of a number of countries. If we all lived like Americans, even four planet Earths would not be enough.

The article does suggest, however, that a change of mindset will push us to inventive solutions to our environmental problems. I hope she’s right.

# A high-profile endorsement of F1000 Research

Michael Eisen speaking about F1000 Research to Nature:

“They are doing lots of things that PLOS should have done five years ago.”

I recently ranted about PLOS ONE (while still endorsing their mission) for this very reason. It’s good to know that the very top knows they need to adapt.

# Speed up your Mac’s wake up time using pmset. Do it again after upgrading to Mavericks

Last year I got a 15″ Retina Macbook Pro, an excellent machine. However, it was taking way longer than my 13″ MBP to wake up from sleep. After a few months of just accepting it as a flaw of the new machines and the cost of being an early adopter, I finally decided to look into the problem. Sure enough, I came across this excellent post from OS X Daily:

Is Your Mac Slow to Wake from Sleep? Try this pmset Workaround

Oooh, sweet goodness: basically, after 1h10min asleep, your Mac goes into a “deep sleep” mode that dumps the contents of RAM into your HDD/SSD and powers off the RAM. On wake, it needs to load up all the RAM contents again. This is slow when your machine has 16GB of RAM! Thankfully, you can make your Mac wait any amount of time before going into deep sleep. This will eat up your battery a bit more, but it’s worth it. Just type this into the Terminal:

```
sudo pmset -a standbydelay 86400
```

This changes the time to deep sleep to 24h. Since I rarely spend more than 24h without using my computer, I now have instant-on every time I open up my laptop!

Finally, the reason I wrote this now: upgrading to Mavericks sneakily resets your standbydelay to 4200. (Or, at least, it did for me.) Just run the above command again and you’ll be set, at least until the next OS upgrade comes along!
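Before and after changing anything, you can read the current values back with `pmset -g`; this is a quick sketch (`pmset` only exists on OS X, so the check is skipped on other systems):

```shell
# Check the current standbydelay (in seconds) before changing anything.
command -v pmset >/dev/null && pmset -g | grep standbydelay

# Sanity check: 86400 seconds is exactly 24 hours.
echo $((24 * 60 * 60))    # prints 86400
```

If the first command prints `standbydelay 4200`, your Mac is back on the 70-minute default and is worth re-running the fix.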

Update: the original source of this tip appears to be a post from Erv Walter on his site, Ewal.net. It goes into a lot more detail about the origin of this sleep mode — which indeed did not exist when I bought my previous Macbook Pro.

# All journals should require authors to publish their raw data

This is just a link post. The excellent and excellently-named Data Colada blog has a brilliant analysis of scientific fraud exposed by the raw data. Figures can obscure flaws that are immediately obvious in the numbers. (Although Matt Terry’s awesome and hilarious Yoink might alleviate this.) In this case, the averages of four numbers turned out to be integers every single time, and two independent experiments gave almost exactly the same distribution of values. (Frankly, if you can’t simulate random sampling from an underlying distribution, you don’t belong in the fraud world!)
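
To see why integer averages are so damning, here’s a back-of-the-envelope simulation (my own illustration, not from the Data Colada post): the mean of four random integers is itself an integer only when their sum is divisible by four, which should happen roughly a quarter of the time.

```python
import random

random.seed(0)

def integer_mean(values):
    """True if the mean of `values` is a whole number."""
    return sum(values) % len(values) == 0

trials = 100000
hits = sum(integer_mean([random.randint(0, 100) for _ in range(4)])
           for _ in range(trials))
print(hits / trials)  # close to 0.25
```

So honest data should give a non-integer average about three times out of four; getting integer averages every single time, across many conditions, is astronomically unlikely.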

The post demonstrates the importance of publishing as much data (and code) as possible with a paper. Words are fuzzy; data and code are precise.

See here for more.

# Why PLOS ONE is no longer my default journal

Time-to-publication at the world’s biggest scientific journal has grown dramatically, but the nail in the coffin was its poor production policies.

When PLOS ONE was announced in 2006, its charter immediately resonated with me. This would be the first journal where only scientific accuracy mattered. Judgments of “impact” and “interest” would be left to posterity, which is the right strategy when publishing is cheap and searching and filtering are easy. The whole endeavour would be a huge boon to “in-between” scientists straddling established fields — such as bioinformaticians.

My first first-author paper, Joint Genome-Wide Profiling of miRNA and mRNA Expression in Alzheimer’s Disease Cortex Reveals Altered miRNA Regulation, went through a fairly standard journal loop. We first submitted it to Genome Biology, which (editorially) deemed it uninteresting to a sufficiently broad readership; then to RNA, which (editorially) decided that our sample size was too small; and finally to PLOS ONE, where it went out to review. After a single revision loop, it was accepted for publication. It’s been cited more than 15 times a year, which is modest but above the Journal Impact Factor for Genome Biology — which means that the editors made a bad call rejecting it outright. (I’m not bitter!)

Overall, it was a very positive first experience at PLOS. Time to acceptance was under 3 months, time to publication under 4. The reviewers were no less harsh than in my previous experiences, so I felt (and still feel) that the reputation of PLOS ONE as a “junk” journal was (is) highly undeserved. (Update: There’s been a big hullabaloo about a recent sting targeting open access journals with a fake paper. PLOS ONE came away unscathed. See also the take of Mike Eisen, co-founder of PLOS.) And the number of citations certainly vindicated PLOS ONE’s approach of ignoring apparent impact.

So, when looking for a home for my equally-awkward postdoc paper (not quite computer vision, not quite neuroscience), PLOS ONE was a natural first choice.

The first thing to go wrong was the time to publication, about 6 months. Still better than many top-tier journals, but no longer a crushing advantage. And it’s not just me: there’s been plenty of discussion about time-to-publication steadily increasing at PLOS ONE. But I was not too worried about the publication time, since I’d put my paper up on the arXiv (and revised it at each round of peer-review, so you can see the revision history there — but not on PLOS ONE).

But, after multiple rounds of review, the time came for production, at which point they messed up two things: they did not include my present address, and they botched Figure 1, which is supposed to be a small, single-column, illustrative figure, but which they made page-width. The effect is almost comical: a reader’s first impression on seeing page 2 would be that the authors are trying to mask their incompetence with giant pictures. (We’re not, I swear!)

Both of these mistakes could have been avoided if PLOS ONE let authors see the camera-ready PDF before publication, and if it allowed corrections to published papers beyond strictly technical or scientific errata, regardless of fault. Not to mention they could have, you know, actually looked at the dimensions embedded in the submitted TIFFs. With a $1,300 publication fee, PLOS could afford to take a little bit of extra care with production.

Both of the above policies are utterly unnecessary: the added cost of sending authors a production proof is close to nil, and keeping track of revisions on online publications is also trivial (see the 22-year-old arXiv for an example). We scientists live and die by our papers. We don’t want the culmination of years of work to be marred by a silly, easily-fixed formatting error, ossified by an unwieldy bureaucracy.

I’ve been an avid promoter of PLOS (and PLOS ONE in particular) over the past few years, but I’m sad to say that’s not where my next paper will end up. Ultimately, PLOS ONE’s model, groundbreaking though it was, is already being supplanted by newcomers. PeerJ offers everything PLOS ONE does at a fraction of the cost, and further includes a preprint service and open peer review. Ditto for F1000 Research, which in addition offers unlimited revisions (a topic close to my heart ;). And both use the excellent MathJax to render mathematical formulas, unlike PLOS’s archaic use of embedded images. They get my vote for the journals of the future.

[Note: the views expressed herein are mine alone — no co-authors were ~~harmed~~ consulted in the writing of this blog post.]

References

Nunez-Iglesias J, Liu CC, Morgan TE, Finch CE, & Zhou XJ (2010). Joint genome-wide profiling of miRNA and mRNA expression in Alzheimer’s disease cortex reveals altered miRNA regulation. PLoS ONE, 5 (2). PMID: 20126538

Kravitz DJ, & Baker CI (2011). Toward a new model of scientific publishing: discussion and a proposal. Frontiers in Computational Neuroscience, 5. PMID: 22164143

Nunez-Iglesias J, Kennedy R, Parag T, Shi J, & Chklovskii DB (2013). Machine learning of hierarchical clustering to segment 2D and 3D images. arXiv: 1303.6163v3

Nunez-Iglesias J, Kennedy R, Parag T, Shi J, & Chklovskii DB (2013). Machine Learning of Hierarchical Clustering to Segment 2D and 3D Images. PLoS ONE, 8 (8). PMID: 23977123

# Tesla makes a better place

No sooner do I berate Tesla for not supporting battery swapping than they go and announce battery swap stations! It’d be nice if they weren’t proprietary, but I’ll take it.

# A sad day in the fight against climate change

Apparently Better Place is preparing for bankruptcy. I wrote an optimistic post about Better Place years ago, when they were just about to launch. They created a swappable battery system for electric cars along with corresponding battery swap stations. In my opinion, these were the most credible cure to range anxiety for electric vehicles. Batteries take a long time to charge, even on Tesla’s Supercharger stations, which they ludicrously refer to as “super-quick” just because they can give you half a charge in half an hour. A swap in Better Place’s stations took two minutes.

Adoption of electric vehicles will remain minuscule until the range problem can be fixed. With transport accounting for about 20% of CO2 emissions worldwide, significant EV adoption would be a massive boon to the fight against climate change. And with Better Place out of the picture, that goal became just a little bit less real.

# h5cat: quickly preview HDF5 file contents from the command-line

As a first attempt at writing actually useful blog posts, I’ll publicise a small Python script I wrote to peek inside HDF5 files when HDFView is overkill. Sometimes you just want to know how many dimensions a stored array has, or its exact path within the HDF hierarchy.
The “codebase” is currently tiny enough that it all fits below:

```python
#!/usr/bin/env python
import os, sys, argparse
import h5py
from numpy import array

arguments = argparse.ArgumentParser(add_help=False)
arggroup = arguments.add_argument_group('HDF5 cat options')
arggroup.add_argument('-g', '--group', metavar='GROUP',
                      help='Preview only path given by GROUP')
arggroup.add_argument('-v', '--verbose', action='store_true', default=False,
                      help='Include array printout.')

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Preview the contents of an HDF5 file',
        parents=[arguments]
    )
    parser.add_argument('fin', nargs='+', help='The input HDF5 files.')
    args = parser.parse_args()

    for fin in args.fin:
        print '>>>', fin
        f = h5py.File(fin, 'r')
        # Either preview just the requested group, or walk the whole hierarchy.
        if args.group is not None:
            groups = [args.group]
        else:
            groups = []
            f.visit(groups.append)
        for g in groups:
            print '\n  ', g
            if type(f[g]) == h5py.highlevel.Dataset:
                a = f[g]
                print '    shape: ', a.shape, '\n    type: ', a.dtype
                if args.verbose:
                    a = array(f[g])
                    print a
```

h5cat is available on GitHub under an MIT license. Here’s an example use case:

```
$ h5cat -v -g vi single-channel-tr3-0-0.00.lzf.h5
>>> single-channel-tr3-0-0.00.lzf.h5

  vi
    shape:  (3, 1)
    type:  float64
[[ 0.        ]
 [ 0.06224902]
 [ 2.23062383]]
```


# On Happiness

Time for another TED talk… Psychologist Nancy Etcoff gave a fairly entertaining talk about happiness. Mostly it’s a bunch of “we’re gonna figure this out, we promise, here are some clues”, but there are a few nuggets in there that I found worth sharing.

First, most interesting to me, is a little bit of scientific evidence on the cliché that selflessness equals happiness: if you run language metrics on the works of suicidal poets, you find an excess of self-centred words, such as “I”, “me”, “my”, when compared to other poetry. Focusing on things other than yourself will make you a happier person.
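
The metric itself is easy to sketch. Here’s a toy version of my own (not the actual analysis behind the talk): count what fraction of a text’s words are first-person singular pronouns.

```python
import re

FIRST_PERSON = {'i', 'me', 'my', 'mine', 'myself'}

def self_focus(text):
    """Fraction of words that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in FIRST_PERSON for w in words) / len(words)

print(self_focus("I walked alone, and my shadow followed me."))  # 0.375
```

The real studies of course control for genre, era, and total word counts, but the core measurement is this simple.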

# The electric car of the… Present?

I Love Symposia! is going back to its roots, with a post about a TED talk!

In his talk, Shai Agassi of Better Place lays out his vision for cheap electric cars running on electricity from 100% renewable sources, and using technology available today. If you live in Israel, Denmark, Australia or Northern California, you are first in line to try out their cars, which will be built by Renault and Nissan.

Agassi gets around the limited range of electric cars by making the battery quickly and easily replaceable. You’ll pull into a battery station, the electric analogue of a petrol station, and a robotic system will swap out the battery in less than two minutes. Presto! Instant battery recharge. That’s less time than it takes to fill a tank.

With one trillion dollars set aside for the economic stimulus, it’ll be disappointing if none of it goes to building battery change stations in the US. Ditto for China.

If you’re lucky enough to be in one of the pilot regions, be sure to go to the Better Place website for more information! If not, then follow Al Gore’s advice and invest in green tech. As Agassi says, this is now a moral choice.

(Just as a quick aside, I was happy to discover that WordPress.com now allows you to embed TED talks! If you use WordPress.com, find the announcement here and the instructions here.)