
Review: Sony Digital Paper

Three years ago I excitedly posted about Sony's then-new writeable e-paper tablet, called the Sony Digital Paper System (DPS-1).

Now it is finally mine and I love it.

Here's what I wrote about it when Sony announced it:

the iPad (et al) sucks for some things. Three of those are: (1) taking handwritten notes, (2) reading (some) pdfs in full-page view, and (3) reading in full daylight. By the sound of it, Sony’s new tablet will excel at all three

Having had it for about a month, I can confidently say that it does indeed excel at those three things, beyond my wildest dreams. Even with the improved competition of the identically-priced iPad Pro (which can now handle points 1 and 2 with aplomb), I still prefer the Sony. Here's why:

  • Despite a comparatively low resolution (1200 x 1600), e-ink is simply nicer on the eyes. To see why, have a look at this old post examining an original iPad display and a Kindle e-ink display under a microscope. (More modern Retina displays are only marginally better; see here.) Here's a zoomed-in shot of a paper on the DPS-1 (caption: "Like paper"): I can barely tell that it's not just a slightly low-res print.
  • And of course, for reading outdoors, e-ink is just infinitely better. Try this on an iPad if you're craving a good cry.
  • The Apple Pencil has received glowing reviews, but I've tried it, and it still feels decidedly like sliding on glass. The DPS's stylus and matte screen combine to create friction that feels remarkably like pencil-on-paper.
  • In today's distraction-filled digital world, disconnecting is an advantage. This will matter more or less depending on your work discipline, but for me it has been life-changing. The context switch that happens when I start to work on the DPS keeps me focused at a level I hadn't experienced for years. You can certainly use "Do not disturb" on an iPad, but having distracting apps such as email a double-tap away is a definite downside.

The DPS is one of those rare products that does one thing and one thing only (well, two) really well: read and annotate pdfs, and take handwritten notes. It's simply perfect for academics.

There's one caveat and it's the software. It is, in a word, amateurish. A few examples:

  • Cloud Sync works through WebDAV, a file transfer protocol with limited support from cloud storage providers (of the major players, only Box supports it as of this writing).
  • You can screen-share with the DPS through a USB cable, which is great for giving pdf presentations, but it's done through a companion Mac OS app distributed as a Java archive, which doesn't support full-screen mode.
  • You can create and delete files on the DPS, but you can't move them to other folders.
  • And so on.

The funny thing is that it gets regular software updates, but none attains the level of polish you might expect from a company of Sony's stature. I have a feeling that there's this one engineer in charge of this thing at Sony, and they are just hammering away by themselves, unsupported, but trying their darnedest to make it better all the time.

In short, I think Sony's development and marketing teams dropped the ball on this one. In its early days you couldn't buy it at retail stores — you actually had to write to Sony to explain why you wanted one! I imagine they wanted to avoid negative press from consumers who didn't know what they were getting into. And even now, retail availability is extremely limited. Just two stores carry it in the US (B&H Photo and CDW). In many countries you can't buy it at all, except shipped from those US stores.

Sony really needs to put these babies on demo at every university bookshop in the rich world. (At $800 US, I'll admit it's a luxury.) It would sell like hotcakes.

In short, if you read a lot of scientific papers, or do a lot of handwriting (e.g. for math), you will love the Digital Paper. I second what my friend @gamesevolving said: I should have gotten it a long time ago.

Why scientists should code in the open

All too often, I encounter published papers in which the code is "available upon request", or "available in the supplementary materials" (as a zip file). This is not just poor form. It also hurts your software's future. (And, in my opinion, when results depend on software, it is inexcusable.)

Given the numerous options for posting code online, there's just no excuse to give code in a less-than-convenient format, upon publication. When you publish, put your code on Github or Bitbucket.

In this piece, I'll go even further: put your code there from the beginning. Put your code there as soon as you finish reading this article. Here's why:

No, you won't get scooped

Reading code is hard. Ask any experienced programmer: most have trouble reading code they themselves wrote a few months ago, let alone someone else's code. It's extremely unlikely that someone will browse your code looking for a scoop. That time is better spent doing research.

It's never going to be ready

Another objection I hear is that people want to post their code, but only after cleaning it up first and removing all the "embarrassing" bits. Unfortunately, science doesn't reward time spent "cleaning up" your code, at least not yet. So the sad reality is that you will probably never actually get to the point where you are happy to post your code online.

But here's the secret: everybody is in that boat with you. That's why this document exists. I recommend you read it in full, but this segment is particularly important:

When it comes time to empirically evaluate new research with respect to someone else's prior work, even a rickety implementation of their work can save grad-student-months, or even grad-student-years, of time.

Matt Might himself is as thorough and high-profile as you get in computer science, and yet, he has this to say about code clean-up:

I kept telling myself that I'd clean it all up and release it some day. I have to be honest with myself: this clean-up is never going to happen.

Your code might not meet your standards, but, believe it or not, your code will help others, and the sooner it's out there, the sooner they can be helped.

You will gain collaborators and citations

If anyone is going to be rifling through your code, they will probably end up asking for your help. This happens with even the best projects: have a look at the activity on the mailing lists for scikit-learn or NumPy, two of the best-maintained open-source projects out there.

When you have to go back and explain how a piece of code worked, that's when you will actually take the time and clean it up. In the process, the person you help will be more likely to contribute to your project, either in code or in bug reports, improvement suggestions, or even citations.

In the case of my own gala project, I guess that about half of the citations it received happened because of its open-source code and open mailing list.

Your coding ability will automagically improve

I first heard this one from Albert Cardona. They say sunlight is the best disinfectant, and this is certainly true of code. Just the very idea that anyone can easily read their code will make most people more careful when programming. Over time, this care will become second nature, and you will develop a taste for nice, easy-to-read code.

In short, the alleged downsides of code-sharing are, at best, longshots, while there are many tangible upsides. Put your code out there. (And use a liberal open-source license!)

The cost of a Python function call

I've read in various places that the Python function call overhead is very high. As I was parroting this "fact" to Ed Schofield recently, he asked me what the cost of a function call actually was. I had no idea. This prompted us to do a few quick benchmarks.

The short version is that it takes about 150ns to call a function in Python (on my laptop). This doesn't sound like a lot, but it means that you can make at most 6.7 million calls per second, two to three orders of magnitude slower than your processor's clock speed.

If you want your function to do something, such as, oh, I don't know, receive an input argument, this goes up to 350ns, throttling you at 2.8 million calls per second.

Benchmarking function calls

I cleaned up Ed's and my initial experiments to make a small module and timer to measure all these values. You can clone the repo and run python function-calls/ to check the numbers on your machine.

The benchmarks are variations of comparing the execution time of:

[code lang=python]
for i in range(n):
    pass


with that of:

[code lang=python]
def f():
    pass

for i in range(n):
    f()

for some suitably large n. As I mentioned above, that comes out to an absolute minimum of 150ns per function call.
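The same comparison can be made self-contained with the standard library's timeit (a sketch, not the repo's timer; absolute numbers will differ by machine):

```python
# Measure the per-call overhead of a Python function by subtracting the
# cost of an empty loop from the cost of a loop that calls a no-op
# function, with and without an argument.
import timeit

n = 1_000_000

def f():
    pass

def g(x):
    pass

# Baseline: an empty loop over n iterations.
t_loop = timeit.timeit("for i in range(n): pass",
                       globals={"n": n}, number=1)

# The same loop, calling a no-op function each iteration.
t_call = timeit.timeit("for i in range(n): f()",
                       globals={"n": n, "f": f}, number=1)

# And with a single argument passed on each call.
t_arg = timeit.timeit("for i in range(n): g(i)",
                      globals={"n": n, "g": g}, number=1)

per_call = (t_call - t_loop) / n      # seconds per bare call
per_call_arg = (t_arg - t_loop) / n   # seconds per call with one argument

print(f"bare call:    {per_call * 1e9:.0f} ns")
print(f"call + 1 arg: {per_call_arg * 1e9:.0f} ns")
```

On most machines this reproduces the general shape of the numbers above: a bare call costs on the order of a hundred nanoseconds, and passing an argument adds measurably more.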

What this means

I've been making a fuss over the past year about the excellent Toolz and the way it enables elegant streaming data processing. (See my demo repo and my EuroSciPy talk.) You can read data from a modern SSD at speeds approaching 500MB/s. If you want to stream each byte through Python functions, you'll instantly lose two orders of magnitude of speed. And, the more functions you use, the slower you'll go, which discourages functional programming and modularity — the very things I was trying to promote!
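To see the effect without installing anything, here is a toy sketch (not Toolz itself) that pushes every byte of a ~1MB buffer through a chain of no-op Python functions and reports the resulting throughput:

```python
# Stream each byte of ~1MB of data through three trivial Python
# functions and time it: the per-call overhead alone dominates.
import time

data = bytes(range(256)) * 4096          # ~1 MB of data

def identity(x):
    return x

def pipeline(stream, funcs):
    # Push every item through each function in turn.
    for item in stream:
        for f in funcs:
            item = f(item)
        yield item

start = time.perf_counter()
consumed = sum(1 for _ in pipeline(data, [identity] * 3))
elapsed = time.perf_counter() - start

mb_per_s = len(data) / 1e6 / elapsed
print(f"{mb_per_s:.1f} MB/s with three no-op functions per byte")
```

Even though the functions do literally nothing, the throughput lands orders of magnitude below what the SSD can deliver.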

In the DNA sequence processing I demo in the talk, I get a throughput of about 0.5MB/s. On one hand, this is kind of OK because we are using effectively zero RAM, so we can just let the code run over lunch. On the other, it's starting to bug me that 99% of my processor time is spent on Python function calls, rather than on actual data crunching.

This is a problem for Python. To work on seriously big data, you need to drop into a library written in C, such as NumPy or Pandas. You need to do this on a high level: any per-byte or per-data-element processing cannot be in Python, if you don't want to waste your processor's cycles. Python's ecosystem is Insanely Great, so this is mostly fine, but it does limit your ability to research or implement cool new methods using Python.

As an example, the generic_filter function in SciPy's ndimage package has infinitely many cool uses, but using it to process a 100MB image (which is small in biology) would take 15 seconds in function call overhead alone. Lest you think this is reasonable, SciPy's greyscale erosion, implemented in C, takes less than 4 seconds on an image that size. A lot of my once-lackadaisical attitude towards Python performance stemmed from not knowing how long things should take. A lot less than they do, it turns out.
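To make the per-element cost concrete, here is a pure-Python sketch of what a generic_filter-style operation does on a simplified 1-D "image" (the real SciPy function handles n-D NumPy arrays in C, but it too calls back into a Python function once per element):

```python
# A pure-Python sketch of a generic_filter-style operation on a 1-D
# "image": one Python function call per element is exactly the
# per-pixel overhead discussed above. Illustrative only; SciPy's
# generic_filter works on n-D NumPy arrays with several border modes.
def generic_filter_1d(image, func, size=3):
    half = size // 2
    out = []
    for i in range(len(image)):
        # Truncate the window at the borders (SciPy offers several
        # modes; this sketch just clips).
        window = image[max(0, i - half):i + half + 1]
        out.append(func(window))       # one Python call per element
    return out

image = [5, 1, 7, 2, 9, 3]
eroded = generic_filter_1d(image, min)  # greyscale erosion = local min
print(eroded)                           # → [1, 1, 1, 2, 2, 3]
```

Multiply that one-call-per-pixel pattern by a 100MB image and the overhead figures above follow directly.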

What to do about it

As I mentioned, Python's high performance libraries are many and great. Look hard for optimised libraries that already do what you want. Try to express what you want to do as combinations of functions from NumPy, SciPy, Pandas, scikit-image, scikit-learn, and so on. Minimise the amount of time spent in Python. This is advice that you learn early on in scientific Python programming, but I didn't appreciate just how important it is.
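As a tiny illustration of that advice, compare an explicit Python loop with the C-implemented built-in sum (a sketch; timings vary by machine):

```python
# "Minimise the amount of time spent in Python": the same reduction
# computed with a Python-level loop versus the C-implemented built-in.
import timeit

data = list(range(100_000))

def python_sum(xs):
    total = 0
    for x in xs:        # one Python-level iteration per element
        total += x
    return total

t_python = timeit.timeit(lambda: python_sum(data), number=100)
t_builtin = timeit.timeit(lambda: sum(data), number=100)

print(f"Python loop: {t_python:.3f}s, built-in sum: {t_builtin:.3f}s")
```

Same answer, but the per-element work happens in C in the second case, which is the whole game when pushing the loop down into NumPy, Pandas, or friends.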

At some point, that approach will fail, and you will want to do something cute and custom with your data points. Reach for Cython sooner rather than later. As a primer, I recommend Stefan Behnel's excellent tutorial from EuroSciPy 2015.

There is also Continuum's Numba, which is sometimes easier to use than Cython. I don't have any experience with it so I can't comment much here. However, I'd consider it a very valuable project to implement generic_filter in Numba. In the long-run, these are all workarounds, and I hope that the Python interpreter itself becomes faster, though there are few signs of that happening.

If you have other ideas on how to get around Python's function call cost, please let me know in the comments!

My first use of Python 3's `yield from`!

I never really understood why yield from was useful. Last weekend, I wanted to use Python 3.5's new os.scandir to explore a directory (and its subdirectories). Tragically, os.scandir is not recursive, and I find os.walk's 3-tuple values obnoxious. Lo and behold, while I was trying to implement a recursive version of scandir, a yield from use just popped right out!

[code lang=python]
import os
def rscandir(path):
    for entry in os.scandir(path):
        yield entry
        if entry.is_dir():
            yield from rscandir(entry.path)

That's it! I have to admit that reads wonderfully. The Legacy Python (aka Python 2.x) alternative is quite a bit uglier:

[code lang=python]
import os
def rscandir(path):
    for p in os.listdir(path):
        # os.listdir gives bare names; join to get a usable path
        full = os.path.join(path, p)
        yield full
        if os.path.isdir(full):
            for q in rscandir(full):
                yield q

Yuck. So, yet again: time to move away from Legacy Python! ;)
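For completeness, here is the Python 3 version in action (a sketch that builds a throwaway tree inside a temporary directory):

```python
# Exercise rscandir on a tiny directory tree built in a temporary
# directory, so the example is self-contained and leaves no litter.
import os
import tempfile

def rscandir(path):
    for entry in os.scandir(path):
        yield entry
        if entry.is_dir():
            yield from rscandir(entry.path)

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "a", "b"))
    open(os.path.join(root, "a", "file.txt"), "w").close()
    names = sorted(e.name for e in rscandir(root))
    print(names)   # → ['a', 'b', 'file.txt']
```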

EuroSciPy 2015 debrief

The videos from EuroSciPy 2015 are up! This marks a good time to write up my thoughts on the conference. I’ve mentioned before that the yearly SciPy conference is stunningly useful. This year I couldn’t make it to Austin, but I did attend EuroSciPy, the European version of the same conference, in Cambridge, UK. It was spectacular.

Useful talks

The talk of the conference, for me, goes to Robin Wilson for recipy, which one can describe as a logging utility, if one wishes to make it sound as uninspiring as possible. Recipy’s strength is in its mind-boggling simplicity. Here is the unabridged usage guide:

[code lang=python]
import recipy

With this single line, your script will now generate an entry in a database every time it is run. It logs the start and end time, the working directory, the script's git hash, any differences between the working copy and the last git commit (!), and the names of any input and output files. (File hashes are coming soon, I’m assured.) I don’t know about you but I have definitely lost count of the times I’ve looked at a file and wondered what script I ran to get it, or the input data that went into it. This library solves that problem with absolutely minimal friction for the user.

I also enjoyed Nicolas Rougier’s talk on ReScience, a new journal dedicated to replicated (and replicable) scientific analyses. It’s a venue to publish all those efforts to replicate a result you read in a paper. Given recent findings about how poorly most papers replicate, I think this is a really important outlet. The other remarkable thing about it is that all review is open and done in the spirit of open source, on GitHub. Submission is by pull request, of course. With just one paper out so far, it’s a bit early to tell whether it’ll take off, but I really hope it does. I’ll be looking for stuff of my own to publish there, for sure. (Oh and by the way, they are looking for reviewers and editors!)

Another great talk was Philipp Rudiger’s on HoloViews, an object-oriented plotting framework. They define an arithmetic on figures: A * B overlays figure B on A, while B + C creates two subplots out of B and C (and automatically labels them). Their example notebooks rely a lot on IPython magic, which I’m not happy about and means I haven’t fully grokked the API, but it seems like a genuinely useful way to think about plotting.

A final highlight from the main session was Martin Weigert’s talk on Spimagine, his GPU-accelerated, 5D image analysis and visualisation framework. It was stupidly impressive. Although it’s a long-term project, I’m inclined to try to incorporate many of its components into scikit-image.


Tutorials

The tutorials are a great asset of both EuroSciPy and SciPy. I learn something new every year. The highlight for me was the Cython tutorial, in which Stefan Behnel demonstrated how easy it is to provide Python access to C++ code using Cython. (I have used Cython quite extensively, but only to speed up Python code, rather than to wrap C or C++ code.)


Sprint

I was feeling a bit hypocritical for missing the sprints this year, since I had to run off before the Sunday. Emmanuelle Gouillart, another scikit-image core dev, suggested having a small, unofficial sprint on Friday evening. It grew and grew into a group of about 30 people (including about 10 new to sprinting) who all gathered at the Enthought Cambridge office to work on scikit-image or the SciPy lecture notes. A brilliant experience.

(Photo: scikit-image sprint at Enthought. And nothing creepy is going on with that dude hunching over one of our sprinters — that's just husband-and-wife team Olivia and Robin Wilson! ;)

Final thoughts

As usual, I learned heaps and had a blast at this SciPy conference (my fourth). I hope it will remain a yearly ritual, and I hope someone reading this will give it a try next year!