… and combines rather well with that other game-changing library I like, Numba.
To illustrate this, let’s look at image erosion, which is the replacement of each pixel in an image by the minimum of its neighbourhood.
ndimage has a fast C implementation, which serves as a perfect benchmark against the generic version, using generic_filter with
np.min as the operator. Let’s start with a 2048 x 2048 random image:
>>> import numpy as np
>>> image = np.random.random((2048, 2048))
and a neighbourhood “footprint” that picks out the pixels to the left and right, and above and below, the centre pixel:
>>> footprint = np.array([[0, 1, 0],
...                       [1, 1, 1],
...                       [0, 1, 0]], dtype=bool)
Now, we measure the speed of generic_filter. Spoiler alert: it’s not pretty.
>>> from scipy import ndimage as ndi
>>> %timeit ndi.grey_erosion(image, footprint=footprint)
10 loops, best of 3: 118 ms per loop
>>> %timeit ndi.generic_filter(image, np.min, footprint=footprint)
1 loop, best of 3: 27 s per loop
As you can see, with Python functions,
generic_filter is unusable for anything but the tiniest of images.
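The slowdown is easy to rationalise: generic_filter calls the Python function once per pixel, so a 2048 x 2048 image triggers over four million Python-level calls to np.min, each on a tiny buffer. A rough back-of-the-envelope estimate (the per-call overhead figure below is my own assumption, not a measurement from this benchmark) lands in the right ballpark:

```python
# One callback invocation per output pixel:
n_calls = 2048 * 2048
print(n_calls)  # 4194304

# Assume ~6 microseconds of Python call + np.min dispatch overhead
# per pixel (order of magnitude only); the total is roughly the
# tens of seconds we measured above.
per_call_s = 6e-6
print(round(n_calls * per_call_s))  # ~25 seconds
```

So nearly all of the 27 s is call overhead, not arithmetic, which is exactly the kind of cost a compiled callback can eliminate.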
A few months ago, I was trying to get around this by using Numba-compiled functions, but the way to feed C functions to SciPy was different depending on which part of the library you were using.
scipy.integrate used ctypes, while scipy.ndimage used PyCObjects or PyCapsules, depending on your Python version, and Numba only supported the former method at the time. (Plus, this topic starts to stretch my understanding of low-level Python, so I felt there wasn’t much I could do about it.)
Enter this pull request to SciPy from Pauli Virtanen, which is live in the most recent SciPy version, 0.19. It unifies all C-function interfaces within SciPy, and Numba already supports this format. It takes a bit of gymnastics, but it works! It really works!
(By the way, the release is full of little gold nuggets. If you use SciPy at all, the release notes are well worth a read.)
First, we need to define a C function of the appropriate signature. Now, you might think this is the same as the Python signature, taking in an array of values and returning a single value, but that would be too easy! Instead, we have to go back to some C-style programming with pointers and array sizes. From the generic_filter documentation:
This function also accepts low-level callback functions with one of the following signatures and wrapped in scipy.LowLevelCallable:
int callback(double *buffer, npy_intp filter_size,
             double *return_value, void *user_data)
int callback(double *buffer, intptr_t filter_size,
             double *return_value, void *user_data)
The calling function iterates over the elements of the input and output arrays, calling the callback function at each element. The elements within the footprint of the filter at the current element are passed through the buffer parameter, and the number of elements within the footprint through filter_size. The calculated value is returned in return_value. user_data is the data pointer provided to scipy.LowLevelCallable as-is.
The callback function must return an integer error status that is zero if something went wrong and one otherwise.
So, we need a Numba cfunc that takes in:
- a double pointer pointing to the values within the footprint,
- a pointer-sized integer that specifies the number of values in the footprint,
- a double pointer for the result, and
- a void pointer, which could point to additional parameters, but which we can ignore for now.
The Numba type names are listed on this page. Unfortunately, at the time of writing, there’s no mention there of how to make pointers, but finding such a reference was not too hard. (Incidentally, adding CPointer to the Numba types page would make a good contribution to Numba’s documentation.)
So, armed with all that documentation, and after much trial and error, I was finally ready to write that C callable:
>>> from numba import cfunc, carray
>>> from numba.types import intc, intp, float64, voidptr
>>> from numba.types import CPointer
>>>
>>> @cfunc(intc(CPointer(float64), intp,
...             CPointer(float64), voidptr))
... def nbmin(values_ptr, len_values, result, data):
...     values = carray(values_ptr, (len_values,), dtype=float64)
...     result[0] = np.inf
...     for v in values:
...         if v < result[0]:
...             result[0] = v
...     return 1
The only other tricky bits I had to watch out for while writing that function were as follows:
- remembering that there are two ways to dereference a pointer in C: *ptr, which is not valid Python and thus not valid Numba, and ptr[0]. So, to place the result at the given double pointer, we use the latter syntax. (If you prefer to use Cython, the same rule applies.)
- creating an array out of the values_ptr pointer and the len_values variable, using numba.carray, as shown above. That’s what enables the for v in values Python-style access to the array.
OK, so now, the part you’ve been waiting for. How did we do? First, to recap, the original benchmarks:
>>> %timeit ndi.grey_erosion(image, footprint=footprint)
10 loops, best of 3: 118 ms per loop
>>> %timeit ndi.generic_filter(image, np.min, footprint=footprint)
1 loop, best of 3: 27 s per loop
And now, with our new Numba cfunc:
>>> from scipy import LowLevelCallable
>>> %timeit ndi.generic_filter(image, LowLevelCallable(nbmin.ctypes),
...                            footprint=footprint)
10 loops, best of 3: 113 ms per loop
That’s right: it’s even marginally faster than the pure C version! I almost cried when I ran that.
Higher-order functions, i.e. functions that take other functions as input, enable powerful, concise, elegant expressions of various algorithms. Unfortunately, in Python these have been hampered for large-scale data processing by Python’s function-call overhead. SciPy’s latest update goes a long way towards redressing this.