This article is a follow-up to my last article, Image Deblurring (Part 1), where I wrote about the specific problem of image deblurring: recovering the original image from an observed image that has been convolved with a known point spread function. This can be solved with an iterative optimization procedure. My intention in writing about this problem was to learn about scientific optimization in general.
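To make the setup concrete, here is a minimal sketch of the forward model with toy sizes and a simple box-blur PSF (both assumptions of mine, not from the original article): the observed image is the original convolved with the known PSF, and deblurring means finding an image that minimizes the data-fit loss.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy forward model: observed = original (*) psf, with a known PSF.
rng = np.random.default_rng(1)
original = rng.random((32, 32))        # the image we pretend not to know
psf = np.ones((3, 3)) / 9.0            # known point spread function (box blur)
observed = convolve2d(original, psf, mode="same")

# Deblurring = iteratively minimize this loss over candidate images x.
def loss(x):
    return np.sum((convolve2d(x, psf, mode="same") - observed) ** 2)

print(loss(original))  # the true image attains zero loss
```

In practice the minimizer is not unique and the problem is ill-conditioned, which is why an iterative, regularized optimization procedure is used rather than direct inversion.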
Everyone using Python should consider NumPy (in combination with SciPy) the number one library for this task. It is easy to use, stable, and many libraries build on top of it. One feature I miss is automatic differentiation: calculating gradients by hand is tedious and error-prone, but gradient-descent-based optimization algorithms need them. That is why I tried out TensorFlow in the first article.
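The pain point can be seen in even a tiny example. In the following sketch (a toy least-squares problem I made up for illustration), the gradient 2·Aᵀ(Ax − b) has to be derived by hand before gradient descent can run; for a more complicated loss this step quickly becomes error-prone.

```python
import numpy as np

# Toy least-squares problem: minimize f(x) = ||A @ x - b||^2 with plain NumPy.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

x = np.zeros(5)
step = 0.01
for _ in range(2000):
    grad = 2 * A.T @ (A @ x - b)  # gradient derived by hand on paper
    x -= step * grad

print(np.allclose(x, x_true))
```

With automatic differentiation, the `grad = ...` line would be generated from the loss function itself instead of being derived manually.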
TensorFlow is a numerical library with automatic differentiation and modern optimization procedures built in. Another big advantage is that you can run the calculations on a GPU and even on a distributed cluster. However, TensorFlow is developed for deep learning problems, and that is where its disadvantages for general scientific optimization come from. Because the computation is done lazily, debugging is relatively hard. In addition, 64-bit floating point numbers, which are still widely used outside of deep learning, are not fully supported.
In this article I want to test a different library. Autograd is an extension of NumPy that can calculate the gradient of a function composed of NumPy and SciPy functions. Unfortunately, not all functions are supported yet; SciPy functions in particular are often missing. Autograd is still in active development, so this problem might become less relevant in the future. The advantages of Autograd are that it is easy to use and to debug, and that it keeps all the advantages of NumPy, such as proper 64-bit support.