# Inverse Problem (Part 1)

The process of calculating the causal factors from an observation is called an inverse problem. An inverse problem is usually much harder to solve than the corresponding forward problem, which is calculating the observation from the causal factors.

Many problems in science and math are inverse problems. They can be found in optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields.

Two concrete examples are image deblurring and computer vision: computer vision is the inverse problem of computer graphics, and image deblurring is the inverse problem of image convolution. I have already covered the topic of image deblurring in previous posts (part 1 and part 2).
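To make the forward/inverse distinction concrete, here is a minimal sketch of the forward direction of the deblurring example: blurring a 1-D signal is just a convolution with a kernel (the box-like kernel here is an assumed example):

```
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # sharp input signal
kernel = np.array([0.25, 0.5, 0.25])           # assumed blur kernel

# Forward problem: compute the blurred observation from the sharp signal.
blurred = np.convolve(signal, kernel, mode='same')
print(blurred)  # the spike is spread over its neighbours
```

Recovering `signal` from `blurred` and `kernel` is the corresponding inverse problem (deblurring).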

Inverse problems are hard because

- they are often underdetermined, meaning they have no unique solution,
- there is often no direct inversion of the forward problem,
- and they are ill-posed, so small observation errors cause large inversion errors.
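The non-uniqueness point is easy to demonstrate. For a forward function like $f(x) = x^2$ (chosen here purely for illustration), two different causal factors produce the same observation, so the observation alone cannot identify the cause:

```
def f(x):
    # Assumed forward function, chosen only to illustrate non-uniqueness.
    return x**2

# Two different causal factors map to the same observation y = 4.0,
# so inverting y cannot tell us which x produced it.
print(f(2.0) == f(-2.0))  # True
```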

Inverse problems are often solved iteratively. For example, for a given $y$, we could try out different values of $x$ and evaluate $f(x)$. If $f(x)$ is equal to $y$, we know $x$ is a valid solution. This technique might take very long (infinitely long if $x$ is continuous) to find the correct result. A better procedure is to describe the problem as an optimization problem and solve it with gradient descent (or other gradient-based algorithms). For that we have to define a loss function:

$$L(x) = (y - f(x))^2$$
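The trial-and-error search described above can be sketched as a brute-force grid search over candidate values of $x$ (a minimal sketch; it uses the example function $f(x) = 2x - 4$ that appears later in this post, and only succeeds because the true solution happens to lie on the grid):

```
import numpy as np

def f(x):
    # Example forward function used later in this post.
    return 2 * x - 4

y = 0.0
candidates = np.arange(-4.0, 8.0, 0.5)   # finite set of trial values
solutions = [float(x) for x in candidates if f(x) == y]
print(solutions)  # [2.0]
```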

Minimizing this loss function gives us the right result.

If $f$ is differentiable and the loss function is convex, this problem can be solved with gradient descent by iteratively calculating the next estimate from the current one:

$$x_{n+1} = x_n - \alpha \frac{\partial L}{\partial x}(x_n)$$

where $\alpha$ is the step size (learning rate).

Let's define a small inverse problem with the function

$$f(x) = 2x - 4$$

and the observation $y = 0$. We search for the $x$ that satisfies $f(x) = y$. This example is not particularly hard, since you could solve it analytically. However, it is a good example to demonstrate the optimization procedure.
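For reference, the analytic solution can be obtained directly with sympy's `solve`, which gives us a value to compare the optimization result against:

```
from sympy import symbols, solve

x = symbols('x')
# Solve f(x) = 2x - 4 = 0 analytically.
roots = solve(2*x - 4, x)
print(roots)  # [2]
```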

```
%matplotlib inline
import numpy as np
from sympy import symbols, lambdify
import matplotlib.pyplot as plt
x, y = symbols('x y')
f = 2*x - 4
L = (y - f)**2
print('L=' + str(L))
L_func = lambdify(x, L.subs({y: 0.0}), 'numpy')
gt_x = np.arange(-4, 8, 0.01)
gt_loss = L_func(gt_x)
plt.plot(gt_x, gt_loss)
plt.xlabel('x')
plt.ylabel('loss')
plt.grid()
```

Solving this optimization problem with gradient descent requires differentiating the loss function. Many optimization problems can be solved without an analytic gradient, using evolutionary algorithms or by estimating the gradient in a different way. However, if gradients are available you should use them, because they significantly simplify and accelerate the optimization.
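When no analytic gradient is available, one common way to estimate it is a central finite difference (a minimal sketch; the step size `h` is an assumption that trades truncation error against rounding error):

```
def numerical_gradient(func, x, h=1e-6):
    # Central difference approximation of d func / d x.
    return (func(x + h) - func(x - h)) / (2 * h)

def loss_fn(x):
    # The example loss with y = 0: (0 - (2x - 4))^2.
    return (0.0 - (2 * x - 4))**2

# The analytic gradient is 8x - 16, i.e. -40 at x = -3; the
# finite-difference estimate should agree closely.
print(numerical_gradient(loss_fn, -3.0))
```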

```
from sympy import diff
dL = diff(L, x)
print('dL: ' + str(dL))
```

Solving our example optimization problem with gradient descent and a properly selected step size $\alpha$ leads to the correct result $x = 2$. For this example we can verify the result analytically:

$$f(x) = 2x - 4 = 0 \quad\Rightarrow\quad x = 2$$

```
alpha = 0.05
current_x_opt = -3.0
current_loss = L.evalf(subs={y: 0.0, x: current_x_opt})
x_opt = [current_x_opt,]
loss = [current_loss,]
for i in range(10):
    # Gradient descent update step
    current_x_opt = current_x_opt - alpha * dL.evalf(subs={y: 0.0, x: current_x_opt})
    current_loss = L.evalf(subs={y: 0.0, x: current_x_opt})
    x_opt.append(current_x_opt)
    loss.append(current_loss)
print('x_opt={0:.2f}'.format(float(current_x_opt)))
plt.plot(loss)
plt.grid()
plt.xlabel('iteration')
plt.ylabel('loss')
```

```
plt.plot(gt_x, gt_loss)
plt.plot(x_opt, loss, 'o-', color='red')
plt.xlabel('x')
plt.ylabel('loss')
plt.grid()
```