I know that a square matrix $\mathbf{M}$ maps point $\mathbf{x}$ to point $\mathbf{y}$. Do I have enough information to work out $\mathbf{M}$?

In a word: no, unless you’re working in one dimension!

In general, to pin down a square transformation matrix in $n$ dimensions, you need to know $n$ points: the matrix contains $n^2$ unknowns, and each point gives you $n$ equations. You need as many equations as you have unknowns, so $n$ points suffice (as long as they're linearly independent).
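To make the counting concrete in two dimensions, write

$\mathbf{M} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$

Then a single known point $\mathbf{x} = (x_1, x_2)$ mapping to $\mathbf{y} = (y_1, y_2)$ gives

$a x_1 + b x_2 = y_1$ and $c x_1 + d x_2 = y_2$

- two equations in the four unknowns $a$, $b$, $c$ and $d$, so you need a second (linearly independent) point to nail $\mathbf{M}$ down.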

The bigger question is: how do you work that out?

It’s actually not all that hard - let’s suppose you have:

$\mathbf{M} \cdot \mathbf{x_1} = \mathbf{y_1}$

$\mathbf{M} \cdot \mathbf{x_2} = \mathbf{y_2}$

$\mathbf{M} \cdot \mathbf{x_3} = \mathbf{y_3}$

$\vdots$

$\mathbf{M} \cdot \mathbf{x_n} = \mathbf{y_n}$

where all of the $\mathbf{x_i}$s and $\mathbf{y_i}$s are column vectors. It turns out you can combine all of that into a single equation:

$\mathbf{M}\cdot \mathbf{X} = \mathbf{Y}$,

where $\mathbf{X}$ is all of the $\mathbf{x_i}$s lined up side by side as the columns of a single matrix - and similarly for $\mathbf{Y}$.

To find the unknown $\mathbf{M}$, all you need to do is post-multiply both sides by $\mathbf{X}^{-1}$:

$\mathbf{M}\cdot \mathbf{X}\cdot \mathbf{X}^{-1} = \mathbf{Y} \cdot \mathbf{X}^{-1}$

… and the matrices at the end of the left-hand side work out to be $\mathbf{I}$:

$\mathbf{M} = \mathbf{Y} \cdot \mathbf{X} ^{-1}$
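A quick sanity check of the recipe in NumPy (the matrix and points here are made up for illustration): pick a known $\mathbf{M}$, build $\mathbf{X}$ and $\mathbf{Y}$ column by column, and confirm that $\mathbf{Y}\mathbf{X}^{-1}$ recovers it.

```python
import numpy as np

# A hypothetical 2x2 transformation we'd like to recover.
M_true = np.array([[2.0, 1.0],
                   [0.0, 3.0]])

# Two linearly independent points, stacked side by side:
# column 1 is x_1 = (1, 1), column 2 is x_2 = (0, 1).
X = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# Their images under the map, stacked the same way.
Y = M_true @ X

# Post-multiply Y by the inverse of X to recover M.
M = Y @ np.linalg.inv(X)

print(np.allclose(M, M_true))  # True
```

Swapping in two linearly *dependent* columns for `X` makes `np.linalg.inv` raise a `LinAlgError`, which is the numerical face of the caveat above.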

(This is why the $\mathbf{x_i}$s have to be linearly independent: otherwise, $\mathbf{X}$ has no inverse.)

Now, I haven’t done proper grown-up matrix work for a long while, but I gather that no computer scientist worth his hash-salting algorithm would ever invert a matrix; there are almost certainly numerically nicer ways to get $\mathbf{M}$ - solving the linear system directly, for instance. However, there’s a certain neatness to this solution that I rather like. So there!
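For the curious, here is one way the "don't invert" version might look (same made-up matrix and points as any small example): transposing $\mathbf{M}\mathbf{X} = \mathbf{Y}$ gives $\mathbf{X}^{\mathsf{T}}\mathbf{M}^{\mathsf{T}} = \mathbf{Y}^{\mathsf{T}}$, which is the standard $A\mathbf{x} = \mathbf{b}$ shape that a linear solver handles without ever forming an explicit inverse.

```python
import numpy as np

# Same hypothetical setup: recover M from M @ X = Y.
M_true = np.array([[2.0, 1.0],
                   [0.0, 3.0]])
X = np.array([[1.0, 0.0],
              [1.0, 1.0]])
Y = M_true @ X

# M @ X = Y transposes to X.T @ M.T = Y.T, so np.linalg.solve
# (LU factorisation under the hood) finds M.T directly.
M = np.linalg.solve(X.T, Y.T).T

print(np.allclose(M, M_true))  # True
```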