a repository of mathematical know-how

Complete the square

Quick description

When faced with an expression involving both purely quadratic terms (e.g. ax^2) and linear terms (e.g. bx) in one or more variables x, translate the variable x in order to absorb the linear terms into the quadratic ones. This can create additional constant terms, but all other things being equal, constant terms are easier to deal with than linear terms.


High school algebra

Example 1: The quadratic formula

This is the classic example of the completing the square trick. Suppose one wants to find all solutions to the quadratic equation

 ax^2 + bx + c = 0

where a,b,c are parameters. The linear term prevents one from solving this equation directly; however, observe that a(x+x_0)^2 = ax^2 + 2ax x_0 + ax_0^2 for any shift x_0. Thus (assuming a \neq 0), if one picks x_0 = \frac{b}{2a}, one can absorb the linear term into the quadratic one, obtaining the equation

 a(x+\frac{b}{2a})^2 - a (\frac{b}{2a})^2 + c = 0

which only involves quadratic and constant terms. Now, the rules of high school algebra can solve this equation. Dividing by a and rearranging, one gets

 (x + \frac{b}{2a})^2 = \frac{b^2-4ac}{4a^2};

taking square roots, we arrive at the famous formula

 x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}

for the two solutions to the quadratic equation, bearing in mind that the square root may well be imaginary if b^2-4ac is negative. (For a=0, the above formula does not make sense, but in that case the equation degenerates to the linear equation bx+c=0, which has the single solution x=-\frac{c}{b}, unless b degenerates to zero also, in which case there are no solutions when c \neq 0 and infinitely many solutions when c=0.)
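The case analysis above translates directly into a short program. The following Python sketch (the function name solve_quadratic is mine) solves the equation via the completed-square formula, using cmath.sqrt so that a negative discriminant produces the complex roots, and handles the degenerate cases a = 0 and a = b = 0 exactly as in the discussion above.

```python
import cmath

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 by completing the square.

    Returns a tuple of roots (possibly complex).  The degenerate
    cases a = 0 and a = b = 0 are handled as in the text.
    """
    if a == 0:
        if b == 0:
            # bx + c degenerates to c = 0: every x works, or none.
            return "all x" if c == 0 else ()
        return (-c / b,)
    # Completed-square form: (x + b/2a)^2 = (b^2 - 4ac) / 4a^2.
    disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

print(solve_quadratic(1, -3, 2))  # roots of x^2 - 3x + 2, i.e. 2 and 1
```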

See "how to solve quadratic equations" for more discussion.

Example 2

A quadratic form is an expression like x^2+2y^2-z^2+3xy-3xz. That particular example is a quadratic form on \R^3, and in general a quadratic form over \R^n is a function q obtained by taking a symmetric bilinear form \beta on \R^n and defining q(\mathbf{x}) to be \beta(\mathbf{x},\mathbf{x}). In the example above, the bilinear form is \beta((x,y,z),(x',y',z'))=xx'+2yy'-zz'+\frac 32(xy'+x'y-xz'-x'z).

It is often convenient to diagonalize a quadratic form, which means writing it as a linear combination of squares of linearly independent linear forms. Let us do this for the example above, by completing the square.

Our aim will be to achieve this by removing from x^2+2y^2-z^2+3xy-3xz the square of a linear form in such a way that x is no longer involved. To do this, we first pick out the terms that involve x, which are x^2+3xy-3xz. We then try to find a linear form ax+by+cz such that when you square it the terms involving x are precisely these ones. With a bit of experience in completing the square, we know that x+\frac 32 y-\frac 32 z will do the job. Indeed,

(x+\frac 32 y-\frac 32 z)^2=x^2+3xy-3xz+\frac 94y^2-\frac 92yz+\frac 94z^2.

It follows that

x^2+2y^2-z^2+3xy-3xz=(x+\frac 32 y-\frac 32 z)^2-\frac 14y^2+\frac 92yz-\frac{13}4z^2.

We would now like to finish this process by diagonalizing -\frac 14y^2+\frac 92yz-\frac{13}4z^2, which we can make look slightly nicer by writing it as -\frac 14(y^2-18yz+13z^2). Completing the square again, we find that

y^2-18yz+13z^2=(y-9z)^2-68z^2.

Plugging this in, we deduce that

x^2+2y^2-z^2+3xy-3xz=(x+\frac 32 y-\frac 32 z)^2-\frac 14(y-9z)^2+17z^2,

and the quadratic form has been diagonalized.
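As a sanity check, one can verify the diagonalized identity numerically. Here is a quick Python sketch (the function names q and q_diag are mine) that compares the original quadratic form with its diagonalized version at many random points.

```python
import random

def q(x, y, z):
    # The original quadratic form.
    return x*x + 2*y*y - z*z + 3*x*y - 3*x*z

def q_diag(x, y, z):
    # The diagonalized version: (x + 3/2 y - 3/2 z)^2 - 1/4 (y - 9z)^2 + 17 z^2.
    return (x + 1.5*y - 1.5*z)**2 - 0.25*(y - 9*z)**2 + 17*z*z

for _ in range(1000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    assert abs(q(x, y, z) - q_diag(x, y, z)) < 1e-6
print("identity verified at 1000 random points")
```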

Example 3

Completing the square can be used to compute the Fourier transform of Gaussians. In one dimension, this was done in "Use rescaling or translation to normalize parameters"; we do the multi-dimensional case here. Specifically, let us compute the Fourier transform

 \int_{\R^d} e^{-\pi x \cdot M x} e^{-2\pi i x \cdot \xi}\ dx \qquad (1)

of the Gaussian e^{-\pi x \cdot M x}, where M is a positive definite real symmetric d \times d matrix, and \xi \in \R^d. Observe that

 (x+x_0) \cdot M (x+x_0) = x \cdot Mx + 2 x \cdot M x_0 + x_0 \cdot M x_0

for all x, x_0 \in \C^d. Thus, if we pick x_0 = i M^{-1} \xi, we can complete the square and rewrite (1) as

 \int_{\R^d} e^{-\pi (x + i M^{-1} \xi) \cdot M (x + i M^{-1} \xi)} e^{\pi (i M^{-1} \xi) \cdot M (i M^{-1} \xi)}\ dx.

The constant term e^{\pi (i M^{-1} \xi) \cdot M (i M^{-1} \xi)}, being independent of x, can be pulled out of the integral and simplified, leaving us with

 e^{-\pi \xi \cdot M^{-1} \xi} \int_{\R^d} e^{-\pi (x + i M^{-1} \xi) \cdot M (x + i M^{-1} \xi)}\ dx;

contour shifting in each of the d variables of integration separately then allows us to rewrite this as

 e^{-\pi \xi \cdot M^{-1} \xi} \int_{\R^d} e^{-\pi x \cdot M x}\ dx.

Making the change of variables y = M^{1/2} x to normalize the integrand, we can simplify this as

 e^{-\pi \xi \cdot M^{-1} \xi} \frac{1}{(\det M)^{1/2}} \int_{\R^d} e^{-\pi |y|^2}\ dy,

which by factoring the integral and using the standard integral \int_\R e^{-\pi x^2}\ dx = 1 (proven at "square and rearrange") simplifies to

 \frac{1}{(\det M)^{1/2}} e^{-\pi \xi \cdot M^{-1} \xi}.

Remark. Of course, one could also go about this computation by first diagonalizing M to place it in a normal form; it is instructive to check that both computations arrive at the same answer at the end of the day.
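In one dimension (d = 1, with M the 1 \times 1 matrix (m)), the formula says the Fourier transform of e^{-\pi m x^2} is m^{-1/2} e^{-\pi \xi^2/m}. The following Python sketch (the function name gaussian_ft and the truncation/step parameters are mine) checks this against a midpoint-rule approximation of the integral (1).

```python
import cmath
import math

def gaussian_ft(m, xi, h=1e-3, L=10.0):
    """Midpoint-rule approximation of \\int e^{-pi m x^2} e^{-2 pi i x xi} dx,
    truncated to [-L, L] with step h."""
    n = int(2 * L / h)
    total = 0.0 + 0.0j
    for k in range(n):
        x = -L + (k + 0.5) * h
        total += cmath.exp(-math.pi * m * x * x - 2j * math.pi * x * xi)
    return total * h

m, xi = 2.0, 0.7
approx = gaussian_ft(m, xi)
exact = math.exp(-math.pi * xi * xi / m) / math.sqrt(m)
print(abs(approx - exact))  # should be tiny
```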

Example 4

The standard proof of the Cauchy-Schwarz inequality \langle x, y \rangle \leq \|x\| \|y\| in a Hilbert space (which we take here to be real, for simplicity) can be viewed as a variant of the completing-the-square trick, but now one converts a linear term into quadratic and constant terms rather than vice versa. Indeed, since

 \| x - ay \|^2 = \|x\|^2 - 2 a \langle x, y \rangle + a^2 \|y\|^2

for any a \in \R, we can write

 \langle x, y \rangle = \frac{1}{2a} \|x\|^2 + \frac{a}{2} \|y\|^2 - \frac{1}{2a} \|x-ay\|^2

for any a > 0; in particular,

 \langle x, y \rangle \leq \frac{1}{2a} \|x\|^2 + \frac{a}{2} \|y\|^2.

If we then optimize in a (the optimal choice being a = \|x\|/\|y\| when y \neq 0), we obtain the Cauchy-Schwarz inequality.
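A quick numerical check of this argument, in Python (the helper name bound is mine): the displayed inequality holds for every a > 0, and the choice a = \|x\|/\|y\| turns the right-hand side into exactly \|x\| \|y\|.

```python
import math
import random

def bound(x, y, a):
    # <x,y> <= ||x||^2/(2a) + a ||y||^2/2 for every a > 0.
    dot = sum(u * v for u, v in zip(x, y))
    rhs = sum(u * u for u in x) / (2 * a) + a * sum(v * v for v in y) / 2
    return dot, rhs

random.seed(0)
x = [random.gauss(0, 1) for _ in range(5)]
y = [random.gauss(0, 1) for _ in range(5)]

for a in (0.1, 0.5, 1.0, 2.0, 10.0):
    dot, rhs = bound(x, y, a)
    assert dot <= rhs + 1e-12

# The optimal choice a = ||x||/||y|| makes the bound equal to ||x|| ||y||.
norm_x = math.sqrt(sum(u * u for u in x))
norm_y = math.sqrt(sum(v * v for v in y))
dot, rhs = bound(x, y, norm_x / norm_y)
assert abs(rhs - norm_x * norm_y) < 1e-12
print("Cauchy-Schwarz bound verified")
```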

Example 5

Suggestions welcome!

General discussion

Completing the square can also be done in several variables, whenever one is adding a quadratic form Q(x) = B(x,x) to a linear form L(x) plus some constant terms, provided that the quadratic form is non-degenerate (this is analogous to the a \neq 0 condition in the quadratic formula).
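Concretely, if Q(x) = x \cdot Mx with M symmetric and invertible, then x \cdot Mx + b \cdot x + c = (x+x_0) \cdot M (x+x_0) + c - x_0 \cdot M x_0 with the shift x_0 = \frac 12 M^{-1} b. A short NumPy sketch verifying this identity at a random point (all variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
M = M + M.T                       # symmetric; non-degenerate with probability 1
b = rng.standard_normal(3)
c = 1.7

x0 = 0.5 * np.linalg.solve(M, b)  # the shift that absorbs the linear term

x = rng.standard_normal(3)
lhs = x @ M @ x + b @ x + c
rhs = (x + x0) @ M @ (x + x0) + c - x0 @ M @ x0
assert abs(lhs - rhs) < 1e-10
print("multivariate completion of the square verified")
```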

The method also works to some extent for higher degree polynomials, but is significantly weaker. For instance, with cubic equations ax^3+bx^2+cx+d=0, one can complete the cube to eliminate the quadratic term bx^2 (at the expense of modifying the lower order terms cx+d), but the linear term remains, and further tricks are needed to solve this equation. See "How to solve cubic and quartic equations".
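For instance, substituting x = t - \frac{b}{3a} into ax^3+bx^2+cx+d kills the t^2 coefficient, since that coefficient becomes -3as + b = 0 with s = \frac{b}{3a}. A small Python check (the coefficient names, and the particular values of a,b,c,d, are mine):

```python
# Substituting x = t - s with s = b/(3a) into a x^3 + b x^2 + c x + d
# and collecting powers of t:
a, b, c, d = 2.0, -6.0, 1.0, 5.0
s = b / (3 * a)

coeff_t3 = a                                   # t^3 term
coeff_t2 = -3 * a * s + b                      # t^2 term: vanishes by choice of s
coeff_t1 = 3 * a * s * s - 2 * b * s + c       # t^1 term: survives
coeff_t0 = -a * s**3 + b * s * s - c * s + d   # constant term

assert abs(coeff_t2) < 1e-12                   # quadratic term eliminated
print(coeff_t3, coeff_t1, coeff_t0)
```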


Possible further examples

I have a couple of suggestions. One is to do a Gaussian computation in this article. A natural example is to take a bivariate normal random variable (X,Y) and create out of X and Y two linear combinations U and V that are linearly independent. That should perhaps come after my second suggestion, which is simply to explain how to diagonalize a quadratic form (by giving a few examples). If I have a spare moment, I may add these examples.

The second one is one I care about because a lot of people I teach seem to have the impression that the only way of diagonalizing a quadratic form is to diagonalize the associated symmetric matrix.

Incidentally, I'm all in favour of using the same example in more than one article, so there might be a case for having the Fourier transform of a Gaussian done here as well.

completing the square to prove inequalities

I've seen this used multiple times, in a context similar to the usual proof of Cauchy-Schwarz inequality, where completing the square of a real-valued expression shows that some grouping of terms is non-negative.

Thanks for the suggestions!

I put up some quick examples to reflect them. Feel free to edit, of course.

Inline comments

The following comments were made inline in the article.

Contour shifting

I've started an article about making complex substitutions in integrals. My guess is that this is what you mean by contour shifting, but I have not put this link in yet because I am not quite sure. If it is, and if this is the standard name for the technique, then I'll add a remark to that effect to the article.

Contour shifting

Hmm, I thought this was a common term, but I just googled for it and the first mathematical occurrence of it was... one of my own web pages. But "shifting the contour" is a reasonably common term. It's not _quite_ the same as complex substitution (it's based on Cauchy's theorem for contour integrals, rather than the change of variables formula), but it of course combines very well with such substitutions. But it also can be used independently of complex substitution, e.g. to evaluate integrals such as \int_{-\infty}^{\infty} \frac{e^{ix}}{x^2+1}\ dx by shifting the contour to a big semicircle etc.

I link to this nonexistent article in a couple other places, so I think I'll start a stub on it, and perhaps return to it later.

This article is

This article is unintelligible.

How can it be possible to transform the method of completing the square into such gibberish?

By the way, a(x+x_0)^2 = ax^2 + 2ax x_0 + x_0^2 is false.

I don't find the article

I don't find the article unintelligible at all; perhaps you could clarify what you find problematic.

Also, editing a page to correct typos (like the missing coefficient you spotted) is very simple: just click on edit at the top of the article. I've corrected it here.
