a repository of mathematical know-how

I want to find a non-trivial solution to a system of equations

This article is a stub. This means that it cannot be considered to contain or lead to any mathematically interesting information.

Suppose one wants to find a solution x \in V in some class V (such as a vector space) to some system of equations, e.g.

 f_1(x) = \ldots = f_d(x) = 0.

In some cases, there will be an obvious "trivial" solution (e.g. x=0), but one is interested in locating a "non-trivial" solution. In some cases one can go ahead and solve the system exactly, but sometimes the situation is so complicated that this is not feasible, and one would settle for more indirect methods of demonstrating existence of solutions. There are a variety of ways that this can be done.

  • Degree of freedom arguments. If the "dimension" of V is larger than the number of constraints in the system, then one can sometimes establish the existence of non-trivial solutions quite easily, especially if the system is linear (or approximately linear), and if the system is "homogeneous" enough that trivial solutions can easily be located.

  • Collision arguments. This is related to the degrees of freedom argument. If one can use a non-trivial collision F(x)=F(y) of some map F to generate a non-trivial solution to the desired system, then the pigeonhole principle (or a dimension count) will supply such a collision as soon as the cardinality (or dimension) of the domain of F exceeds that of the range.

  • Lower bounding or approximating the number of solutions. To demonstrate non-trivial solutions, it suffices to show that the number of solutions exceeds the number of trivial solutions. The number of solutions to a system of equations can sometimes be computed (or at least lower bounded) by analytic methods, such as the Hardy-Littlewood circle method, though one often has to take some sort of asymptotic limit n \to \infty before these bounds become strong enough to generate non-trivial solutions. (The number of trivial solutions is usually very easy to compute.)

  • Divisibility arguments. This is a special case of the previous method. In some cases one can show for free that the number of solutions is divisible by some prime p; thus, if the number of trivial solutions is not divisible by p (for instance, if there is exactly one), there must be non-trivial solutions as well. The Chevalley-Warning theorem is particularly useful for this sort of argument.

  • Topological arguments. The intersection number (counting multiplicity) of the surfaces \{ x : f_i(x) = 0 \} is often stable with respect to topological deformations and so can be studied by the tools of intersection cohomology. (Need more discussion here - not qualified to present it.) Other topological methods include degree theory, fixed point theory, and the intermediate value theorem.

  • Complex analytic arguments. There are various complex-analytic ways to force the existence of solutions to equations such as f(z)=0, such as the argument principle. There are also various ways to deform the system into a simpler one while preserving the number of solutions, using tools such as Rouché's theorem.

  • Perturbation theory. Sometimes one does not know how to find an exact solution immediately, but one can at least construct an approximate solution x' to the problem at hand (e.g. a solution for which f_1(x'),\ldots,f_d(x') are small rather than zero). In such cases, one can sometimes hope to perturb x' into a true solution, particularly if the derivatives (or linearization) of the functions f_1,\ldots,f_d around x' are sufficiently non-degenerate. Tools such as Taylor expansion, the inverse function theorem and the contraction mapping theorem tend to be useful for this task.
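The perturbation-theoretic strategy in the last bullet can be sketched numerically. The following is a minimal illustration, not a method from the text: the function f(x) = cos(x) - x, the starting guess, and the tolerances are all illustrative choices. Given an approximate root x' with f(x') small and the derivative f'(x') non-degenerate, Newton iteration (the discrete counterpart of the linearization argument) refines x' to a true solution.

```python
# Sketch of the perturbation method: refine an approximate root of f(x) = 0
# by repeatedly linearizing f and solving the linear model exactly.
import math

def newton_refine(f, fprime, x_approx, tol=1e-12, max_iter=50):
    """Perturb an approximate solution x_approx toward a true root of f."""
    x = x_approx
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)  # solve the linearized equation f(x) + f'(x) h = 0
    return x

# Illustrative example: f(x) = cos(x) - x has a root near the crude guess 0.7,
# and f'(x) = -sin(x) - 1 is bounded away from zero there (non-degeneracy).
root = newton_refine(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 0.7)
```

The non-degeneracy hypothesis is what makes the iteration a contraction near the root, mirroring the role of the inverse function theorem in the rigorous argument.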

Example 1

Problem: Let E be a subset of F^n, where F is a finite field. Show that E is contained in a hypersurface of degree O( |E|^{1/n} ), where |E| is the cardinality of E (and the implied constant can depend on n).

Solution: What we are asking for here is to find a non-trivial polynomial P of degree at most d, for some d = O(|E|^{1/n}), which vanishes at every point of E. One can view the space of polynomials of degree at most d as a vector space of dimension about d^n (up to constants depending on n), and the requirement that a polynomial vanish at every point of E can be viewed as a set of |E| homogeneous linear constraints on this vector space. Linear algebra then tells us that a non-trivial solution exists as soon as the dimension of the space (i.e. the number of degrees of freedom) exceeds |E|, which occurs for some d = O(|E|^{1/n}), and the claim follows.

This result, incidentally, is used in Dvir's proof of the finite field Kakeya conjecture; see this blog post for further discussion.
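A small concrete instance of the linear-algebra argument in Example 1 can be worked by hand (or machine). The parameters below are illustrative: F = F_5, n = 2, and a set E of five points. Monomials x^i y^j with i + j <= d span a space of dimension (d+1)(d+2)/2; as soon as this exceeds |E|, the evaluation map has a non-trivial kernel, i.e. a non-zero polynomial of degree at most d vanishing on all of E.

```python
# Find a non-zero polynomial over F_5 vanishing on a 5-point set E in F_5^2,
# by computing a kernel vector of the evaluation matrix (Gaussian elimination mod p).
p = 5
E = [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]  # |E| = 5 points (illustrative)
d = 2
monomials = [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]
assert len(monomials) > len(E)  # 6 degrees of freedom vs 5 constraints

# One row per point of E, one column per monomial, entries in F_p.
A = [[pow(x, i, p) * pow(y, j, p) % p for (i, j) in monomials] for (x, y) in E]

def nullspace_vector(A, ncols, p):
    """Return a non-zero kernel vector of A over F_p (reduced row echelon form)."""
    A = [row[:] for row in A]
    pivots = {}  # pivot column -> row index
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(A)) if A[i][c] % p), None)
        if piv is None:
            continue  # c is a free column
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)  # inverse mod p (Fermat; p is prime)
        A[r] = [a * inv % p for a in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(a - f * b) % p for a, b in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    # A free column exists since ncols exceeds the rank; set that variable to 1.
    free = next(c for c in range(ncols) if c not in pivots)
    v = [0] * ncols
    v[free] = 1
    for c, row in pivots.items():
        v[c] = (-A[row][free]) % p
    return v

coeffs = nullspace_vector(A, len(monomials), p)  # coefficients of P
# Check: P vanishes at every point of E.
for (x, y) in E:
    assert sum(c * pow(x, i, p) * pow(y, j, p)
               for c, (i, j) in zip(coeffs, monomials)) % p == 0
```

Note that the argument never exhibits the polynomial explicitly in advance; existence follows purely from the dimension count, exactly as in the prose proof.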

Example 2

(Kronecker approximation theorem - this is an "entropy" version of this general method, where we deal with approximate solutions to a problem rather than exact ones.)
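The one-dimensional pigeonhole ingredient behind Kronecker-type approximation results is Dirichlet's approximation theorem, and it illustrates the "collision" method in this approximate setting. The sketch below is illustrative (the function name and parameters are our own): among the Q+1 fractional parts {0·a}, {1·a}, ..., {Q·a}, two must land in the same of Q subintervals of length 1/Q, and their difference q satisfies |q·a - p| < 1/Q for some integer p.

```python
# Dirichlet approximation via pigeonhole: a "collision" of fractional parts
# yields a non-trivial rational approximation q*alpha ~ p with error < 1/Q.
import math

def dirichlet_approx(alpha, Q):
    """Find 1 <= q <= Q and an integer p with |q*alpha - p| < 1/Q."""
    boxes = {}
    for n in range(Q + 1):
        frac = (n * alpha) % 1.0
        box = min(int(frac * Q), Q - 1)  # which of the Q subintervals
        if box in boxes:                 # collision: two points share a box
            q = n - boxes[box]
            return q, round(q * alpha)
        boxes[box] = n
    raise AssertionError("pigeonhole guarantees a collision among Q+1 points")

q, p = dirichlet_approx(math.sqrt(2), 100)
# q*sqrt(2) is then within 1/100 of the integer p.
```

As in the exact setting, the solution is found without being constructed directly: the collision is guaranteed by counting, not exhibited in advance.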


Article structure

This should probably be restructured to match the form of the articles under "What kind of problem am I trying to solve?". I might even do so myself if I have the time and energy...
