In this post we use Taylor polynomial expansions to efficiently solve one of the problems from the 2018 Putnam competition.
Here is the problem statement.
Let f:R→R be an infinitely differentiable function satisfying f(0)=0, f(1)=1, and f(x)≥0 for all x∈R. Show that there exist a positive integer n and a real number x such that f^{(n)}(x)<0.
We proceed by contradiction, assuming from now on that f^{(n)}(x)≥0 for all real x and all positive integers n.
1. Show that if there exists c<0 such that f(c)>0, then the Mean Value Theorem implies the existence of a point x∈(c,0) such that f′(x)<0 (the computation is written out after this list). Conclude that f(x)=0 for all x≤0.
2. Use the above fact, and infinite differentiability, to conclude that f^{(n)}(0)=0 for all n∈Z^+.
3. Apply Taylor’s Theorem on the interval [0,1] to show that
   $$f(1)=\frac{f^{(n)}(\eta_n)}{n!}, \quad \text{for some } \eta_n\in(0,1).$$
   Conclude that for each positive integer n there is a point η_n∈(0,1) such that f^{(n)}(η_n)=n!.
4. Use 3. and the fact that f^{(n+1)}(x)≥0 for all real x to prove that f^{(n)}(1)≥n!.
5. Apply Taylor’s Theorem again, this time on the interval [1,2], to write
   $$f(2)=\sum_{k=0}^{n}\frac{f^{(k)}(1)}{k!}+\frac{f^{(n+1)}(\xi_n)}{(n+1)!},$$
   where we defined f^{(0)}(x)=f(x) and where ξ_n∈(1,2). By 4. and our hypothesis, we get the contradiction f(2)≥n for all n∈Z^+ (both expansions are written out in full after this list).
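For step 1, the Mean Value Theorem computation is short enough to write out as a sanity check. Applying the theorem on [c,0], there is an x∈(c,0) with

$$f'(x)=\frac{f(0)-f(c)}{0-c}=\frac{-f(c)}{-c}=\frac{f(c)}{c}<0,$$

since f(c)>0 and c<0, contradicting the standing assumption that f′(x)≥0 everywhere.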
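For steps 3 to 5, here is a sketch of how the two expansions unfold when the Lagrange form of the remainder is used, together with the fact from step 2 that every derivative of f vanishes at 0:

$$f(1)=\sum_{k=0}^{n-1}\frac{f^{(k)}(0)}{k!}+\frac{f^{(n)}(\eta_n)}{n!}=\frac{f^{(n)}(\eta_n)}{n!},\qquad \eta_n\in(0,1),$$

so f^{(n)}(η_n)=n! because f(1)=1, and f^{(n)}(1)≥f^{(n)}(η_n)=n! because f^{(n)} is non-decreasing (its derivative f^{(n+1)} is non-negative). Expanding around 1 instead,

$$f(2)=\sum_{k=0}^{n}\frac{f^{(k)}(1)}{k!}+\frac{f^{(n+1)}(\xi_n)}{(n+1)!}\;\ge\;\sum_{k=1}^{n}\frac{k!}{k!}=n,\qquad \xi_n\in(1,2),$$

because the k=0 term and the remainder are non-negative. Since f(2) is a fixed real number, the bound f(2)≥n cannot hold for every positive integer n.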
You might have noticed that there is nothing special about the hypothesis f(1)=1. As a further exercise, you could replace it with the weaker requirement that f be non-constant. (Hint: show that there must exist a>0 such that f(a)>0. Use the auxiliary function g(x)=f(ax)/f(a)…)
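As a quick sketch of why the hint works (assuming a point a>0 with f(a)>0 has been found), the auxiliary function g(x)=f(ax)/f(a) satisfies

$$g(0)=\frac{f(0)}{f(a)}=0,\qquad g(1)=\frac{f(a)}{f(a)}=1,\qquad g(x)=\frac{f(ax)}{f(a)}\ge 0\ \text{ for all } x\in\mathbb{R},$$

and it is infinitely differentiable with g^{(n)}(x)=\frac{a^{n}f^{(n)}(ax)}{f(a)}. The result proved above then yields n and x with g^{(n)}(x)<0, and since a>0 and f(a)>0 this forces f^{(n)}(ax)<0.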