# What is the domain of a solution of an ODE?

Jordan Bell
April 16, 2015

## 1 Introduction

This paper is about the question: what is the domain of the solution of a differential equation? In other words, what are blow-up conditions for a solution of an ordinary differential equation?

We cannot properly speak about a function before knowing its domain. An older notion of function is an “analytical expression” (see [28, p. 61], [15] and [25, Chapter 5]), in which the rules of the game allow us to write an expression like $\sqrt{\sin x}$ and then ask what its domain is, rather than letting $E=[0,\pi]$ and defining $f:E\to\mathbb{R}$ by $f(x)=\sqrt{\sin x}$. But I would like to speak about functions, not analytical expressions, and before we can manipulate a function in order to find an explicit expression for it, we must first know that the function exists, and it must have a certain domain, which we may be able to determine explicitly. In other words, one can solve a differential equation by supposing that there is a solution and using the fact that it solves the differential equation to show that it must have an explicit form, and then checking that the explicit function you end up with actually does solve the differential equation; but in this paper we would like to only talk about the properties of functions we already know exist, and to be guaranteed that the function we end up with is a solution because of the correctness of each step we took, not by manually checking that a certain expression we have found actually is a solution. (Indeed, there are places where it is a helpful exploratory device to assume that a sequence has a limit, like with a recurrent sequence, to find what the limit would have to be, and then to prove that this is the limit of the sequence.)

We can prove that a solution exists on some interval around $0$, but how big is this interval? To get our hands on an expression for $x(t)$ we need to talk about $x(t)$, and how can we talk about $x(t)$ before we know the domain of $x$? Of course we can check whether an expression given by an oracle is a solution of an initial value problem on some interval. But we would like to be certain that each step we take in determining the form of a solution is correct, so that we end up with a function, defined on a certain interval, that solves the initial value problem.

If we know the current state of a system and the way that it changes instantaneously, we would like to know its state at any future time. But it can happen that it doesn’t make sense to ask what the state of the system is at some later time.

The existence and uniqueness theorem is presented and discussed in Forsyth [6, pp. 26–41, Chapter II], Painlevé [21], and Ince [14], and is also presented in each of the references for Theorem 2; see also [23]. The history of differential equations is presented in [1]. Liouville’s anticipation of Picard approximations is explained in [18, p. 448, §32]; see also Youschkevitch [29], Mawhin [20], and Tournés [27].

## 2 Maximal interval of existence

For $u\in\mathbb{R}^{2}$ and $r>0$, we define $B_{r}(u)=\{v\in\mathbb{R}^{2}:|u-v|<r\}$. (For $u_{1}=(x_{1},t_{1}),u_{2}=(x_{2},t_{2})\in\mathbb{R}^{2}$, $|u_{1}-u_{2}|=\sqrt{|x_{1}-x_{2}|^{2}+|t_{1}-t_{2}|^{2}}$.) Let $E\subseteq\mathbb{R}^{2}$. We say that $f:E\to\mathbb{R}$ is locally Lipschitz if for each $u\in E$ there are some $\delta>0$ and some $K$ such that if $v,w\in E\cap B_{\delta}(u)$ then $|f(v)-f(w)|\leq K|v-w|$.

A helpful way to check that a function is locally Lipschitz is the following [2, p. 218, Theorem]. If $E\subseteq\mathbb{R}^{2}$ is open and the gradient $\nabla f:E\to\mathbb{R}^{2}$ of $f:E\to\mathbb{R}$ is continuous, then for any convex compact subset $A$ of $E$ and with $K=\max_{u\in A}|(\nabla f)(u)|$ we have $|f(u)-f(v)|\leq K|u-v|$ for all $u,v\in A$. It follows that if the gradient of $f:E\to\mathbb{R}$ is continuous, then $f$ is locally Lipschitz.

For example, let $E=\{(x,t)\in\mathbb{R}^{2}:x\neq 0\}$, and define $f:E\to\mathbb{R}$ by $f(x,t)=\frac{t^{2}}{x}$. Then $\nabla f:E\to\mathbb{R}^{2}$,

 $(\nabla f)(x,t)=\begin{pmatrix}-\frac{t^{2}}{x^{2}}\\ \frac{2t}{x}\end{pmatrix},$

is continuous, and hence $f$ is locally Lipschitz. (Indeed, $\nabla f$ is unbounded on $E$, and thus $f:E\to\mathbb{R}$ is not Lipschitz.)
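This criterion lends itself to a numerical spot-check. The following is a minimal sketch in Python (an illustration of my own; the compact convex set $A=[1,2]\times[0,1]$, the grid resolution, and the random sampling are choices of mine, not from the text): it estimates $K=\max_{u\in A}|(\nabla f)(u)|$ on a grid and verifies $|f(u)-f(v)|\leq K|u-v|$ on random pairs of points in $A$.

```python
import math
import random

def f(x, t):
    return t ** 2 / x

def grad_f(x, t):
    # (df/dx, df/dt) = (-t^2/x^2, 2t/x)
    return (-t ** 2 / x ** 2, 2 * t / x)

# Compact convex subset A = [1, 2] x [0, 1] of E = {(x, t) : x != 0}.
K = max(
    math.hypot(*grad_f(1 + i / 100, j / 100))
    for i in range(101)
    for j in range(101)
)

# Spot-check the Lipschitz bound |f(u) - f(v)| <= K |u - v| on A.
rng = random.Random(0)
for _ in range(10_000):
    u = (1 + rng.random(), rng.random())
    v = (1 + rng.random(), rng.random())
    dist = math.hypot(u[0] - v[0], u[1] - v[1])
    assert abs(f(*u) - f(*v)) <= K * dist + 1e-12
print(K)   # max gradient norm on A
```

Here the maximum is attained at the corner $(x,t)=(1,1)$, where $|(\nabla f)(1,1)|=\sqrt{1+4}=\sqrt{5}$.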

Let $E\subseteq\mathbb{R}^{2}$ be open, let $f:E\to\mathbb{R}$, and let $(x_{0},t_{0})\in E$. A solution of the initial value problem

 $x^{\prime}=f(x,t),\qquad x(t_{0})=x_{0}$

is an interval $J$ with $t_{0}\in J$ and a function $x:J\to\mathbb{R}$ satisfying $x(t_{0})=x_{0}$ and $x^{\prime}(t)=f(x(t),t)$ for all $t\in J$.

The existence and uniqueness theorem for ordinary differential equations is the following.

###### Theorem 1.

Let $E\subseteq\mathbb{R}^{2}$ be open, let $f:E\to\mathbb{R}$ be locally Lipschitz, and let $(x_{0},t_{0})\in E$. There is some $\epsilon>0$ such that there is one and only one $x:(t_{0}-\epsilon,t_{0}+\epsilon)\to\mathbb{R}$ that is a solution of the initial value problem

 $x^{\prime}=f(x,t),\qquad x(t_{0})=x_{0}.$

We say that an interval $J$ is a maximal interval of existence for an initial value problem if there is a solution of the initial value problem defined on $J$, and if for any interval $J^{\prime}$ that strictly includes $J$, there is no solution of the initial value problem defined on $J^{\prime}$. Cf. the maximal domain of existence of a holomorphic function [22, p. 112, §2]; Lefschetz [17, p. 35] remarks: “The analogy with the classical process of analytic continuation is obvious.” If we would like to speak about “the” solution of an initial value problem, what we mean is a solution with a maximal domain.

Let $E\subseteq\mathbb{R}^{2}$ be open, $f:E\to\mathbb{R}$ be locally Lipschitz, and let $(x_{0},t_{0})\in E$. There exists [26, p. 51, Theorem 2.13] a maximal interval of existence $J=(\alpha,\beta)$, $-\infty\leq\alpha<t_{0}<\beta\leq\infty$, for the initial value problem

 $x^{\prime}(t)=f(x(t),t),\qquad x(t_{0})=x_{0}.$

It is useful to talk about the maximal interval of existence for an initial value problem because we can say things about the behavior of the solution as $t$ approaches the endpoints of the interval. There isn’t a simple way of determining the maximal interval of existence of an initial value problem other than by explicitly finding the solution, and there are differential equations whose solutions cannot be expressed in terms of elementary functions: Hubbard and Lundell [12] make precise what it means to be expressed in terms of elementary functions, and show that no solution of $x^{\prime}=t-x^{2}$ can be thus expressed. In fact, there exist computable $f$ such that the maximal interval of existence of the initial value problem $x^{\prime}=f(x,t)$, $x(0)=x_{0}$ is not computable [7].

The following theorem is proved in Arnold [2, p. 53, Corollary 5], in which it is called the extension theorem. It is also proved in Graves [8], Hurewicz [13, p. 17, Corollary], Bourbaki [3, p. 172, Theorem 2], Coddington and Levinson [4, p. 47, Theorem 1.3], Lefschetz [17, p. 35], Hartman [10, p. 12, Theorem 3.1], Hale [9], Hirsch, Smale and Devaney [11, p. 398, Theorem], and Teschl [26, p. 53, Corollary 2.16], as well as in other books.

###### Theorem 2.

Let $E\subseteq\mathbb{R}^{2}$ be open, let $f:E\to\mathbb{R}$ be locally Lipschitz, and let $(x_{0},t_{0})\in E$. Let $J=(\alpha,\beta)$ be the maximal interval of existence for the initial value problem

 $x^{\prime}(t)=f(x(t),t),\qquad x(t_{0})=x_{0}.$

If $\beta$ is finite, then for any compact set $K\subset E$ there is some $t\in[t_{0},\beta)$ such that $(x(t),t)\not\in K$. If $\alpha$ is finite, then for any compact set $K\subset E$ there is some $t\in(\alpha,t_{0}]$ such that $(x(t),t)\not\in K$.

It follows that as $t\to\alpha$ or $t\to\beta$, either $|x(t)|\to\infty$ or $(x(t),t)$ has a limit point on the boundary of $E$ (and any limit point of $(x(t),t)$ is on the boundary of $E$). In particular, if $E=\mathbb{R}^{2}$ and $\alpha$ is finite then $\lim_{t\to\alpha}|x(t)|=\infty$, and if $\beta$ is finite then $\lim_{t\to\beta}|x(t)|=\infty$.

## 3 Two examples

Example 1. Consider the initial value problem

 $x^{\prime}=2tx^{2},\qquad x(0)=x_{0}.$

Let us do this very carefully. Let $x$ have maximal domain $(\alpha,\beta)$. If $x_{0}=0$, then $x(t)=0$ is a solution of the initial value problem, and the domain of the solution is $\mathbb{R}$. Otherwise, suppose $x_{0}\neq 0$. If there is some $t\in(\alpha,0)$ such that $x(t)=0$, then let

 $A=\sup\{t\in(\alpha,0):x(t)=0\};$

if there is no $t\in(\alpha,0)$ such that $x(t)=0$, then let $A=\alpha$. Since $x:(\alpha,\beta)\to\mathbb{R}$ is continuous and $x(0)=x_{0}\neq 0$, it follows that $A<0$. Likewise, if there is some $t\in(0,\beta)$ such that $x(t)=0$, then let

 $B=\inf\{t\in(0,\beta):x(t)=0\};$

if there is no $t\in(0,\beta)$ such that $x(t)=0$, then let $B=\beta$. Since $x$ is continuous, $B>0$.

Let $g(u)=-u^{-1}$. For $t\in(A,B)$, since $x^{\prime}(t)=2t(x(t))^{2}$ and $x(t)\neq 0$, we have

 $\frac{d(g\circ x)}{dt}(t)-2t=\frac{x^{\prime}(t)-2t(x(t))^{2}}{(x(t))^{2}}=0.$

Then, for $t\in(A,B)$, we have

 $\int_{0}^{t}\left(\frac{d(g\circ x)}{ds}(s)-2s\right)ds=0.$

Hence,

 $g(x(t))-g(x(0))-t^{2}=0,$

i.e.

 $-\frac{1}{x(t)}+\frac{1}{x_{0}}-t^{2}=0.$

Therefore, if $t\in(A,B)$ then

 $x(t)=\frac{x_{0}}{1-t^{2}x_{0}}.$ (1)

If $A>\alpha$ then there is indeed at least one $t\in(\alpha,0)$ such that $x(t)=0$, and in particular $x(A)=0$. So, as $x:(\alpha,\beta)\to\mathbb{R}$ is continuous,

 $\lim_{t\to A}x(t)=0.$

But it follows from (1) that

 $\lim_{t\to A}x(t)=\frac{x_{0}}{1-A^{2}x_{0}}\neq 0.$

Thus $A=\alpha$. Likewise,

 $\lim_{t\to B}x(t)=\frac{x_{0}}{1-B^{2}x_{0}}\neq 0,$

and thus $B=\beta$. Therefore $(A,B)=(\alpha,\beta)$.

If $\alpha\neq-\infty$, then

 $\lim_{t\to\alpha}|x(t)|=\infty,$

and if $\beta\neq+\infty$, then

 $\lim_{t\to\beta}|x(t)|=\infty.$

Suppose that $x_{0}<0$. Then $1-t^{2}x_{0}>0$ for every $t$, so if $\alpha$ were finite we would have

 $\lim_{t\to\alpha}|x(t)|=\frac{|x_{0}|}{1-\alpha^{2}x_{0}}\neq\infty.$

Hence $\alpha=-\infty$. Likewise, if $\beta$ were finite we would have

 $\lim_{t\to\beta}|x(t)|=\frac{|x_{0}|}{1-\beta^{2}x_{0}}\neq\infty,$

hence $\beta=+\infty$. Therefore, if $x_{0}<0$, then $x$ has domain $\mathbb{R}$.

Suppose that $x_{0}>0$. If $\alpha$ were finite with $-\frac{1}{\sqrt{x_{0}}}<\alpha$, then

 $\lim_{t\to\alpha}|x(t)|=\frac{x_{0}}{1-\alpha^{2}x_{0}}\neq\infty,$

as $1-\alpha^{2}x_{0}>0$, contradicting $\lim_{t\to\alpha}|x(t)|=\infty$. And $\alpha<-\frac{1}{\sqrt{x_{0}}}$ is impossible, since then (1) would be undefined at $t=-\frac{1}{\sqrt{x_{0}}}\in(\alpha,\beta)$, where $x$ is defined. Therefore $\alpha=-\frac{1}{\sqrt{x_{0}}}$. Likewise, $\beta=\frac{1}{\sqrt{x_{0}}}$. Therefore, if $x_{0}>0$, then $x$ has domain $\left(-\frac{1}{\sqrt{x_{0}}},\frac{1}{\sqrt{x_{0}}}\right)$.
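As a numerical sanity check on this example (a check of my own, not part of the argument), one can integrate $x^{\prime}=2tx^{2}$ with a fixed-step Runge–Kutta scheme and watch where the solution blows up; for $x_{0}=1$ the computed blow-up time should land close to $1/\sqrt{x_{0}}=1$. A minimal sketch in Python, with the step size and blow-up cap chosen arbitrarily:

```python
# Fixed-step RK4 for x' = 2 t x^2; integrate until |x| exceeds a cap,
# and report the time at which that happens (a proxy for beta).
def rk4_blowup_time(x0, h=1e-4, cap=1e6, t_max=10.0):
    f = lambda x, t: 2 * t * x * x
    t, x = 0.0, x0
    while t < t_max and abs(x) < cap:
        k1 = f(x, t)
        k2 = f(x + h * k1 / 2, t + h / 2)
        k3 = f(x + h * k2 / 2, t + h / 2)
        k4 = f(x + h * k3, t + h)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return t

x0 = 1.0
t_blow = rk4_blowup_time(x0)
beta = 1 / x0 ** 0.5   # right endpoint computed above
print(t_blow, beta)    # these should nearly agree
```

For $x_{0}<0$ the same loop runs to `t_max` without blowing up, consistent with the domain being all of $\mathbb{R}$.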

Example 2. Now let’s do an example of an initial value problem where the domain of the vector field is not $\mathbb{R}^{2}$. Consider the initial value problem

 $x^{\prime}=\frac{1}{1-t}\cdot\frac{1}{1-x},\qquad x(0)=x_{0}.$

Let $E=\{(x,t)\in\mathbb{R}^{2}:x\neq 1,t\neq 1\}$; the boundary of $E$ is the union of the lines $\{(x,t):x=1\}$ and $\{(x,t):t=1\}$.

Let $x$ have maximal domain $(\alpha,\beta)$. Let $g(u)=u-\frac{u^{2}}{2}$. If $t\in(\alpha,\beta)$ then

 $\frac{d(g\circ x)}{dt}(t)-\frac{1}{1-t}=x^{\prime}(t)-x(t)x^{\prime}(t)-\frac{1}{1-t}=0.$

Thus

 $\int_{0}^{t}\left(\frac{d(g\circ x)}{ds}(s)-\frac{1}{1-s}\right)ds=0,$

so

 $g(x(t))-g(x_{0})+\log(1-t)=0.$

That is,

 $\frac{(x(t))^{2}}{2}-x(t)+g(x_{0})-\log(1-t)=0.$

Using the quadratic formula, we either have

 $x(t)=1+\sqrt{1-2(g(x_{0})-\log(1-t))}$

or

 $x(t)=1-\sqrt{1-2(g(x_{0})-\log(1-t))}.$

If $x_{0}>1$ then, since $x$ is continuous and $x(t)$ cannot be equal to $1$, we have

 $x(t)=1+\sqrt{1-2g(x_{0})+2\log(1-t)}.$

As $x(t)$ is a real number, $1-2g(x_{0})+2\log(1-t)\geq 0$, i.e. $\log(1-t)\geq g(x_{0})-\frac{1}{2}$, i.e. $1-t\geq\exp\left(g(x_{0})-\frac{1}{2}\right)$, i.e. $t\leq 1-\exp\left(x_{0}-\frac{x_{0}^{2}}{2}-\frac{1}{2}\right)$. Let $B=1-\exp\left(x_{0}-\frac{x_{0}^{2}}{2}-\frac{1}{2}\right)$. As $t\to\beta$, either $|x(t)|\to\infty$ or $(x(t),t)$ has a limit point on the boundary of $E$. But if $\beta<B$, then

 $\lim_{t\to\beta}x(t)=1+\sqrt{1-2g(x_{0})+2\log(1-\beta)}>1,$

from which we conclude two things: $\lim_{t\to\beta}|x(t)|\neq\infty$, and $(x(t),t)$ has no limit point on the boundary of $E$ as $t\to\beta$, since $x(t)>1$ and $t<B<1$. It follows that $\beta=B$. On the other hand, if $\alpha\neq-\infty$, then $\lim_{t\to\alpha}|x(t)|=\infty$ (because $(x(t),t)$ cannot have a limit point on the boundary of $E$ as $t\to\alpha$: for $t<0$ we have $\log(1-t)>0$ and hence $x(t)>x_{0}>1$, and also $t<0\neq 1$). But

 $\lim_{t\to\alpha}x(t)=1+\sqrt{1-2g(x_{0})+2\log(1-\alpha)},$

so $\lim_{t\to\alpha}|x(t)|\neq\infty$. It follows that $\alpha=-\infty$. Therefore, if $x_{0}>1$ then

 $x(t)=1+\sqrt{(1-x_{0})^{2}+2\log(1-t)},$

and the domain of $x$ is

 $\left(-\infty,1-\exp\left(x_{0}-\frac{x_{0}^{2}}{2}-\frac{1}{2}\right)\right).$

If $x_{0}<1$, then likewise, except that continuity and $x(0)=x_{0}<1$ now force the other root,

 $x(t)=1-\sqrt{(1-x_{0})^{2}+2\log(1-t)},$

and the domain of $x$ is also

 $\left(-\infty,1-\exp\left(x_{0}-\frac{x_{0}^{2}}{2}-\frac{1}{2}\right)\right).$
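The explicit solution and its domain can also be verified numerically (again a check of my own; the sample points and the choice $x_{0}=2$ are arbitrary): a central finite difference of the closed-form $x(t)$ should match the right-hand side $\frac{1}{1-t}\cdot\frac{1}{1-x}$ at points inside the domain. A minimal sketch in Python:

```python
import math

def x_sol(t, x0):
    # closed-form solution from Example 2 (the x0 > 1 branch)
    return 1 + math.sqrt((1 - x0) ** 2 + 2 * math.log(1 - t))

def rhs(x, t):
    return 1 / ((1 - t) * (1 - x))

x0 = 2.0
B = 1 - math.exp(x0 - x0 ** 2 / 2 - 0.5)   # right endpoint of the domain

# Compare the finite-difference derivative of x_sol with the vector field.
h = 1e-6
worst = 0.0
for i in range(1, 100):
    t = -5 + i * (B + 5) / 101            # sample points in (-5, B)
    deriv = (x_sol(t + h, x0) - x_sol(t - h, x0)) / (2 * h)
    worst = max(worst, abs(deriv - rhs(x_sol(t, x0), t)))
print(worst)   # should be tiny
```

The square root's argument vanishes exactly at $t=B$, which is where the solution reaches the boundary line $x=1$.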

## 4 The implicit function theorem and exact differential equations

Exact differential equation. Consider the initial value problem

 $x^{\prime}=\frac{\cos x}{t\sin x-x^{2}},\qquad x(0)=x_{0}.$

Let $E=\{(x,t)\in\mathbb{R}^{2}:t\sin x-x^{2}\neq 0\}$. Let $x$ have maximal domain $(\alpha,\beta)$.

Let $\psi(x,t)=t\cos x+\frac{x^{3}}{3}$. For $t\in(\alpha,\beta)$, we have

 $\frac{d}{dt}(\psi(x(t),t))=\psi_{t}(x(t),t)+\psi_{x}(x(t),t)x^{\prime}(t)=\cos(x(t))+\left(-t\sin(x(t))+(x(t))^{2}\right)x^{\prime}(t)=0.$

Therefore, for $t\in(\alpha,\beta)$,

 $\int_{0}^{t}\frac{d}{ds}(\psi(x(s),s))ds=0,$

and so

 $\psi(x(t),t)-\psi(x_{0},0)=0,$

i.e.,

 $t\cos(x(t))+\frac{(x(t))^{3}}{3}-\frac{x_{0}^{3}}{3}=0.$ (2)

The implicit function theorem [16, p. 36, Theorem 3.2.1]: If $W\subseteq\mathbb{R}^{2}$ is open, $F:W\to\mathbb{R}$ has a continuous gradient, $(p_{0},q_{0})\in W$, $F(p_{0},q_{0})=0$, and $\frac{\partial F}{\partial q}(p_{0},q_{0})\neq 0$, then there exists an open interval $I_{0}$ with $p_{0}\in I_{0}$ such that there is one and only one $h:I_{0}\to\mathbb{R}$ whose derivative is continuous and that satisfies $h(p_{0})=q_{0}$ and $F(p,h(p))=0$ for all $p\in I_{0}$. Similar to why there is a maximal interval of existence for an initial value problem with a locally unique solution (cf. [26, p. 51, Theorem 2.13]), there is an interval $I=(A,B)$, $p_{0}\in I$, such that there is one and only one $h:I\to\mathbb{R}$ satisfying $h(p_{0})=q_{0}$ and $F(p,h(p))=0$ for all $p\in I$, and for any interval $I^{\prime}$ that strictly contains $I$ there is no $h:I^{\prime}\to\mathbb{R}$ satisfying $h(p_{0})=q_{0}$ and $F(p,h(p))=0$ for all $p\in I^{\prime}$. Moreover, like Theorem 2, if $A\neq-\infty$ then $\frac{\partial F}{\partial q}(p,h(p))\to 0$ as $p\to A$, and if $B\neq+\infty$ then $\frac{\partial F}{\partial q}(p,h(p))\to 0$ as $p\to B$.
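For a concrete instance of this endpoint behavior (an illustration of my own, not from the text), take $F(p,q)=p^{2}+q^{2}-1$ with $(p_{0},q_{0})=(0,1)$: the maximal interval is $(-1,1)$, the implicit function is $h(p)=\sqrt{1-p^{2}}$, and $\frac{\partial F}{\partial q}(p,h(p))=2h(p)\to 0$ at both endpoints. A quick check in Python:

```python
import math

F = lambda p, q: p * p + q * q - 1     # F(p, q)
Fq = lambda p, q: 2 * q                # dF/dq
h = lambda p: math.sqrt(1 - p * p)     # implicit function, maximal on (-1, 1)

# F(p, h(p)) = 0 across the interval, and dF/dq -> 0 at the endpoints.
for i in range(-99, 100):
    p = i / 100
    assert abs(F(p, h(p))) < 1e-12

print(Fq(0.99999, h(0.99999)))   # already close to 0
```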

Let’s apply this to our initial value problem. Define $F:\mathbb{R}^{2}\to\mathbb{R}$ by $F(p,q)=p\cos(q)+\frac{q^{3}}{3}-\frac{x_{0}^{3}}{3}$. Let $(A,B)$, $A<0<B$, be the maximal interval such that there exists $h:(A,B)\to\mathbb{R}$ satisfying $h(0)=x_{0}$ and $F(p,h(p))=0$ for all $p\in(A,B)$. For $(p,q)\in\mathbb{R}^{2}$, $\frac{\partial F}{\partial q}(p,q)=-p\sin q+q^{2}$. Thus, $F(p,q)=0$ and $\frac{\partial F}{\partial q}(p,q)=0$ together mean that

 $p\cos q+\frac{q^{3}}{3}-\frac{x_{0}^{3}}{3}=0\qquad\textrm{and}\qquad-p\sin q+% q^{2}=0.$ (3)

It follows that $A$ is the greatest negative $p$ for which there is a $q$ so that $p$ and $q$ satisfy (3), and $B$ is the least positive $p$ for which there is a $q$ so that $p$ and $q$ satisfy (3).

But $x(t)$ satisfies (2) for each $t\in(\alpha,\beta)$, and $(A,B)$ is the maximal interval for which there is some $h:(A,B)\to\mathbb{R}$ satisfying $h(0)=x_{0}$ and (2) for each $t\in(A,B)$. Hence $\alpha\geq A$ and $\beta\leq B$. If $\alpha>A$ then either $\lim_{t\to\alpha}|x(t)|=\infty$ or $(x(t),t)$ has a limit point on $\partial E=\{(x,t)\in\mathbb{R}^{2}:t\sin x-x^{2}=0\}$; the first of these contradicts

 $\lim_{t\to\alpha}|x(t)|=\lim_{t\to\alpha}|h(t)|=|h(\alpha)|,$

and the second of these contradicts the fact that $A$ is the greatest negative $p$ for which (3) has a solution. Thus $\alpha=A$, and likewise $\beta=B$.

Therefore, $x$ is the unique $h:(A,B)\to\mathbb{R}$ that satisfies (2), where $A$ is the greatest negative $p$ for which there is a $q$ so that $p$ and $q$ satisfy (3), and $B$ is the least positive $p$ for which there is a $q$ so that $p$ and $q$ satisfy (3).
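Even without a closed form for $x$, the relation (2) can be tested numerically: along a numerical solution of the initial value problem, $\psi(x(t),t)=t\cos x(t)+\frac{(x(t))^{3}}{3}$ should stay constant. A minimal sketch in Python (fixed-step RK4; the choice $x_{0}=1$ and the time interval $[0,0.2]$, on which $t\sin x-x^{2}$ stays away from $0$, are mine):

```python
import math

def rhs(x, t):
    # right-hand side cos(x) / (t sin x - x^2)
    return math.cos(x) / (t * math.sin(x) - x ** 2)

def psi(x, t):
    return t * math.cos(x) + x ** 3 / 3

# RK4 from (x0, t0) = (1, 0); psi(x(t), t) should equal psi(1, 0) throughout.
x, t, h = 1.0, 0.0, 1e-4
c0 = psi(x, t)
drift = 0.0
while t < 0.2:
    k1 = rhs(x, t)
    k2 = rhs(x + h * k1 / 2, t + h / 2)
    k3 = rhs(x + h * k2 / 2, t + h / 2)
    k4 = rhs(x + h * k3, t + h)
    x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
    drift = max(drift, abs(psi(x, t) - c0))
print(drift)   # conserved up to integrator error
```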

Asking for the domain of the solution of an ODE is like asking for the domain of an implicit function. Some historical references I have collected about implicit functions are Ulisse Dini’s Lezioni di analisi infinitesimale, vol. 1, pp. 197–241; page 155 of Leibniz’s Mathematische Schriften, ed. Gerhardt, vol. I; and page 241 of Euler’s Institutiones Calculi Differentialis. In a note I have written down Dini 1877–1888, p. 7, but not what the reference is; I suppose it’s his Fondamenti per la teorica delle funzioni di variabili reali. Of course, we can also call the inversion of a power series with nonzero constant coefficient a case of the implicit function theorem, and certainly Euler and Lagrange could do that; I’m not certain about Newton. Three additional references I have written down about the implicit function theorem are [19], [24], and the Russian [5].

## 5 Autonomous ODE

If $f$ is positive and does not depend on $t$, then we have the following result. Let $x_{0}\in\mathbb{R}$ and let $f:\mathbb{R}\to\mathbb{R}$ be positive and continuous. Then the right endpoint $\beta$ of the maximal interval of existence $(\alpha,\beta)$ for the initial value problem

 $x^{\prime}=f(x),\qquad x(0)=x_{0}$

is $\beta=T$, where

 $T=\int_{x_{0}}^{\infty}\frac{du}{f(u)}.$

Let’s check this. Since $f$ is positive, any solution $x$ is strictly increasing, so on its maximal interval it has an inverse function $t(x)$ with $t^{\prime}(x)=\frac{1}{f(x)}$, and hence $t(x)=\int_{x_{0}}^{x}\frac{du}{f(u)}$ for $x\geq x_{0}$. As $x\to\infty$, $t(x)\to T$, so $x(t)\to\infty$ as $t\to T$, and by Theorem 2 the solution cannot be extended past $T$; thus $\beta=T$. Likewise, $\alpha=-\int_{-\infty}^{x_{0}}\frac{du}{f(u)}$, which may be $-\infty$.

For example, for $x_{0}>0$, $p>1$ and $f(x)=x^{p}$, we have

 $T=\frac{1}{(p-1)x_{0}^{p-1}}.$
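The formula for $T$ is easy to check by quadrature (a check of my own; the substitution $v=1/u$, which turns $\int_{x_{0}}^{\infty}u^{-p}\,du$ into $\int_{0}^{1/x_{0}}v^{p-2}\,dv$, and the trapezoid rule are choices of mine):

```python
# Trapezoid-rule check of T = integral_{x0}^infty du/u^p via v = 1/u.
def T_numeric(x0, p, n=100_000):
    a = 1 / x0
    total = 0.0
    for i in range(n):
        v0, v1 = a * i / n, a * (i + 1) / n
        total += (v0 ** (p - 2) + v1 ** (p - 2)) * (v1 - v0) / 2
    return total

x0, p = 2.0, 3.0
T_exact = 1 / ((p - 1) * x0 ** (p - 1))   # closed form above
print(T_numeric(x0, p), T_exact)
```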

## References

• [1] T. Archibald (2003) Differential equations: a historical overview to circa 1900. In A History of Analysis, H. N. Jahnke (Ed.), History of Mathematics, Vol. 24, pp. 325–353.
• [2] V. I. Arnold (1973) Ordinary differential equations. MIT Press, Cambridge, MA. Translated from the Russian by Richard A. Silverman.
• [3] N. Bourbaki (2004) Functions of a real variable: elementary theory. Elements of Mathematics, Vol. IV, Springer. Translated from the French by Philip Spain.
• [4] E. A. Coddington and N. Levinson (1955) Theory of ordinary differential equations. International Series in Pure and Applied Mathematics, McGraw-Hill, New York.
• [5] A. V. Dorofeeva (1989) The implicit function theorem and its connection with the theory of extremal problems. Istor. Metodol. Estestv. Nauk (36), pp. 34–44.
• [6] A. R. Forsyth (1900) Theory of differential equations. Vol. II, Cambridge University Press.
• [7] D. S. Graça, N. Zhong, and J. Buescu (2009) Computability, noncomputability and undecidability of maximal intervals of IVPs. Trans. Amer. Math. Soc. 361 (6), pp. 2913–2927.
• [8] L. M. Graves (1946) The theory of functions of real variables. McGraw-Hill, New York.
• [9] J. K. Hale (1969) Ordinary differential equations. John Wiley & Sons.
• [10] P. Hartman (1964) Ordinary differential equations. John Wiley & Sons.
• [11] M. W. Hirsch, S. Smale, and R. L. Devaney (2013) Differential equations, dynamical systems, and an introduction to chaos. Third edition, Academic Press.
• [12] J. H. Hubbard and B. E. Lundell (2011) A first look at differential algebra. Amer. Math. Monthly 118 (3), pp. 245–261.
• [13] W. Hurewicz (1958) Lectures on ordinary differential equations. MIT Press, Cambridge, MA.
• [14] E. L. Ince (1956) Ordinary differential equations. Dover, New York.
• [15] I. Kleiner (1989) Evolution of the function concept: a brief survey. College Math. J. 20 (4), pp. 282–300.
• [16] S. G. Krantz and H. R. Parks (2002) The implicit function theorem: history, theory, and applications. Birkhäuser.
• [17] S. Lefschetz (1962) Differential equations: geometric theory. Second edition, John Wiley & Sons.
• [18] J. Lützen (1990) Joseph Liouville 1809–1882: master of pure and applied mathematics. Studies in the History of Mathematics and Physical Sciences, Vol. 15, Springer.
• [19] J. H. Manheim (1964) The genesis of point set topology. Pergamon Press, Oxford.
• [20] J. Mawhin (1988) Problème de Cauchy pour les équations différentielles et théories de l’intégration: influences mutuelles. Cahiers du séminaire d’histoire des mathématiques 9, pp. 231–246.
• [21] P. Painlevé (1899–1916) Gewöhnliche Differentialgleichungen; Existenz der Lösungen. In Enzyklopädie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen, Band II, 1. Teil, 1. Hälfte, H. Burkhardt, W. Wirtinger, and R. Fricke (Eds.), pp. 189–229.
• [22] R. Remmert (1998) Classical topics in complex function theory. Graduate Texts in Mathematics, Vol. 172, Springer. Translated from the German by Leslie Kay.
• [23] C. E. Roberts Jr. (1976) Why teach existence and uniqueness theorems in the first course in ordinary differential equations? Internat. J. Math. Ed. Sci. Tech. 7 (1), pp. 41–44.
• [24] D. Rüthing (1984) Some definitions of the concept of function from Joh. Bernoulli to N. Bourbaki. Math. Intelligencer 6 (4), pp. 72–77.
• [25] I. Stewart (1995) Concepts of modern mathematics. Dover, New York.
• [26] G. Teschl (2012) Ordinary differential equations and dynamical systems. Graduate Studies in Mathematics, Vol. 140, American Mathematical Society, Providence, RI.
• [27] D. Tournés (2012) Diagrams in the theory of differential equations (eighteenth to nineteenth centuries). Synthese 186 (1), pp. 257–288.
• [28] A. P. Youschkevitch (1976/77) The concept of function up to the middle of the 19th century. Arch. History Exact Sci. 16 (1), pp. 37–85.
• [29] A. P. Youschkevitch (1981) Sur les origines de la “méthode de Cauchy-Lipschitz” dans la théorie des équations différentielles ordinaires. Rev. Histoire Sci. Appl. 34 (3–4), pp. 209–215.