A recent diary by teacherken raised some interesting points, although some of the commentary disappointed me: several commenters implied that the study of calculus is not justified because the discipline is so rarely applied outside academia.
Without calling out any individual, because the comment below is merely representative of a recurring theme rather than uniquely offensive or even extraordinary:
I am an engineer with a Ph.D. in Electrical Engineering and I have used calculus in a non-coursework-related setting at most five times in the 12 years since my Bachelor's degree. (Non-coursework-related includes all the research I did for my advanced degrees as well as all the time spent working since, but does not include calculus done to generate solution sets for homework problems in classes I was TA'ing.) In only two cases did I actually use the solution derived from calculus to solve my problem.
I respond below.
First of all, I need to comment on this axiom, provided anecdotally via Car Talk and diaried by teacherken:
Calculus is the collection of techniques that allow us to determine the slope of any curve and the area under that curve.
This is true as far as it goes. In fact, when I was about 14 years old, I found a small hardback calculus text on the family bookshelf with my mother's name written inside the front cover, so I asked the easy question, "Hey, Mom, what's calculus?" This is almost verbatim what she told me: it's how we calculate the area under a curve.
With all due respect to Mom, while this is a fine answer when a succinct answer is required, it is also a very simplistic answer to the question. It is, more to the point, a succinct answer to the narrower questions: "What is an integral? What is a derivative?" Integrals and derivatives are just basic tools. We are not, at this level, teaching cabinetry. We are teaching how to swing a hammer and how to turn a screwdriver. The calculus is the cabinetry.
I have a mere B.S. in Electrical Engineering, but 18 years of work in the industry, most of which is more Computer Science than Electrical Engineering. My employer has purchased for me a license for Mathematica, a software package that I desperately craved as an undergraduate, to eliminate the drudgery of algebra and the basic mechanics of calculus. While this software package costs twice as much as the PC upon which it runs, I justify its cost to my employer because I use the principles of calculus all the time, and Mathematica is to calculus as the handheld calculator is to basic arithmetic.
As an example, I have written firmware for a device with an LED status light. The status light indicates when the device is active (on vs. off) but its brightness is variable, depending on the brightness of the environment. It is brighter in a brightly lit room than it is in a dark room at night. Obviously, I have a means of determining the brightness of the room lighting, but how do I do that? It turns out that I can buy a sensor that measures the local illumination, but to translate the sensor reading to a meaningful value such as I might measure with a photographic light meter, I need to be able to calculate the following:
x^1.4 = e^(1.4 ln x)
While a PowerPC or Core 2 processor makes quick work of expressions like this, we often perform such calculations on (cheap) old microprocessors dating to the 1970s, where we have no floating-point capability. How do we evaluate an expression like this when all we have are integers between 0 and 65535? Calculus. We use fixed-point arithmetic and power series, and the latter can only be understood in terms of calculus.
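To make that concrete, here is a minimal sketch in C of how such a device might do it, assuming Q16.16 fixed-point arithmetic and truncated power series for ln and exp. The function names, term counts, and range reductions are illustrative choices of mine, not the actual firmware:

```c
#include <stdint.h>
#include <stdio.h>

/* Q16.16 fixed point: the integer v represents the real number v / 65536. */
typedef int32_t fix16;

#define FIX_ONE  ((fix16)0x00010000)   /* 1.0 */
#define FIX_LN2  ((fix16)45426)        /* ln 2 ~= 0.693147 in Q16.16 */

static fix16 fix_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * b) >> 16);
}

/* ln(x) for x > 0: range-reduce to [1, 2), then use the series
 * ln(1 + z) = z - z^2/2 + z^3/3 - ...  (truncated; enough for illustration) */
static fix16 fix_ln(fix16 x)
{
    int k = 0;
    while (x >= 2 * FIX_ONE) { x >>= 1; k++; }
    while (x <  FIX_ONE)     { x <<= 1; k--; }

    fix16 z = x - FIX_ONE;              /* z in [0, 1) */
    fix16 term = z, sum = 0;
    for (int n = 1; n <= 12; n++) {
        fix16 t = term / n;
        sum += (n & 1) ? t : -t;        /* alternate signs */
        term = fix_mul(term, z);
    }
    return sum + k * FIX_LN2;
}

/* exp(t) for t >= 0: write t = n*ln2 + r, then use the series
 * exp(r) = 1 + r + r^2/2! + r^3/3! + ... and shift by n at the end. */
static fix16 fix_exp(fix16 t)
{
    int n = t / FIX_LN2;
    fix16 r = t - n * FIX_LN2;          /* r in [0, ln2) */

    fix16 term = FIX_ONE, sum = FIX_ONE;
    for (int i = 1; i <= 10; i++) {
        term = fix_mul(term, r) / i;
        sum += term;
    }
    return sum << n;                    /* multiply by 2^n */
}

/* x^1.4 = exp(1.4 * ln x), entirely in integer arithmetic. */
static fix16 fix_pow_1_4(fix16 x)
{
    const fix16 exponent = (fix16)(1.4 * 65536);   /* 1.4 in Q16.16 */
    return fix_exp(fix_mul(exponent, fix_ln(x)));
}

int main(void)
{
    fix16 y = fix_pow_1_4(5 * FIX_ONE);            /* 5^1.4 ~= 9.518 */
    printf("5^1.4 ~= %f\n", y / 65536.0);
    return 0;
}
```

On a real 1970s-era part you would tune the term counts and scaling to the precision and range you actually need, but the essential machinery, a power series with a known region of convergence, is pure calculus.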
Just in the last couple of days, I ran into a problem where my brightness function was not behaving properly. Obviously, as the room gets brighter and brighter, the LED should get brighter and brighter. We refer to this behavior as "monotonically increasing." The actual behavior was not monotonically increasing, which was obviously bad. So where did things go wrong? Where exactly did the brightness erroneously stop increasing and start decreasing? This is called a "local maximum" and it is easily determined through calculus. I identified the exact point of failure by calculating the derivative function and solving for zero. (Actually, I let Mathematica compute the derivative function and solve it for zero.) This is why we learn calculus. The underlying principles solve real-world problems.
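To show the shape of that debugging step, here is a small C sketch under assumed conditions. The cubic brightness() below is a stand-in I made up for a curve that rises, peaks, and then falls (my real firmware mapping is not reproduced here), and the scan simply looks for the point where the finite-difference derivative changes sign from positive to negative:

```c
#include <stdio.h>

/* Hypothetical stand-in for a buggy brightness mapping: it rises,
 * peaks at lux = 2, then falls, i.e. it is not monotonically increasing. */
static double brightness(double lux)
{
    return 3.0 * lux * lux - lux * lux * lux;
}

int main(void)
{
    /* Hunt for the local maximum: the point where the approximate
     * derivative d(brightness)/d(lux) stops being positive. */
    const double step = 1e-3;
    for (double lux = step; lux < 10.0; lux += step) {
        double d_before = brightness(lux) - brightness(lux - step);
        double d_after  = brightness(lux + step) - brightness(lux);
        if (d_before > 0.0 && d_after <= 0.0) {
            printf("brightness stops increasing near lux = %.3f\n", lux);
            break;
        }
    }
    return 0;
}
```

With the function in closed form you can skip the scan entirely: for this stand-in curve the derivative is 6x - 3x^2 = 3x(2 - x), which is zero at x = 2, and that "take the derivative and solve for zero" step is exactly what I let Mathematica automate.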
We need to learn principles well enough to understand how to apply them. If we "learn" principles but never learn how to apply them, then we have not really been educated. Our failure to apply what we have learned does not mean that what we "learned" was unimportant; it means that our understanding of it remains incomplete. We have not truly learned it at all.