One of the hardest things to get used to is that, in spite of the close kinship between math and computing, differences emerge as soon as you start computing with real hardware, and to get the most out of your computational tools you need to be good at both math and computing. "Stiffness" may be the best illustration. Lots of systems involve behavior on multiple timescales, and even if the fast-moving behavior is small, it can wreak havoc with your computational tools. This is because you usually don't try to solve the problems analytically. Instead you solve them "numerically", meaning that you let the computer make a really good approximation of the solution, relying on the computer's repeatability and its ability to do fast arithmetic. But in stiff systems, even small errors in the approximation can cause things to go haywire. Here's an example using a simple, classically stiff ordinary differential equation:
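What follows is a minimal sketch in Python, using a standard textbook stiff test problem as a stand-in (chosen here purely for illustration; any equation with the same character would do): y'(t) = -1000 (y(t) - cos t) - sin t, with y(0) = 1. Its exact solution is simply y(t) = cos t, so the behavior we actually care about is slow and smooth; the factor of -1000 is what makes the problem stiff. The sketch solves it two ways with the same comfortable step size: forward (explicit) Euler, which ignores stiffness, and backward (implicit) Euler, which is built for it.

```python
import numpy as np

# Stand-in stiff test problem (an illustrative assumption):
#   y'(t) = -1000*(y - cos(t)) - sin(t),   y(0) = 1
# Exact solution: y(t) = cos(t). The -1000 term is what makes it stiff.
def f(t, y):
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

h, steps = 0.01, 100      # a step size that resolves cos(t) easily
t = 0.0
y_explicit = 1.0          # forward Euler: takes no account of stiffness
y_implicit = 1.0          # backward Euler: implicit, stiffness-friendly

for _ in range(steps):
    # Forward Euler: y_{n+1} = y_n + h*f(t_n, y_n)
    y_explicit = y_explicit + h * f(t, y_explicit)
    t += h
    # Backward Euler: y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1}); since f is
    # linear in y, we can solve for y_{n+1} in closed form.
    y_implicit = (y_implicit + h * (1000.0 * np.cos(t) - np.sin(t))) / (1.0 + 1000.0 * h)

print("exact y(1)     =", np.cos(t))
print("forward Euler  =", y_explicit)   # astronomically wrong
print("backward Euler =", y_implicit)   # agrees with cos(1) to about 4 decimals
```

Run it and the forward-Euler value is off by dozens of orders of magnitude, while the backward-Euler value tracks cos(1) closely. Production codes use the same idea in more sophisticated form: implicit Runge-Kutta and BDF methods, the kind of machinery behind MATLAB's ode15s or SciPy's solve_ivp with method="Radau" or "BDF".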
It's hard to get any simpler. But when we try to solve the equation on a computer using a numerical method that doesn't account for stiffness, what we get makes almost no sense at all. To get this right, we need enough math to recognize that the problem is stiff, and enough computation to know how to solve a stiff problem on a computer.

If stiffness never came up in real problems, there would be no worries, right? But stiffness is real. Wherever you have behaviors working on different timescales, stiffness arises. Traffic analysis is a good example: if you are trying to model complex traffic flow in busy cities that experience rush hours, or in telecommunications networks that have busy hours, you have to cope with stiffness.

Stiffness is a vivid example, but far from the only place where the differences between math and computing are evident. We briefly mentioned roundoff. Computers are really bad at things like adding small numbers to big ones; the first sketch below makes the point in a couple of lines. The same kind of trouble comes up often in what are called "ill-conditioned systems", like when you are trying to watch a target but all of your sensors see it from nearly the same angle (sometimes called "needle space"). In the real world we frequently have little choice about where we can observe a behavior: in the figure below, two fixed sensors observe a vehicle, but they are looking in almost the same direction. That invariably leads to ill-conditioning, as the second sketch below illustrates. The same problem can come up in social network analytics, for example when you try to characterize someone by their comments but only have comments on a narrow range of subjects, like a person you know only from their Netflix reviews, and they only review films they like.

There are ways to adjust to roundoff problems, just as there are ways to adjust to stiffness. But once again, it takes enough math to see the problem coming, and enough computation to know how to fix it.
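The roundoff claim is easy to verify for yourself. Here is the first sketch, in Python double precision; the 10^16 and the count of one thousand are arbitrary illustrative choices:

```python
big, small, n = 1.0e16, 1.0, 1000

# Near 1e16 the spacing between representable doubles is 2.0, so adding a
# single 1.0 rounds straight back to the big number every time.
total_naive = big
for _ in range(n):
    total_naive += small

# Group the small values first and they survive the addition.
total_grouped = big + small * n

print(total_naive - big)     # 0.0  (a thousand 1.0's simply vanished)
print(total_grouped - big)   # 1000.0
```

The fix here is nothing deeper than choosing a sensible order of operations (or a compensated summation scheme), but you have to know to look for the problem.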
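And here is the second sketch, the "needle space" geometry: two sensors triangulating a vehicle by intersecting their lines of bearing. The sensor positions, the range, and the 0.1-degree measurement error are all made-up numbers for illustration. The point is that when the two lines are nearly parallel, the little 2-by-2 system you solve for the intersection has a large condition number, and a tiny bearing error moves the computed position a long way.

```python
import numpy as np

def triangulate(s1, s2, b1, b2):
    """Intersect two bearing lines: sensor at s_i looks along angle b_i.

    A point x on line i satisfies n_i . x = n_i . s_i, where n_i is the
    unit normal to the bearing direction (cos b_i, sin b_i).
    """
    n1 = np.array([-np.sin(b1), np.cos(b1)])
    n2 = np.array([-np.sin(b2), np.cos(b2)])
    A = np.vstack([n1, n2])
    rhs = np.array([n1 @ s1, n2 @ s2])
    return np.linalg.solve(A, rhs), np.linalg.cond(A)

# Made-up geometry: sensors close together, vehicle far away, so both
# sensors see it from almost the same direction ("needle space").
vehicle = np.array([0.0, 100.0])
s1 = np.array([-1.0, 0.0])
s2 = np.array([ 1.0, 0.0])

d1, d2 = vehicle - s1, vehicle - s2
b1 = np.arctan2(d1[1], d1[0])    # true bearing from sensor 1
b2 = np.arctan2(d2[1], d2[0])    # true bearing from sensor 2

# Perturb one bearing by a tenth of a degree and see what happens.
err = np.deg2rad(0.1)
estimate, condition = triangulate(s1, s2, b1 + err, b2)

print("condition number of the geometry:", condition)
print("position error from a 0.1 deg bearing error:",
      np.linalg.norm(estimate - vehicle))
```

With this geometry the condition number comes out around a hundred, and the tenth-of-a-degree error shifts the computed position by several units at a range of one hundred, nearly a tenth of the range. Spread the sensors far apart (say to x = -50 and x = +50) and the same bearing error produces a position error well under a unit: it is the geometry, flagged by the condition number, that makes the problem fragile.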