About Performance Modeling

I’d like to dispel the myth that performance modeling is too complex to be useful and show how modeling performance can save you a lot of time and money. Even though there are as many kinds of models as there are PhD candidates to dream them up, here we’ll just look at two wildly useful types of models: Capacity and Simulation.

Capacity Models

Capacity models are just regular old capacity planning with a bit of a twist.  Instead of “find a utilization and scale it,” there is now more work to do.  In a capacity model you are redirecting the flow of work, adding new work, adjusting the mix of current transactions, or incorporating new hardware. There are more things to count and account for.

To do a capacity model you have to understand what you’ve got, and then figure out how to adjust the numbers to compensate for the changes you are modeling. It requires performance monitoring, capacity planning skills, simple math and an eye for detail.
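The arithmetic behind a capacity model is simple enough to sketch in a few lines. Here is a minimal example in Python; the transaction names, CPU costs, and arrival rates are all hypothetical, and a real model would account for memory, disk, and network the same way:

```python
# Minimal capacity-model sketch (all numbers hypothetical).
# Each transaction type has a measured CPU cost (seconds of CPU per
# transaction) and a peak arrival rate (transactions per second).
current_mix = {
    "search":   {"cpu_sec": 0.020, "rate": 300},
    "checkout": {"cpu_sec": 0.120, "rate": 40},
}

def cpu_utilization(mix, num_cores):
    """Utilization = total CPU demand / total CPU capacity."""
    demand = sum(t["cpu_sec"] * t["rate"] for t in mix.values())
    return demand / num_cores

# What we have today on a 16-core box:
print(f"today: {cpu_utilization(current_mix, 16):.0%}")

# Model the change: checkout traffic doubles and a new "wishlist"
# transaction arrives at 50/sec.
projected_mix = {
    "search":   {"cpu_sec": 0.020, "rate": 300},
    "checkout": {"cpu_sec": 0.120, "rate": 80},
    "wishlist": {"cpu_sec": 0.030, "rate": 50},
}
# A result over 100% means this box can't handle the projected peak.
print(f"projected: {cpu_utilization(projected_mix, 16):.0%}")
```

That is the whole trick: count the work you have, adjust the counts for the changes you are modeling, and compare the result to the capacity you own.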

Capacity models can do a lot, but they can’t predict the response time under load.  If you need to know that, then you need a simulation model.

Simulation Models

Simulation models are a funny combination of an accounting program, a random number generator, and a time machine.   They simulate work arriving randomly at a pace you select and then simulate the flow of work through a simulated computing world by accounting for costs and delays at every step. They can run faster, or slower, than real time. They can skip ahead through time when they’ve got nothing to simulate at the moment. They give you throughput, utilization, and response time information for any computing world you can dream up. The only problem is that they sound scary.
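To show there is nothing scary under the hood, here is a toy discrete-event simulation in Python: one server, randomly arriving work, and a clock that jumps straight from one arrival to the next rather than ticking through idle time. The arrival rates and service time are made-up numbers, and a real tool would model many devices and transaction types:

```python
import random

def simulate(arrival_rate, service_time, duration, seed=1):
    """Single-server queue: random (Poisson) arrivals, fixed service time.
    The clock jumps directly to the next arrival -- no idle ticking."""
    rng = random.Random(seed)
    clock = 0.0
    server_free_at = 0.0   # when the server finishes its current job
    busy = 0.0             # total time the server spends working
    responses = []
    while clock < duration:
        clock += rng.expovariate(arrival_rate)   # skip ahead to next arrival
        start = max(clock, server_free_at)       # wait if the server is busy
        server_free_at = start + service_time
        busy += service_time
        responses.append(server_free_at - clock) # wait + service = response
    return {
        "throughput": len(responses) / clock,
        "utilization": busy / clock,
        "avg_response": sum(responses) / len(responses),
    }

light = simulate(arrival_rate=5, service_time=0.05, duration=1000)
heavy = simulate(arrival_rate=18, service_time=0.05, duration=1000)
print(light)
print(heavy)  # same work per job, but queueing inflates response time
```

Run it and you get exactly what the capacity model couldn’t give you: response time under load. The heavily loaded run does the same work per job, yet its average response time balloons because jobs spend most of their time waiting in line.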

I used to believe that simulation modeling could only be done by super-smart NASA engineers and was only reasonable to do in situations where things had to work the first time or people would be killed and millions of dollars worth of hardware would be destroyed. I used to believe that simulation modeling was incredibly expensive, hard, and time consuming. I was wrong.

I’ve found simulation modeling to be a useful and important tool. In a previous job I taught modeling concepts and a PC-based simulation modeling tool to rooms full of regular people who worked for regular companies, doing the normal work of maintaining and improving commercial systems. From that I learned modeling is doable.  The stories I heard from my students about the models their companies relied on proved to me that modeling is useful.

Some Modeling Truths

Before I get into model building, here are some surprising truths about modeling…

Modeling Is Necessary

There are two kinds of important performance problems you can’t solve without modeling. You can’t do performance monitoring, capacity planning, or load testing on unbuilt systems, as there is nothing to test. You can’t use simple capacity planning or load testing to predict future performance on systems that are about to undergo radical transformations.

In both cases there is a bit of a chicken-or-egg problem as the company wants to know the cost of the hardware for the unbuilt or radically transformed computing world before it is built, but until you build/transform it, you don’t have all the data you need to make those projections. This is solvable.

All Models Are Wrong…


George Box once artfully said: “All models are wrong, some models are useful.”  So, please take a moment and get over the fact that your model won’t generate a perfect result.

Nobody models to a high degree of accuracy because to get that you have to build wildly complex models that model every little thing. You have to put so much time into the model that the business is out of business before the model sees its first run. The 80:20 rule (see Pareto Principle) applies here.  A simple model can give you a ballpark answer. That is often more than good enough to green light a project or size a hardware request.

…Some Models Are Useful

Imagine an inaccurate model where you are guessing at many of the input parameters and unsure about the transaction mix or peak demand. You run the model and, even with the most optimistic assumptions, it forecasts that you’ll have to buy somewhere between two and five new servers. If the budget is closed for the rest of the year, then this useful model just saved you a lot of time on that sure-to-fail idea. The thing that makes a model useful is your confidence that it is accurate enough to answer your question.

If you have to model, build the least accurate model that will do the job.
Bob’s Ninth Rule of Performance Work


Modeling Can Be Done At Any Time

Models can be built at any stage of a project’s lifecycle: design, testing, and production.  At each stage there is data available to build a model.

In the design stage, models are built on educated guesses, the results of quick tests, business plans, and other less than concrete data. They answer big scale questions like “How many servers will we need?” and are not all that precise.

In the testing stage, models can be built with better data as the design is fairly fixed and there is some running software to meter. Here you can ask small scale questions such as “Can both these processes run on the same server?” as well as big scale questions like “Will this configuration handle the peak?”

In the production stage the entire application and computing world can be metered and tested against. Here, with enough work, you can build a model to answer almost any question.

Models Can Be Built To Different “Resolutions”

At some point in every model you treat some part of your computing world like a black box. Data goes in, data comes out, and you don’t care about the exact inner workings.  Depending on the question you want answered, the model can treat any part of your transaction path as that mysterious black box. It could be as small as a single communications link or as large as the entire datacenter.

The higher the resolution of the model, the more costly and time consuming it is to build. Do not confuse high resolution with high accuracy, as they are not the same.  A low-resolution model can give you a spot-on accurate answer.  For example, if you are modeling your datacenter’s Internet connection, you don’t care what happens inside the datacenter; you care about how the bandwidth requirements will change as the transaction mix changes.
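That kind of low-resolution model can be just a few lines of arithmetic. In this sketch the whole datacenter is the black box; the transaction names, byte counts, and rates are all hypothetical, and only the traffic crossing the link is counted:

```python
# Black-box link model (all numbers hypothetical): per transaction we
# only track the bytes that cross the Internet link, not what happens
# inside the datacenter.
bytes_per_txn = {"browse": 60_000, "api": 4_000, "video": 900_000}

def link_mbps(rates_per_sec):
    """Peak link bandwidth in megabits/sec for a given transaction mix."""
    total_bytes = sum(bytes_per_txn[t] * r for t, r in rates_per_sec.items())
    return total_bytes * 8 / 1_000_000

today   = link_mbps({"browse": 200, "api": 500, "video": 2})
shifted = link_mbps({"browse": 150, "api": 500, "video": 20})  # video grows
print(today, shifted)
```

Nothing inside the datacenter is modeled at all, yet the answer to the bandwidth question can still be accurate, because everything that matters to that question crosses the link.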

More to come…

There is much more to say about models, but that will have to wait for a future post.

For more info on performance modeling see: The Every Computer Performance Book  on Amazon and iTunes.

