
Can you ever truly know how a bullet is going to work? Probably not, but if you’re very careful you can get close.

[Image: sub-microsecond photograph of a Federal Power-Shok 100gr .243, via Wikimedia Commons]

Someone sent me an article on a gelatin test of some ammunition; I replied that I don’t put much stock in such things when done by amateurs. Why might that be?
——
It happened again the other day: an online argument triggered by a gelatin test of a particular ammunition, in which someone predicted how that ammunition would perform based on his own gelatin testing.

I don’t really pay much attention to these kinds of “tests”, because they’re usually done without proper controls and — worse — without an understanding of why ballistic gelatin tests are done and how they should be interpreted.

A gelatin test is, in scientific terms, a model. Models can be physical (like our gel blocks), mathematical, or even philosophical. The reason we make models is to enable us to do two things: first, to explain why something happened the way it did, and second, to predict what will happen under similar circumstances in the future.

In order to do this we make informed conjectures about the variables which we believe influence the outcomes we’ve observed, and then test those conjectures by building a model which includes only those variables. Here’s where it gets both interesting and complicated: we can’t include all of the variables in any model, because there are often many more than we can account for, or they’re interrelated with other variables, or there are variables which we don’t yet know about, or because including all of them would be prohibitive in some significant way.

As a result we build our models to account for only those variables which we think are the most important, those that we believe to have the most impact on the outcome. In essence, we make simplifications which strip away inessential elements and leave those things that are essential to our understanding. Then, we feed that model data and test for its explanatory and/or predictive qualities.

Because we’re looking for understanding, not necessarily simulation, the result is that we sometimes end up with a model which bears no resemblance to the thing which we are attempting to understand. Stripping a problem to only the most essential variables often means that it doesn’t look familiar, because those things which make it look the way we expect it to look prove to be inessential to the model’s value. Combining variables adds another layer of abstraction.

This is, for instance, why we can test certain pharmaceuticals on animals which bear no resemblance to human beings; it’s the outcome, not how much they look like us, in which we’re interested. As someone once said, “the map is not the territory.”

This is also why ballistic gelatin doesn’t really look or act like human tissue; it’s not supposed to. It’s not a tissue simulant or replacement, no matter how many times you’ve heard it referred to as such. It’s a test medium which allows us to explain how and why projectiles do what they do, and which serves as a way to compare future versions with what we know already works. Ballistic gelatin provides a model framework which, when penetration, expansion, and their effects on the test medium are measured, allows us to make educated guesses as to what works.

Remember that our ballistic model was built upon an existing knowledge of what has worked in the past; if you test something which is known to work and come to the conclusion that it won’t, it means that either a) the variables tested by the model aren’t the correct ones; b) the test conditions weren’t as specified by the model; or c) the analysis of the model’s data is fundamentally flawed in some way. With amateurs, it’s usually b) and/or c).

Again: the variables included in the model are those found to be sufficient to explain and/or predict results. If properly chosen, they don’t rely on the presence of other variables to give usable results, and in fact introducing variables that aren’t part of the model doesn’t do any good. In the case of ballistic gelatin, doing things like adding bones to the matrix shows a phenomenal ignorance of what a model is and what it’s trying to do; the gelatin, remember, isn’t a simulant! It’s a test medium whose composition is chosen because it embodies the collective variables found in living tissue.

In other words, the model (the gelatin and performance measurements) has already taken variables like skin and bones and muscle into account. Because of this, it can explain or predict even though it doesn’t look, feel, or act like any of those things. The trouble is that it takes an understanding of the model to interpret the results, because the model (and the results) are an abstraction: a carefully controlled abstraction which proves its value by the results it gives.

It’s complicated, I’ll admit. Models often are, which is why experimental design is so critical to understanding any phenomenon. It’s also why I pay little attention to amateur gel tests or to tests using other materials which supposedly better “simulate” flesh. The closer you try to get to a simulant, the more variables you have to include; what’s more, unless the target itself is completely homogeneous you can never really simulate it, because only the target itself has all the variables needed to make valid observations.

This is why we have models in the first place, so that we can explain and predict without accounting for every unaccountable variable.

-=[ Grant ]=-

Posted by Grant Cunningham on October 6, 2014
