Fun with LabRadar Data - Initial findings on the variability of as-shot BC

I’ve been playing with my LabRadar data similarly. Brian Litz impressed on me that long range accuracy is significantly influenced by BC variation, and I have a LabRadar that can measure it. So I spent a lot of time calculating BC from the series summary data and the five downrange velocities recorded per shot. The BC seemed to change a lot, much more than the vertical spread on target would imply, and after some deeper digging I think it’s a data issue much more than actual shot-to-shot BC variance.

The deeper dive revealed that LabRadar uses very lazy math to calculate the downrange velocities. Each shot is recorded as a series of distances at regular intervals of time. A downrange velocity is simply (d2-d1) / (t2-t1), where 1 and 2 are the two points collected nearest the chosen distance. The distance is the measured quantity, so error shows up as variation in the distance values. When two consecutive distances in the shot data are accurate, the downrange velocity is calculated accurately; as error develops in the distance values, it causes the velocity data to wander high and low. The velocity errors build rapidly because the time intervals are so tight. The LabRadar series summary shows you those downrange velocities without any indication of whether each one is a solid value or a snip of the noisy garbage. So if you pull up the data for an individual shot, you’ll see the more distant downrange velocities jumping up and down around the actual values.
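To put numbers on that amplification, here's a quick Python sketch. The sample interval, pellet speeds, and distance-noise level are all assumptions for illustration, not actual LabRadar specs: the point is just that dividing a small distance error by a tiny time step turns fractions of a foot into many feet per second.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.005                       # assumed sample interval, s
t = np.arange(0, 0.45, dt)       # sample times
v_true = 900 - 150 * t           # a plausible decelerating pellet, ft/s
d_true = 900 * t - 75 * t**2     # integrated distance, ft

# Add a small amount of range noise: even 0.05 ft of distance error...
d_noisy = d_true + rng.normal(0, 0.05, size=t.size)

# LabRadar-style finite difference over one sample interval
v_fd = np.diff(d_noisy) / dt

# ...becomes roughly 0.05*sqrt(2)/0.005 ~ 14 ft/s of velocity noise
print(np.std(v_fd - v_true[:-1]))
```

So a range error far too small to matter on its own is enough to make the raw downrange velocities bounce by double digits, which matches what the shot traces look like.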

We have two choices: drop the noisy data and work with shorter ranges, or calculate the velocities using a better data-fitting model. (The SNR data did not correlate crisply with my perception of where the velocities went unusable, so it isn't a reliable trimming criterion on its own.)

It didn’t take me long to move from the former to the latter. Using Excel, I calculate a fifth-order polynomial fit of the raw distance vs. time data, then obtain velocity vs. time by taking the derivative of the fitted distance function. One limitation is that I can process only one shot at a time this way; processing a series requires maybe 20 seconds of copy-pasting per shot.
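For anyone who'd rather script it than copy-paste in Excel, the same fit-then-differentiate step looks like this in Python. The track data here is synthetic (a stand-in for one shot's distance/time points, with an assumed noise level):

```python
import numpy as np

# Toy stand-in for one shot's track: time (s) and distance (ft).
# Real data would be pasted in from the shot's track file.
t = np.linspace(0.0, 0.40, 80)
d = 900 * t - 75 * t**2 + np.random.default_rng(1).normal(0, 0.05, t.size)

# 5th-order polynomial fit of distance vs. time, as in the Excel sheet
coef = np.polynomial.polynomial.polyfit(t, d, 5)

# Velocity is the analytic derivative of the fitted distance polynomial
vcoef = np.polynomial.polynomial.polyder(coef)
v = np.polynomial.polynomial.polyval(t, vcoef)

print(v[0])   # fitted velocity at the first sample, ft/s
```

Because the derivative is taken on the smooth fitted curve rather than on noisy point pairs, the velocity estimate in the interior of the data is far steadier than the raw finite differences.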

Data Processing Method.png
Explanation of the spreadsheet regions:
Magenta: raw shot data. I copy and paste these columns to process an individual shot.
Yellow: controls that set the bounds for the data pulled by the following functions.
Green: my distance vs. time and velocity vs. time functions. The green column is populated using the velocity vs. time function.
Blue: since I want to know velocity at specific distances, I calculate a 5th-order polynomial of the velocity vs. distance data. (It would have been very tedious to figure out how to traverse my velocity data and interpolate between the two adjacent points.) From the velocity vs. distance function I pull velocity at 0, 20, 40, 60, 80, and 100y.
Orange: note the LabRadar velocity going wonky after 0.4 seconds. This one appears to be from me shooting at a 100y target, and the LR somehow recorded data after the pellet passed through. In most cases, the velocity flip-flops from high to low and gets really noisy. The velocity plot shows quite clearly at what distance the LabRadar data loses validity, and each shot is different.
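The Blue step (velocity vs. distance polynomial, evaluated at round yardages) can be sketched in a few lines of Python. The velocity/distance points here are synthetic stand-ins for the fitted output of the previous step, and the exponential decay is just an assumed plausible pellet profile:

```python
import numpy as np

# Stand-ins for fitted velocity (ft/s) at fitted distance (yd) pairs
dist_yd = np.linspace(7, 42, 60)
vel_fps = 900 * np.exp(-dist_yd / 280)   # assumed plausible decay

# 5th-order poly of velocity vs. distance, evaluated at round yardages
c = np.polynomial.polynomial.polyfit(dist_yd, vel_fps, 5)
for yd in (20, 40):
    print(yd, np.polynomial.polynomial.polyval(yd, c))

# Outside Excel, the "tedious" point-to-point interpolation is one call --
# though unlike the poly it cannot extrapolate back to 0 yd:
v20 = np.interp(20, dist_yd, vel_fps)
```

Inside the span of good data the polynomial and plain interpolation agree closely; the differences only matter once you extrapolate past the first or last data point.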

My next step is to play around with the BC math posted here to calculate BC for each shot from my fitted velocity data. I had been doing that using ChairGun, and it would be great to fold it into my Excel processing.
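As a starting point, here is one simple two-point BC estimate. It assumes a flat-fire, constant-Cd model where velocity decays exponentially with distance, v(x) = v1 · exp(-(x - x1)/(k·BC)); the constant k ≈ 8000 (with distance in yards) is the value commonly quoted in airgun forums for this model. Both the model and the constant are assumptions here, not ChairGun's exact drag law:

```python
import math

def bc_two_point(x1_yd, x2_yd, v1_fps, v2_fps, k=8000.0):
    """Two-point BC under a constant-Cd exponential-decay model.

    k ~ 8000 (yd) is an assumed constant from the simple flat-fire
    approximation, not ChairGun's exact drag law.
    """
    return (x2_yd - x1_yd) / (k * math.log(v1_fps / v2_fps))

# Example: fitted velocities at 10 and 50 yd
print(bc_two_point(10, 50, 880.0, 760.0))
```

Using two fitted velocities well inside the good-data span (e.g. 10 and 50 yd) sidesteps the polynomial's unreliable extrapolation to the muzzle.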
 
Thanks so much for that @dgeesaman - very interesting . . .

I took your approach and applied a fifth-order polynomial to my data, and there really is not much difference in the fit over the assessed data (less than half an fps of variation), but I do see a difference in any forecast outside the range of good data, so I'll have to play around with that and test things out. One concern is that it gives a significantly higher value for the muzzle velocity than both the 3-factor forecast and the LabRadar forecast. So far I have only looked at one trace, so I will have to do more digging.

It may be that I was fooled by the good fit of the three-factor analysis because my initial data was limited to a shorter range, with only about 43 yards of good data. I'm not likely to get much better than 50 yards here at home, but I still want to get the methodology right.

The LabRadar data is almost disconcerting past the target. When I collected my data I had the ranges set at 10, 20, 30, 40, and 50 yards, even though I was shooting into a solid wood backstop 43 yards out (it was just what was most convenient for an initial test). But it kept giving me readings out to 55 yards in the individual shot data, all of which had poor S/N values and even showed the pellet increasing in speed between a few readings. This really surprised me, as the pellet had simply stopped, embedded in the wood, and there was nothing left to track at speed. Lots to figure out, but I'm thinking I like the idea of knowing exactly when to saw off the data that the LabRadar provides, given what I have seen already.

I too will try to incorporate direct BC calculation from the great discussion that Matt and BlackIce included in this thread . . . .
 
The danger with polynomial fits is that they tail off to infinity at both ends. My equations currently use the poly for v0, but I think a linear fit extrapolating from the first 5-10 data points is probably more accurate - or a linear extrapolation using the first and fifth data points, to help average down the distance errors. Sometimes the LR doesn’t pick up the pellet until 10-20y downrange, and that’s when the polynomial makes big v0 errors. So far I haven’t cared enough about v0 to write in a better method; I just calculate BC based on, for example, v10 and v50.
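The linear-extrapolation idea for v0 is a couple of lines in Python. The data here is synthetic (an assumed near-linear velocity decay over a short span, with the LR picking up the pellet at 12 yd), just to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in track: LR first picked up the pellet at 12 yd (assumption)
dist_yd = np.linspace(12, 50, 40)
vel_fps = 895 - 1.1 * dist_yd + rng.normal(0, 2.0, 40)

# Straight-line fit through the first handful of points, extrapolated
# back to 0 yd, instead of trusting the polynomial's tail
m, b = np.polyfit(dist_yd[:8], vel_fps[:8], 1)
v0_est = b   # intercept = estimated velocity at the muzzle
print(v0_est)
```

Over the first 10-20 yards the true velocity curve is nearly linear, so the short extrapolation stays well-behaved where a 5th-order polynomial can swing wildly.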

I’m not enough of a data analysis wonk to kick this to VB where I could apply more intelligent curve fits, so I’m using the fits Excel offers.

Out to fifty yards (or the target distance) I get pretty consistent data. The best data comes from recording shots sent to the farthest available backstop; our club's range is 112y to the backstop.

I haven’t come up with a better way to trim bad data points than to simply plot the raw velocities and polynomial fit and apply common sense. Sometimes the far data is noisy but follows a realistic trend, other times it goes wacky and is pure junk. Maybe the data is better indoors where nothing else is moving and returning signal (like leaves or waving grass?).
 
I'm thinking I'm going to input a hard value for the yardage to use, at least as a starting point, and maybe add a check on how much of the string to use based on where the S/N falls off. If you know you have a clear shot out to "X" distance, then the data out to that point is most likely good and anything beyond it is suspect. The S/N can be an added check to see whether that holds, and if not, call for manual intervention.
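That hard-cutoff-plus-S/N-sanity-check scheme could be sketched like this. The S/N threshold of 15 is a placeholder assumption to be tuned against traces you've already judged by eye, and the example values are made up:

```python
import numpy as np

def trim_track(dist_yd, vel_fps, snr, max_yd, snr_floor=15.0):
    """Keep points inside a hard yardage cutoff, then flag the string
    for manual review if any surviving point falls below an S/N floor.

    snr_floor=15.0 is a placeholder threshold (assumption), to be tuned
    against shots already judged good or bad by eye.
    """
    keep = dist_yd <= max_yd
    needs_review = bool(np.any(snr[keep] < snr_floor))
    return dist_yd[keep], vel_fps[keep], needs_review

# Example: clear shot to a 43 yd backstop, but LR reported out to 55 yd
d = np.array([10, 20, 30, 40, 50, 55], float)
v = np.array([880, 850, 822, 795, 810, 700], float)   # note the bogus rise
s = np.array([30, 28, 25, 18, 6, 3], float)
d2, v2, review = trim_track(d, v, s, max_yd=43)
print(d2, review)
```

The phantom readings past the backstop (including the impossible speed increase at 50 yd) are cut by the hard limit, and the S/N floor only triggers a manual look when points inside the trusted range also look weak.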

I have not shot much with mine yet - just a few test strings indoors and out - and the S/N on the indoor data is a bit worse than outdoors in what I have. Of course, the outdoor string is from my deck, so the first 25 yards of the 43 are really clear, while the indoor setup is in the basement, down a hall with lots of sources of reflection. The pellet is the only thing moving indoors, but there are lots of other sources of noise for the unit. I'm not sure how to make that better, other than to have a free and open indoor shooting lane (which is hard to come by).
 
I have only just acquired a LabRadar and have not had a chance to test it very much, but I have analysed a lot of data from Bob Sterne and others over the years. I have also analysed data from large Doppler tracking radars, which use narrow beams and moving heads, as well as from fixed-head Doppler muzzle velocity radars much more powerful than the LabRadar. The tracking radar produces thousands of data points from close to the gun out to the maximum range. The "projectiles" ranged from large particles up to 155mm shells, with speeds from 300 ft/sec to 7-8 thousand ft/sec, across literally hundreds of gun/bullet combinations.

When fitting a curve to the velocity/range data, I always found a fourth-order equation gave the best fit without producing wildly exaggerated values outside the data limits. The curve fits were necessary with small arms to fill in the data close to the gun, as the radar-produced muzzle velocities were not always reliable. If the data is being used to derive BC, then smoothing the velocity data first is probably OK. I was always working out purpose-built drag laws, where all the analysis had to be done first and only then could the resulting Cd data be smoothed; if the velocity data was smoothed first, the shape of the resulting drag curve was very much a function of the shape of the smoothed velocity curve, rather than a true drag curve.

Doppler radars measure speeds at a fixed time interval. Some Doppler systems also modulate the wave frequency sinusoidally, which allows range to be estimated from the frequency changes - at least, that is what I have been told, and I have been told the LabRadar uses this approach to measure range. I am not expert enough on radar systems to comment on the accuracy of that approach. Personally, I always try to use the velocity/time data starting from about 10 yards range; data at shorter ranges is not reliable.
 
OK - I've now been playing around with fitting the data and comparing the resulting equations for 3rd-, 4th-, and 5th-order polynomials for several days, and have learned a lot. I do think I need to get some longer trace data to work with, as all the "good" data I have so far picks up the first point a little less than 7 yards out and ends at the target about 42 yards out. I do have several shots of this data, but I would like to see what I get with much more distance - though I don't regularly shoot beyond 55 yards, so it would not be much longer.

The short answer is that all three do a very good job of creating a usable "smoothing" equation over the range of good data, but I trust none of them for forecasting results more than a few yards outside the range of data used to create the equation. How good that forecast is also depends on the individual shot, but in general I would trust the 3rd-order fit more than the other two - they can get wacky really fast, at least with the data I have on hand.

To illustrate the fit, I'll show four plots. All four plot speed against distance from zero to 50 yards, with grey boxes covering the first 6.5 yards and last 8 yards, as those ranges have no usable LabRadar data for creating the equations.

The first three plots all use the same shot data but vary the amount of data used to create the equations. Plot 4 shows a different shot:
- Plot 1 uses all of the shot's data, from 7 to 42 yards (63 data points).
- Plot 2 uses only the data from 15 to 35 yards (36 data points) - the other points are all "forecasts" of speed.
- Plot 3 uses only the data from 18 to 32 yards (25 data points) - same as above, and illustrative of how fast things can fall off.
- Plot 4 uses the same data range as Plot 3 but shows the results from a different shot (thus it looks a good bit different).

In the original post, and the later update to GA BC values, I used the forecasted speeds at 0 and 50 yards. When I recalculate using the smoothed data at 7 and 42 yards (staying fully inside the range of data used for the equations), I get the following BCs for the five shots:
0.0307, 0.0355, 0.0352, 0.0355, and 0.0343 - a spread of 0.0048, or a bit over 13% variation, so the original point still holds for this data.
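For the record, that spread works out like this (taking the percentage relative to the largest BC in the string, which is an assumption about the denominator):

```python
# BCs recalculated from the 7-42 yd smoothed data, five shots
bcs = [0.0307, 0.0355, 0.0352, 0.0355, 0.0343]

spread = max(bcs) - min(bcs)          # extreme spread of the string
pct = 100 * spread / max(bcs)         # spread as a percentage

print(round(spread, 4), round(pct, 1))
```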

It has been fun learning through this, and I'll learn much more going forward. Interestingly, none of the three really match what the LabRadar reports for muzzle velocity that well - sometimes the 3rd order is closer, and sometimes the 4th. I'll have to do more testing on that.

Slide1.JPG
Slide2.JPG
Slide3.JPG
Slide4.JPG