[Paper Review] Player Tracking Technology: Half-Full or Half-Empty Glass? Or maybe there is no glass at all.

Who would’ve thought I’d be writing a sequel to this over a year after the first was initially published? Sadly, much like a well-designed strength program based on the greatest periodisation model described in Mel Siff’s Supertraining, theory can have a sudden and unpredictable reaction once it merges with reality and life. From deaths to lightning strikes, to unscrupulous individuals, you name an obstacle and it’s safe to say it happened to us. Such is life, and certainly a story for another day.

At the time of writing, I made no secret of the fact that it was an article intended to reveal the inner fabric of Precision Sports Technologies; what made us tick, why we exist, how we see the world, and in particular, how we saw the day-to-day working environment of being a sports scientist, the use of technology (or lack thereof… or worse yet, over-dependence on it), and how that informed the solution we have been working on for the best part of five years now.

For a while now I’d wondered what I could write as a follow-up to that last article. Part of me wanted to re-address the nature of the day-to-day working environment and the practical challenges faced by the typical sports scientist, as I did in my first blog post, but that would simply be lazy writing, repeating the same points I made then. I also wanted to discuss how I felt the sports science industry was being played by those with a monetisation agenda; newer technology being developed either to sustain the illusion that many clubs are stacked with an army of data scientists, or simply unaware of that illusion. How the new goal of technology in sport isn’t even to answer questions, but to keep creating new ones without answering the old ones, and to foster a dependency on technology that is neither healthy nor valid. Essentially, to talk about how we’re all just winging it (as I am as a founder), with no true or actual influence on the decision-making process. I also wanted to discuss how many sports scientists are waiting for old “dinosaur” coaches to move on so there would be less resistance to technology, but this was mainly to satisfy the Jurassic Park-related analogies I had built to make my point (something along the lines of “If I’ve learnt anything from watching Jurassic Park (and I’ve learnt a lot from watching Jurassic Park), it’s that dinosaurs don’t go down easily, not without a fight at least.”)


Next Steps: Predictive Injuries & Valuable Metrics

Thankfully (or sadly), I didn’t have to resort to Jurassic Park or Terminator 2 analogies. Over Christmas, Mladen sent me the Buchheit & Simpson (2017) article titled “Player Tracking Technology”, and it could not have been better timed, discussing many of the key points and decisions that we had made over the last few years. For that reason, I wanted to make this essentially a review of that fantastic article, to discuss two key subjects raised in it, and how those two points led me to a third point not discussed in the article. The first being:

1. The questionable reliability of “GPS/GNSS”.

I make no apologies, but I’ve become a bit of a space tech hipster; I much prefer to call it GNSS (Global Navigation Satellite System, which is the actual industry term) rather than “GPS” (which is the name of the American system). It may also seem quite bizarre for me to discuss the reliability of GNSS considering the context is really our own GNSS-based monitoring system, but I’ve studied the art of being bizarre for a very long time and don’t plan on stopping now.

Although the reliability and validity of GNSS is now widely known and discussed, its limitations and weaknesses have been understood for many years. Like most, when I started on this journey, my knowledge of “GPS” in sport went no further than what I was told (and in truth, it isn’t that far off now). It was during our time developing our first prototypes at the European Space Agency, with access to their wealth of unreleased technology, space tech specialists and overall expertise in all things space technology, that our “space tech baptism” took place. The first assumption to fall was that more sampling = better accuracy. Of course, I’m keenly aware of not shooting myself in the foot here, considering we’ve recently decided to update our own GNSS module to a next-generation one with… you guessed it, more sampling. But even still, the case for “more sampling, more accuracy” has always been a bit of a puzzle to me. In my previous guest article, I likened it to the golden age of gaming (80s to late 90s/early 00s) and “more bits = more power”, but a better analogy would be the mobile phone megapixel era, where “more pixels = better camera” played on general consumer ignorance of the fact that many professional-grade cameras had fewer megapixels than their phones. In the same vein, you could have a 1 Hz GNSS with millimetre positional accuracy. There’s also the promise of greater positional accuracy from the European Space Agency’s GALILEO navigation system, capable of providing sub-metre accuracy, but the complexity of that system goes beyond me.
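To make the point concrete, here is a minimal, hypothetical sketch (plain Python on simulated data, not any vendor’s processing pipeline) of why sampling rate alone buys you nothing: if raw positions carry independent noise and nothing is filtered, summing more samples actually inflates the distance “covered” by a receiver that isn’t moving at all.

```python
import math
import random

def simulated_distance(duration_s, rate_hz, noise_sd_m, seed=0):
    """Total distance 'covered' by a *stationary* receiver whose reported
    position is corrupted by independent Gaussian noise on every sample.
    With no filtering, every extra sample adds a little phantom movement."""
    random.seed(seed)
    n = int(duration_s * rate_hz)
    xs = [random.gauss(0.0, noise_sd_m) for _ in range(n)]
    ys = [random.gauss(0.0, noise_sd_m) for _ in range(n)]
    return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(n - 1))

# Same receiver, same positional noise (~0.5 m), only the sampling rate changes.
for hz in (1, 5, 10, 18):
    d = simulated_distance(duration_s=600, rate_hz=hz, noise_sd_m=0.5)
    print(f"{hz:>2} Hz -> {d:7.1f} m of phantom distance over 10 minutes")
```

Real units of course filter and smooth their output, so the numbers above are deliberately worst-case; the point is simply that sampling rate and positional accuracy are separate properties, and the latter (plus sensible filtering) is what actually matters.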

I could discuss in further detail the other smoke and mirrors involved in the use of GNSS, but I would be repeating more of the same points I made in my original article. Suffice to say, we’ve entered a phase in sports science that’s commonly seen in other industries when technology first enters their world with a monetisation agenda: technology for technology’s sake, where the end users are thrown into a deep pool of data to try and make sense of, rather than being helped to make sense of their jobs. Data and complexity become a way to create an environment of smoke and mirrors for further sales, and actual user experience takes a back seat. I often wonder just how many sports scientists out there sit down at their desks, a host of Excel spreadsheets in front of them, and question just how influential they really are in the entire decision-making process.

2. Metrics Selection.

The most significant point discussed in the paper was the three levels of metrics that have developed with the use of monitoring technology over the past decade or so. During the early days of such technology and its use in sport, as an industry we didn’t know much or where to go. So, you do what you normally do in situations like these: you collect data, lots and lots of it. Which is exactly what the industry did. The problem is that, almost a decade down the line, many are still collecting data and building models based on 60+ metrics, seemingly unsure what to do with them all.

As pointed out by the authors, we’ve evolved from using just “GPS” performance data to “Level 2” type data (High-Intensity Distance Covered, Acc/Dec, Metabolic Power, etc.) to give an indicator of loading and performance events. Even with my then limited sports science knowledge (it was always limited, and even more so now that I don’t keep up to date with the latest research), I was always troubled by the idea of building any sort of model on such variables. As an industry, we had established the limitations of GNSS, yet were seemingly developing more metrics on a foundation that already carried a certain degree of error. We are, in effect, building models based on “interpretations of interpretations”. Like any chain or communication line, the more intervention along the way, the greater the probability of error or some degradation of the original message. Like healthy eating, the closer the food is to its original intended state, the better it is for you. The less interpretation, the better. Of course, that is no slight on advancements such as Metabolic Power, as the model itself isn’t flawed per se. If one can enhance the navigation technology and positional accuracy behind it, then its validity and application only increase. In many cases, technology is the bottleneck, not the academic method behind it.
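A small illustration of what “interpretations of interpretations” can mean in practice: Level 2 metrics are often threshold-based, so a modest speed error around the cut-off changes what gets counted. The sketch below is purely illustrative, with a hypothetical 5.5 m/s high-intensity threshold and ~0.2 m/s of speed noise; the specific numbers are assumptions, not anything from the paper.

```python
import math
import random

HI_SPEED = 5.5   # m/s threshold for "high-intensity" running (illustrative)
DT = 0.1         # 10 Hz sampling interval in seconds

def hi_distance(speeds):
    """High-intensity distance: metres accumulated while speed >= threshold."""
    return sum(v * DT for v in speeds if v >= HI_SPEED)

random.seed(1)
# A hypothetical 60 s bout hovering just under/over the threshold.
true_speed = [5.4 + 0.2 * math.sin(i / 20.0) for i in range(600)]
noisy_speed = [v + random.gauss(0.0, 0.2) for v in true_speed]  # ~0.2 m/s error

print(f"true  HI distance: {hi_distance(true_speed):6.1f} m")
print(f"noisy HI distance: {hi_distance(noisy_speed):6.1f} m")
```

The derived metric isn’t “wrong” in concept; it simply inherits, and around hard thresholds can amplify, whatever error sits underneath it.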

In recent years, due to enhancements in MEMS technology, the industry is slowly evolving towards “Level 3” type metrics, more focused on assessing biomechanical loading and changes along with (rather than just) locomotor metrics. The authors discuss the benefits of this: greater reliability, actual “usability” of such metrics and their implementation into a usable user experience, and allowing sports scientists/physical preparation coaches to focus on actual training load, rather than basing loads on performance variables, which are of course incredibly contextual, subject to tactical/game influences, and prone to misinterpretation. One potential benefit not discussed in the paper is the clearer separation of job roles within the team-sport support team. As those who have worked a day in the industry are fully aware, being a sports scientist is as much academic as it is a season of Game of Thrones (and you really would be lucky to survive to the end of the season before the White Walkers… I mean, the new management team arrive). In some clubs, the fact that the sports scientist’s tracking of metrics can overlap with the roles of the performance analyst and coaches can be perceived as a threat by the coaches, when it really shouldn’t be. Sometimes the last thing coaches want is for the sports scientist to become all-encompassing; knowledgeable in all areas of technical/tactical coaching (their areas of expertise) on top of their knowledge of physical preparation. All it does is make the coaches feel threatened with their backs to the wall, and thus, like an episode of Planet Earth, they display their strength to the sports scientists and exert their authority. They are the decision makers, after all.

The final advantage of L3 metrics is their application to other sports. Contrary to popular belief, sports science isn’t just centred on field-based team sports. It’s understandable, given the popularity of and money in those sports, why most research and discussion is based on them, but moving towards more “agnostic” L3 metrics and away from “sport-specific” locomotor metrics allows for wider application and more research in sports that do not have the financial support team sports enjoy. After all, I imagine the number of changes of direction doesn’t matter much to a 100 m track athlete, nor does Total Distance Covered matter much to a combat athlete.

3. Predictive Injury Modelling.

The logical progression from all the preceding thoughts is how they affect the final frontier: injury prediction (or the preferable yet less sexy “injury reduction”). As an industry, we are finally heading towards asking this question for ourselves. We are still very much at the embryonic stage, but many new models are already popping up advertising their predictive abilities.

Far be it from me to ever claim to be a statistical or machine-learning expert (luckily, I don’t have to be), but I’ve always been fascinated by many of these models. I am not one to claim what works and what doesn’t, but I’ve always found it rather interesting that most of the models are built mainly on external load metrics (save for some sRPE here and there). As outlined above, there is already an inherent risk in building the foundations of any predictive model primarily on GPS-based data, due to the errors it brings, let alone on Level 2 type metrics, which can be seen as “interpretations of interpretations” (so would a model built on L1 and L2 metrics be an interpretation based on interpretations of interpretations?). But there is also the more obvious flaw of models not factoring in the response of the most complex machine on earth: the human body (or not factoring it in enough). It’s like developing a “dose-response” model on the dosage alone, with little observation of the human bodies receiving these dosages and how (and why) they respond.
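As a hedged example of what “dosage alone” looks like in code, here is a sketch of the kind of external-load-only feature many such models lean on: a rolling acute:chronic workload ratio. The function name, window lengths and load values are illustrative, not taken from the paper or any specific product, and nothing in it captures how the athlete actually responded to the dose.

```python
def acute_chronic_ratio(daily_load, acute_days=7, chronic_days=28):
    """Acute:chronic workload ratio computed from external load alone.
    Nothing here 'knows' how the athlete responded to the dose."""
    if len(daily_load) < chronic_days:
        raise ValueError("need at least `chronic_days` of load history")
    acute = sum(daily_load[-acute_days:]) / acute_days
    chronic = sum(daily_load[-chronic_days:]) / chronic_days
    return acute / chronic if chronic else float("inf")

# Illustrative month of daily loads (arbitrary units), spiking in the last week.
history = [400] * 21 + [650, 700, 680, 720, 750, 700, 730]
print(f"ACWR = {acute_chronic_ratio(history):.2f}")  # dose only, no response
```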

Equally, one must be careful with one’s positioning on this. I’m fully aware of the industry backlash and scepticism against the idea of “injury prediction”, and rightly so in most cases. There is still the question of the practical usability of many of these models in real-world settings, as discussed by Mladen himself here, but equally we can’t throw the baby out with the bathwater. One of the cases against predictive models is to point to the huge number of variables, both internal and external, that could affect them: from body types and structures, genetic profiles, training and injury history, to weather, training surfaces, even footwear and clothing. Yes, there are numerous variables at play that influence the probability of an injury, but we can’t just look at the sheer number of factors and say “there are so many, we’ll never know”. One thing we must remember is that there genuinely is nothing new under the sun; all the challenges we face in the sports science community have been faced in other industries before us. Understanding the nature of individualisation is indeed hugely important, especially when one considers how little it is factored into many predictive models. But once again, this isn’t a new challenge. The fixation on generalisation exists in all areas of science, not just sports science, and as discussed here on this blog, other areas of science are wrestling with individualisation just as much as we should be. Then there is the paradox discussed in my first guest blog, where we can get infatuated with knowing the 1% that makes us different before we’ve even figured out the 99% that makes us the same. But almost certainly, we should start factoring individual variability and subgroups into any potential model if we are to get deeper into the proverbial predictive rabbit hole.
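For what it’s worth, individualisation does not have to mean an explosion of complexity. A minimal sketch of one simple entry point, assuming nothing beyond each athlete’s own load history (athlete names and numbers below are entirely hypothetical), is to express today’s load as a within-athlete z-score rather than comparing it against a squad-wide average:

```python
from statistics import mean, stdev

def within_athlete_z(history, today):
    """Express today's load relative to this athlete's own history,
    rather than against a squad-wide average."""
    mu, sd = mean(history), stdev(history)
    return (today - mu) / sd if sd else 0.0

# Two hypothetical athletes receiving the same session load (800 units),
# but with very different chronic histories.
athletes = {
    "A (high chronic load)": [900, 950, 880, 920, 940],
    "B (low chronic load)":  [400, 420, 390, 410, 430],
}
for name, history in athletes.items():
    print(f"{name}: z = {within_athlete_z(history, today=800):+.2f}")
```

The same absolute dose looks routine for one athlete and like a large spike for the other, which is exactly the kind of individual context many external-load-only models currently ignore.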

Pathfinding…

Now this is the point where I turn the microscope back on ourselves and claim we have the answer, isn’t it? I’m afraid that won’t be the case, as I speak first and foremost as a normal person who also gets tired of marketing and sales talk. I believe the discussions above and in the original paper outline the challenges we face, both as an industry and as a company, and what we need to do to overcome them. But it won’t be easy; such things rarely are. We will make mistakes, we will stumble, we will apologise, we will argue one point and then let it go the next day when new research says otherwise, we will deviate at times but will stay the course and keep searching. And I say “we” not just as a company, but as a community, because the truth is that if we are ever going to find the answers, we simply must work together. The data and the variables will be large, and even though there is a general trend towards keeping things as in-house as possible, that can never be to the advantage of the community as a collective. Only by working together can we finally enter this new frontier of possible injury prevention to the benefit of all; from the large top clubs with a vast sports science support team and the capital to fund it, to the track coach who is all things to their athletes (coach, strength trainer and physio) at the track over the weekend. It really is the only way.

