“Plato has Socrates describe a gathering of people who have lived chained to the wall of a cave all of their lives, facing a blank wall. The people watch shadows projected on the wall by things passing in front of a fire behind them, and begin to designate names to these shadows. The shadows are as close as the prisoners get to viewing reality. He then explains how the philosopher is like a prisoner who is freed from the cave and comes to understand that the shadows on the wall do not make up reality at all, as he can perceive the true form of reality rather than the mere shadows seen by the prisoners.”
Athletes are not KPIs; athletes are not bar charts. I know decision makers love to see those nice colorful charts showing progressions, distributions, comparisons and so forth. But where the rubber meets the road is definitely not as nice and colorful, and it usually represents a huge elephant in the room at board meetings. Pointing these “cave prisoners” to the “reality” behind the charts and bars might result in your “death”, much as in this allegory by Plato, the great Greek philosopher.
There is nothing wrong with charts, as long as one understands the underlying assumptions, the data munging, and the general complexities of the real world (remember: all models are wrong, some are useful). For example, a simple volume distribution chart (across zones), pretty popular in endurance sports, especially when showing load polarization, hides a lot of data munging: how are zones calculated? How are durations calculated – using only work interval times or whole session times? How is lag time dealt with for very short intervals (especially if HR is used)? Shadows on the wall – the real world is much more complex. How do we represent these complexities with our models and analysis? One needs to understand that before jumping to conclusions.
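To make the munging concrete, here is a minimal, hypothetical sketch. The zone cutoffs, sample rate and session make-up below are all invented for illustration – the point is only that the very same session produces a different “distribution” depending on one munging choice: whole session versus work intervals only.

```python
# Hypothetical sketch: the same session, two common munging choices.
# Zone cutoffs, sample rate and durations are made-up illustrations.

def zone_of(hr, cutoffs=(140, 160)):
    """Map a heart-rate sample (bpm) to zone 1, 2 or 3."""
    if hr < cutoffs[0]:
        return 1
    if hr < cutoffs[1]:
        return 2
    return 3

def zone_distribution(hr_samples):
    """Fraction of samples falling in each zone."""
    counts = {1: 0, 2: 0, 3: 0}
    for hr in hr_samples:
        counts[zone_of(hr)] += 1
    return {z: c / len(hr_samples) for z, c in counts.items()}

# 1 Hz samples: 10 min easy warm-up, 5 min tempo, 5 min hard intervals
warmup    = [120] * 600
tempo     = [150] * 300
intervals = [175] * 300

whole_session = zone_distribution(warmup + tempo + intervals)
work_only     = zone_distribution(tempo + intervals)

print(whole_session)  # {1: 0.5, 2: 0.25, 3: 0.25} -> looks nicely "polarized"
print(work_only)      # {1: 0.0, 2: 0.5, 3: 0.5}   -> same athlete, far more intense
```

Same athlete, same session, two different bar charts – and neither choice is “wrong”, which is exactly why the assumptions need to be stated next to the chart.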
Data has multiple roles, of course. I really like the VitalSmarts quote: “The only reason for gathering or publishing any data is to reinforce vital behaviors”. Sometimes we get stuck in a bottom-up approach, so focused on collecting the data, checking its validity, reliability and sensitivity, analyzing it and visualizing it, that we never consider the top-down approach: how do we plan to use the data, and why? What vital behavior are we trying to reinforce? Sometimes this is very fuzzy, and we end up with shadows on the wall and an elephant in the room.
Another important aspect of data collection is that it needs to bring “new food to the table”. Yes, sometimes data serves a historical role: it reinforces our program and confirms what we intuitively know or observe. But more importantly, data should tell us a story or reveal facts that we are not familiar with – something that can help us prescribe instead of describe. Descriptive < Predictive < Prescriptive.
Talking more about the assumptions behind the data, let’s take session RPE (sRPE) as an example. Numerous articles say it is reliable and correlates with HR metrics and so forth. But I am suspicious. All those numbers and charts look nice, but they gloss over the culture at hand and how much the players’ ratings can be trusted: the way we collect the data and the way we ask the questions have been shown to affect the ratings (see Thinking, Fast and Slow by Daniel Kahneman or the works of Dan Ariely). It is questionable how much we can trust ourselves to rate anything subjectively. One needs to understand the difference between the experiencing self and the remembering self.
According to Kahneman, the rating of any given event is not the “area under the curve” or the average, but rather follows two rules:
- Peak-end rule: the global retrospective rating was well predicted by the average of [the level of the pain reported at] the worst moment of the experience and at its end
- Duration neglect: The duration of the procedure had no effect whatsoever on the ratings [of total pain]
I was talking to Marco Cardinale and we proposed a very simple experiment. The workout has two parts: part A and part B. Part A is more strenuous and “painful” (e.g., repeated 300m runs with short rest). Part B is easier and more “fun” (probably small-sided games with the ball). On one training session the players perform A -> B and then rate the session; on another occasion they perform B -> A. How do you think they will rate the two? I am pretty certain they will rate them differently.
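The prediction the two rules make for this experiment can be sketched with a toy calculation. The minute-by-minute discomfort values are invented, but the structure is the point: both session orders have an identical average (“area under the curve”), yet a peak-end rating treats them very differently.

```python
# Toy sketch of the peak-end rule vs. the "objective" average.
# Discomfort values (0-10 per block of the session) are made up.

def mean_rating(trace):
    """The 'area under the curve' view: average of the whole experience."""
    return sum(trace) / len(trace)

def peak_end_rating(trace):
    """Kahneman's peak-end rule: average of the worst moment and the end."""
    return (max(trace) + trace[-1]) / 2

# Part A (hard 300m repeats) first, part B (fun SSG) last...
a_then_b = [8, 9, 9, 8, 4, 3, 2]
# ...versus the same moments in the reverse order.
b_then_a = list(reversed(a_then_b))

print(mean_rating(a_then_b) == mean_rating(b_then_a))  # True: same total "pain"
print(peak_end_rating(a_then_b))   # 5.5 -> session remembered as moderate
print(peak_end_rating(b_then_a))   # 8.5 -> same session remembered as brutal
```

Same workload, same average discomfort, but ending on the hard part inflates the remembered rating – which is exactly why I expect the A -> B and B -> A groups to rate the “same” session differently.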
A similar thing happens with wellness questionnaires. All the graphs and charts look great, but where the rubber meets the road it is usually very nasty. The way we ask the questions, and when, will probably affect the scores. Give me a Snickers bar and then ask me how I feel. There may be a lot of confounders not caught by the questionnaire. For example, if we measure wellness upon waking, how in the world do I know how I feel and whether I am sore? I need to get up, walk a bit and so forth – and something might also happen during that time. Kahneman shared a very interesting study where subjects were asked to rate their life satisfaction, but before the question they were asked to go to a photocopy machine and copy one page. One group found $1 in the copy machine; the other found nothing. Their ratings differed. Unfortunately, we are not that rational and “unaffected” by our environment. Representing all this with a chart or a table, without going into the details and nuances of data collection and preparation, is shadows on the wall.
I am currently reading Antifragile by Nassim Taleb. Wonderful book, with a lot of ideas and applications of the fragility–antifragility concept. One major idea is that one is unable to predict when a certain event will happen (i.e., a Black Swan – for example, an injury in sport) with any confidence; what one can calculate instead is how fragile one is to that event if it happens. I will leave you with that thought. Try to get out of the cave.
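That “calculate fragility, not timing” idea can be given a minimal numeric sketch using Taleb’s convexity heuristic. The harm curve below is entirely made up; the point is that fragility shows up as extra harm from variability around the same average load, and that is measurable even when the timing of the shock is not.

```python
# Minimal sketch of Taleb's convexity heuristic with a made-up harm curve.
# We cannot predict *when* the shock arrives, but we can ask: how much extra
# harm does variability around the same average load cause?

def harm(load):
    """Invented harm curve: damage accelerates (is convex) with load."""
    return load ** 2

def fragility(load, shock):
    """Extra harm from a +/- shock around an average load.
    Positive => convex harm => fragile to that shock."""
    return (harm(load + shock) + harm(load - shock)) / 2 - harm(load)

print(fragility(10, 0))  # 0.0 -> no variability, no extra harm
print(fragility(10, 5))  # 25.0 -> same average load, swings add real damage
```

With a convex harm curve, two athletes with identical average loads are not equally safe: the one exposed to bigger swings is strictly more fragile, whatever the injury forecast says.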