My coach's name is ChatGPT
...and does he finally have the coaching diploma that would allow him to practice this profession without making too many mistakes?
Let's find out here and now, using the pro version of ChatGPT 4.0.
First of all, I asked him to draw up a training program so that he could run the Marathon.
This is how he represents himself:😊
Frankly, there's enough here to make runners long for a friend without judgment, jealousy, or the competitive streak that tries to give you a complex by smashing your record on the lower-lake lap of the Bois de Boulogne (3.1 km)!
He gives me his training ideas, which he would apply to himself:
Let's have a look and make sure he doesn't burn himself out on the altar of overtraining, humanoid though he may be!
“If I were to train for a marathon (assuming I could run like you!), my training plan would focus on optimizing endurance, efficiency and the ability to maintain pace over 42.2 km while avoiding injury. Here's how I'd approach it.” I've highlighted in red the vague clichés that echo all the catchphrases coined by content creators... who turn to guess-who?
ChatGPT of course!
But who are we really talking to, before we examine and criticize his training?
I'm taking precautions in case he's so powerful that I'm suspended or even dismissed from the University for subversive remarks.
Here's how he presents himself (insert 1)
“Of course! Here's a simple explanation of how I work:
1. Technological basis
I'm an artificial intelligence model based on the GPT (Generative Pre-trained Transformer) architecture developed by OpenAI. Here are the basics of how I work:
Transformer Architecture: I'm built on a neural network model called the Transformer, which is particularly powerful for understanding and generating language.
Pre-training: I was trained on vast amounts of text from the Internet, including books, articles, websites, etc., right up to my knowledge cutoff date.
Supervised learning and RLHF: My behavior was refined with a technique called Reinforcement Learning from Human Feedback (RLHF). This means that human trainers evaluated my responses to help me become more useful, accurate and safe.
2. Steps to generate an answer
Understanding the request:
When you type a message, I break it down into text units called tokens (words or word fragments).
I analyze context, intent and nuance to interpret your question or request.
Contextual search:
I use my pre-trained model to anticipate and select the most likely answers based on my understanding of your request.
My answers are generated token by token, with each new word based on the previous ones.
Generation:
Using statistical probabilities, I build a fluid, coherent response following rules of syntax, semantics and context.
Adaptation:
If you give me clarification or reformulate a question, I re-evaluate my answer according to the new context.
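The token-by-token generation loop described above can be sketched with a toy stand-in for the real model: a hand-written probability table over next tokens (the table and the greedy decoding strategy are my illustration, not ChatGPT's actual vocabulary or sampling method):

```python
# Hypothetical toy "language model": for each context (here just the
# previous token), a probability distribution over possible next tokens.
TOY_MODEL = {
    "<start>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sleeps": 0.7, "runs": 0.3},
    "dog": {"barks": 1.0},
    "sleeps": {"<end>": 1.0},
    "runs": {"<end>": 1.0},
    "barks": {"<end>": 1.0},
}

def generate():
    """Build a sentence token by token, each new token chosen
    from the distribution conditioned on the previous one."""
    tokens = ["<start>"]
    while tokens[-1] != "<end>":
        dist = TOY_MODEL[tokens[-1]]
        # Greedy decoding: always pick the most probable next token.
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens[1:-1])

print(generate())  # → "the cat sleeps"
```

Real models sample from a distribution over tens of thousands of tokens, conditioned on the whole context rather than only the last word, but the loop has the same shape.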
3. My abilities and limitations
What I can do:
Answer questions on a wide range of subjects.
Create content: write texts, summarize, translate, or generate ideas.
Learn from interaction: I adapt to your requests in real time to provide more relevant answers.
Assistance with specific tools: For example, generating images, manipulating data or interacting with external systems.
My limits:
Knowledge cut-off date: My knowledge is limited to data available until June 2024. I'm not aware of recent events unless I use web tools.
No consciousness: I have no human understanding or subjectivity. My answers are generated from probabilistic models.
Data bias: I'm trained on human data, so biases may find their way into my answers. However, I try to remain neutral and balanced.
4. Real-time interactions
In this conversation:
You ask a question or give a directive.
I analyze your message and look for the most relevant information in my knowledge base to formulate an appropriate response.
If you correct me or ask for an adjustment, I modify my answer to better match your expectations.
If you want more technical details or an in-depth explanation of a specific part (e.g. transformer mechanisms or RLHF training), let me know! 😊
In fact, we'll only have ourselves to blame if we discover any “crazy stuff” in his training program!
More precisely, his training method is as follows (insert 2):
Of course! Training neural networks, and more specifically models like mine (GPT, based on Transformers), is a complex process that involves several stages. I'll explain it to you step by step!
1. What is a neural network?
An artificial neural network is a mathematical structure inspired by the human brain. It is composed of:
Inputs: The data we want to process (e.g. words, images).
Hidden layers: Several layers of interconnected “neurons” (or units) that perform calculations on the data.
Outputs: The result of processing (e.g., a response or classification).
Neurons are interconnected by weights, which are adjusted during training to improve network performance.
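As a sketch of that structure, here is a minimal forward pass with made-up weights: two inputs, one hidden layer of three neurons, one output (no training yet, just the calculation each layer performs):

```python
def relu(x):
    """Activation function: keep positive values, zero out negatives."""
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One layer: each output neuron is a weighted sum of all inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Made-up weights for illustration: 2 inputs -> 3 hidden neurons -> 1 output.
W1 = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
b1 = [0.0, 0.1, 0.0]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.0]

x = [1.0, 2.0]                 # inputs
h = relu(dense(x, W1, b1))     # hidden layer
y = dense(h, W2, b2)           # output
print(y)
```

Training consists of adjusting W1, b1, W2, b2 (the interconnecting weights) so that y gets closer to the expected answer.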
2. Supervised training
Supervised training involves learning from labeled data. Here's how it works:
Step 1: Data collection
Massive data: Billions of texts (books, articles, web pages) are used to train a model.
Pre-processing: The data is cleaned to remove unnecessary information (ads, repetitions, etc.) and transformed into a usable form (tokens, see below).
Step 2: Tokenization
Texts are divided into tokens (fundamental units of text):
A token can be a whole word, a fragment of a word, or a character (for example, “chatGPT” → [“chat”, “G”, “PT”]).
Each token is mapped to a unique number (its index in the vocabulary), which is then turned into a vector of numbers through a process called embedding.
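A sketch of the idea with a tiny hand-built vocabulary (real tokenizers such as BPE learn tens of thousands of tokens from data, and real embeddings have hundreds of dimensions):

```python
# Tiny illustrative vocabulary: token -> unique id.
VOCAB = {"chat": 0, "G": 1, "PT": 2, "the": 3, "cat": 4}

# Embedding table: one vector per token id (made-up values).
EMBEDDINGS = [
    [0.1, 0.3], [0.7, 0.2], [0.5, 0.9], [0.0, 0.4], [0.6, 0.1],
]

def tokenize(pieces):
    """Map each text piece to its vocabulary id."""
    return [VOCAB[p] for p in pieces]

def embed(token_ids):
    """Look up the vector associated with each token id."""
    return [EMBEDDINGS[i] for i in token_ids]

ids = tokenize(["chat", "G", "PT"])
print(ids)         # → [0, 1, 2]
print(embed(ids))  # three 2-dimensional vectors
```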
Step 3: Passage through the network
The tokens pass through the layers of the neural network.
Each layer applies linear calculations (matrix multiplications) followed by an activation function (e.g. ReLU or softmax).
The deeper layers capture abstract representations of the language.
Step 4: Loss calculation
The model generates an output (for example, the next word in a sentence).
A loss function (e.g. cross-entropy) compares the generated output with the expected correct response (label).
The loss measures the error of the model.
Step 5: Optimization
Backpropagation: The gradient of the loss is calculated with respect to the network weights (via the chain rule of derivatives).
Optimizer (Gradient Descent): Weights are adjusted to minimize error. For example, Adam Optimizer is often used to adjust these weights efficiently.
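Steps 3 to 5 can be condensed into a minimal sketch: a single neuron with a sigmoid output, made-up data, and one step of gradient descent (a toy stand-in for the full backpropagation described above):

```python
import math

# One neuron with a sigmoid output, trained on a single made-up example.
w, b = 0.5, 0.0          # initial weights
x, label = 2.0, 1.0      # input and expected output (the "label")
lr = 0.1                 # learning rate

# Forward pass: compute the prediction.
z = w * x + b
p = 1.0 / (1.0 + math.exp(-z))       # sigmoid activation

# Loss: cross-entropy between the prediction and the label.
loss = -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Backpropagation: gradient of the loss w.r.t. w and b
# (for sigmoid + cross-entropy, dloss/dz simplifies to p - label).
dz = p - label
dw, db = dz * x, dz

# Gradient descent: move the weights against the gradient.
w -= lr * dw
b -= lr * db
print(loss, w, b)
```

Repeating this update over billions of examples, with an optimizer such as Adam instead of plain gradient descent, is what "training" means in practice.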
3. Pre-training
Models like GPT go through a pre-training phase on unsupervised tasks, as annotating huge amounts of data is difficult. The aim here is to teach the model the general structures of the language.
Objective: Learn to predict the next word in a given sequence (causal language modeling).
Example:
Input: “The cat sleeps on the ___”
Model's prediction: “sofa”
During pre-training, the model develops a general understanding of language, without specific knowledge of particular tasks.
4. Fine-tuning
After pre-training, the model is refined on specific tasks (supervised or reinforced):
Supervised tasks:
The model learns on specific data with labels.
Example: “Question: What is the capital of France? Answer: Paris."
Reinforcement learning from human feedback (RLHF):
Answers generated by the model are evaluated by humans.
A reward system is used to teach the model to produce answers that are more useful, accurate and aligned with human expectations.
5. Key optimization methods
Here are some advanced concepts used during training:
A. Dropout
Technique for avoiding overfitting.
During training, some neurons are temporarily deactivated to force the network to be more robust.
B. Batch Training
Data is processed in batches to improve training efficiency.
Enables calculations to be parallelized and weight updates to be stabilized.
C. Learning Rate Scheduling
Dynamically adjusts the learning rate to make training more efficient.
Example: Reduce the learning rate as the model converges.
6. Transformer features
Transformers (like GPT) feature several innovations:
A. Attention mechanism
Self-attention enables the model to focus on the important parts of the text (the context) when generating a response.
Example: In the sentence “The cat is sleeping on the sofa. It's gray”, “It” refers to “cat”. Attention enables the model to understand this relationship.
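Scaled dot-product self-attention can be sketched on made-up 2-dimensional token vectors (real models also learn separate query, key and value projections, omitted here for brevity):

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(vectors):
    """Each token's output is a weighted average of all token vectors;
    the weights come from softmax of scaled dot products (its 'attention')."""
    d = len(vectors[0])
    outputs = []
    for q in vectors:
        scores = [dot(q, k) / math.sqrt(d) for k in vectors]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, vectors))
                        for i in range(d)])
    return outputs

# Made-up token vectors standing for "cat", "sofa", "it".
tokens = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
out = self_attention(tokens)
print(out)  # "it" is pulled toward "cat", its most similar token
```

Because "it" points in nearly the same direction as "cat", its dot product with "cat" is large, so "cat" gets the biggest attention weight; that is the mechanism behind resolving the pronoun in the example above.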
B. Stacked layers
Several Transformer layers (with attention and residual connections) are stacked, enabling the model to capture complex relationships in the text.
7. Training large models like GPT
Large models like GPT-4 require:
Computing power: Thousands of GPUs or TPUs working in parallel.
Massive data: Hundreds of gigabytes of text.
Time: Weeks, even months, to train the model.
8. Why are large models effective?
They capture subtle and complex relationships in the data.
The larger the model (more parameters), the better it can generalize to a variety of situations.
Their training on massive data gives them a broad understanding of the world.
In short, everything rests on the probability of a word appearing in a sequence, one word after the other.
In the laboratory, we have even begun work on entropy, i.e. the uncertainty of moving from one physiological state to another during a marathon; the article is available at:
https://publications.billatraining.com/publications/2024/sports-12-00252.pdf
We've demonstrated the importance of maintaining a high degree of improbability in the prediction of moving from one state to another during a marathon, and we'll come back to this; first, though, we need to eliminate (or not?) the temptation to follow the advice of our “Humanoid Coach” (as he defines himself).
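To make the notion concrete, here is the Shannon entropy of a set of state-transition probabilities, with made-up numbers (the actual physiological states and probabilities are in the linked article):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: high when the next transition is unpredictable,
    zero when one transition is certain."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Made-up transition probabilities between physiological states.
predictable = [1.0, 0.0, 0.0]    # always the same transition: zero entropy
unpredictable = [1/3, 1/3, 1/3]  # maximal uncertainty: log2(3) ≈ 1.585 bits

print(shannon_entropy(predictable))
print(shannon_entropy(unpredictable))
```

A runner who locks onto one constant pace behaves like the first distribution; the "high degree of improbability" argued for above corresponds to the second.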
Here is his “My training philosophy,” says ChatGPT. A reminder: italics mark catch-all phrases with no real meaning, very vague in the style of “we understand each other, of course!”, and bold marks dangerous sessions, because they rest on speed injunctions in defined “zones” based on heart rates or, worse, on more or less constant speeds within the zone.
Balance between aerobic base and quality work:
Build a solid aerobic base with long, slow runs.
Incorporate threshold runs, VO2max intervals and marathon pace runs for added specificity.
Progressive overload with recovery:
Gradual increase in mileage and intensity.
Structured recovery to avoid overtraining.
Simulation and specificity:
Train regularly at marathon pace.
Mimic race-day conditions (terrain, nutrition and hydration).
Training plan (12-16 weeks)
Phase 1: Building the aerobic base (weeks 1 to 4)
Goal: Increase mileage and running economy, stay injury-free (contradictory).
Objective: Easy, long runs at 60-70% effort (what's easy? for everyone?).
Main training sessions:
Long run: 90-120 min at an easy pace (~65% of max HR, or rather HR reserve???).
Strides: 6 x 100m strides at ~90% effort after easy runs.
Endurance run: 20 min at ~80-85% of max HR (or at a “comfortable” pace).
Phase 2: Strength and endurance (weeks 5 to 9)
Objective: Improve endurance and muscle threshold. ?????
Focus: Long runs, marathon pace runs, hill work, etc.
Main training sessions:
Progressive long run: 2.5 to 3 hours with the last 30 minutes at marathon pace.
Hill repeats: 8 x 1 min uphill at 90% effort, jogging downhill to recover.
Threshold intervals: 3 x 10 min at around 10 km race pace with 3 min recovery.
Phase 3: Marathon specificity (Weeks 10-13)
Objective: Calibrate marathon pace and simulate race day (run a marathon???).
Focus: Marathon pace runs and nutrition training. (test gels and other vomit-inducing junk??)
Key workouts:
Marathon pace running: 24-30 km including 16-20 km at marathon pace.
Long run: Up to 35 km at a relaxed pace (~65% of max heart rate). hello tendonitis!!!
VO2max intervals: 6 x 1,000 m at 5 km pace (i.e.??), with 2 minutes recovery (active??).
Phase 4: Recovery and race preparation (weeks 14 to 16)
Objective: Recover and sharpen ???
Focus: Reduce volume, maintain intensity.
Main training sessions:
Marathon pace runs: 8-12 km at marathon pace, 1-2 times per week.
Short intervals: 6 x 400 m at 5 km pace with 1 minute rest.
Easy runs: Keep legs fresh with short, slow runs (30-45 min).😂
Example of a weekly program
Monday: Easy recovery runs (40-60 min) or rest.
Tuesday: Intervals or hill repeats (quality session).
Wednesday: Easy run (40-70 min).
Thursday: Running at threshold or marathon pace.
Friday: Rest or cross-training.
Saturday: Long run.
Sunday: Recovery jogging or strides.
Key elements I'd focus on
Long runs: these are the cornerstone of marathon training, building endurance and preparing for race-day fatigue.
Training at marathon pace: regular runs at target pace (how? my dream pace? or the one estimated on the internet based on my 5, or half-marathon records?) to develop confidence and efficiency???.
Nutrition and hydration: Simulate race-day nutrition during long runs to train the gut.
Mental training: Use visualization and positive self-talk to prepare for the tough moments of the race. Ok, that's good advice at last! It was worth it to get to the end of this marathon of clichés!
And he confidently tells us that he knows himself so well that he'll know if he's in shape this way:
"How I'll know when I'm ready
Successfully complete a 30-35 km long run without excessive fatigue.
Comfortably hold marathon pace for 15-20 km during a training run.
I feel mentally ready and confident in my plan and fitness.
Would you like me to design this plan around a specific goal (e.g. finish time, pacing strategy)? Or should I explore refueling or mental strategies in more detail?”
NOOOOO! ChatGPtié, have mercy on my legs!
But by pushing him to his limits, I got a training plan that resembles everything I myself proposed over the last 40 years; over the last 15 years, however, I've corrected and questioned myself, the whole being summed up in the BillaTraining method, which does away with these notions of speed or heart-rate zones and their corollary, tests at controlled speeds rather than at indicated sensation.
This has resulted in books and, above all, scientific articles since 1994, with an epistemological turning point since 2004.
You'll be able to recognize in ChatGPtié all the coaching on offer today and, above all, all that will still be in there in 2050!
And yes, it's going to take 20 years to rejuvenate ideas and free men and women from Stakhanovist training methods that make you self-monitor and censor your feelings, or, on the contrary, push you into egocentricity*, not to say navel-gazing, about how you feel in the morning. The result is that you end up saying to yourself, “I was pretty good in my evening session,” and return home adding a few extra kilometers to calm your fear of not having done enough!
* The tendency to make everything about yourself, to be really only interested in yourself.
Here's a personalized 7-day training log for a 40-year-old woman whose vVO2max is 16 km/h. The training plan focuses on improving VO2max while balancing recovery and endurance development.
Training zones (based on vVO2max):
Zone 1 (recovery/aerobic base): ~8-10 km/h (50-65% of vVO2max).
Zone 2 (threshold): ~12-13 km/h (75-85% of vVO2max).
Zone 3 (VO2max intervals): ~14-16 km/h (90-100% of vVO2max).
Zone 4 (sprints): >16 km/h (anaerobic/above vVO2max).
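The zone boundaries above can be recomputed for any runner from the percentage bands the plan itself states (the function name and the rounding are mine):

```python
def training_zones(vvo2max_kmh):
    """Compute speed ranges (km/h) for each zone as a fraction of vVO2max,
    using the percentage bands stated in the plan."""
    bands = {
        "Zone 1 (recovery/aerobic base)": (0.50, 0.65),
        "Zone 2 (threshold)": (0.75, 0.85),
        "Zone 3 (VO2max intervals)": (0.90, 1.00),
    }
    zones = {name: (round(lo * vvo2max_kmh, 1), round(hi * vvo2max_kmh, 1))
             for name, (lo, hi) in bands.items()}
    zones["Zone 4 (sprints)"] = (vvo2max_kmh, None)  # anything above vVO2max
    return zones

for name, (lo, hi) in training_zones(16.0).items():
    span = f"{lo}-{hi}" if hi is not None else f">{lo}"
    print(f"{name}: {span} km/h")
```

For vVO2max = 16 km/h this reproduces the plan's bands, give or take the plan's own rounding (e.g. 65% of 16 is 10.4 km/h, not 10).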
Training log: Weekly plan
Monday: Basic aerobic running + strength training
Warm-up: 10 min easy jog (Zone 1, 8-9 km/h).
Main run: 40 min steady pace run in Zone 1 (~9 km/h).
Cool-down: 5 min walk + stretching.
Optional strength training: Core and leg exercises (e.g. lunges, squats, planks, 3 sets of 12 repetitions).
Tuesday: VO2max interval training
Warm-up: 10 minutes jogging (zone 1, 8-9 km/h) + dynamic stretching.
Main set:
6 x 3 min intervals in zone 3 (14-16 km/h).
Recovery: 3 min jogging (zone 1) between intervals.
Cool-down: 5-10 min jogging or walking + stretching exercises.
Wednesday: Recovery day
Activity: 30-40 min light cycling, swimming or walking.
Intensity: Zone 1 (low intensity, conversational pace).
Thursday: Endurance run
Warm-up: 10 min jog (Zone 1, 8-9 km/h).
Main set: 3 x 10 min at Zone 2 (12-13 km/h), with 3 min recovery jog between efforts.
Cool-down: 5-10 min walk or slow jog.
Friday: Sprint intervals (emphasis on anaerobics)
Warm-up: 10 min jogging + dynamic stretching.
Main set:
10 x 30-second sprints in zone 4 (>16 km/h).
Recovery: 90 seconds of slow jogging or walking.
Cool-down: 5-10 minutes jogging + stretching.
Saturday: Long run (endurance-oriented)
Warm-up: 10 min jog (Zone 1, 8-9 km/h).
Main run: 75-90 min regular run in Zone 1 (~9-10 km/h).
Cool-down: 5-10 min walk or slow jog.
Sunday: Active recovery or rest
Option 1 (active recovery): Light yoga, swimming or walking for 30-60 min.
Option 2 (rest): Total rest with light stretching.
Notes:
Progression: Every 2 to 3 weeks, slightly increase intensity or volume (e.g. add an interval or increase the duration of the long run by 5 to 10 minutes).
Follow-up: Monitor your heart rate (HR) and perceived effort. If you feel tired, reduce the intensity or take an extra recovery day.
Hydration/Nutrition: Keep well hydrated and refuel after training with a balanced meal or snack.
Tests: Re-test vVO2max every 6 to 8 weeks to adjust training zones.
“Don't hesitate to contact me if you wish to modify any aspect of this program!” he says. What an open mind!
Finally, I think I'll end up launching a training course for ChatGPtié so that he can mutate into a runner who runs after understanding how we spontaneously vary our running speed or power, without any particular intention, simply ending up faster than by staying in a defined “zone”.
We're going to get there soon, using a pro domo AI that will enable us to reduce an immense matrix of neurophysiological, physiological and mechanical variables to 2 or even 3 dimensions.
Maybe, after all, we're quantum energy machines that can put ourselves into any state we want!
“Like quantum particles, we are waves and particles, carriers of infinite potential at every moment.”