
Probability gives us a steady way to think about data when we don’t have all the facts. It’s not a substance inside coins or clouds – it’s a way to turn incomplete information into action.
Scientists use it when they say there’s a “60% chance of rain” or that a treatment “reduces mortality by 17%.”
They aren’t claiming to see the future; they’re using a disciplined method to update beliefs and guide choices based on the evidence gathered so far.
We use that same method all the time. Weather forecasts shape what we wear. Medical teams weigh risks and benefits before ordering a test or prescribing a drug. Engineers set safety margins.
In each case, probability acts like a shared language that helps people agree on what the numbers allow – and what they don’t.
Probability doesn’t exist as a physical thing in the world – it’s a system we created to describe and reason through that uncertainty.
Because of that, and for the other reasons outlined in the examples below, probability “probably” isn’t real.
There are two classic ways to read probability.
The frequentist view ties probability to long‑run patterns. If you flip a fair coin over and over, the share of heads will settle near 50%. That number reflects what happens across many tries.
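For readers who like to tinker, here is a minimal Python sketch of that long‑run idea (the flip counts are arbitrary choices for illustration): as the number of flips grows, the share of heads settles near 50%.

```python
import random

# Flip a fair coin many times and watch the running share of heads.
# The flip counts below are arbitrary, chosen only for illustration.
random.seed(42)

for n_flips in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    print(f"{n_flips:>7} flips -> share of heads = {heads / n_flips:.3f}")
```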
The Bayesian, or belief‑based, view treats probability as a degree of belief given what you know right now.
A meteorologist can assign a high chance of rain on a particular day by combining radar, pressure, and wind data. That day will not repeat, but the judgment still has a sound basis.
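One way to picture that kind of updating is Bayes’ rule. The sketch below uses invented numbers (a 30% prior chance of rain and made‑up radar reliability figures); it only illustrates the arithmetic, not a real forecast model.

```python
# Bayes' rule with invented numbers: update a belief about rain
# after seeing a radar pattern that is more common on rainy days.
prior_rain = 0.30            # assumed prior chance of rain today
p_pattern_if_rain = 0.80     # assumed chance of this radar pattern if it rains
p_pattern_if_dry = 0.20     # assumed chance of the same pattern if it stays dry

# Overall chance of seeing the pattern at all (law of total probability).
p_pattern = p_pattern_if_rain * prior_rain + p_pattern_if_dry * (1 - prior_rain)

# Posterior: the updated belief in rain once the pattern is observed.
posterior_rain = p_pattern_if_rain * prior_rain / p_pattern
print(f"Belief in rain rises from {prior_rain:.0%} to {posterior_rain:.0%}")
```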
Both approaches matter. When a process repeats and you can count outcomes, the frequentist approach shines. When you must decide once with partial clues, a belief‑based approach fits.
Practically speaking, good science speaks both languages and switches between them as the situation demands.
A model is a pared‑down description of reality created for a job. It highlights the pieces that matter for a specific question and sets aside the clutter.
That choice is intentional. The goal is clarity, not perfect mimicry. A well‑built statistical model keeps irrelevant noise out so the signal linked to your decision stands out.
Scientists check whether the model’s assumptions match the data and whether its predictions hold up when tested on new cases. If they don’t, they fix or replace the model.
During the COVID‑19 crisis, hospitals needed to know which treatments helped patients survive. Researchers answered that question with large randomized trials, which translate patient outcomes into probabilities.
Randomization assigns patients to treatments by chance, creating groups that are, on average, similar at the start. That makes later differences credible.
In the United Kingdom, the RECOVERY trial – led by the University of Oxford with the National Health Service – tested a low daily dose of the steroid dexamethasone for up to ten days in addition to standard care.
The study enrolled thousands of patients, so the results would not be distorted by chance swings in small samples.
To put probabilities on the treatment’s effect, the trial measured deaths within a fixed 28‑day window, so the results would be comparable across patients.
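To see why random assignment makes the groups comparable, here is a toy Python sketch: simulated patients are split into two arms by coin flip, and a baseline trait (age, drawn from made‑up numbers rather than trial data) ends up nearly identical in both arms.

```python
import random
import statistics

# Toy randomization: assign simulated patients to two arms by coin flip
# and check that a baseline trait (age) balances out. Ages are synthetic.
random.seed(7)
ages = [random.gauss(60, 12) for _ in range(5_000)]

treatment, control = [], []
for age in ages:
    (treatment if random.random() < 0.5 else control).append(age)

print(f"Mean age, treatment arm: {statistics.mean(treatment):.1f} years")
print(f"Mean age, control arm:   {statistics.mean(control):.1f} years")
```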
Adding dexamethasone reduced deaths within 28 days by roughly 17% compared with standard care. The benefit was not the same for everyone.
Patients who were on invasive mechanical ventilation at the start – those who were most ill – saw the largest gain, with a drop in death risk of about a third compared with similar patients who did not receive the steroid.
Patients who needed oxygen but were not on a ventilator also saw a benefit, though the reduction was smaller.
Patients who did not require oxygen showed no benefit, and there were hints that the drug could even be harmful in that subgroup.
The trial also found that the steroid slightly improved the chance, or probability, that a patient would be discharged from the hospital within 28 days.
Every one of those conclusions rests on probability. Confidence intervals show a range of effects consistent with the data. P‑values gauge how surprising the observed differences would be if chance alone were at work.
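As a rough sketch of how those two tools are computed, the Python snippet below compares deaths in two hypothetical trial arms. The counts are invented for illustration and are not the RECOVERY figures, and it uses a simple normal approximation rather than the trial’s actual analysis.

```python
from math import sqrt, erf

# Hypothetical two-arm comparison (counts invented for illustration):
# deaths within 28 days out of the patients enrolled in each arm.
deaths_treated, n_treated = 460, 2_000
deaths_control, n_control = 520, 2_000

p1 = deaths_treated / n_treated          # 23% mortality in the treated arm
p2 = deaths_control / n_control          # 26% mortality in the control arm
diff = p1 - p2

# Normal-approximation 95% confidence interval for the risk difference.
se = sqrt(p1 * (1 - p1) / n_treated + p2 * (1 - p2) / n_control)
low, high = diff - 1.96 * se, diff + 1.96 * se

# Two-sided p-value from a z-test with a pooled proportion.
pooled = (deaths_treated + deaths_control) / (n_treated + n_control)
se_pooled = sqrt(pooled * (1 - pooled) * (1 / n_treated + 1 / n_control))
z = diff / se_pooled
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"Risk difference: {diff:+.3f}, 95% CI ({low:.3f}, {high:.3f})")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```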
Researchers set their plan in advance, collect data systematically, and use methods designed to reduce the ways we can fool ourselves.
Pre‑specified outcomes, careful tracking, and transparent reporting help others check the work and repeat it. That is how numbers become usable knowledge rather than scattered observations.
When we say “dexamethasone works for many hospitalized COVID‑19 patients,” we are not claiming that every individual has a fixed, hidden tag that determines the outcome.
We are saying that, for a well‑matched group of patients, the pattern of results shifts in a measurable way when the steroid is added to care.
That group-level shift guides a decision for the next patient because it reflects how similar patients have responded under the same conditions; in practical terms, it is the probability that the next patient will respond in a similar way.
From there, clinicians translate relative changes into practical terms. A relative drop, such as about a third for ventilated patients, tells you the direction and size of the effect.
To plan resources and set expectations, teams often look at absolute risk changes and the number of patients who need to be treated for one additional life to be saved, known as the number needed to treat.
Those measures depend on the baseline risk in the hospital and the specific population. That is why context – who is receiving care and how sick they are – matters as much as the headline probability percentages.
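A quick sketch of that arithmetic, with assumed numbers rather than trial figures: suppose a baseline 28‑day mortality of 40% among ventilated patients and a relative reduction of one third.

```python
# Turning a relative risk reduction into absolute terms and an NNT.
# Both inputs are assumptions for illustration, not trial data.
baseline_risk = 0.40          # assumed 28-day mortality without the steroid
relative_reduction = 1 / 3    # assumed relative drop in that risk

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk
nnt = 1 / absolute_reduction  # patients treated per additional life saved

print(f"Risk falls from {baseline_risk:.0%} to {treated_risk:.0%}")
print(f"Absolute drop: {absolute_reduction:.1%}  ->  number needed to treat ≈ {nnt:.0f}")
```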
Probabilities are not permanent truths. They are current, well‑reasoned beliefs that should move when new evidence arrives.
If later research had contradicted the early results on dexamethasone, the right response would have been to change practice. That is the strength of a method that expects uncertainty and has a plan to handle it.
Sticking with a rule after the facts shift is not rigor – it is stubbornness. Science earns trust by being steady about methods and flexible about conclusions.
The same playbook shows up outside medicine. In climate science, researchers compare future scenarios by asking how likely certain warming levels are under different emissions paths.
Those probabilities are built from physics, data streams, and model checks.
In engineering, teams estimate the chance that a bridge will face a load beyond its design and then set safety margins to keep failure rates acceptably low.
In everyday life, you weigh chances when you bring an umbrella, apply to a reach school, or back up your files. You are not trying to be perfect. You are trying to make the best move with the information at hand.
To sum it all up, probability is the language we use to make sense of uncertainty. Uncertainty itself is the fact that we don’t always know what will happen next or have all the information we need.
It doesn’t exist as a physical thing in the world; it’s a system we created to describe and reason through that uncertainty.
When you say there’s a 75% chance of rain, you’re not claiming that something inside the clouds measures that number.
You’re using the best evidence available to make a smart guess. In that way, probability turns unknowns into usable information so we can act with more confidence.
Probability links evidence to action while keeping uncertainty in view. It describes what the data support and what they leave open, which is exactly what decision‑makers need.
The uncertainty left behind – the messy, unpredictable reality – exists no matter what.
Probability helps us handle that uncertainty with more confidence and fewer surprises. That matters whether you are reading a forecast, setting a safety margin, or weighing a treatment.
Speak the language fluently, and you will see how evidence shapes smart decisions – clearly, honestly, and without the false comfort of certainty.
The full study was published in The New England Journal of Medicine.
