Numbers Can Lie: How Statistics Mislead

In 2015, India changed how it calculated GDP.

The old method, which had been used for decades, measured economic output based on the cost of production — what it cost to make things. The new method measured it based on market prices — what things sold for. The base year was updated. The data sources were changed. And overnight, India's GDP growth rate jumped.

Under the old method, India's economy had grown at about 4.7 percent in 2013-14. Under the new method, applied to the same year, the same economy, the same factories, farms, and offices — the growth rate was revised to 6.9 percent.

Nothing in the real economy had changed. Not a single additional job had been created. Not one more ton of steel had been produced. Not one additional child had been fed. The same economy, measured two different ways, told two different stories.

The government celebrated. The opposition complained. International institutions expressed polite skepticism. Several prominent economists — including Arvind Subramanian, who had served as the government's own Chief Economic Adviser — later published research suggesting that the new methodology was significantly overstating growth, perhaps by as much as 2.5 percentage points per year.

Who was right? That question is still debated. But the more important question is this: if the most fundamental number in economics — the growth rate of an entire nation's economy — can change by over two percentage points simply by changing how you count, what does that tell us about the reliability of economic statistics?

It tells us that numbers are not neutral. They are choices. Choices about what to count, how to count it, what to include, what to exclude, and how to present the result. And those choices — invisible to most people who read the final number — can make the difference between a narrative of success and a narrative of failure.


Look Around You

The next time you read a headline that says "Economy grows 7%!" or "Inflation falls to 4%!" or "Unemployment drops to record low!" — stop for a moment. Ask yourself: Who measured this? How did they measure it? What did they count? What did they leave out?

The number itself is not the answer. The number is the beginning of a question.


The Average That Hides Everything

Let us start with the most familiar and most dangerous statistical tool: the average.

India's per capita income in 2024-25 was approximately Rs 2 lakh per year — about $2,400 at market exchange rates. This number is calculated by dividing total national income by total population.

What does this number tell you? Almost nothing useful.

It does not tell you that the richest one percent of Indians own more wealth than the bottom seventy percent combined. It does not tell you that a software engineer in Bangalore might earn Rs 30 lakh while a farm laborer in Bihar earns Rs 50,000. It does not tell you that hundreds of millions of people live on less than Rs 150 per day.

The average takes the software engineer's Rs 30 lakh and the farm laborer's Rs 50,000, puts them in the same pot, stirs, and produces a number — Rs 2 lakh — that describes neither of them.

There is an old joke that captures this perfectly: if a billionaire walks into a bar where nine unemployed people are drinking, the average wealth in the bar is over Rs 100 crore. The unemployed people are, on average, fabulously rich.

This is not a minor statistical quibble. It is a fundamental problem with how we talk about economies. When a politician says "per capita income has doubled," they may be describing a reality in which the rich got much richer, the poor stayed the same, and the average — pulled up by the rich — increased. The number is technically accurate and practically meaningless for the majority of the population.

THE SAME COUNTRY, THREE DIFFERENT STORIES

Imagine a country with 10 people. Their annual incomes:

   Person:     1    2    3    4    5    6    7    8    9    10
   Income: Rs 10k  15k  20k  25k  30k  40k  50k  80k  200k 1,000k

   MEAN (average):   Rs 147,000
   "Per capita income is Rs 1.47 lakh. A middle-income country!"

   MEDIAN (middle value):  Rs 35,000
   "Half the population earns less than Rs 35,000. A poor country!"

   MODE (most common):  No clear mode — incomes are dispersed
   "Income inequality is extreme. No typical income exists."

   ┌──────────────────────────────────────────────────┐
   │                                                  │
   │  The MEAN says: "Doing reasonably well"          │
   │  The MEDIAN says: "Most people are struggling"   │
   │  The GINI coefficient says: "Deeply unequal"     │
   │                                                  │
   │  All three are calculated from the SAME DATA.    │
   │  The number you choose to report determines      │
   │  the story you tell.                             │
   │                                                  │
   └──────────────────────────────────────────────────┘
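For readers who like to check the arithmetic: the short Python sketch below (purely illustrative, not any official methodology) computes the mean, the median, and the Gini coefficient for the same ten incomes.

```python
# The same ten incomes from the table above, in rupees (sorted ascending).
incomes = [10_000, 15_000, 20_000, 25_000, 30_000,
           40_000, 50_000, 80_000, 200_000, 1_000_000]

n = len(incomes)
mean = sum(incomes) / n                                # Rs 147,000
median = (incomes[n // 2 - 1] + incomes[n // 2]) / 2   # Rs 35,000

# Gini coefficient: mean absolute difference between every pair of
# incomes, divided by twice the mean. 0 = perfect equality, 1 = one
# person has everything.
gini = sum(abs(x - y) for x in incomes for y in incomes) / (2 * n * n * mean)

print(f"mean={mean:,.0f}  median={median:,.0f}  gini={gini:.2f}")
```

A Gini of roughly 0.72 for this imaginary country is higher than that of almost any real economy: the same ten numbers that produce a respectable-looking mean also encode extreme inequality.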

"There are three kinds of lies: lies, damned lies, and statistics." — Attributed to Benjamin Disraeli, popularized by Mark Twain


Correlation Is Not Causation

This is perhaps the most important statistical principle, and the one most frequently violated in public discourse.

Consider this observation: in countries where people eat more ice cream, there are more drowning deaths. A graph would show a clear, positive correlation — as ice cream consumption rises, so do drownings.

Does ice cream cause drowning? Of course not. Both are caused by a third factor: hot weather. When the weather is hot, people eat more ice cream and go swimming more often. The correlation is real. The causal relationship is nonsense.

This is obvious when the example involves ice cream. It is far less obvious when the example involves economic data — and far more dangerous.

"Countries that opened their markets grew faster." This correlation was used for decades to justify free-trade policies. But does trade openness cause growth, or do growing countries choose to open their markets? Or is there a third factor — good institutions, perhaps — that causes both growth and openness? The correlation cannot tell you.

"States with higher government spending have higher growth." Does spending cause growth (the Keynesian interpretation)? Or does growth cause higher tax revenue, which leads to more spending (the reverse causation)? Again, the correlation alone cannot answer the question.

"People who went to college earn more." Does college cause higher earnings? Or do the kinds of people who go to college — those from wealthier families, with better networks, with certain personality traits — earn more regardless? This question has consumed education economists for decades, and the answer is nuanced: college helps, but not as much as the raw correlation suggests, because much of the difference is due to selection effects.

What Actually Happened

In the 1990s, the World Bank published numerous studies showing that countries which followed its policy prescriptions — open markets, reduced government spending, privatization — grew faster. These studies were enormously influential and were used to justify structural adjustment programs across the developing world.

But critics pointed out a basic statistical problem: the Bank was comparing countries that followed its advice (and received its loans and aid) with countries that did not. The countries that followed the Bank's advice were also receiving billions in aid, debt relief, and preferential trade access. Was it the policy changes that caused growth, or the massive financial support that accompanied them?

Moreover, many of the "success stories" later reversed. Countries that grew in the 1990s stagnated in the 2000s. The correlation between World Bank-recommended policies and long-term growth turned out to be much weaker than initially claimed.

The lesson: correlation in economic data is common. Causation is extraordinarily difficult to establish. Anyone who tells you that a simple correlation proves a policy works is either ignorant of statistics or hoping you are.


Survivorship Bias: The Dead Don't Talk

Here is a story from World War II that illustrates one of the most insidious statistical traps.

During the war, Allied bombers suffered heavy losses. The military asked a statistician named Abraham Wald to analyze the damage patterns on returning planes and recommend where to add armor. The officers showed him data: returning planes had the most bullet holes in the fuselage and wings. The least damage was on the engines and cockpit.

The obvious conclusion: armor the fuselage and wings, where the damage is concentrated.

Wald said the opposite. Armor the engines and cockpit.

His reasoning was elegant. The planes he was analyzing were the ones that came back. The planes that were hit in the engines and cockpit did not come back — they crashed. The bullet holes on returning planes showed where a plane could be hit and survive. The areas with no bullet holes showed where a hit was fatal.

The officers were looking at survivors and drawing conclusions about the whole population. This is survivorship bias — the error of studying only successes and ignoring failures.

Survivorship bias is rampant in economic analysis.

"Look at Singapore, South Korea, and Taiwan — small countries that became rich through export-oriented growth!" Yes, look at them. But also look at the dozens of small countries that tried the same strategy and failed — countries no one writes books about. We study the successes and build theories around them. The failures are invisible.

"Look at these successful entrepreneurs — they dropped out of college and built empires!" Bill Gates, Steve Jobs, Mark Zuckerberg. For every dropout billionaire, there are millions of dropouts who are not billionaires. We see the survivors. We do not see the casualties.

"This mutual fund has beaten the market for five consecutive years!" There are thousands of mutual funds. By pure chance, some will beat the market for five years. You are seeing the survivors of a random process, not evidence of skill. The funds that underperformed were quietly closed or merged — they disappeared from the data.

SURVIVORSHIP BIAS IN ECONOMIC ANALYSIS

   WHAT WE SEE:

   ┌──────────────────────────────────────────────┐
   │                                              │
   │   SUCCESSFUL COUNTRIES                       │
   │   South Korea, Singapore, Taiwan, China      │
   │                                              │
   │   "They all used export-oriented growth!"    │
   │   "They all had strong states!"              │
   │   "They all invested in education!"          │
   │                                              │
   │   CONCLUSION: Do what they did.              │
   │                                              │
   └──────────────────────────────────────────────┘

   WHAT WE DON'T SEE:

   ┌──────────────────────────────────────────────┐
   │                                              │
   │   FAILED COUNTRIES                           │
   │   (that tried similar strategies)            │
   │                                              │
   │   Philippines, Sri Lanka (pre-2000s),        │
   │   many African nations, Myanmar,             │
   │   Pakistan, Bangladesh (earlier decades)     │
   │                                              │
   │   Some ALSO used export-oriented growth.     │
   │   Some ALSO had strong states.               │
   │   Some ALSO invested in education.           │
   │                                              │
   │   But nobody writes success books            │
   │   about them.                                │
   │                                              │
   └──────────────────────────────────────────────┘

   THE REAL QUESTION:
   What was DIFFERENT about the successes vs. the
   failures? THAT is where the useful lesson lies —
   not in the survivors alone.

How Governments Play With Numbers

Governments are not neutral reporters of statistics. They are players in the game. And because economic statistics affect elections, investment decisions, and international credibility, governments have powerful incentives to make the numbers look good.

This does not always mean lying outright. More often, it means making legitimate-seeming methodological choices that happen to produce more favorable numbers.

The Unemployment Shell Game

How do you count the unemployed? This sounds simple. It is not.

In India, the official unemployment rate, historically reported by the National Sample Survey Office (and, since 2017-18, by the Periodic Labour Force Survey), has been remarkably low: often around 5-6 percent. In a country where hundreds of millions of people visibly struggle to find work, this number has always seemed suspiciously modest.

The trick is in the definition. Under the "usual status" definition, a person is considered employed if they worked for even a few hours in the reference period. A farmer who works during the planting and harvest seasons but has no work for six months of the year is counted as "employed." A woman who does occasional embroidery work for fifty rupees a day between household tasks is "employed." A young man who helps his father at the family shop without any payment is "employed."

None of these people would consider themselves employed in any meaningful sense. But in the statistics, they are.

The Centre for Monitoring Indian Economy (CMIE), a private data firm, uses a more stringent definition — actively seeking work and unable to find it — and consistently reports higher unemployment rates. In late 2024 and 2025, while the government reported robust employment growth, CMIE's data showed unemployment rates of 7-9 percent, and much higher in certain demographics — urban youth unemployment regularly exceeds 20 percent.

Who is right? It depends on what you mean by "employed." And that definitional choice — made in an office, invisible to the newspaper reader — determines the story.

The Inflation Basket Problem

Inflation is measured by tracking the prices of a "basket" of goods and services that a typical household buys. But who decides what goes in the basket? And how much weight does each item get?

If you reduce the weight of food in the inflation basket — perhaps because the "average" household now spends a smaller share of income on food — you will report lower inflation. But the poorest households spend over sixty percent of their income on food. For them, the inflation they experience is much higher than what the official basket reports.

If you exclude volatile items like fuel and food from the "core" inflation measure — as many central banks do — you get a more stable number. But fuel and food are precisely the items that poor people spend most of their money on. "Core" inflation, which excludes the things the poor care most about, is a measure of inflation for people who are already comfortable.
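The effect of basket weights is simple arithmetic. The sketch below uses invented price changes and invented weights, purely for illustration, to show how the same prices yield different inflation rates for different baskets.

```python
# Hypothetical one-year price changes (assumed, for illustration only).
price_change = {"food": 0.12, "fuel": 0.10, "other": 0.03}

# Two baskets: illustrative "official" weights vs. a poor household's
# actual spending pattern, which is dominated by food.
official_weights = {"food": 0.39, "fuel": 0.07, "other": 0.54}
poor_weights     = {"food": 0.60, "fuel": 0.10, "other": 0.30}

def basket_inflation(weights):
    # Weighted average of price changes: the standard CPI idea.
    return sum(weights[item] * price_change[item] for item in weights)

official = basket_inflation(official_weights)  # what the headline reports
poor = basket_inflation(poor_weights)          # what a poor household lives through

print(f"official inflation: {official:.1%}, poor-household inflation: {poor:.1%}")
```

With these assumed numbers, the headline says 7.0 percent while the poor household experiences 9.1 percent. Same prices, different weights, different realities.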

The GDP Revision Game

Governments routinely revise GDP figures — sometimes years after the original estimate. The initial estimate, which makes the headlines and shapes the narrative, is based on incomplete data. The revised figure, which comes out quietly months or years later, often tells a different story.

India's quarterly GDP estimates have been revised significantly on several occasions. First estimates that showed strong growth were sometimes revised downward. But by the time the revision appeared, the original headline had already done its political work. Nobody rewrites last year's headlines.

"Not everything that counts can be counted, and not everything that can be counted counts." — William Bruce Cameron


Argentina: When a Government Simply Lied

Not all statistical manipulation is subtle. Sometimes, governments simply falsify their data.

Argentina between 2007 and 2015 provides the most dramatic modern example. Under the governments of Néstor Kirchner and Cristina Fernández de Kirchner, Argentina's official statistics agency — INDEC — reported inflation rates of about 10-11 percent per year. Independent economists, private surveys, and the lived experience of every Argentine citizen suggested the true rate was 25-30 percent.

The government had effectively taken over the statistics agency and pressured it to report lower numbers. Economists who challenged the official figures were threatened with fines under a consumer protection law. The official numbers were used to calculate cost-of-living adjustments for pensions and wages — so lower official inflation meant lower payments to pensioners and workers. The manipulation was not just a political embarrassment; it was a tool for transferring wealth from ordinary people to the government.

The International Monetary Fund eventually issued an unprecedented censure of Argentina for providing inaccurate data — the first time the IMF had formally reprimanded a member country for this reason.

What Actually Happened

When Argentina's new government took office in 2015, it stopped publishing inflation data for several months while the statistics agency was reformed. When the new, credible data was finally released, it confirmed what everyone had known: inflation had been roughly two to three times higher than the government had claimed for nearly a decade.

The consequences were not just statistical. Pension payments that had been pegged to the understated official inflation had lost enormous real value. Workers whose contracts included inflation adjustments based on official data had been systematically underpaid. The lie in the data became a mechanism of redistribution — from the vulnerable to the state.


Base Rate Neglect: The Danger of Percentages

Here is a headline: "Factory output grows 25% in March!"

Impressive? Before you celebrate, ask: compared to what?

If factory output in March of the previous year was at an all-time low — perhaps because of a pandemic lockdown, or a natural disaster — then 25 percent growth simply means a partial recovery from a catastrophic baseline. You have not grown 25 percent from normal. You have grown 25 percent from rock bottom. You might still be below where you started.

This is called the base effect, and it is one of the most common ways economic data misleads.

India's GDP growth of 20.1 percent in Q1 of 2021-22 made global headlines. The fastest growth in the world! But Q1 of the previous year — April to June 2020 — was the quarter in which the national lockdown had caused a 24.4 percent contraction. Growing 20 percent from a base that had shrunk 24 percent means you have not fully recovered. You are still below where you started.

THE BASE EFFECT ILLUSION

   Imagine normal GDP level = 100

   Year 1 (crisis):    100 drops by 24% ───> 76
   Year 2 ("boom"):    76  grows by 20%  ───> 91.2

   THE HEADLINE:   "20% GROWTH! FASTEST IN THE WORLD!"
   THE REALITY:    Still 8.8% below where you started.

   ┌─────────────────────────────────────────────────┐
   │                                                 │
   │   100 ┤ ████████████                            │
   │       │                                         │
   │    91 ┤                            ████████████ │
   │       │                                         │
   │    76 ┤              ████████████               │
   │       │                                         │
   │       └───────────────────────────────────────  │
   │         Normal       Crisis Year   "Recovery"   │
   │                                                 │
   │   The "boom" is actually incomplete recovery.   │
   │                                                 │
   └─────────────────────────────────────────────────┘

Another version of this trap: percentage changes on small numbers sound dramatic.

"Foreign investment in State X increased by 300%!" This sounds like a revolution. But if the previous year's investment was Rs 10 crore, a 300 percent increase brings it to Rs 40 crore — still a rounding error in the national context. The percentage is technically accurate. The impression it creates is wildly misleading.


Cherry-Picking: The Art of Choosing Your Data

A determined analyst can prove almost anything by choosing the right starting and ending points for their data.

Want to show that the economy is booming? Start your graph from a recession trough. The line will soar upward.

Want to show that the economy is failing? Start your graph from a boom peak. The line will plunge downward.

Want to show that poverty is declining? Choose the poverty line that produces the best numbers. Use a lower poverty line, and fewer people are "poor." Use a higher one, and more people are.

Want to show that a program is working? Compare the program area to a control area that was already doing worse. Any improvement — even one unrelated to the program — will show up as success.

This is cherry-picking — selecting data that supports your predetermined conclusion and ignoring data that contradicts it. It is not technically lying. Every number cited may be accurate. But the selection of which numbers to cite is itself a form of argument.


Think About It

A government announces: "We have built 10 crore toilets under the Swachh Bharat Mission."

What questions would you ask before accepting this as evidence of success?

Here are some: How many of those toilets are being used? How many have running water? How many were already under construction before the program started? How does the number of toilets compare to the number of households that need them? What does independent survey data show about open defecation rates?

The number 10 crore is not wrong. But by itself, it does not tell you whether the program succeeded. It tells you that someone counted something. Whether that something matters depends on questions the number alone cannot answer.


How to Read Economic Data Critically

Given all these traps, how should an ordinary citizen read economic statistics? You do not need a degree in statistics. You need a few good habits.

1. Ask: Average or Median? When you see an average — per capita income, average wage, average consumption — ask whether the median would tell a different story. If the average is significantly higher than the median, it means a small number of very high values are pulling the average up. The median, which tells you what the person in the middle earns, is usually more informative.

2. Ask: What Is the Base? When you see a percentage change — growth rate, decline, increase — ask what the baseline is. A large percentage change from a tiny base is insignificant. A small percentage change on a huge base is enormous. "Exports grew 50%" and "exports grew by Rs 500 crore" may describe the same event very differently.

3. Ask: Who Is Counting? The source of data matters. Government statistics agencies have both expertise and political incentives. Independent data sources — academic surveys, private data firms, international organizations — provide useful cross-checks. When government data and independent data diverge, the truth is usually somewhere in between, and the divergence itself tells you something important.

4. Ask: What Is Being Left Out? Every statistic is a selection. GDP excludes unpaid domestic work, environmental degradation, and the informal economy. Unemployment rates exclude discouraged workers who have stopped looking. Inflation baskets may not reflect what you actually buy. Ask what the number does not measure, because that omission may be as informative as the number itself.

5. Ask: Over What Period? Economic data can look very different depending on the time period you choose. Annual averages smooth out seasonal variations. Quarter-over-quarter comparisons can be misleading if there are seasonal patterns. Long-term trends are more informative than short-term fluctuations, but they can also hide recent deterioration.

6. Ask: Compared to What? A number in isolation means almost nothing. "India's GDP is $3.5 trillion" — compared to what? To China's $18 trillion? To India's own economy of 1991, which was under $300 billion? To what it could have been with better policies? The comparison determines the meaning.

A CITIZEN'S CHECKLIST FOR READING ECONOMIC DATA

   ┌────────────────────────────────────────────────────┐
   │                                                    │
   │  When you see a number, ask:                       │
   │                                                    │
   │  [ ] Is this an AVERAGE or MEDIAN?                 │
   │  [ ] What is the BASE for this percentage?         │
   │  [ ] WHO measured this, and what are their         │
   │      incentives?                                   │
   │  [ ] What is EXCLUDED from this measure?           │
   │  [ ] What TIME PERIOD is being used?               │
   │  [ ] What is the COMPARISON — compared to what?    │
   │  [ ] Does CORRELATION imply CAUSATION here?        │
   │  [ ] Am I seeing SURVIVORS only, or the full       │
   │      picture?                                      │
   │  [ ] Could this number be a BASE EFFECT?           │
   │  [ ] Who BENEFITS from this number being believed? │
   │                                                    │
   └────────────────────────────────────────────────────┘

The Poverty Line: A Number That Determines Fate

Perhaps no economic statistic carries higher stakes than the poverty line — the income level below which a person is officially "poor."

In India, the poverty line has been fiercely debated for decades. The Tendulkar committee in 2009 set it at approximately Rs 32 per day in urban areas and Rs 26 per day in rural areas. These numbers were widely criticized as absurdly low — too low to afford adequate nutrition, let alone other necessities.

But here is the political significance: the poverty line determines who is eligible for government benefits — subsidized food, housing schemes, education grants. Set the line low enough, and millions of genuinely poor people are classified as "above poverty" and excluded from help. The government also gets to announce that poverty has been dramatically reduced.

Set the line higher, and more people qualify for benefits — but the government must spend more, and the poverty statistics look worse.

The same people, living the same lives, eating the same meals, can be "poor" or "not poor" depending on where a committee draws a line. The number does not describe reality. It creates it.

"The government is very keen on amassing statistics. They collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But you must never forget that every one of those figures comes in the first instance from the village watchman, who just puts down what he damn pleases." — Sir Josiah Stamp, 1929


The Darrell Huff Legacy

In 1954, an American journalist named Darrell Huff published a slim book called How to Lie with Statistics. It became one of the best-selling statistics books of all time, and its lessons remain as relevant today as they were seventy years ago.

Huff identified several techniques that are used — consciously or unconsciously — to mislead with data.

The truncated y-axis: A graph that does not start at zero can make small changes look enormous. If the y-axis starts at 95 instead of 0, a change from 96 to 98 (a move of about 2 percent) looks like a tripling, because the visible bar grows from one unit to three. This is extraordinarily common in news graphics and corporate presentations.

The misleading pictogram: Doubling the height of a graphic icon while keeping its proportions quadruples its area, making a two-times increase look like a four-times increase. Politicians love this trick — showing a small bag of money next to a big bag of money, where the "big" bag looks four times larger (or eight, if the drawing implies volume) even though the numbers only differ by a factor of two.
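Both distortions can be quantified in a few lines (illustrative numbers, following Huff's examples).

```python
# Truncated y-axis: bars drawn from a baseline of 95 instead of 0.
baseline = 95
old, new = 96, 98
true_ratio = new / old                                # ~1.02: a 2% change
apparent_ratio = (new - baseline) / (old - baseline)  # 3.0: looks like tripling

# Misleading pictogram: scale an icon's height AND width by the data ratio.
data_ratio = 2                    # the underlying number doubled
area_ratio = data_ratio ** 2      # 4: the icon's area quadruples

print(f"true change: {true_ratio:.2f}x, drawn change: {apparent_ratio:.1f}x")
print(f"value ratio: {data_ratio}x, visual (area) ratio: {area_ratio}x")
```

Nothing in either calculation is false. The lie lives entirely in the gap between the number and the picture.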

The unspecified average: Saying "the average family" without specifying whether you mean the mean, median, or mode. As we have seen, these can tell very different stories.

The non-comparable comparison: Comparing two things that are measured differently. "Crime is up compared to last year" — but did the definition of "crime" change? Did reporting rates change? Did police recording practices change? The numbers may not be measuring the same thing in both periods.


Think About It

Find a chart in today's newspaper or on a news website. Look at the y-axis. Does it start at zero? If not, redraw the chart in your mind with the y-axis starting at zero. Does the story look different?

Now look at the time period. What happens if you extend or shorten the time period? Does the trend hold, or does it change?

These are not advanced statistical skills. They are basic literacy for citizenship.


The GDP Debate: India's Most Important Statistical Argument

The debate over India's GDP methodology, with which we opened this chapter, deserves deeper examination, because it illustrates every statistical trap we have discussed.

In 2019, Arvind Subramanian — who had been India's Chief Economic Adviser from 2014 to 2018 — published a working paper at Harvard arguing that India's GDP growth had been overstated by approximately 2.5 percentage points per year between 2011-12 and 2016-17. If true, this meant that India's economy had been growing at 4-4.5 percent rather than the officially reported 6.5-7 percent.

Subramanian's argument rested on a critical observation: India's GDP growth had decoupled from virtually every other economic indicator. When GDP was reportedly growing at 7 percent, exports were stagnant. Credit growth was declining. Investment was falling. Industrial production was sluggish. Tax revenues were growing slowly. These indicators — which normally move in tandem with GDP — were telling a different story from the headline number.

The government disputed these findings. Other economists criticized Subramanian's methodology. The debate was never conclusively resolved.

But the episode raised a fundamental question that every citizen should consider: if the most important economic number in a country of 1.4 billion people is genuinely uncertain — if informed, credible economists can disagree about whether growth is 4.5 percent or 7 percent — then what exactly are we debating when we debate economic policy? We are debating the interpretation of numbers whose accuracy is itself in dispute.

This does not mean that statistics are useless. It means that statistics are not facts in the way that the boiling point of water is a fact. They are estimates — constructed, contested, and always open to revision. Treating them as settled truths is not numeracy. It is numerological superstition.


What Numbers Cannot Tell You

Even perfectly accurate, honestly reported statistics have fundamental limitations.

GDP does not measure well-being. A country ravaged by a hurricane will see GDP rise as reconstruction spending increases. The hurricane destroyed homes, lives, and communities — but the rebuilding counts as economic activity. GDP counts the activity; it does not ask whether the activity makes anyone better off.

Unemployment does not measure suffering. A person counted as "employed" because they work three hours a day selling peanuts by the roadside is statistically identical to a person with a full-time job, health insurance, and a pension. The number treats them the same. Their lives are nothing alike.

Inflation does not measure whose prices rose. An inflation rate of 5 percent may mean that luxury goods fell in price while food and rent rose by 15 percent. The rich experienced deflation. The poor experienced crisis. The number tells you neither story.

Trade statistics do not measure fairness. A country that exports raw materials at low prices and imports manufactured goods at high prices may show a "balanced" trade account. But the terms of that trade — the value added, the jobs created, the technology transferred — may be deeply unequal. The number balances; the reality does not.

"If you torture the data enough, it will confess to anything." — Ronald Coase


The Bigger Picture

We started this chapter with India's GDP revision — a single methodological change that transformed the story of an entire economy. We have traveled through the tricks of averages and percentages, the trap of survivorship bias, the manipulation of unemployment definitions, the falsification of inflation data in Argentina, and the fundamental limitations of even honest statistics.

What have we learned?

First, that numbers are not neutral. Every economic statistic is the product of choices — about definitions, methodologies, baselines, and presentations. These choices are made by people with interests, and those interests inevitably shape the numbers. This does not mean all statistics are lies. It means all statistics are constructions, and constructions can be built to serve different purposes.

Second, that the most dangerous statistic is the one presented without context. A number without a comparison, a baseline, a definition, and a source is not information — it is an assertion. And assertions should be questioned.

Third, that statistical literacy is not a luxury. It is a survival skill. In a world where governments, corporations, and media organizations use numbers to persuade, the ability to ask basic questions about those numbers — Who counted? How? What was left out? Who benefits from this number being believed? — is as essential as the ability to read.

Fourth, that we should be humble about what we know. The economy is measured through a glass, darkly. Our best numbers are estimates, our best estimates are approximations, and our best approximations leave out enormous swathes of human experience — the unpaid work of women, the informal economy, the ecological costs of production, the psychological toll of inequality.

And fifth, that the proper response to statistical uncertainty is not cynicism but vigilance. The answer to "numbers can lie" is not "ignore all numbers." It is "learn to ask better questions about numbers." A citizen who can read an economic statistic critically — who can spot a misleading average, identify a base effect, and ask what the number leaves out — is a citizen who is harder to fool. And a democracy of citizens who are harder to fool is a democracy that works better.

The next time someone tells you a number — any number — about the economy, remember: the number is not the answer. The number is the beginning of the question. And the question is always: "What is this number not telling me?"

"Facts are stubborn things, but statistics are pliable." — Mark Twain