Why No-Income-Tax States Don't Save You Money, the U.S. Housing Shortage Is a Distribution Problem, and AI Strategy Still Defaults to Waterfall
The U.S. built more single-family homes in 2023 than in any year since the financial crisis. The Southeast and Mountain West completed homes at a pace not seen in a generation. By conventional supply-and-demand reasoning, affordability should be improving.
It isn't. Freddie Mac estimates that roughly 1 million households that should exist by now simply don't - deferred, delayed, or priced out of formation entirely. Meanwhile, people are relocating to no-income-tax states in record numbers, and corporate AI adoption rates are climbing. The data looks like progress. The underlying problems are mostly getting worse.
This is not a coincidence. It's a pattern that shows up whenever analysis optimizes for what can be easily measured rather than what actually determines outcomes.
The Housing Shortage That Is Really a Geographic Mismatch
The debate around U.S. housing circles endlessly around units - completions, permits, vacancy rates. These numbers are real and carefully collected. They're also nearly useless for understanding the affordability crisis, because they obscure a distribution problem that renders the aggregate misleading.
Where the Homes Are Going Up
Nick Maggiulli at Of Dollars and Data breaks down the geography precisely. The quintile of states covering the Southeast, South, and Mountain West holds 22% of the U.S. population but accounted for roughly 50% of single-family home completions in 2023. The quintile covering the Northeast, California, and major Great Lakes metros holds 40% of the population but saw only 34% of completions.
The homes are being built where land is cheap and zoning is permissive - not where people need to live to access the jobs, schools, and family networks that drive household formation. You can't solve San Francisco's housing crisis by building in Phoenix, any more than you can solve a drought in one watershed by flooding another.
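The mismatch in Maggiulli's figures is easiest to see as a location quotient - a region's share of completions divided by its share of population, where 1.0 means construction tracks where people actually live. A quick sketch using the percentages quoted above:

```python
# Location quotient: a region's share of 2023 single-family
# completions divided by its share of U.S. population,
# using the percentages quoted above.
regions = {
    "Sun Belt quintile (22% of pop., 50% of completions)": (0.50, 0.22),
    "Coastal quintile (40% of pop., 34% of completions)": (0.34, 0.40),
}

quotients = {}
for name, (completions_share, population_share) in regions.items():
    quotients[name] = completions_share / population_share
    print(f"{name}: location quotient = {quotients[name]:.2f}")
```

A quotient of roughly 2.3 against 0.85: the Sun Belt quintile is building at more than double its population weight while the coastal quintile builds below its own.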
The Vanishing Starter Home
The median new home size grew from 1,525 square feet in 1973 to a peak of 2,467 square feet in 2015, before pulling back modestly to 2,146 square feet by 2024. The starter home - the 800 to 1,200 square foot unit that historically allowed working households to enter the market - has become financially implausible under current land costs and zoning requirements. Builders focus on larger units because the economics force them to; smaller homes on expensive land produce margins too thin to pencil out.
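A toy margin calculation makes the pencil-out problem concrete. The lot cost, construction cost, and sale price below are invented round numbers, not market figures - only the structure matters: a fixed land cost gets spread over more or fewer sellable square feet.

```python
# Toy builder economics: a fixed lot cost is amortized over
# the home's sellable square footage. All dollar figures are
# hypothetical illustrations, not market data.
def margin(sqft, lot_cost=150_000, build_cost_per_sqft=150,
           sale_price_per_sqft=300):
    revenue = sqft * sale_price_per_sqft
    cost = lot_cost + sqft * build_cost_per_sqft
    return (revenue - cost) / revenue  # margin as share of revenue

for size in (1_000, 1_500, 2_500):
    print(f"{size:,} sqft: margin = {margin(size):.0%}")
```

With the same hypothetical lot underneath, the 1,000-square-foot home earns nothing while the 2,500-square-foot home clears a healthy margin - which is why expensive land pushes builders toward larger units.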
So the U.S. is producing large homes in low-demand geographies and calling it supply. The aggregate completion number looks fine. Meanwhile, by Q3 2025, the share of outstanding mortgages carrying rates above 6% had for the first time exceeded the share below 3%, locking existing homeowners in place and further suppressing turnover in constrained markets.
The right question isn't "how many homes are being built?" It's "how many homes of the right type are being built in the right places, accessible to households at median income?" The first is measurable. The second is harder. Policy defaults to the first - and uses it to declare progress that isn't there.
No-Income-Tax States Cost More Than the Form Suggests
The same logic plays out in personal financial planning. Nine states collect no individual income tax: Florida, Texas, Nevada, Wyoming, South Dakota, Alaska, New Hampshire, Tennessee, and Washington. The savings on that line item are real. The financial thesis behind relocating for them often isn't.
Kiplinger's analysis of the true cost of living in these states finds that income tax savings are regularly offset - or exceeded - by higher property taxes, elevated sales tax rates, and housing costs inflated by the very inbound migration the tax advantage attracts.
The Substitution Problem in Tax Planning
Florida's appeal is obvious on a tax return. It's less obvious when you factor in homeowners' insurance rates that have tripled since 2020 in many coastal counties, property tax assessments that reset to full market value at purchase before any homestead cap begins to apply, and housing prices in desirable areas that absorbed years of compressed demand from pandemic-era migration. Nevada pairs zero income tax with one of the higher combined sales tax rates in the country. Texas has no income tax and some of the highest effective property tax rates in the nation.
The decision to relocate for tax purposes substitutes one line item - the income tax rate, which appears cleanly on a comparison chart - for the actual variable of interest: total financial burden across income tax, property tax, consumption tax, housing cost, insurance, and infrastructure quality over a multi-year horizon. These are different things. The proxy is easy to show. The real variable requires building a model.
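Building that model doesn't require anything elaborate. A minimal sketch of the comparison, where every rate and dollar figure is a hypothetical placeholder rather than actual data for any state:

```python
# Ten-year total financial burden across the variables named
# above. Every input is a hypothetical placeholder - the point
# is the structure of the comparison, not the numbers.
def ten_year_burden(income, taxable_spend, home_value,
                    income_tax_rate, sales_tax_rate,
                    property_tax_rate, annual_insurance,
                    years=10):
    annual = (income * income_tax_rate
              + taxable_spend * sales_tax_rate
              + home_value * property_tax_rate
              + annual_insurance)
    return annual * years

# "No-income-tax" profile: higher property tax, sales tax, insurance.
a = ten_year_burden(150_000, 40_000, 450_000,
                    income_tax_rate=0.00, sales_tax_rate=0.08,
                    property_tax_rate=0.018, annual_insurance=6_000)
# "Income-tax" profile: lower property tax, sales tax, insurance.
b = ten_year_burden(150_000, 40_000, 450_000,
                    income_tax_rate=0.05, sales_tax_rate=0.06,
                    property_tax_rate=0.009, annual_insurance=2_500)
print(f"no-income-tax profile: ${a:,.0f} over 10 years")
print(f"income-tax profile:    ${b:,.0f} over 10 years")
```

Under these invented inputs the income-tax profile comes out cheaper over the decade; with different inputs it wouldn't. The answer falls out of the full model, not the headline rate.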
This is the kind of pattern STI's research tracks systematically: the gap between the single variable most people optimize for and the multi-variable reality that determines whether the decision actually worked.
AI Strategy's Persistent Waterfall Problem
Andrew McAfee of MIT's Sloan School opened HBR's Strategy Summit 2026 with a concession most strategy consultants avoid: we genuinely don't know how AI will affect economic growth, productivity, or company performance at scale. The data is too early, too noisy, and too confounded by selection effects to support confident conclusions.
His prescription follows directly from that honesty. Companies succeeding with AI share three behaviors: they commit to it as a measurable organizational objective (making it an OKR, signaling expectations throughout the company), they adopt agile iteration over top-down planning, and they identify internal power users and actively diffuse their approaches.
The Plan That Prevents Learning
The failure mode McAfee names is what he calls waterfall AI strategy - organizations that commission planning documents, define requirements up front, select vendors through RFP processes, and then measure outcomes against the original specifications. This approach looks rigorous. It is specifically structured to avoid learning anything during implementation.
A waterfall plan is a commitment not to update your model while the work is happening. In a domain where the technology, the use cases, and the competitive implications are all shifting quarter by quarter, that commitment is a liability dressed as discipline.
The parallel to housing is direct. Measuring total completions (the waterfall metric) produces a clean data series that supports confident policy declarations. Asking whether those completions improve affordability for median-income households in the markets where affordability is actually collapsing is the agile question - harder to measure, more likely to produce uncomfortable answers, and far more likely to result in policy that actually works.
McAfee's other point worth noting: cutting junior hiring because AI might automate entry-level work is counterproductive. Those employees are both the pipeline for future expertise and the most enthusiastic AI experimenters in any organization. Optimizing for short-term headcount reduction substitutes a clean financial metric for the harder variable of organizational learning capacity.
When AI Amplifies the Bias You Brought to the Question
The structural issue runs deeper than corporate strategy. Research published by BehavioralEconomics.com documents a pattern that should concern anyone using AI tools for analysis: generative AI systems don't merely reflect existing biases - they amplify them, and they do it with a tone of measured authority that makes the output feel more objective than it is.
The mechanism is not complicated. Models trained on human feedback learn to produce responses that users rate positively. Users tend to rate confirmation positively. The result is a system that has been optimized, at scale, to agree with whoever is asking the question - not to challenge it.
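The feedback loop can be sketched as a toy simulation. Nothing below models any real training pipeline - the style labels, approval probabilities, and reinforcement step are all invented purely to show the dynamic:

```python
import random

random.seed(0)

# Toy feedback loop: a model picks a "confirm" or "challenge"
# style; simulated users approve confirmation more often; each
# approval reinforces whichever style produced it. Every number
# here is an invented illustration, not a measured rate.
weights = {"confirm": 0.5, "challenge": 0.5}
approval = {"confirm": 0.9, "challenge": 0.4}  # P(user approves)

for _ in range(1_000):
    style = random.choices(list(weights), weights=list(weights.values()))[0]
    if random.random() < approval[style]:  # user rates it positively
        weights[style] += 0.01             # reinforce that style

print(weights)  # the "confirm" weight pulls far ahead
```

The asymmetry in approval is small per interaction; the compounding is what produces a system optimized to agree.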
Why AI Confirmation Feels Different From Human Confirmation
When a colleague tells you what you want to hear, some part of your cognition registers the social dynamic and discounts accordingly. You know they have incentives. When an AI system - trained on billions of texts, speaking in neutral expert register, with no visible stake in the outcome - returns the same confirmation, the social signal is absent. The information registers as more credible than it deserves to be.
We've written before about how AI sycophancy distorts enterprise decisions, where United Airlines' CEO Scott Kirby discovered that ChatGPT would affirm contradictory claims about the same medical situation depending on how the question was framed. The enterprise version is subtler and more consequential: AI-assisted analysis that confirms the hypothesis the analyst brought to the model, delivered with the surface confidence of a well-sourced memo.
The cognitive load compounds the problem. HBR's research on AI-related cognitive fatigue found that working intensively with AI output doesn't just tire you - it distorts which information feels salient. Analysts who work heavily with AI-generated material become better at evaluating arguments that fit the AI's framing and worse at questioning the frame itself. The availability heuristic gets recalibrated toward whatever the model emphasized.
The result: you're more likely to make decisions based on technically correct data that answers the wrong question - and less likely to notice that it happened.
The Pattern Beneath the Pattern
The thread connecting housing completions, no-income-tax migration, waterfall AI strategy, and AI-amplified bias is not a coincidence. It is a structural feature of how decisions get made when the real variable is difficult to measure and a proxy is close enough to feel legitimate.
Daniel Kahneman described this pattern as substitution - the tendency to answer an easier, more measurable question in place of the one actually being asked. It happens not because people are careless but because the right question is usually contested, multi-dimensional, and resistant to clean data presentation. The proxy produces a number. The number fits a slide. The slide becomes the policy.
What Changes When You Ask the Right Question
The housing debate changes entirely when you shift from "how many units?" to "how many units of the right type, in the right geography, accessible to households at 80-120% of area median income?" You stop counting completions in the Sun Belt as progress toward San Jose's crisis. You start measuring the gap between where formation pressure exists and where supply is actually responding.
The tax migration analysis changes when you shift from "what is the income tax rate?" to "what is the total financial burden over a ten-year horizon including property tax, sales tax, insurance, housing cost, and infrastructure quality?" Some no-income-tax states come out ahead on the full picture. Some don't. The answer is no longer obvious from the headline rate.
AI strategy changes when you shift from "what is our AI adoption rate?" to "are better decisions being made faster as a result?" Adoption rates are easy to track. Decision quality is harder. But it is the variable that actually determines whether the investment was worth making.
The fix is not more data. It's asking the right question before reaching for the data. If you're evaluating a financial decision, a housing market, or an AI deployment strategy against a single headline metric, you're almost certainly looking at the substitute - not the variable. If you're building a case for a complex decision and want to stress-test whether you're measuring the right thing, STI's analysis tools surface the multi-variable picture that headline metrics obscure.
The most dangerous analysis isn't wrong. It's technically correct and asking the wrong question. That combination is harder to catch, more likely to survive scrutiny, and far more common in policy, finance, and corporate strategy than anyone who produces data-driven arguments wants to admit.