Why Software Projects Fail

The Hidden Mechanics of a Misunderstood Industry

Thomas Godart · White Paper, Version 0.1 · February 2026

Abstract

The software industry suffers from a set of structural, economic, and psychological dysfunctions that are poorly understood by most stakeholders. These dysfunctions compound multiplicatively rather than additively, producing outcomes where identical projects can differ by a factor of 200 in cost and duration depending on the conditions under which they are executed. This paper identifies and articulates the core problems that underlie the chronic failure of software projects worldwide, with particular attention to the mechanisms that are not addressed by conventional software engineering literature.

Part I

Known Factors Affecting Software Quality and Development Time

An exhaustive reference of well-established causes of quality decrease and time increase in software creation. These are considered common knowledge in the field.

Methodology

Every leading AI system is trained on the near-totality of publicly available human knowledge. If a factor affecting software quality or development time has ever been described in a book, a paper, a conference talk, or an online discussion, it is already encoded in these models.

We therefore asked an AI to produce an exhaustive enumeration of every known cause of quality decrease and time increase in software creation. The resulting list — approximately 183 factors across 12 categories — serves as a baseline: everything the AI can produce from existing knowledge is, by definition, already covered by prior publications. It represents the conventional understanding of the field.

The white paper's contribution begins where this list ends. Part II of the paper focuses exclusively on structural, economic, and psychological mechanisms that are not present in this enumeration — observations that are novel, original, and derived from direct field experience rather than from the existing literature.


Category                                 Quality Factors   Time Factors
Requirements & Specification                    7                6
Architecture & Design                          10                6
Code & Implementation                          17                7
Testing & QA                                   12                7
Technical Debt                                  8                6
People & Team                                   8                8
Process & Methodology                           7                8
Tools & Infrastructure                          7                8
Project Management & Organization               6                9
Documentation & Knowledge Management            6                5
External & Environmental                        5                7
Deployment & Operations                         7                6
Total                                        ~100              ~83

These ~183 factors represent the established body of knowledge in software engineering. The following part focuses on what lies beyond this conventional understanding.

Part II

The Hidden Mechanics

Structural, economic, and psychological dysfunctions that are poorly understood by most stakeholders.

1. The Absence of Average Velocity

1.1. Software work has no continuous speed

In virtually every other profession, there exists a measurable, continuous relationship between time spent and output produced. A journalist writes a given number of words per hour. A tile layer covers a given number of square meters per day. These rates can be measured, averaged, and projected. It is this measurability that justifies paying professionals for their time: time converts into output at a roughly predictable rate.

In software engineering, this relationship does not exist. Software work operates at exactly two speeds, neither of which is an average.

1.2. Speed one: full stop

The first speed is zero. When a software engineer encounters a problem they have never solved before — a new requirement that does not fit the existing architecture, a technical question with no known answer in the current context — the engineer must stop producing and begin researching. This research involves identifying candidate solutions, comparing their trade-offs, building a decision file, and ultimately choosing a direction.

This research work produces no deliverable output. No code is written. No feature is shipped. From the perspective of the organization, this activity is perceived as ancillary to the "real work" of coding. Engineers themselves, team leads, and clients all perceive it negatively — the more research is needed, the less progress is visible.

The duration of this research phase is fundamentally unknowable in advance. It depends on the novelty of the problem, the complexity of the existing system, and the quality of information available. Worse, if a decision made during this phase turns out to be wrong — which may only become apparent months or years later — the cost of reversing it can be orders of magnitude greater than the cost of the original research. Architectural and structural decisions, when wrong, can be catastrophic.

1.3. Speed two: teleportation

The second speed is effectively infinite. When an engineer receives a request for something they have already built, or something that closely resembles what already exists in the application, the implementation time approaches zero. Adding a third form to a system that already has two, adding another authenticated page to a system where authentication is already implemented — these are copy-and-adapt operations that can be completed in minutes.

It is common for a thirty-minute meeting to discuss a feature that will then be implemented in under two minutes.

1.4. The impossibility of measuring velocity

Because software work alternates between full stop and teleportation, computing an "average velocity" for a team is a meaningless exercise. Yet organizations demand it. Agile methodologies require it. Sprint planning depends on it.

What happens in practice is that teams fabricate velocity. Since the system demands a number, humans produce one. This fabricated velocity then becomes the baseline for planning, estimation, and accountability — all built on a fiction.

This fiction has a direct consequence: work expands to fill the time allocated. Give a team ten times more time and they will produce the same deliverable, having unconsciously filled the additional time with invented intermediate tasks. Conversely, compress the timeline by a factor of ten and the project may still be delivered, because the actual productive work — the teleportation moments — remains the same.
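The fabricated-average problem can be made concrete with invented numbers (purely illustrative, not measurements): a handful of near-instant "teleportation" tasks plus a couple of open-ended research stalls. Any average computed from such a distribution describes no actual task:

```python
import statistics

# Hypothetical task durations in hours: mostly near-instant implementations,
# plus two unbounded research stalls. Numbers are invented for illustration.
durations_hours = [0.05, 0.03, 0.1, 0.05, 80.0, 0.02, 120.0, 0.04]

mean = statistics.mean(durations_hours)
median = statistics.median(durations_hours)

print(f"mean   = {mean:.2f} h")    # dominated by the two research stalls
print(f"median = {median:.2f} h")  # minutes: most tasks are near-instant
```

The mean lands around twenty-five hours, a duration that matches none of the eight tasks. Planning against it misestimates every single item of work.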


2. The Multiplier Problem

2.1. Compounding factors

The absence of average velocity is only one factor among several, and these factors multiply rather than add.

Method variance (x3 to x5). The choice of development methodology — the tooling, the process, the architectural approach — can produce a three-to-fivefold difference in cost for the same outcome.

Individual variance (x3 to x5). An engineer's speed at the keyboard, fluency with the tools, and experience with the domain can differ from another engineer's by a factor of three to five.

Scope variance (x2 to x8). It is trivially easy to inflate or deflate the apparent scope of a software project. An engineer can invent intermediate tasks, split tickets unnecessarily, and make a simple project appear two to four times more voluminous. Conversely, a disciplined approach can simplify the scope by a factor of two. Between the most inflated and the most simplified version, the spread is a factor of eight.

2.2. The x200 factor

When these factors are compounded: 5 x 5 x 8 = 200. The same project, under different conditions, can differ by a factor of two hundred in actual effort required. This is not a theoretical maximum. It is the product of moderate, commonly observed variances.
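The compounding above is plain multiplication. A sketch using the variance ranges quoted in Section 2.1:

```python
# Variance ranges from Section 2.1: method (x3-x5), individual (x3-x5),
# scope (x2-x8). Compounding the extremes gives the best and worst cases.
ranges = {"method": (3, 5), "individual": (3, 5), "scope": (2, 8)}

best_case = 1
worst_case = 1
for low, high in ranges.values():
    best_case *= low      # every factor at its mildest
    worst_case *= high    # every factor at its worst

print(best_case)   # 18
print(worst_case)  # 200 -- 5 x 5 x 8, the x200 factor
```

Even the best case, with every factor at its mildest, already spans a factor of 18 against a hypothetical baseline of 1; the worst case reaches 200.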


3. The Perverse Economics of Time-Based Compensation

3.1. Paying for time destroys the incentive to perform

If an engineer can complete a project 200 times faster than the industry average, paying that engineer by the hour provides zero incentive to operate at this speed. The rational economic behavior is to work at the pace that maximizes personal income, which is the slowest pace the organization will tolerate.

The engineer who can deliver x200 should logically charge per project rather than per hour. But becoming an agency requires at least seven or eight distinct professional roles beyond engineering. An individual cannot easily become an agency, and so the structural incentive to inflate time remains unchecked.

3.2. Systemic inflation

In an industry where everyone competes on time, time becomes a commodity that participants inflate. This is not unique to software. The semiconductor industry, for instance, advertises fabrication node sizes that do not correspond to physical measurements but to marketing equivalences. The announced value is not entirely false, but it is not true either.

In software, the equivalent inflation is ubiquitous. Estimates are padded. Sprints are filled. Complexity is manufactured. None of this is necessarily malicious; it is the natural consequence of a system that rewards time spent rather than value delivered.


4. The Supervision Paradox

4.1. Non-technical managers are structurally deceived

A common organizational pattern places software engineers under the supervision of managers who do not code, who no longer code, or who are too polite to challenge what their teams report. These managers operate under the principle that teams should be trusted.

The teams, systematically, exploit this trust. Not out of malice, but out of human nature: when no one can verify the actual difficulty of a task, the task expands to match the comfort level of the person performing it.

4.2. The judge-and-party problem

Engineers who are asked to assess their own work are structurally unable to provide an honest assessment. When a manager asks an engineer what went wrong in a project, the engineer is simultaneously the judge and the judged party. The answer will be biased not by dishonesty, but by the impossibility of objectivity under self-evaluation.


5. Technology Fads and CV-Driven Development

5.1. Engineers work for their careers, not for the project

A significant portion of technology decisions within organizations are driven not by what is best for the project, but by what is best for the engineer's resume. When a project requires a mundane but effective technology, engineers will advocate for trendy alternatives that enhance their marketability.

Trends are not solutions. The fact that every other company uses a given framework does not mean it is the right choice for the current project. Yet it is an argument that is extremely difficult to counter inside an organization, because the engineer advocating for the trend appears to be aligned with industry consensus.

5.2. The measured good vs. the believed good

The correct technology choice is the one that has been measured as good for the specific problem. Not the one that is believed to be good based on popularity or reputation. This distinction is almost never made in practice.


6. The Dogma of Received Knowledge

6.1. Learned knowledge vs. constructed knowledge

There are good engineers who received an education and built a personal body of knowledge on top of it. And there are exceptional engineers who constructed their entire body of knowledge from direct experience, without the scaffolding of formal education.

The difference between the two is fundamental. Knowledge constructed through direct experience carries with it the reasoning that produced it. This framework can be adapted to new situations. Knowledge received second-hand — learned from a book, a course, a conference — arrives without its construction method. It can be applied but not adapted.

6.2. The knowledge pipeline delay

When a new practice emerges in the developer community, it takes time to be articulated, more time to be discussed publicly at conferences, more time to be published in books, and still more time to be taught in universities. By the time a practice reaches the classroom, it may already be obsolete in the field.

The engineer who constructed the knowledge directly is always years ahead of the engineer who learned it through the institutional pipeline.

6.3. Innovation is structurally unwelcome

An engineer who invents a novel method is not following a trend. There is nothing to put on a resume. No industry consensus to point to. No conference talk to reference. The innovation stands as the product of one person's thinking, and this is precisely the kind of contribution that organizations are least equipped to recognize and most likely to reject.


7. The Communication Collapse

7.1. Meetings as rest

Standard Agile practice mandates daily stand-ups, weekly grooming sessions, sprint planning, sprint reviews, retrospectives, and demonstrations. These are perceived as necessary by organizational designers.

From the perspective of an engineer whose actual work is writing code, every meeting is a break. This is not cynicism; it is physics. An engineer's productive output is code. A meeting does not produce code. Every engineer knows this.

7.2. The positive feedback loop

When problems arise in an organization, the standard response is to increase communication: more meetings, more status reports, more documentation. This creates a positive feedback loop in the systems-theory sense: the solution amplifies the problem.

More meetings reduce productive time. Reduced productive time creates more problems. More problems generate more meetings. The velocity of the team approaches zero asymptotically as team size increases.

This is the mechanism by which large organizations lose the ability to produce software. Unable to innovate or build, they resort to acquiring smaller companies that still can. This is not a market strategy; it is a structural inevitability.

7.3. Case studies in failure

The pattern is visible in high-profile failures. In the United States, two Pentagon HR software projects were cancelled after spending a combined 800 million dollars, with no operational software delivered after 12 years of work. In France, a software project for the Police and Gendarmerie consumed 257 million euros and produced a system so unusable that attaching a single document to a complaint required 17 clicks, after 10 years of work.


8. The Fractal Dimensionality of Software

8.1. Even an apartment has seven dimensions

Most people conceive of software as a flat surface: more features mean a larger area, and cost scales linearly with area. This is the apartment metaphor — more tiles, more floor space, proportionally more cost.

This model is fundamentally wrong. An apartment is not a two-dimensional plan. It is at minimum a three-dimensional volume (walls make rooms, not floors). And it has additional hidden dimensions: electrical wiring, plumbing, drainage, ventilation. So a realistic apartment has at least seven dimensions, not two.

8.2. The Moon illusion

To illustrate why dimensionality matters, consider the Moon. Ask anyone to draw the Earth and the Moon side by side at scale, and almost no one will get it right. Most people know that the Moon's gravity is roughly one-sixth of Earth's. Far fewer have internalized how volume scales: for a sphere, one-eighth the volume corresponds to one-half the diameter, because volume grows as the cube of the radius. 2³ = 8.

A Moon that looks "half as big" is actually eight times smaller in volume. The exponent — the number of dimensions — is what creates this perceptual gap. To put it another way: it takes eight spheres to build one that looks merely twice as big.
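The cube law above can be checked in a few lines. The Earth and Moon diameters below are approximate reference values, included only to show how far the real ratio sits from most people's intuition:

```python
# Cube-law scaling: for spheres, volume ratio = (diameter ratio) ** 3.
def volume_ratio(diameter_ratio):
    return diameter_ratio ** 3

def diameter_ratio(volume_ratio):
    return volume_ratio ** (1 / 3)

# A sphere with half the diameter has one-eighth the volume: 2**3 = 8.
print(volume_ratio(0.5))      # 0.125
print(diameter_ratio(1 / 8))  # ~0.5

# Approximate real figures: Moon diameter ~3,474 km, Earth ~12,742 km.
d_ratio = 3474 / 12742        # ~0.27
print(volume_ratio(d_ratio))  # ~0.02: roughly fifty Moons per Earth
```

The real Moon is not eight but roughly fifty times smaller than Earth in volume, despite a diameter just under a third of Earth's, which is exactly the perceptual gap the exponent creates.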

8.3. Fractal dimensions in software

Software systems do not have integer dimensions. Each cross-cutting concern — authentication, internationalization, logging, permissions, caching — adds a fractional dimension to the system. No single concern affects every other component entirely, so each adds less than one full dimension. But these fractional dimensions accumulate.

A system with 12 effective dimensions that appears "twice as large" as another is not twice as expensive to build. It is 2¹² = 4,096 times as expensive.

This is why every software project in history starts faster than expected and slows down more than expected. The initial features are built in low-dimensional space. As cross-cutting concerns accumulate, the effective dimensionality rises, and each new feature must be built not in two or three dimensions but in twelve or more. The cost per feature explodes, and everyone is surprised, because no one computed the dimensionality.
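A toy model of this cost explosion, assuming, as the text does, that apparent size compounds across every effective dimension:

```python
# Toy model: if a system "looks" k times larger and has d effective
# dimensions, the modeled relative build cost is k ** d.
def relative_cost(apparent_size, dimensions):
    return apparent_size ** dimensions

# A system that appears twice as large:
print(relative_cost(2, 2))   # 4    -- in flat, two-dimensional thinking
print(relative_cost(2, 3))   # 8    -- as a simple volume
print(relative_cost(2, 12))  # 4096 -- with twelve effective dimensions
```

The model is deliberately crude, but it captures the structural point: the estimate and the reality diverge by orders of magnitude not because anyone miscounted features, but because no one counted dimensions.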

8.4. Premature dimensionality is lethal

Engineers, left unchecked, will introduce dimensions prematurely. It is tempting to add internationalization before it is needed, to architect for microservices before there is traffic, to implement event sourcing before there is data. Each premature dimension increases the cost of every subsequent feature multiplicatively.

A project that should cost X will cost X raised to the power of its premature dimensions. This is how projects become infinitely expensive to develop while appearing, from the outside, to be only modestly ambitious.


9. Software Development as Investment, Not Labor

9.1. The phase inversion

In most professions, effort and value are synchronous. A journalist publishes an article, and it generates revenue immediately as it is consumed. Effort converts to value in real time.

Software development is the opposite. All effort is invested before any value is created. The software will only generate value when it is used, which is in the future. At the time the software is being written, it produces nothing. It is pure investment.

This creates a phase inversion between effort and success. When the engineers are working hardest, there is no success. When success arrives, the engineers are idle or working on something else. They are never working on the thing that is succeeding.

9.2. Failure blames the wrong people

If the enterprise fails to monetize the software, the investment retroactively becomes a loss. The engineering team may be blamed, even though the failure to monetize has nothing to do with the quality of the engineering.

9.3. Success breeds decay

If the enterprise succeeds, a different pathology emerges. The engineering team, having delivered a successful product, is not released. They are retained, because no successful company fires the team that "made it work." The team then receives an endless stream of specifications that gradually push the software toward uselessness — features no one needs, refinements no one asked for, complexity no one benefits from.

So the software that sold best ends up containing teams that are doing the worst work. Success in the market produces decay in the product.


10. The Paradox of Market Leadership

10.1. The winner is the worst deal

Consider two companies offering identical services. The one that prices higher generates more margin, accumulates more capital, and eventually acquires the competitor. After the acquisition, without competition, prices rise further.

The company that "won" the market is, by construction, the one that delivered the worst value to the client.

In every transaction, the interests of the buyer and the seller are opposed. Maximum gain for the seller is minimum value for the buyer. The companies celebrated as industry leaders — the Microsofts, the SAPs, the Oracles — are, by this logic, systematically the worst deals available.

10.2. Acquisition as inability

Large companies acquire smaller ones not as a strategy of strength but as an admission of impotence. Having lost the ability to build through the communication collapse described in Section 7, they must buy what they can no longer create. The acquisition is not innovation; it is the epitaph of innovation.


11. The Noise Paradox of Technology Popularity

11.1. Bad documentation generates traffic

When a technology is well-designed and well-documented, its users can work autonomously. They do not need to ask questions online. They do not need workarounds. They do not generate discussion.

When a technology is poorly designed or poorly documented, its users flood the internet with questions, workarounds, tutorials, and complaints. This volume of activity is universally interpreted as popularity. More discussion is equated with more success.

The consequence is that the worst technologies appear the most popular, and the best technologies appear obscure. The internet systematically amplifies mediocrity and suppresses excellence.

11.2. The queue illusion

This is analogous to two shops selling the same product. One is efficient: short queues, fast service. The other is disorganized: long queues, slow checkout. Passersby observe the crowded shop and conclude it must be better. The crowd is caused by inefficiency, not by quality.

11.3. Frameworks as evidence of language failure

A programming language that requires a framework to be used safely is, by definition, a poorly designed language. The framework compensates for flaws that should not exist. Design patterns, similarly, are often patches for languages that permit dangerous constructs.

JavaScript is a canonical example. Widely regarded by specialists as one of the most poorly designed languages in computing history, its syntactic permissiveness generates an endless stream of frameworks, each promising to solve the problems created by the language itself. This churn — a new "revolutionary" framework roughly every week — is interpreted as a vibrant ecosystem. It is, in reality, evidence of chronic dysfunction.

Languages that are designed correctly — that prevent circular dependencies, that enforce type safety, that eliminate entire categories of bugs by construction — generate no such noise. Their silence is mistaken for irrelevance. We can't see what isn't there, even if creating this absence was the most important feature.


12. Security and the Problem of Absence

12.1. Security is the absence of failure

Security work only becomes visible when it fails. A breach, a data leak, a system compromise — these are events. The absence of these events, which is the product of sustained, expert attention over years or decades, is invisible by nature.

When security work is done well, nothing happens. And nothing happening is extremely difficult to value, to measure, or to sell.

12.2. The impossibility of measuring zero

If a system has one failure per hundred operations, a failure rate can be computed. If a system has zero failures across years of operation, no statistic is possible. There is no denominator. There is no trend line. There is only absence.

This is structurally identical to the problem of quitting smoking. One can say that a person has not smoked for five years. But one cannot say that the person has "successfully quit," because a single cigarette tomorrow retroactively negates the claim. The "quitting" was never an event; it was a continuous absence, and any interruption destroys it retroactively.

Security, uptime, and reliability are of the same nature. They are not achievements; they are sustained absences. And the human mind is not equipped to value what it cannot perceive.


13. The 80/20 Trap

13.1. The law of diminishing returns

In every completed software project, retrospective analysis reveals the same pattern: 80% of the total effort was spent on 20% of the features. The first features are cheap. The last features are ruinously expensive.

This is a direct consequence of the fractal dimensionality described in Section 8. As the system grows, each additional feature must be integrated across an increasing number of dimensions. The cost per feature rises exponentially, not linearly.

13.2. The illusion of completeness

Organizations that pursue 100% completion of their specifications are, by mathematical necessity, spending four-fifths of their budget on one-fifth of the value. The final 20% of features consumes 80% of the effort but contributes marginally to the utility of the software.


14. The Impossibility of Hiring

14.1. The quarter-CV problem

It is structurally impossible for an organization to hire the right people for a software project. The reasons are multiple and compounding.

The primary instrument of recruitment is the CV. A CV contains only half of an individual's life: their past. It shows what they have done. It does not show what they want to do, what they are capable of becoming, or where their ambition lies.

Within that past, the CV shows only results — job titles, project names, company names. It does not show methods: how the work was done, what analyses were conducted, what decisions were made and why, what trade-offs were navigated. When methods are removed, results are only half of what was accomplished.

A CV therefore represents approximately one quarter of what an individual actually is. Organizations make their most consequential decisions — who builds their software — based on 25% of the relevant information.

14.2. The HR keyword trap

When recruitment is delegated to human resources specialists, the process is handled by someone who has no competence in the domain being recruited for. The HR professional must evaluate candidates in a field they do not understand.

Faced with this impossibility, HR professionals fall back on the only tool available to them: keyword matching. They carry a checklist of buzzwords and acronyms, and they evaluate candidates by how many boxes are ticked.

This creates a perverse inversion. Engineers who do not truly master their subject tend to produce dense, acronym-laden discourse — a classic defense mechanism that ensures they will not be challenged, since the person across the table cannot understand them anyway. These candidates tick every keyword box.

Conversely, engineers who deeply master their subject can explain complex work in simple, clear language. They adapt their discourse to their audience. They avoid unnecessary jargon. And in doing so, they fail to trigger the keyword checklist. From the HR professional's perspective, the clearest communicator appears to be the least qualified candidate.

The best engineers are systematically filtered out by the very process designed to find them.

14.3. Companies that cannot hire cannot build

Organizations that recruit through keyword matching end up with teams composed of engineers selected for their ability to produce impressive-sounding discourse rather than impressive software. These teams, predictably, fail to deliver. The recruitment dysfunction propagates directly into project failure.

For the rejected engineer, this is not a loss. A company that cannot hire correctly cannot build correctly either. Being rejected by such a company is, paradoxically, a quality signal.


15. Technical Interviews as Theater

15.1. The luck factor

To compensate for the psychological biases of conversational interviews, many organizations introduce technical assessments: live problem-solving, technical questions, or coding challenges. The intent is sound — remove subjectivity by introducing objectivity. The execution is fundamentally flawed.

Technical questions in interviews are subject to enormous randomness. If the question happens to concern a problem the candidate has already solved in their career, they will answer in extraordinary detail — possibly exceeding the scope of the interview. If the question concerns a problem they have never encountered, they may appear incompetent, even though they would solve it competently given the normal working conditions of their profession.

15.2. Engineers are asynchronous workers

An engineer, like a lawyer, does not operate in real time. When a client presents a legal question to a lawyer, the lawyer does not answer on the spot. The lawyer takes notes, researches, and responds later. This is understood and accepted.

Software engineering is identical. A question arises in an organization; it will be solved by the engineers — in the future, not in the present. Engineering is an inherently asynchronous profession. Expecting real-time performance in a thirty-minute interview is evaluating a fundamentally asynchronous worker in a synchronous context. The measurement does not measure what it claims to measure.

15.3. Coding challenges and the doping analogy

Coding challenges attempt to compress weeks of real engineering work into thirty or sixty minutes. To make this possible, the problems must be simultaneously simple enough to solve quickly and difficult enough to be discriminating. The result is algorithmic puzzles — problems that are deliberately bizarre, deliberately tricky, and deliberately unrelated to actual engineering work.

Naturally, good engineers tend to fail these tests. The problems are disorienting precisely because they bear no resemblance to real work.

Seeing that the highest-paying companies use the most bizarre tests, candidates begin training on these puzzles. Entire ecosystems of thousands of practice problems emerge online. The population — particularly in the United States, where this practice is most entrenched — trains obsessively on algorithmic trivia to demonstrate a competence that is entirely uncorrelated with engineering ability.

This is structurally identical to doping in professional sports. When a few athletes dope, they win consistently. This forces everyone to dope in order to remain competitive. Entire disciplines become permanently doped, and the athletes pay the price with their health — professional cyclists dying of "natural causes" at thirty-five.

The equivalent in software is engineers filling their minds with thousands of useless algorithmic tricks to pass interviews that measure nothing relevant. The training is a form of intellectual doping. It crowds out useful knowledge with trivia, and it corrupts the entire hiring pipeline by rewarding preparation over competence.

15.4. The only real test is working together

All of these failures — psychological bias, keyword matching, random technical questions, artificial coding challenges — converge on a single conclusion: there is no way to know how someone works without actually working with them.

The French labor system offers a partial solution in the form of the "période d'essai" — a trial period during which either party can terminate the contract immediately, without justification or consequence. For a few weeks or months, people actually work together and discover whether the collaboration functions.

If such trial periods existed as a standard in international contracts — particularly in the technology sector — they would replace the entire dysfunctional apparatus of modern hiring. A few weeks of real collaboration reveals more about compatibility than any number of interviews, tests, or CV screenings.

Without this mechanism, organizations build teams from people selected by processes that cannot identify competence. Entire groups of individuals produce nothing and survive by employing techniques that are alternatives to actual work. These are the teams on which software projects are built. These are the teams that fail.


16. Credential Gatekeeping and the Unregulable Profession

16.1. Administrative barriers vs. competence

Another mode of failure occurs when administrative requirements disqualify the right candidates. A position requires a master's degree, and the best candidate does not have one. A position requires a doctorate, and the best candidate never pursued one — precisely because their level of expertise made it unnecessary.

If an individual has reached a level of mastery where obtaining a given credential would be a waste of time and money, requiring that credential excludes them for no reason other than bureaucratic compliance. Administrative reasons are not good reasons. They are equivalent to filtering candidates by gender, age, or religion — arbitrary constraints that have no correlation with the ability to make a software project succeed.

16.2. An unregulated profession at the center of everything

Software engineering is one of the few professions where no legal credential is required to practice. Unlike medicine, where unlicensed practice is a criminal offense, or architecture, where buildings can collapse if the designer lacks proper training, software has no regulatory framework.

This is paradoxical, because software is now at the center of virtually every profession. In France, massive data breaches occurred every few weeks throughout 2025, exposing millions of personal records each time and cumulatively affecting roughly 60% of the total population, because a server was misconfigured, a database was left accessible, or an elementary security error had been made. The attack surface of modern software systems is effectively infinite.

The absence of regulation is not because the risks are perceived as low. It is because regulators would not know what to regulate. Software encompasses so many different systems, languages, architectures, and domains that even specialists cannot agree on what competence looks like. Regulators would need to consult software engineers — but which ones? On which topics? There is no consensus to build regulation upon.

16.3. The two-year horizon

The difficulty of regulation is compounded by the speed of change in software. In most professions, the horizon of relevant knowledge spans decades. A carpenter's skills, a plumber's techniques, a physician's understanding of human anatomy — these evolve slowly. The knowledge acquired today will remain applicable for twenty or thirty years.

In software, the true planning horizon does not exceed two years. Operating systems change, development methods change, the devices used to connect change, the threats to defend against change. No one can predict with any confidence what the software landscape will look like in twenty-four months.

This means that credentials — even if they were relevant at the time of issuance — decay faster than in any other profession. A certification earned three years ago may describe a technology that no longer exists. The half-life of software knowledge is shorter than the cycle time of any credentialing institution.
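The decay described above can be sketched as a simple exponential half-life model. This is an illustrative toy model, not a measurement: the two-year half-life and the three-year-old certification are taken from the text as assumptions.

```python
# Toy model: exponential decay of the relevance of software knowledge.
# The two-year half-life is an illustrative assumption from the text,
# not an empirical measurement.

def remaining_relevance(age_years: float, half_life_years: float = 2.0) -> float:
    """Fraction of a credential's original relevance after age_years."""
    return 0.5 ** (age_years / half_life_years)

if __name__ == "__main__":
    # A certification earned three years ago, under a two-year half-life,
    # retains only about a third of its original relevance.
    print(f"{remaining_relevance(3.0):.2f}")  # → 0.35
```

Under this toy model, no credentialing institution with a multi-year issuance cycle can certify knowledge that is still mostly intact by the time the credential is checked.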


17. The Promotion Trap and the Salary Pyramid

17.1. Success is punished by promotion

Organizations are structured as hierarchical pyramids, and salary increases with hierarchical position. This creates an inevitable dynamic: when an engineer demonstrates exceptional talent at building software, the organization rewards them by promoting them out of the role where they were exceptional.

The promotion removes the best builder from the building floor and installs them in a management position — a fundamentally different profession requiring fundamentally different skills. The organization has not gained a manager; it has lost its best engineer.

This is a well-known pattern (the Peter Principle), but in software it has a specific aggravating factor. The gap between engineering and management is wider than in most fields, because the technical depth required to supervise software work is not the same as the technical depth required to produce it. A manager who once coded may understand the domain, but they no longer practice it. And as Section 16.3 established, software knowledge decays within two years. A promoted engineer's technical judgment becomes obsolete while they attend to budgets and timelines.

17.2. The authority paradox

It is psychologically difficult for organizations to accept that authority does not require hierarchical superiority. In most companies, it is unthinkable for a junior-salaried individual to have authority over higher-paid colleagues. Yet the nature of software work frequently demands exactly this: the person who understands the system best — who should be making architectural decisions — may be the least senior person in the room.

The salary pyramid makes this impossible. To give the best engineer decision-making authority, they must be elevated in the hierarchy. To elevate them, they must be given a management title. To justify the management title, they must manage. And managing is not engineering. The structure designed to reward competence systematically destroys it.


18. Prestige Spending and the Gift Economy

18.1. When spending becomes the goal

When an organization reaches a certain size and reputation, it becomes psychologically difficult for it to spend small amounts. A prestigious company must commission prestigious projects, which must carry prestigious budgets. The expenditure itself becomes part of the organization's identity.

This is the logic of luxury goods. A luxury watch must be expensive, because if an identical mechanism existed at one-tenth the price, the watch would not be luxury. The price is not a consequence of the quality; the price is the product. The same inversion occurs in enterprise software procurement: the budget is not a consequence of the project's requirements; the budget is a requirement in itself.

18.2. The economics of gift-giving

In standard economic theory, a voluntary exchange between free parties is always mutually beneficial — both sides gain, or the exchange would not occur. There is one known exception: gift-giving.

When a gift is purchased, the buyer does not know what the recipient actually needs. The recipient, if they needed something, would be capable of acquiring it themselves. The gift therefore does not fulfill a genuine need; it fulfills a social obligation. The purpose of the gift is the act of spending, not the utility of the object. Empirical studies of gift-giving consistently find that gifts destroy value: the money spent exceeds the value the recipient places on the object.

Enterprise software procurement in large organizations follows the gift economy, not the exchange economy. The large institution with a large budget for a large project must find a large service company capable of absorbing tens or hundreds of millions. If the same result could be achieved by a small firm for a fraction of the cost, the institution would not commission it — not because the result would be inferior, but because spending a small amount on a small firm would be incomprehensible to stakeholders. It would be as if a newly wealthy individual, instead of buying a Rolex, bought an unknown brand for a twentieth of the price.

18.3. Prestige as value destruction

Every prestige-driven purchase is mechanically a destruction of value. The delta between what was spent and what was needed is pure waste. Large institutions and governments that commission software projects at scales of hundreds of millions of euros are not optimizing for outcomes; they are optimizing for the appearance of seriousness. The appearance of seriousness requires the expenditure to be large. And because the expenditure must be large, the project must be large. And because the project must be large, it must involve large teams, long timelines, and complex organizational structures — all of which, as established in Sections 2, 7, and 8, are precisely the conditions that guarantee failure.

The prestige economy does not merely tolerate waste; it requires it.


19. The Silence of Competence

19.1. Those who know do not speak

There is an ancient observation, formulated in the Tao Te Ching, that those who know do not speak, and those who speak do not know. This is not mysticism; it is a structural consequence of how knowledge and communication interact.

When an individual is searching for the answer to a question, they talk about it. They ask colleagues, post online, attend conferences, write articles exploring the problem. The unsolved question generates noise. When the same individual has solved the question, they stop talking about it. The question is no longer on their agenda. The solved problem generates silence.

This is why the internet is filled with discussions about problems that people cannot solve, and silent about problems that have been solved. The volume of discourse around a topic is inversely correlated with the degree to which it has been resolved.

19.2. The diet analogy

The pattern is visible in everyday life. Every year, new diet books are published, new methods are promoted, and millions of people discuss weight loss strategies. This constant stream of discourse exists because these people have failed to solve the problem. People who have no weight problem do not think about diets. They have never needed a diet. The question, for them, is resolved — and therefore invisible.

Following a published diet is, in a sense, a logical error: the people who do not have a weight problem never followed one. The solution and the discourse about the solution are structurally incompatible.

19.3. The knowledge gap that cannot close

Applied to software: if an individual has truly solved the question of why software projects fail and how to prevent it, that individual has no reason to communicate the answer. The knowledge is their entire professional value. Giving it away freely would be economically irrational.

Meanwhile, a researcher who wants to study the question lacks the practical experience to answer it. The researcher has never built twenty software products end to end. The practitioner has never written a paper. The two populations do not overlap.

This creates a structural knowledge gap. The people who know cannot be heard. The people who are heard do not know. The literature on software project failure is written almost exclusively by people who have never personally succeeded at making software projects not fail. And the people who have succeeded are too busy succeeding — or too economically rational — to write about it.


20. The Inflection Point

20.1. A historical document

This paper is written at a singular moment in the history of software engineering. Until approximately 2025, software was built manually by humans, and it failed for the human reasons described in the preceding nineteen sections. Beginning in 2026, software is increasingly built by machines — by AI systems that do not suffer from the psychological, economic, and organizational dysfunctions catalogued here.

AI does not inflate estimates. It does not attend meetings. It does not optimize for its resume. It does not require a salary pyramid. It does not fail technical interviews. It does not generate prestige-driven procurement. The entire apparatus of dysfunction described in this paper is specifically human.

20.2. Why this paper can exist

The structural knowledge gap described in Section 19 means that the only person who could both possess the knowledge and be willing to share it is someone who recognizes that the knowledge is about to become historically obsolete. If AI will render these human dysfunctions irrelevant, then sharing the knowledge costs nothing — its competitive value has an expiration date.

This paper is therefore a professional testament: a historical account of why human-driven software projects failed, written at the last moment when such an account is relevant. Future problems will be different — they will concern AI alignment, automated system reliability, and machine-generated architecture. But the problems of human software engineering, as described here, will not recur in their current form.

20.3. Speed as the meta-solution

One principle unifies all the problems in this paper: speed. Every dysfunction described here increases the time and cost of software projects. And time is not merely a cost — it is the enemy of accuracy. Every software product misses its target by some margin. The faster it is built, the sooner it can be tested against reality and adjusted. The slower it is built, the further it drifts from its objective before anyone discovers the deviation.

Large projects are slow. Slow projects are expensive. Expensive projects are difficult to adjust. Difficult-to-adjust projects miss their targets. And the miss, compounded by the cost already sunk, becomes the failure that defines the industry. Speed is not a luxury; it is the condition under which software has any chance of succeeding at all.
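The compounding argument can be made concrete with a toy calculation: a handful of modest per-dysfunction multipliers, each individually survivable, compound multiplicatively into the x200 range cited in the abstract. Every figure below is hypothetical, chosen only to illustrate the mechanism, not derived from any measurement.

```python
import math

# Hypothetical cost/time multipliers for some of the dysfunctions described
# in this paper. None of these figures is measured; they illustrate how
# individually modest factors compound multiplicatively rather than additively.
multipliers = {
    "oversized team / communication overhead": 2.5,
    "prestige-driven budget inflation": 2.5,
    "hiring that cannot identify competence": 2.0,
    "promotion of the best engineers into management": 2.0,
    "credential gatekeeping": 2.0,
    "time-based pay incentives": 2.0,
    "slow feedback and drift from the target": 2.0,
}

additive = 1 + sum(m - 1 for m in multipliers.values())   # if effects merely added
compounded = math.prod(multipliers.values())              # if effects multiply

print(f"additive: x{additive:.0f}")     # → x9
print(f"compounded: x{compounded:.0f}") # → x200
```

The point of the sketch is the gap between the two totals: an observer who assumes the dysfunctions add up expects a single-digit overrun, while the multiplicative reality produces the two-orders-of-magnitude spread between the best and worst conditions.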

Part III

Solutions

A framework for making software projects succeed, derived from the same field experience.

This section is forthcoming.

Part III will present the solutions and mitigations that can be implemented to address the problems identified in Parts I and II.

Conclusion

The problems identified in this paper are not bugs in the software industry. They are structural features of a system that has been built on false assumptions about the nature of software work. The assumption that software velocity is measurable, that time converts linearly to output, that larger teams produce more, that market leaders are technical leaders, that popularity indicates quality, that interviews can identify competence, that credentials correlate with ability, that large budgets produce large results — each of these assumptions is demonstrably false, and together they produce an industry where projects routinely fail at scales measured in hundreds of millions of dollars.

These problems compound multiplicatively. The x200 factor, the fractal dimensionality, the phase inversion, the communication collapse, the perverse economics of time-based pay, the impossibility of hiring correctly, the prestige economy, the silence of competence — none of these operates in isolation. They interact, reinforce, and amplify each other into the systematic dysfunction that has characterized human-driven software development.

The solutions to these problems exist but require a fundamental rethinking of how software projects are structured, staffed, measured, and compensated. And as of 2026, the most fundamental solution of all — removing the human from the production loop — is no longer theoretical.