Learning AI #14
Can AI control air travel?
Can AI replace air-traffic controllers?
There are roughly 45,000 flights in the skies over the United States on any given day. Each one must be separated from every other, routed through crowded corridors of airspace, and sequenced for landing at airports where three runways may be shared by 40 planes trying to use them, all while being kept away from weather systems that move and change in real time.

Air traffic control is still largely driven by humans: AI may recommend decisions, but people make them.
A single controller at a busy facility might manage 15 to 20 aircraft simultaneously, make decisions every few seconds, talk on the radio, watch radar, and coordinate with adjacent sectors.
Given the information-intensive nature of the job, people want to know if AI can do it.
I put the question to Claude and ChatGPT and the answer was more nuanced than a simple yes or no, and raised the issue of the “edge case,” or rare event.
Here is what Claude had to say:
Air traffic control is a domain where the rare event is the dangerous one.
AI systems trained on historical data struggle with genuinely novel situations: simultaneous runway incursions, cascading equipment failures, a pilot who is incapacitated and not responding, or a "nordo" (no radio) aircraft whose intentions must be inferred.
Human controllers improvise, reason by analogy, and communicate with extraordinary nuance under pressure.
For routine flights, clear weather, and uncrowded airspace, a well-designed AI can largely handle the job today. It is the unusual, the unexpected, the situation that no training scenario anticipated, where human judgment still carries weight that no current AI system can replicate. For example, humans can detect stress or impairment when speaking with a pilot in a way machines cannot.
Consider January 15, 2009, when Captain Chesley “Sully” Sullenberger lost both engines shortly after takeoff from New York's LaGuardia Airport, at about 2,800 feet. (An edge case if there ever was one.) He had roughly three minutes to figure out that no runway was reachable, that the Hudson River was his best option, that a water landing at 150 miles per hour with 155 people on board was survivable if executed correctly, and then execute it correctly.

Sully's decision was initially questioned by investigators after the Miracle on the Hudson, but he was later hailed as a hero.
An algorithm optimizing for standard emergency protocols would likely have directed him toward Teterboro Airport in New Jersey, and an AI flying the plane autonomously might well have tried for Teterboro and crashed.
Sully did not think he could make it to Teterboro. He looked at the numbers, listened to his gut (and 40 years of experience as a pilot), and chose the river. Everyone lived.
This is not an argument that humans are always right and machines are always wrong. It is an argument that so-called “edge cases” or rare events in aviation are not edge cases at all. They are the entire reason pilots and air-traffic controllers exist and planes are not autonomous.
AI does an incredible job of consuming, storing, and presenting information. But the history of aviation contains very few situations involving “failure of both engines while flying over a densely populated area,” so AI has almost no reference point for recommending a course of action. It does not know how to handle the rare event; like a child seeing a lion at the zoo for the first time, it does not know what it is seeing.
The FAA is facing a significant controller shortage, and AI will almost certainly absorb more of the routine workload. Expect air-traffic control to move from the human-in-the-loop model, where humans approve every decision the AI suggests, to the human-on-the-loop model, where the AI makes most routine decisions and humans intervene only in exceptional cases, like clearing the runways when Air Force One is coming in to land.
For the future of air-traffic control, expect fewer humans in the room, but expect them to be in the room for a long time.
Things I think about
Footprints left on the Moon will remain there for millions of years due to the lack of atmosphere.
Response from ChatGPT
Can AI replace air traffic controllers?
Short answer: not fully—not anytime soon. But AI will absolutely reshape the job and could handle large parts of it.
What air traffic controllers actually do
Air traffic controllers aren’t just “routing planes.” They are constantly:
Monitoring multiple aircraft in real time
Predicting conflicts before they happen
Communicating clearly with pilots under time pressure
Handling unexpected events (weather, equipment failure, emergencies)
That last category—unpredictable edge cases—is where AI still struggles.
Where AI can replace or outperform humans
AI is already strong at:
Pattern recognition (tracking trajectories, spotting conflicts early)
Optimization (fuel-efficient routing, spacing aircraft)
Data fusion (combining radar, weather, flight plans faster than humans)
In fact, organizations like the Federal Aviation Administration and Eurocontrol are already deploying AI-assisted tools.
Think of this as:
AI = co-pilot for controllers, not the pilot.
Where AI struggles (and why humans still matter)
AI falls short in areas that matter a lot in aviation safety:
Ambiguity & judgment
Pilots don’t always follow scripts. Controllers interpret tone, hesitation, and intent.
Novel emergencies
A fire onboard, medical emergency, or total comms failure requires improvisation.
Accountability
Aviation is a zero-failure industry. Someone must be legally and operationally responsible.
Trust
Pilots trust human controllers. Full AI control would require a massive cultural shift.
What the future actually looks like
More realistic trajectory:
Next 5–10 years
AI handles routine spacing, sequencing, and traffic flow
Controllers supervise more aircraft at once
Fewer controllers per flight volume
10–20 years
Highly automated “digital towers” at smaller airports
Humans intervene mainly in edge cases
Beyond that
Partial or conditional autonomy possible
But human oversight likely never disappears entirely
Bottom line
AI will:
Reduce the number of controllers needed
Augment decision-making
Handle routine operations
But it won’t fully replace them because:
Aviation safety depends on handling rare, chaotic, high-stakes situations—and humans are still better at that.
If you want, I can break this down into which parts of the controller job are most at risk (and which are safest long-term).
Response from Claude
Can AI replace air traffic controllers?
The short answer is: not fully, not safely, not soon — but AI is already reshaping the role in profound ways. The question is less whether AI can replace controllers and more how much autonomy it should have, and under what conditions.
What AI Can Already Do
Modern air traffic management systems already rely heavily on automation:
Conflict detection and resolution — algorithms flag potential separation violations minutes before they occur, far faster than human perception
Routing optimization — AI can compute fuel-efficient routes across entire traffic flows simultaneously
Weather integration — real-time rerouting based on convective activity, turbulence, and SIGMET data
Data link communications — CPDLC (Controller-Pilot Data Link Communications) reduces voice workload for routine clearances
Arrival sequencing — tools like TBFM (Time-Based Flow Management) in the U.S. automate metering and merge sequencing into busy terminals
These capabilities are impressive. In low-complexity, high-predictability environments — oceanic tracks, en-route cruise at altitude — an AI system could arguably manage routine operations with minimal human oversight today.
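The conflict-detection idea in the first bullet can be sketched with a closest-point-of-approach calculation: extrapolate each aircraft's current track and flag pairs that will come too close within a lookahead window. This is a minimal illustration, not how TBFM or any deployed system actually works; the separation minimum, lookahead window, and straight-line trajectory assumption are all simplifications.

```python
import math

# Minimal conflict-detection sketch: straight-line extrapolation of each
# aircraft's track (real systems use far richer trajectory models).
# Positions are in nautical miles, velocities in knots.

SEPARATION_NM = 5.0      # standard en-route lateral separation minimum
LOOKAHEAD_HR = 10 / 60   # flag conflicts up to 10 minutes ahead (assumed)

def closest_approach(p1, v1, p2, v2):
    """Return (time_hr, distance_nm) of the closest point of approach."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    # Time at which relative distance is minimized (clamped to the future).
    t = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, math.hypot(cx, cy)

def in_conflict(p1, v1, p2, v2):
    t, d = closest_approach(p1, v1, p2, v2)
    return t <= LOOKAHEAD_HR and d < SEPARATION_NM
```

Two aircraft 20 NM apart and closing head-on at 450 knots each would be flagged well over a minute before they meet, while two aircraft flying parallel at the same speed never trigger the alert.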
Where AI Falls Short
Edge cases and novelty. Air traffic control is a domain where the rare event is the dangerous one. AI systems trained on historical data struggle with genuinely novel situations: simultaneous runway incursions, cascading equipment failures, a pilot who is incapacitated and not responding, or a "nordo" (no radio) aircraft whose intentions must be inferred. Human controllers improvise, reason by analogy, and communicate with extraordinary nuance under pressure.
Pilot-controller relationship. Much of what a controller does is negotiation and interpretation — reading hesitation in a pilot's voice, catching a non-standard readback, sensing that something is slightly off. These are judgment calls that require theory of mind and contextual awareness AI doesn't reliably possess.
Accountability and trust. Aviation's safety culture is built on clear lines of responsibility. When something goes wrong, someone must be answerable. Fully autonomous AI management creates deeply uncomfortable ambiguity about liability.
Infrastructure fragility. A fully AI-controlled system creates catastrophic single-point-of-failure risk. Cyberattacks, GPS spoofing, sensor failures, or adversarial inputs could compromise the entire system simultaneously in ways human controllers, distributed across facilities, would not.
Man-in-the-Loop vs. Man-on-the-Loop
This is the crux of the debate, and it maps directly onto the spectrum between full human control and full AI autonomy.
Man-in-the-loop (MITL) means the human must actively approve or execute each consequential action. The AI advises; the human decides and acts. This is the current model for most ATC automation — a controller receives a conflict alert and then issues the clearance. The human is a required step in every transaction.
Advantages: Maximum human oversight, clear accountability, catches AI errors before they propagate
Disadvantages: Human becomes a bottleneck, especially in high-traffic environments; cognitive load remains high; automation can create complacency — controllers rubber-stamping AI suggestions without real evaluation (the "automation bias" problem)
Man-on-the-loop (MOTL) means the AI acts autonomously in real time, while the human monitors and retains authority to intervene. The human's role shifts from operator to supervisor. This is already the model in many oceanic sectors, where controller-to-aircraft contact is infrequent and separation is largely procedural.
Advantages: Dramatically higher traffic capacity, faster response times (AI can react in milliseconds), reduces human error from fatigue
Disadvantages: Supervisory vigilance degrades over time — humans monitoring quiet systems become worse at catching anomalies, not better. This is called the out-of-the-loop problem: when automation handles everything, the human loses situational awareness and is poorly positioned to take over when the automation fails.
The MOTL model is essentially what autopilot represents in the cockpit — and the aviation community has spent decades grappling with its consequences (Air France 447 is the textbook tragedy of pilots who couldn't hand-fly an aircraft after automation disconnected).
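The MITL/MOTL distinction described above can be made concrete with a toy dispatch sketch: in the first model, every advisory waits on human approval; in the second, the system acts on its own and the human only vetoes or handles escalations. The `Advisory` class, confidence score, and threshold are invented for illustration and do not reflect any real ATC system.

```python
from dataclasses import dataclass

# Toy contrast between man-in-the-loop and man-on-the-loop dispatch.
# The confidence field and threshold are illustrative assumptions.

@dataclass
class Advisory:
    action: str        # e.g. "climb FL350"
    confidence: float  # AI's self-assessed confidence, 0..1

def mitl_dispatch(advisory, human_approves):
    # Man-in-the-loop: the human is a required step in every transaction.
    return advisory.action if human_approves(advisory) else "held"

def motl_dispatch(advisory, human_override, threshold=0.95):
    # Man-on-the-loop: the AI acts autonomously; the human monitors,
    # may veto, and low-confidence cases are escalated for review.
    if advisory.confidence < threshold:
        return "escalated to supervisor"
    return "held" if human_override(advisory) else advisory.action
```

Note how the failure modes differ: in `mitl_dispatch` nothing happens without a human, while in `motl_dispatch` the default is action, which is exactly why supervisory vigilance matters.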
The Realistic Near-Term Path
Rather than replacement, the likely trajectory is role redefinition:
AI handles routine separation and sequencing autonomously, with humans monitoring exception queues
Controllers become "exception managers" — intervening only when the AI flags uncertainty or when unusual situations arise
Traffic capacity increases while headcount may decrease, but the human role becomes higher-stakes, not lower
Remote towers (already deployed in Scandinavia and elsewhere) show how a single controller can supervise multiple small airports through AI-assisted video and sensor feeds — a preview of supervisory models
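The "exception manager" role sketched above amounts to the automation handling routine traffic and surfacing only the cases it is unsure about, ranked by urgency. A hypothetical sketch, with invented flight identifiers and urgency scores:

```python
import heapq

# Hypothetical exception queue for a supervising controller: automation
# pushes only the cases it cannot resolve, most urgent popped first.
# Flight IDs, reasons, and urgency values are invented for illustration.

def push_exception(queue, urgency, flight, reason):
    # heapq is a min-heap, so negate urgency to pop most-urgent first.
    heapq.heappush(queue, (-urgency, flight, reason))

def next_exception(queue):
    _, flight, reason = heapq.heappop(queue)
    return flight, reason

queue = []
push_exception(queue, 2, "DAL214", "non-standard readback")
push_exception(queue, 9, "UAL88", "no radio contact")
```

The no-radio aircraft comes off the queue first regardless of arrival order, which is the whole point of the model: the human's attention goes where the automation is least certain.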
The FAA, EUROCONTROL, and ICAO are all exploring frameworks along these lines. The challenge isn't technical capability — it's regulatory certification, labor agreements, and above all establishing the failure modes of AI systems well enough to trust them with zero-tolerance safety environments.
The Bottom Line
Aviation is the domain where "good enough" isn't good enough. AI will almost certainly transform air traffic control — increasing capacity, reducing controller cognitive load, and handling the routine with superhuman precision. But the idea of removing the human entirely runs into a fundamental problem: AI systems fail in ways humans don't, and humans fail in ways AI doesn't. The two failure modes are complementary, which makes the hybrid model not just politically palatable but genuinely safer than either extreme.
The man-on-the-loop model is probably where the industry lands — but making it work requires solving the vigilance and situational awareness problems that come with supervisory control, which is a human factors challenge as much as a technical one.
**********