When the Radar Killed the Network
We've just solved one of the most bizarre networking problems I've encountered in my time at CSC. It took weeks of troubleshooting, endless theories, and more than a few late nights. The answer, when we finally found it, was something none of us would have guessed.
The client is British Aerospace at their Farnborough site - home of the famous Farnborough Airshow. They're one of our largest clients, and we're based just down the road at Spectrum Point, so we're on-site regularly. They'd recently upgraded to a modern Ethernet network connecting all their workstations - mostly Compaq 486DX and 486SX machines - to servers, printers, and other network devices.
The Problem: Friday at 3pm
The issue was maddeningly consistent and completely baffling. Every Friday afternoon at approximately 3pm, the entire network would go down. Not gradually, not partially - the whole thing would just die. Workstations would lose connection to servers, printers would drop offline, network drives would disappear. Then, after about fifteen to twenty minutes, everything would come back up as if nothing had happened.
Only Fridays. Only around 3pm. Like clockwork.
When you're troubleshooting network issues, consistency is usually helpful. At least you can predict when the problem will occur and monitor what's happening. But when the pattern is this specific and this regular, it stops being helpful and starts being mystifying. What happens at 3pm on Fridays that doesn't happen any other time?
Weeks of Troubleshooting
Paul Mariotti, Eddie Felmer, Greg Giles, and I spent weeks chasing this problem. We started with the obvious suspects:
- Scheduled tasks? We checked every server for cron jobs, scheduled backups, anything that might run at 3pm on Fridays. Nothing.
- Network traffic spikes? Maybe everyone was trying to access something before the weekend. But monitoring showed normal traffic patterns right up until the crash.
- Hardware failures? We tested switches, routers, hubs. Swapped out components. Checked power supplies. Everything tested fine.
- Cable problems? We traced runs, tested continuity, looked for physical damage. The cabling was new - it shouldn't be the issue.
- Software conflicts? Maybe some application running on the workstations. But they were all standard configurations, and the problem affected everything simultaneously.
Every theory led nowhere. We'd show up on Friday afternoons with monitoring equipment, ready to capture whatever was happening. And every Friday at 3pm, like clockwork, the network would die. We'd see the packets stop flowing, watch the connections drop, and have absolutely no idea why.
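What would have helped was a timestamped reachability log running unattended all week, so the outage windows could be lined up against anything else happening on site. Below is a minimal sketch of that kind of logger, written in Python purely for illustration; the target hostnames and polling interval are placeholders, not the setup we actually used.

```python
#!/usr/bin/env python3
"""Minimal unattended outage logger (illustrative sketch only).

Pings a handful of targets on a fixed interval and timestamps any
failures, so outage windows can later be correlated with site events
(radar tests, plant switching, scheduled jobs, ...).
Hostnames and interval are placeholders, not real site values.
"""

import subprocess
import time
from datetime import datetime

TARGETS = ["fileserver", "printserver", "router"]  # placeholder names
INTERVAL_SECONDS = 30


def is_reachable(host: str) -> bool:
    """Send a single ICMP echo request; True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],  # Linux-style ping flags
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def main() -> None:
    while True:
        stamp = datetime.now().isoformat(timespec="seconds")
        down = [host for host in TARGETS if not is_reachable(host)]
        if down:
            print(f"{stamp} DOWN: {', '.join(down)}", flush=True)
        else:
            print(f"{stamp} ok", flush=True)
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```

Run something like this from a machine on the affected segment, redirect the output to a file, and you get a record, to the minute, of when the segment goes quiet and when it recovers.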
The Human Element
Finally, out of desperation, we started asking different questions. Not "what's wrong with the network?" but "what happens at this site at 3pm on Fridays?"
We talked to security. We talked to facilities. We talked to people who'd worked at the site for years. Most of them shrugged - nothing unusual happens at 3pm on Fridays. It's nearly the weekend, people are winding down, but nothing specific.
Then someone mentioned, almost as an afterthought: "Oh, that's when they test the radar systems."
The radar systems. Of course - we're at an aerospace facility. They have radar equipment in the aerospace park. And they test it weekly. On Fridays. At 3pm.
The Physics of the Problem
Once we knew what to look for, the problem became clear - and absolutely fascinating from a technical standpoint. The radar testing was generating powerful electromagnetic pulses. Our new Ethernet network, with hundreds of meters of copper cabling running throughout the facility, was acting as a giant antenna.
The copper cables were picking up the radar signals and converting them into electrical interference on the network. The signals were powerful enough to completely overwhelm the legitimate network traffic. Packets weren't being lost due to collisions or errors - they were being drowned out by electromagnetic noise.
It's actually kind of elegant, in a frustrating way. The network infrastructure itself - the very cables designed to carry data - had become a massive receiving antenna for signals it was never meant to handle. Every cable run, every network drop, every connection point was picking up the radar pulses and injecting noise into the system.
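For a rough sense of the scale involved, here's a back-of-envelope sketch - the field strength and effective length below are illustrative assumptions, not anything we measured on site:

```latex
% Back-of-envelope only: E and \ell_{eff} are assumed for illustration,
% not values measured at Farnborough.
% A crude model: an unshielded cable in an RF field of strength E picks
% up a common-mode voltage roughly proportional to its effective length.
\[
  V_{\mathrm{induced}} \approx E \cdot \ell_{\mathrm{eff}}
\]
% With, say, E = 10 V/m and an effective length of just 1 m:
\[
  V_{\mathrm{induced}} \approx 10\,\mathrm{V/m} \times 1\,\mathrm{m} = 10\,\mathrm{V}
\]
% compared with a 10BASE-T differential signal of roughly ±2.5 V.
% Twisted pair rejects much of the common-mode pickup, but imperfect
% balance converts some of it into differential noise, and at that
% ratio the data is simply swamped.
```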
Why didn't we see this during installation and testing? Because none of that work happened to coincide with the Friday 3pm radar test window. The system worked perfectly - until it was exposed to the very specific electromagnetic environment of those Friday afternoon tests.
The Solution: Fiber to the Desktop
Once we understood the problem, the solution was obvious, if expensive: rip out all the copper cabling and replace it with fiber optic cable. Fiber carries light through glass rather than electrical signals through copper, so there's nothing for the radar pulses to couple into. The radar can pulse away all it wants, and the fiber won't pick up a thing.
This is a massive undertaking. We're talking about rewiring an entire facility - pulling out perfectly good (well, technically good) copper Ethernet cabling and replacing it with fiber optic runs. Fiber to every desktop, every printer, every network device. In 1996, this is cutting-edge stuff. Most organizations are still running coax or 10BASE-T copper. Fiber to the desktop is almost unheard of outside of specialized environments.
But BAe doesn't have much choice. They can't stop testing their radar systems - that's core to their business. They can't move the network to a different building - it needs to be where the people and equipment are. And they can't shield hundreds of meters of copper cabling effectively enough to block radar-level signals.
So fiber it is. The project is going to take months and cost a fortune. But it will solve the problem definitively. No electromagnetic interference, no Friday afternoon network crashes, no more troubleshooting sessions trying to figure out why everything dies at exactly 3pm.
Lessons Learned
This experience has taught me several things about troubleshooting complex technical problems:
Look beyond the technical. We spent weeks examining logs, testing hardware, analyzing network traffic. The answer wasn't in any of that data. It was in understanding the environment - what else happens in the building that might affect our systems?
Question your assumptions. We assumed the problem was in the network infrastructure, the servers, the workstations - the IT equipment. It never occurred to us that radar systems in a completely different part of the facility could be the culprit. Why would it? We're IT people, not RF engineers.
Talk to people who aren't IT professionals. The answer came from asking facilities people, long-time employees, security staff - people who understand the building and its operations in ways we don't. They knew about the radar testing. We didn't, because it wasn't "IT relevant." Turns out it was very relevant.
Sometimes the infrastructure is the antenna. This is a new one for the troubleshooting handbook. In traditional phone systems, we worry about electromagnetic interference causing noise on voice calls. But with data networks, we're running higher frequencies over longer cable runs, creating more opportunities for cables to act as antennas. As we push more data over copper, electromagnetic compatibility is going to become a bigger issue.
Document the weird ones. This is going in every training manual, every case study, every "war stories" session we do. Because the next time someone encounters intermittent network failures that occur at specific times, they need to know to ask: "What else is happening in the environment at those times?"
The Broader Implications
I suspect this problem is going to become more common as we deploy more networks in industrial and specialized environments. Factories have heavy electrical equipment that generates electromagnetic noise. Hospitals have medical imaging systems. Research facilities have all sorts of equipment that generates RF signals. Military and aerospace installations - like BAe - have radar and communications equipment.
For decades, networks were relatively immune to these concerns because they were low-speed, used thick shielded coax, and were mostly isolated from high-EMI environments. But as we move to higher speeds, lighter cabling (like the UTP we used at BAe), and deploy networks everywhere, electromagnetic compatibility is going to matter more and more.
Fiber optic cabling might become the standard solution for high-EMI environments. Yes, it's more expensive. Yes, it's harder to work with. Yes, it requires different skills and tools. But it's immune to electromagnetic interference, it doesn't conduct electricity (so no ground loops or electrical hazards), and it supports higher speeds over longer distances than copper.
The BAe installation might be one of the first large-scale fiber-to-the-desktop deployments in the UK, driven not by bandwidth requirements but by electromagnetic immunity. That's probably not what the fiber manufacturers expected, but it might end up being a major driver for fiber adoption in specialized environments.
A Problem Solved
After weeks of frustration, we finally have an answer. It's not the answer any of us expected - we were all thinking in terms of network protocols, hardware failures, configuration issues. None of us considered that the building itself, with its radar systems and aerospace testing facilities, would be part of the problem.
The fiber optic installation is going to be a major project. But at least we know it will work. And we've learned something valuable about the intersection of networking technology and the physical environment it operates in.
Plus, we have one hell of a story for the next time someone asks us about an unusual troubleshooting case.
"Well, there was this time at Farnborough when the radar kept killing the network every Friday afternoon..."