The application had been running for 12 years. It was built in VB6, connected to a SQL Server 2008 database, and processed every order the company handled. About 300 transactions per day, five days a week. The team of 15 people who used it knew every quirk, every workaround, every button that you had to click twice because the first click never registered.
And it was falling apart.
The original developer had moved on years ago. Nobody had the source code. The software ran on Windows 7 machines that couldn't be updated because the app wouldn't work on anything newer. Finding replacement hardware when a machine died meant buying refurbished PCs on eBay. That's not IT strategy. That's survival mode.
This isn't an unusual story. We hear some version of it every few months from businesses that have been running on legacy desktop software for a decade or more. The application works (mostly), so there's never a good time to replace it. Until suddenly there is, because the risk of keeping it running outweighs the cost of rebuilding.
The Fear Is Justified
Let's be honest about why companies put off legacy replacement. The fear isn't irrational.
Your team has been trained on this software for years. They know where everything is. They've built their daily routines around it. Dropping a completely new application on their desks is disruptive, and disruption in a business that processes hundreds of transactions daily means errors, slowdowns, and unhappy customers.
Then there's the data. A 12-year-old database contains the entire history of the business. Migrating that data incorrectly could mean lost records, broken references, or reporting that doesn't add up. For any company in a regulated industry, that's not just inconvenient. It's a compliance problem.
And the biggest fear of all: what if the new system doesn't do everything the old one did? What if there's some obscure feature that three people use every Friday afternoon that nobody mentioned during requirements gathering, and now they can't do their jobs?
All of these fears are legitimate. We've seen every one of them play out on projects (not ours, thankfully) where the team tried to do a big-bang replacement. Flip the switch on Monday, hope for the best. That approach has about a 40% success rate in our experience, and the 60% that goes wrong goes really wrong.
Our Approach: Parallel, Not Replacement
Here's how we handle it differently. Instead of building the new system and flipping a switch, we run old and new in parallel. The process has four phases, and the timeline depends on the size and complexity of the application.
Phase 1: Understand What Actually Exists
Before writing any code, we spend 2-3 weeks living inside the old application. We sit with users (remotely or on-site) and watch them work. Not what they say they do. What they actually do.
In the project I mentioned at the top, this phase uncovered 23 features that weren't in any documentation. Some were workarounds that staff had invented to compensate for bugs. Some were genuinely important workflows that had been added by the original developer as one-off requests over the years. If we'd skipped this step and just built based on a requirements document, we would have missed about a third of what the application actually needed to do.
We document everything. Every screen, every report, every data flow. We also document what's broken, what's slow, and what people wish worked differently. That last part is gold. It means the new system won't just replicate the old one. It'll be better.
Phase 2: Build the Core, Match the Muscle Memory
We start building the new application in .NET with DevExpress, and here's where we make a deliberate choice that some development teams wouldn't. We match the general layout and flow of the old application.
Not because the old design was great. Usually it wasn't. But because 15 people who use this software 8 hours a day have trained their hands and eyes to navigate it a certain way. If we completely reinvent the interface, we're adding weeks of retraining and months of frustrated users making mistakes.
So the new version keeps familiar navigation patterns. The main menu is in a similar place. The order entry screen has the same general flow: select customer, add line items, review, submit. We improve the details (better search, faster grids, fewer clicks to accomplish common tasks), but the overall feel is recognizable.
This isn't about being unimaginative. It's about respecting the fact that your team has real work to do, and they can't stop doing it while they learn a completely new interface.
Phase 3: Parallel Operation
This is the phase that separates a smooth transition from a disaster. Once the core modules of the new application are built and tested, we deploy them alongside the old system. Both systems read from and write to the same database (through a synchronization layer we build specifically for this purpose).
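The details of a synchronization layer depend entirely on the two schemas involved, but the core idea is translation: the new application works with a clean, structured model, while the layer maps every read and write onto the shape the legacy application expects, so both see consistent data. A minimal sketch in Python for illustration (the real layer was built in .NET; all field and status names here are hypothetical):

```python
# Illustrative schema-translation layer letting a new application share a
# legacy database with the old one. Field names and status codes are made up.
from dataclasses import dataclass

@dataclass
class Order:                # the new application's structured model
    customer_id: int
    status: str             # "open" | "shipped" | "cancelled"

# The legacy schema stored status as a one-letter code.
LEGACY_STATUS = {"open": "O", "shipped": "S", "cancelled": "C"}
NEW_STATUS = {v: k for k, v in LEGACY_STATUS.items()}

def to_legacy_row(order: Order) -> dict:
    """Translate a new-model order into the row shape the old app expects."""
    return {"CustID": order.customer_id, "Stat": LEGACY_STATUS[order.status]}

def from_legacy_row(row: dict) -> Order:
    """Translate a legacy row back into the new model."""
    return Order(customer_id=row["CustID"], status=NEW_STATUS[row["Stat"]])
```

Because every write goes through the same translation in both directions, a record entered in either application round-trips cleanly, which is what makes it safe to split daily work across the two systems.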
The team starts using the new application for specific tasks while continuing to use the old one for everything else. Maybe they do all new order entry in the new system, but still run their end-of-week reports from the old one.
This accomplishes three things. First, users get comfortable with the new software gradually, not all at once. Second, we catch issues in production with a safety net. If something goes wrong in the new system, the old one is still right there. Third, it gives us real-world feedback that no amount of testing can replicate.
The parallel period typically lasts 4-8 weeks, depending on how complex the application is.
Phase 4: Cutover and Decommission
Once the team is comfortable and all modules have been moved to the new application, we schedule the cutover. By this point, most users are already doing 80-90% of their work in the new system, so the cutover is more of a formality than a shock.
We keep the old system accessible (in read-only mode) for another 30-60 days. This gives people a way to look up historical data or verify that something migrated correctly. After that, the old application gets archived.
Data Migration Deserves Its Own Conversation
Moving 12 years of data from an old database to a new one is not a simple copy-paste job. Schemas change. Field definitions change. Data that was stored as free text in the old system might need to be structured in the new one.
We write migration scripts that we run repeatedly during development. The first run always reveals problems: orphaned records, duplicate entries, fields that contain data nobody expected (we once found an address field that someone had been using to store delivery instructions for 6 years).
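Because the scripts are rerun throughout development, each pass starts with the same consistency checks, so problems surface early instead of on migration night. A minimal sketch, in Python for illustration (the real scripts targeted SQL Server; the table and field names here are hypothetical), of the kind of check that flags orphaned and duplicate records:

```python
# Illustrative pre-migration check: flag orphaned and duplicated rows so a
# migration pass can be re-run safely. Table and field names are invented.
from collections import Counter

def find_problems(orders, customers):
    """Return (orphaned order ids, duplicated order ids)."""
    known = {c["id"] for c in customers}
    # Orphaned: order points at a customer id that no longer exists.
    orphaned = [o["id"] for o in orders if o["customer_id"] not in known]
    # Duplicates: the same order id appears more than once.
    counts = Counter(o["id"] for o in orders)
    duplicates = [oid for oid, n in counts.items() if n > 1]
    return orphaned, duplicates

customers = [{"id": 1}, {"id": 2}]
orders = [
    {"id": 10, "customer_id": 1},
    {"id": 10, "customer_id": 1},   # duplicate entry
    {"id": 11, "customer_id": 99},  # orphaned: customer 99 is gone
]
orphaned, duplicates = find_problems(orders, customers)
```

Anything the checks flag goes on a manual-review list rather than silently into the new database, which is how surprises like repurposed free-text fields get caught before they become lost data.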
By the time we do the final production migration, we've already run the scripts a dozen times. We know exactly what to expect, how long it takes, and what needs manual review afterward.
What It Costs and How Long It Takes
Every project is different, but for a mid-size business application (20-40 screens, multi-user, reporting, data migration), the typical timeline is 4-7 months from kickoff to full cutover. That includes the parallel operation period.
Cost depends on complexity, but we find that legacy replacement projects run about 30-40% less than building the equivalent application from scratch. The old system, despite its problems, provides a detailed blueprint of what needs to be built. That saves significant time in requirements gathering and design.
The Right Time Is Before You're Desperate
The worst time to rebuild a legacy application is when the old system has already failed. When the server dies and there's no backup, when the OS is so outdated it's a security liability, when the last person who understood the code has retired.
The best time is while the old system still works, giving you the luxury of a proper parallel transition instead of a panicked replacement.
If you're running a business-critical desktop application that's getting harder to maintain, we've been through this process many times. We're happy to take a look at your situation and give you an honest assessment of what a rebuild would involve. Sometimes the answer is "you've got another 3 years before it's urgent." Sometimes it's "you should have started last year." Either way, better to know now.

