When writing complex software, things don’t always go as planned. You implement a new feature that works perfectly well locally and in a test environment, but when your code hits the real world, everything falls apart. Sadly, that’s something we all have to deal with as software developers. On a recent project for a major telecommunications client, we needed to process more than 20 million records every night. That equated to roughly 5GB of data, and unfortunately the environment where our process ran had only 4GB of memory.
Processing such a vast amount of data brings a lot of challenges with it, especially when you also need to combine it with a few million more records stored in a database. Adding code to retrieve associated information and transform raw data might only take an extra few milliseconds per record. However, when you repeat that operation 20 million times, those few milliseconds can easily turn into hours.
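A quick back-of-the-envelope calculation shows how fast this adds up. The 5 ms figure below is a hypothetical per-record overhead chosen for illustration, not a measured number from our system:

```python
# Hypothetical per-record overhead: a DB lookup plus a transformation.
records = 20_000_000
overhead_ms = 5

total_hours = records * overhead_ms / 1000 / 3600
print(f"{total_hours:.1f} hours")  # -> 27.8 hours
```

A nightly batch job simply doesn’t have a 28-hour window, so even a tiny per-record cost becomes the whole problem at this scale.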
So there we were, asking ourselves why it was taking so long. Was it an index we forgot to add to the database? A network latency problem? These things can be very hard to pin down.
We needed to think outside the box to get around this one.