Latency numbers every programmer should know

Modern computers are very, very fast.
The C64 ran at about 1 MHz, and you could hang it just by throwing a 10,000-iteration loop at it in interpreted BASIC V2.

Even slow computers today are clocked in the gigahertz range (a typical desktop runs around 3 GHz, that is 3000 MHz!) and are superscalar: they can usually execute two or more instructions per core in parallel, and you get at least two cores even on the tiny ones (the Raspberry Pi 2 has four ARM cores, for instance). Even the slow Intel Centrino has at least two ALUs and can execute between one and two instructions in parallel (mostly arithmetic ones).
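
To get a feel for that raw speed, here is a rough C sketch (a back-of-envelope illustration, not a rigorous benchmark; numbers will vary by CPU and compiler) that times a tight loop of simple integer additions on one core. On a modern machine it typically reports a few nanoseconds or less per add, i.e. hundreds of millions to billions of simple operations per second.

```c
/* Rough single-core throughput check: how many simple integer
 * additions can one core retire per second?
 * A back-of-envelope sketch, not a rigorous benchmark.
 * Example build: gcc -O2 adds.c -o adds
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    const uint64_t iterations = 1000000000ULL;  /* one billion adds */
    volatile uint64_t sink = 0;                 /* volatile keeps the loop from being optimized away */

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (uint64_t i = 0; i < iterations; i++) {
        sink += i;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double seconds = (end.tv_sec - start.tv_sec) +
                     (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.2f ns per simple add, about %.0f million adds/second\n",
           seconds * 1e9 / iterations, iterations / seconds / 1e6);
    printf("(sink=%llu)\n", (unsigned long long)sink);
    return 0;
}
```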

So, for example, if your code takes 13 seconds to process 50 records, you have a problem, a big problem.
But to better understand the issue, let's start with this table:


Taken from Numbers Every Programmer Should Know By Year (colin-scott.github.io)

It is sad to say, but modern chips are way too fast for dynamic RAM.

So a main memory reference that misses the cache takes as much as 100 nanoseconds to complete.
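
You can measure that cost yourself with a pointer-chase. The sketch below is a rough illustration, assuming a machine whose CPU caches are far smaller than the 256 MB working set it allocates: it builds one big random cycle through the buffer and then follows it, so every hop is a dependent load that almost always misses the cache. On typical desktop hardware it prints something in the neighborhood of the 100 ns figure above.

```c
/* Pointer-chase: estimate the cost of a truly random main-memory
 * reference. A rough sketch, assuming the CPU caches are far smaller
 * than the 256 MB working set. Example build: gcc -O2 chase.c -o chase
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define N    (32u * 1024 * 1024)     /* 32M entries * 8 bytes = 256 MB */
#define HOPS (20u * 1000 * 1000)     /* 20 million dependent loads */

/* Tiny xorshift64 generator, just to avoid rand()'s small RAND_MAX. */
static uint64_t rng_state = 88172645463325252ULL;
static uint64_t xorshift64(void) {
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: turn the identity into one big random cycle,
     * so the chase keeps jumping to unpredictable addresses instead of
     * getting stuck in a small (cacheable) loop. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)(xorshift64() % i);   /* j < i */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    size_t pos = 0;
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (size_t i = 0; i < HOPS; i++) pos = next[pos];   /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    printf("%.1f ns per random memory reference (pos=%zu)\n", ns / HOPS, pos);
    free(next);
    return 0;
}
```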

And even so, a mechanical (!) hard disk takes on the order of 2 ms just to do a disk seek, roughly 20,000 times the cost of that memory reference.

So if your code does not need to make any network calls, keeps all its data in memory, and that data is small enough that no swapping occurs, each of your operations takes at worst about 100 nanoseconds.

And there are one million nanoseconds in 1 ms.
So if your code does 1 million simple operations, it should take roughly 1 ms.

If you cannot process 50 records in, let's say, about 50 ms, you are in big trouble, because you are burning more than one million nanoseconds, i.e. more than a million simple operations, per record…

For instance, if you take 1 second to process 50 records, you are spending 20 ms per record, that is 20 million nanoseconds per record: enough for roughly 200,000 uncached memory references (at 100 ns each), or tens of millions of simple in-cache operations!

Even if every single access were really random, always a cache miss, that is an enormous budget to burn on one record, and, as you can see, not the sign of a good program…
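
If you want to check where your own code stands against this budget, a tiny harness along the lines of the sketch below will do: it times a batch of records and converts the result into nanoseconds per record and the equivalent number of 100 ns cache-missing memory references. The process_record() function is purely a hypothetical placeholder for your real per-record work.

```c
/* Back-of-envelope budget check: time a batch of records, then report
 * how many nanoseconds (and how many 100 ns cache-missing memory
 * references) each record actually costs.
 * process_record() is a hypothetical placeholder for your real code.
 * Example build: gcc -O2 budget.c -o budget
 */
#include <stdio.h>
#include <time.h>

#define RECORDS 50

/* Stand-in for the real per-record work. */
static long process_record(int id) {
    long acc = 0;
    for (int i = 0; i < 1000; i++) acc += (long)id * i;   /* pretend work */
    return acc;
}

int main(void) {
    volatile long sink = 0;            /* keeps the placeholder work from being optimized away */
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int r = 0; r < RECORDS; r++) sink += process_record(r);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double total_ns = (end.tv_sec - start.tv_sec) * 1e9 +
                      (end.tv_nsec - start.tv_nsec);
    double ns_per_record = total_ns / RECORDS;

    printf("%d records in %.3f ms\n", RECORDS, total_ns / 1e6);
    printf("%.0f ns per record, a budget of about %.0f cache-missing memory references\n",
           ns_per_record, ns_per_record / 100.0);
    printf("(sink=%ld)\n", (long)sink);
    return 0;
}
```

If the number of equivalent memory references per record runs into the millions, that is the moment to profile, not to add another microservice.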

OK, just joking, do not worry: just throw more microservices at it in parallel and do not think about it…

(Not really joking: you should look at your code and understand why it is so slow!)