Latency and throughput

What I learned:


Latency and throughput are the two most important things to consider when measuring the performance of a system.



1) Latency is basically how long it takes for data to get from point A to point B. This could be anything from fetching data from a client to a server, reading from RAM, reading from an SSD, etc. An API call over the network will obviously have higher latency than reading from an SSD, and reading from an SSD will have higher latency than reading from RAM. When designing a system, it's important to think about the latency of your system. For example, if you have a streaming service like Netflix, it's important not to have users constantly see "video buffering" because of latency issues, so maybe you need to have servers closer to the clients.
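The point about different operations having very different latencies can be sketched with a quick timing experiment. This is a minimal illustration, not a real benchmark: the "disk read" is simulated with a `time.sleep` of a few milliseconds (an assumed number, just to stand in for slower I/O), while the in-memory read is a plain dict lookup.

```python
import time

def measure_latency_ms(operation):
    """Time a single call and return its latency in milliseconds."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000

data = {"key": "value"}

def read_from_memory():
    # An in-memory lookup: typically well under a microsecond.
    return data["key"]

def simulated_disk_read():
    # Pretend the SSD takes ~5 ms; a real read would vary by hardware.
    time.sleep(0.005)
    return "value"

memory_ms = measure_latency_ms(read_from_memory)
disk_ms = measure_latency_ms(simulated_disk_read)
print(f"memory: {memory_ms:.4f} ms, simulated disk: {disk_ms:.2f} ms")
```

Even with the simulated numbers, the gap between the two calls is orders of magnitude, which is the same shape of gap you see between RAM, SSD, and network hops in practice.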


2) Throughput is how much work a machine can perform in a given amount of time. A good way to think about it is as the diameter of a pipe: the wider the pipe, the more water flows through it per second. A real-world way to think about it is that if you have a lot of clients, throughput is how many client requests the server can handle in a given amount of time. You can see how throughput can easily become the "bottleneck" of your system. A good way to improve throughput is to increase the number of servers.
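The "requests handled per unit of time" idea can be sketched the same way. Here a hypothetical `handle_request` function simulates ~1 ms of work per request (an assumed figure), and we count how many requests one "server" completes before a deadline.

```python
import time

def handle_request():
    # A stand-in for real work; assume each request takes ~1 ms.
    time.sleep(0.001)

def measure_throughput(duration_s):
    """Count requests completed in duration_s seconds; return requests/sec."""
    count = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        handle_request()
        count += 1
    return count / duration_s

rps = measure_throughput(0.25)
print(f"one server handles roughly {rps:.0f} requests/second")
```

Under this toy model, doubling the number of servers roughly doubles the requests per second the system can absorb, which is why adding servers is the usual first lever for a throughput bottleneck.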
