Posts

Caching

Caching reduces latency and speeds up a system. We store data in a different location than where it was originally accessed so that it can be accessed faster. When do we need caching?
1) We have clients repeating the same network requests. For example, GraphQL can cache data on the client side so that we avoid fetching the same requests repeatedly, or we can cache static data like CSS/HTML in the client's browser on first load.
2) We perform computationally expensive operations. We can cache the result to avoid repeating the same operation and speed up the system.
3) We have multiple servers accessing the same data from a database. We can avoid this by having the servers fetch from a cache instead of the database, or we can set up the cache on the server side (a minimal sketch of this is shown below).
Caching when writing: when you are making write or edit requests, you may run into a problem with having 2 sources of truth. For example, you may have a cache of the data on the server as well as having it on the database
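To make the idea concrete, here is a minimal sketch of a read-through cache on the server side; fetchUserFromDb is a hypothetical stand-in for whatever your real database call is:

// Minimal read-through cache: check the cache first, only hit the database on a miss
const cache = new Map()

async function getUser(id) {
  if (cache.has(id)) {
    return cache.get(id) // cache hit: skip the database entirely
  }
  const user = await fetchUserFromDb(id) // cache miss: go to the source of truth
  cache.set(id, user) // store the result so the next request is fast
  return user
}

In a real system the cache would usually also need a TTL and an eviction policy so stale data doesn't live forever.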

Generative AI

What is Generative AI? A sub-field of AI that focuses on generating content such as text, images, audio, code, video, etc. We use mathematical models to approximate the content.
Basic steps of Generative AI:
1) We use large data sets as inputs, for example images or text.
2) We feed the data to deep learning models to discern a pattern.
3) We accomplish a task such as generating new images.
What is an LLM? An advanced model that leverages Generative AI and is trained on large datasets to excel in language processing tasks.
How does an LLM work? There are 3 steps (a toy sketch of this flow is shown below):
1) Encoding - takes in text and converts it into tokens (numerical representations). Tokens with similar word meanings are also put "closer into vector space".
2) Transforming - once encoding is done, the tokens are fed through a learning model. Sometimes a human is used in this step to guide the model.
3) Decoding - converts the tokens back into human words.
How to build LLM applications: prices are coming down year over year
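Here is a toy sketch of the encode/transform/decode shape of the pipeline; this is not how a real LLM works internally, and the vocabulary and functions are made up purely for illustration:

// Toy pipeline: words -> token ids -> (model) -> token ids -> words
const vocab = ['hello', 'world', 'cats', 'dogs']

const encode = text => text.toLowerCase().split(' ').map(word => vocab.indexOf(word)) // words -> token ids
const transform = tokens => tokens // placeholder for the actual learned model
const decode = tokens => tokens.map(id => vocab[id]).join(' ') // token ids -> words

console.log(decode(transform(encode('hello world')))) // "hello world"

A real LLM replaces transform with billions of learned parameters and uses a learned tokenizer rather than a hand-written vocabulary.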

Availability

What I learned: Availability is really important. It is the odds of a service being available at any given time, measured as a percentage. A customer implicitly expects a high level of availability when they are paying for something. It's especially important to think about availability when designing critical systems like life-supporting hospital software, or systems that are far-reaching and widely consumed such as cloud services.
In the industry, since low availability such as 85% is unacceptable, we measure it in nines. If you have 99% availability, we say it has 2 nines of availability. If you have 99.99%, then you have 4 nines of availability. We usually care about the downtime per year. For example, 2 nines equates to 3.65 days of downtime per year. 3.65 days is still pretty bad; imagine if YouTube was down that much per year. We usually regard 5 nines as the gold standard in the industry, which equates to about 5 minutes of downtime per year (a quick calculation below makes these numbers concrete).
1) SLA (service level agreement) -
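Since the nines map directly to downtime, you can sanity-check these numbers with a little arithmetic; a rough sketch:

// Rough downtime-per-year calculator for a given availability percentage
const minutesPerYear = 365 * 24 * 60 // 525,600 minutes

const downtimeMinutesPerYear = availability => (1 - availability / 100) * minutesPerYear

console.log(downtimeMinutesPerYear(99))     // ~5,256 minutes (~3.65 days), 2 nines
console.log(downtimeMinutesPerYear(99.99))  // ~53 minutes, 4 nines
console.log(downtimeMinutesPerYear(99.999)) // ~5 minutes, 5 nines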

Code Splitting in React

This is an easy way to increase the performance of your front-end React app. The idea is to only load your components or code when you need to. This becomes even more effective at scale, as your project gets bigger and bigger.
1) dynamic importing - You can choose to import a file only when the app needs it. For example, say you have a simple app that imports a file named sum:

import React from 'react'
import { sum } from './sum'

const Test = () => {
  return (
    <div>
      <button onClick={() => alert(sum(2, 2))}></button>
    </div>
  )
}

export default Test

We can change it to only import the file when the user clicks the button:

import React from 'react'
// import { sum } from './sum'

const Test = () => {
  return (
    <div>
      <button onClick={() => {
        import("../sum.js").then(module => {
          alert(module.sum(2, 2))
        })
      }}></button>
    </div>
  )
}

export default Test

New features in Node.js

What I learned: Some new handy features in Node.js.
1) fetch - The Fetch API comes pre-built as of Node v21! This means we don't have to install any additional packages to use it outside of browser environments. Fewer dependencies is always better in the long run, not only because a third-party library is maintained by who knows who, but also because if you need to update Node in the future, a lot of your packages might not be compatible with the newer Node.

fetch('http://yourUrl')
  .then(res => res.json())
  .then(data => console.log(data))

2) try catch - Before, if you needed to make some async calls and handle them, you had to do some acrobatics to use async/await. For example, you had to set up a function and then invoke it:

const fetchData = async () => {
  const response = await fetch('http://yourUrl')
  const data = await response.json()
  console.log(data)
}

fetchData()

Now you can simply use try/catch, which makes it simpler.
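A minimal sketch of that, assuming the file runs as an ES module so top-level await is allowed:

try {
  const response = await fetch('http://yourUrl')
  const data = await response.json()
  console.log(data)
} catch (err) {
  console.error('request failed', err) // errors from the fetch or the JSON parsing land here
}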

Latency and throughput

What I learned: Latency and throughput are the 2 most important things to consider when measuring the performance of a system.
1) Latency is basically how long it takes for data to get from point A to point B. This could be anything from fetching data from client to server, reading from RAM, reading from SSD, etc. An API call will obviously have higher latency than reading from SSD, and reading from SSD will have higher latency than reading from RAM. When designing a system, it's important to think about the latency of your system. For example, if you have a streaming service like Netflix, it's important that users don't constantly see "video buffering" because of latency issues, so maybe you need servers closer to the clients.
2) Throughput is how much work a machine can perform in a given time. A good way to think about it is that throughput is like the diameter of a pipe: the wider the pipe, the more data can flow through at once. A real-world way to think about it is if you have a lot of
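As a rough illustration, the latency of a single request can be measured by timing it; throughput would instead be measured as how many such requests (or how many bytes) the system handles per second. A minimal sketch, assuming an ES module so top-level await works:

const start = performance.now()
const res = await fetch('http://yourUrl') // any call whose latency you want to measure
await res.json()
console.log(`request took ${(performance.now() - start).toFixed(1)} ms`)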

CI/CD

What I learned today: CI/CD stands for continuous integration and continuous delivery. It is a set of practices and tools to streamline and improve the process of software development and delivery. Specifically, we are talking about doing the build, deploy, and test phases in small, frequent steps. Overall, CI/CD makes delivery more efficient and reduces the manual overhead involved.
Continuous Integration - Automating the frequent merging of code, often many times a day. An important part of this process is version control using a shared repository like git to catch merge conflicts before we ship. The next step after merging is to compile, test the code (ideally, this is what we should do, but I have worked in repos where we didn't do testing, and we were siloed so I couldn't do much about it), and build.
Continuous Delivery - Automates the deployment process so that changes to the codebase can be automatically released. Once the code passes all the tests, the pipeline automatically deploys to a staging or production environment