What is the difference between adduser and useradd? (thread)
This surprisingly halted my Linux From Scratch experience: the two commands look similar, yet one is much more convenient than the other. From this article, I learned that useradd is a native binary compiled with the system, while adduser is a Perl script that uses the useradd binary in the back end. TL;DR: use adduser, as it will handle things such as password creation and home directory creation and assignment. (I tried with useradd: even when you've created the home directory yourself, it won't be assigned to the user.)
Difference between Asymmetric and Symmetric Multiprocessing (article)
Inspired by a question asked in class, I wanted to learn more about the difference between asymmetric and symmetric multiprocessing. Articles from G4G are always great, and this one's no different. It has diagrams as well as tables to help understand the difference between these two multiprocessing types. In asymmetric multiprocessing, only the master processor runs the tasks of the OS. In symmetric multiprocessing, however, all processors are treated equally.
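To make the distinction concrete, here's a minimal Python sketch (the names and structure are my own illustration, not from the article): in AMP only a designated master processor makes scheduling decisions and hands work to the others, while in SMP every processor is equal and takes work from a shared queue itself.

```python
from collections import deque

def run_amp(num_cpus, tasks):
    """Asymmetric MP: only CPU 0 (the master) runs OS work; it dispatches
    tasks to the worker CPUs, which never make scheduling decisions."""
    queue = deque(tasks)
    assignments = {cpu: [] for cpu in range(num_cpus)}
    worker = 1  # master (CPU 0) hands tasks to workers 1..N-1 in turn
    while queue:
        task = queue.popleft()                # scheduling done by master only
        assignments[worker].append(task)
        worker = worker + 1 if worker + 1 < num_cpus else 1
    return assignments

def run_smp(num_cpus, tasks):
    """Symmetric MP: all CPUs are treated equally; each one takes the next
    task from the shared queue itself (round-robin here for determinism)."""
    queue = deque(tasks)
    assignments = {cpu: [] for cpu in range(num_cpus)}
    cpu = 0
    while queue:
        assignments[cpu].append(queue.popleft())
        cpu = (cpu + 1) % num_cpus            # every CPU, CPU 0 included, runs tasks
    return assignments
```

In the AMP sketch, CPU 0 never appears in its own assignment table (it only schedules), whereas in SMP every CPU, including CPU 0, ends up executing tasks.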
Round Robin scheduling (thread)
This thread is interesting, as it takes on a more university-test-problem style of question. The thread starter asked the following: given the recent CPU and I/O bursts and their respective lengths in time units, describe the process's state transitions. Round robin is very popular for learning about scheduling and load balancing, but it has its flaws, such as uneven work distribution. I learned a bit more about the theoretical side of this load balancer.
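As a refresher on the mechanics, here is a minimal round-robin simulation in Python (a sketch with my own names, assuming each process is just a remaining CPU-burst length and the scheduler uses a fixed time quantum):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.
    bursts: {pid: remaining CPU time}. Returns a timeline of
    (pid, start, end) tuples, one per slice a process receives."""
    ready = deque(bursts.items())
    timeline, clock = [], 0
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)         # run for at most one quantum
        timeline.append((pid, clock, clock + run))
        clock += run
        if remaining > run:                   # unfinished: back of the queue
            ready.append((pid, remaining - run))
    return timeline
```

For example, `round_robin({"P1": 5, "P2": 3}, 2)` interleaves the two processes: P1 gets three slices (2+2+1) while P2 gets two (2+1), a small hint at the uneven distribution mentioned above.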
Introduction to CPU Scheduling (video)
Neso Academy is my go-to YouTube channel for digital systems, computer organization, and now operating systems. Their videos are complete and detailed, and this one's no different. I now understand that CPU scheduling is how we assign the CPU to different processes over time. We don't want the CPU to remain idle, i.e. sitting and waiting for something to complete. By assigning the CPU to another process during that waiting time, we reduce idle time and maximize CPU utilization.
CPU and I/O Burst Cycles (video)
Now that I've understood CPU scheduling, the next step is to learn about CPU and I/O burst cycles. This video is also from Neso Academy, very detailed, but I'll try to summarize what I've just learned. A process can be in one of two states: CPU execution or I/O wait, and it alternates between the two. A CPU burst is the time a process spends in the CPU execution state; the same goes for I/O bursts.
Load Balancer – System Design Interview Question (article)
For the rest of this week's links, I'll be approaching this week's topics from a software engineering angle, as I feel it would benefit me the most and help me get ready for interviews. First up is load balancers: I'd heard about L4 and L7 load balancers, but what are they exactly? G4G was the right choice to learn this, as this article is very complete. L4 sits at OSI layer 4, the transport layer (TCP/SSL), which is why L4 is also referred to as Network Load Balancing. L7, on the other hand, sits at OSI layer 7, the application layer (HTTP/HTTPS), and is therefore referred to as an Application Load Balancer or HTTP(S) load balancer. Finally learned the difference!
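A toy Python sketch of the difference (my own illustration, not from the article): an L4 balancer only sees transport-level facts such as the client address, while an L7 balancer terminates HTTP and can route on the request itself.

```python
import zlib

def l4_pick(client_ip, backends):
    """L4 (transport layer): cannot see the HTTP payload, so it routes on
    connection-level data only, e.g. a stable hash of the client IP."""
    return backends[zlib.crc32(client_ip.encode()) % len(backends)]

def l7_pick(http_path, pools):
    """L7 (application layer): the balancer can inspect the HTTP request,
    so it can route by URL path prefix, headers, cookies, etc."""
    for prefix, backend in pools:
        if http_path.startswith(prefix):
            return backend
    return pools[-1][1]  # fall through to the last (default) pool
```

With `pools = [("/video/", "video-pool"), ("/", "web-pool")]`, an L7 balancer can send `/video/42` to a dedicated video-processing pool, something an L4 balancer fundamentally cannot do because it never parses the request.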
Round robin is a great example. Weighted Round Robin sends more of the work to servers with better resources (e.g. a server dedicated to video processing), yet it still shares the original round robin's flaw of uneven work distribution. Other examples are the Random algorithm and User IP Hashing. The Least Load algorithm addresses the problems of static load balancing by allocating each request to the server with the least load at the current time; this is obviously more complex to implement. Another great one to learn is Power-of-d-choices, which is efficient even when d is small, and which behaves like the least-load algorithm as d approaches the number of servers.
CFS: Completely fair process scheduling in Linux (article)
This article is very interesting, as it offers more than its title suggests. I first heard about preemptive scheduling in class, which works by allowing a task to hold a processor for a fixed amount of time until it is preempted in favor of some other task. A typical model also has explicit priorities, meaning higher-priority processes get scheduled first. CFS, on the other hand, computes the amount of time for a given task dynamically. The CFS scheduler has a target latency, and each runnable task gets a 1/N slice of the target latency, where N is the number of runnable tasks. The "fair" in CFS comes from this even distribution of the target latency across the N tasks.
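The 1/N rule can be written down directly (a simplified sketch of the idea from the article; real CFS also weights each slice by the task's nice value, which I'm ignoring here):

```python
def cfs_timeslices(target_latency_ms, tasks):
    """Each of the N runnable tasks gets a 1/N slice of the target
    latency, so every task runs once per target-latency window."""
    n = len(tasks)
    slice_ms = target_latency_ms / n
    return {task: slice_ms for task in tasks}

# with a 20 ms target latency and 4 runnable tasks, each task gets 5 ms
slices = cfs_timeslices(20.0, ["A", "B", "C", "D"])
```

Note how the slice shrinks as more tasks become runnable: with 10 tasks each would get only 2 ms, which is why real CFS also enforces a minimum granularity so slices don't become absurdly small.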