Top 10 List of Week 05

  1. A Guide to the Linux TOP command
    Since top was one of the commands we needed to run this week, I had to look it up. This article provides a complete guide to navigating the top command’s output, including what each section is all about and how to read the information. Turns out, top is basically Linux’s counterpart to Windows’ Task Manager, with slight differences.

  2. The secret behind the modern computer’s efficient memory management
    It’s paging. Before paging, memory was divided into contiguous sections and the OS allocated memory to processes as if it were solving a puzzle: fitting each process into whatever space was available. That approach causes a lot of fragmentation, and as more and more processes are added and removed, the memory space gets broken into smaller and smaller pieces. Paging works by separating logical memory from physical memory and mapping fixed-size pages onto physical frames.
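
    To make the idea concrete, here’s a minimal sketch in Java of how a logical address splits into a page number and an offset and gets mapped to a frame; the 4 KiB page size and the page-table contents are made up just for illustration.

      // Sketch of logical-to-physical address translation under paging.
      // The page table here is hypothetical; the real one is managed by the OS.
      public class PagingDemo {
          static final int PAGE_SIZE = 4096;           // 4 KiB pages
          static final int[] pageTable = {5, 2, 7, 0}; // pageTable[pageNumber] = frameNumber

          static long translate(long logicalAddress) {
              int pageNumber = (int) (logicalAddress / PAGE_SIZE);
              int offset     = (int) (logicalAddress % PAGE_SIZE);
              int frame      = pageTable[pageNumber];  // a non-resident page would trigger a page fault
              return (long) frame * PAGE_SIZE + offset;
          }

          public static void main(String[] args) {
              long logical = 2 * PAGE_SIZE + 123;      // page 2, offset 123 -> frame 7
              System.out.printf("logical %d -> physical %d%n", logical, translate(logical));
          }
      }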

  3. What is a Paging File or Pagefile as Fast As Possible
    This video briefly explains how computers, Windows machines especially, take advantage of a paging file as an overflow area for system memory. Apparently, you can customize the size of this paging file to “increase performance”, but the video also mentions the trade-offs that come with doing so.

  4. Page Replacement Algorithms
    Page replacement is used in case of a page fault, which is when a running program accesses a memory page that’s not loaded in physical memory. The algorithms decide which resident page to evict. In FIFO, the OS keeps track of all pages in a queue, and when a page needs to be replaced, the one at the front of the queue is selected for removal. In Optimal page replacement, the page replaced is the one that won’t be used for the longest time in the future, whereas in LRU, the page that’s replaced is the least recently used one.
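
    Of the three, FIFO is the simplest to sketch; below is a rough Java version (reference string and frame count made up) that just counts page faults to show the mechanics.

      import java.util.ArrayDeque;
      import java.util.HashSet;
      import java.util.Queue;
      import java.util.Set;

      // Rough sketch of FIFO page replacement: count page faults for a
      // reference string given a fixed number of physical frames.
      public class FifoPageReplacement {
          static int countFaults(int[] references, int frames) {
              Queue<Integer> arrivalOrder = new ArrayDeque<>(); // oldest resident page at the front
              Set<Integer> resident = new HashSet<>();
              int faults = 0;
              for (int page : references) {
                  if (resident.contains(page)) {
                      continue;                                 // hit: nothing to replace
                  }
                  faults++;
                  if (resident.size() == frames) {
                      resident.remove(arrivalOrder.poll());     // evict the page that arrived first
                  }
                  resident.add(page);
                  arrivalOrder.add(page);
              }
              return faults;
          }

          public static void main(String[] args) {
              int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
              System.out.println("FIFO page faults with 3 frames: " + countFaults(refs, 3));
          }
      }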

  5. Caching - Simply Explained
    A video about caching and how it stores frequently requested things closer to whoever is asking for them. It’s a nice refresher on last term’s lecture material about caching. The video also mentions a book titled “Algorithms to Live By” by Brian Christian and Tom Griffiths.
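
    As a small tie-in with the LRU idea from item 4 (my own addition, not something from the video), here’s a quick Java sketch of an LRU cache built on LinkedHashMap’s access-order mode; the capacity and keys are arbitrary.

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Minimal LRU cache: LinkedHashMap in access-order mode evicts the
      // least recently used entry once the capacity is exceeded.
      public class LruCache<K, V> extends LinkedHashMap<K, V> {
          private final int capacity;

          public LruCache(int capacity) {
              super(16, 0.75f, true); // accessOrder = true keeps recently used entries last
              this.capacity = capacity;
          }

          @Override
          protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
              return size() > capacity; // drop the least recently used entry
          }

          public static void main(String[] args) {
              LruCache<String, String> cache = new LruCache<>(2);
              cache.put("a", "1");
              cache.put("b", "2");
              cache.get("a");      // touch "a" so "b" becomes least recently used
              cache.put("c", "3"); // evicts "b"
              System.out.println(cache.keySet()); // prints [a, c]
          }
      }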

  6. The Trouble with Memory
    The presentation talks about performance trouble spots and argues that the biggest problem is excessive memory churn. One way to measure memory churn is to analyze the garbage collection logs in Java. Quoting the speaker: “…we’ve gotten some tremendous performance improvements simply by looking at the memory efficiency, the algorithms that people are using, identifying hot allocation sites in the code, and doing something to basically destroy them, or reduce them, or remove them from the code.”
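
    To get a feel for what a “hot allocation site” looks like, here’s a toy Java example (mine, not from the talk) that contrasts a churn-heavy loop with one that reuses a buffer; running it with GC logging enabled (-verbose:gc, or -Xlog:gc on newer JDKs) shows the difference in collector activity.

      // Toy illustration of memory churn: the first loop allocates a fresh
      // StringBuilder every iteration, the second reuses a single buffer.
      public class ChurnDemo {
          public static void main(String[] args) {
              // Churn-heavy: every iteration allocates and immediately discards objects.
              for (int i = 0; i < 1_000_000; i++) {
                  String s = new StringBuilder().append("row-").append(i).toString();
              }

              // Reduced churn: one builder reused across iterations.
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < 1_000_000; i++) {
                  sb.setLength(0);
                  sb.append("row-").append(i);
              }
          }
      }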

  7. Native Memory Allocation in Java
    The article explains how to allocate native memory in Java and the different ways to do it. There are a bunch of examples, including how to allocate native memory inside a Docker container.
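
    I’m not sure which approaches the article itself walks through, but one standard way to get native (off-heap) memory from Java is a direct ByteBuffer; here’s a minimal sketch with an arbitrary size.

      import java.nio.ByteBuffer;

      // A direct ByteBuffer is backed by native memory outside the Java heap,
      // so its storage is not part of the normal garbage-collected heap.
      public class NativeMemoryDemo {
          public static void main(String[] args) {
              ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024); // 1 MiB off-heap
              direct.putInt(0, 42);                 // write an int at offset 0
              System.out.println(direct.getInt(0)); // read it back -> 42
              System.out.println("direct buffer? " + direct.isDirect());
          }
      }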

  8. Security Flaws in Dynamic Memory Management
    The article talks about problems in dynamic memory manipulation and buffer overflow errors. Overflow errors may cause data corruption, unexpected behavior, or abnormal program termination. It provides examples of how source code can end up with security vulnerabilities because of deficient dynamic memory management, especially in languages like C that delegate memory management to the programmer.

  9. Explanation of the Kernel
    The kernel runs independently of any user process and acts as a bridge between applications and the data processing performed at the hardware level. It stays resident in memory, so it needs to be kept as small as possible while still providing its essential services. It’s responsible for memory management and allocation, including I/O. It’s a very clear and concise article that explains the concepts well in easy-to-digest sentences.

  10. Thrashing in Operating System
    This article brings back the terms page fault and swapping. When those two happen very frequently, at a high rate, the OS spends more time swapping pages than doing useful work. This state is called thrashing, and it causes CPU utilization to drop. Thrashing affects execution performance and may result in severe performance problems. To prevent it from happening, we can measure and control the page fault rate.