How does a program run on a computer? It executes instructions. Many millions (and these days, even billions) of times every second, the processor fetches an instruction from memory, decodes it (i.e., figures out which instruction this is), and executes it. After it is done with this instruction, the processor moves on to the next instruction, and so on, and so on, until the program finally completes.

To ensure the system operates correctly and efficiently in an easy-to-use manner, the OS uses a general technique called virtualization.

Virtualization: the OS takes a physical resource (such as the processor, or memory, or a disk) and transforms it into a more general, powerful, and easy-to-use virtual form of itself. Programs access these virtual forms through system calls or the standard library.

  • the OS is also sometimes known as a resource manager because of this duty
  • virtualizing the CPU: even on a single-core CPU, virtualizing the CPU lets seemingly many tasks run at once
  • virtualizing memory: each process accesses its own private virtual address space

Concurrency: the OS must be able to execute many tasks at once without causing errors or inconsistency.

Persistence: the OS uses a file system to persist data on disk.

Design goals

The OS takes physical resources, such as a CPU, memory, or disk, and virtualizes them. It handles tough and tricky issues related to concurrency. And it stores files persistently, thus making them safe over the long term.

  • build up abstractions: make the system easier to use
  • high performance: minimize overhead
  • provide protection between applications: isolate processes from one another
  • a high degree of reliability
  • others: energy efficiency, security, mobility...


Introduction to some basic virtualization the OS performs


The definition of a process, informally, is quite simple: it is a running program.

To make many processes seemingly run at the same time, the OS virtualizes the CPU. By running one process, then stopping it and running another, and so forth, the OS can promote the illusion that many virtual CPUs exist when in fact there is only one physical CPU (or a few). This basic technique, known as time sharing of the CPU, allows users to run as many concurrent processes as they would like.

mechanisms: low-level methods or protocols that implement a needed piece of functionality. In the CPU virtualization aspect, the mechanisms involved include:

  • context switch
  • time-sharing
  • scheduling policy (strictly a policy, not a mechanism: it decides which process to run next)
  • ...

Machine state: what a program can read or update when it is running.

  • memory, or more specifically: the address space
  • program counter (PC), also called the instruction pointer (IP)
  • stack pointer and frame pointer, used to manage the stack for function parameters, local variables, and return addresses


Process API

  • Create: a method to create new processes
  • Destroy: an interface to destroy processes forcefully (the counterpart of create)
  • Wait: sometimes it is useful to wait for a process to stop running; thus some kind of waiting interface is often provided
  • Miscellaneous control: methods to suspend a process (stop it from running for a while) and then resume it (continue
    it running)
  • Status: information about a process, such as how long it has run for, or what state it is in

Process Creation

  1. The first thing that the OS must do to run a program is to load its code and any static data (e.g., initialized variables) into memory, into the address space of the process.
  2. Once the code and static data are loaded into memory, there are a few other things the OS needs to do before running the process. Some memory must be allocated for the program’s run-time stack (or just stack). The OS may also create some initial memory for the program’s heap.
  3. The OS will also do some other initialization tasks, particularly related to input/output. For example, in UNIX systems, each process by default has three open file descriptors, for standard input, output, and error.

Process States

In a simplified view, a process can be in one of three states:

  • Running: the processor is executing the process's instructions
  • Ready: the process is ready to run, but for some reason the OS has chosen not to run it
  • Blocked: the process has performed some kind of operation that makes it not ready to run until some other event takes place (e.g., an I/O request completes)

Figure: process state transitions

Data structures

To track the state of each process, for example, the OS likely will keep some kind of process
list for all processes that are ready, as well as some additional information to track which process is currently running. The OS must also track, in some way, blocked processes; when an I/O event completes, the OS should make sure to wake the correct process and ready it to run again.

// the registers xv6 will save and restore
// to stop and subsequently restart a process
struct context {
    int eip;
    int esp;
    int ebx;
    int ecx;
    int edx;
    int esi;
    int edi;
    int ebp;
};

// the different states a process can be in
enum proc_state { UNUSED, EMBRYO, SLEEPING,
                  RUNNABLE, RUNNING, ZOMBIE };

// the information xv6 tracks about each process
// including its register context and state
struct proc {
    char *mem;    // Start of process memory
    uint sz;      // Size of process memory
    char *kstack; // Bottom of kernel stack
                  // for this process
    enum proc_state state;      // Process state
    int pid;                    // Process ID
    struct proc *parent;        // Parent process
    void *chan;                 // If non-zero, sleeping on chan
    int killed;                 // If non-zero, have been killed
    struct file *ofile[NOFILE]; // Open files
    struct inode *cwd;          // Current directory
    struct context context;     // Switch here to run process
    struct trapframe *tf;       // Trap frame for the
                                // current interrupt
};
Interlude: fork()

When a process executes fork(), the process that is created is an (almost) exact copy of the calling process, and from then on two processes are running on the OS. Because of that, the output is not deterministic: the scheduler decides which process runs first.

Mechanism: Limited Direct Execution

Introduces how the OS shares the physical CPU among many jobs running seemingly at the same time.

Questions to figure out: Performance: how do we implement virtualization without adding excessive overhead to the system? Control: how can we run processes efficiently while retaining control over the CPU?

The basic technique to solve these questions is limited direct execution. The “direct execution” part of the idea is simple: just run the program directly on the CPU. Thus, when the OS wishes to start a program running, it creates a process entry for it in a process list, allocates some memory pages for it, loads the program code into memory (from disk), locates its entry point (i.e., the main() routine or something similar), jumps to it, and starts running the user’s code.


Some problems are introduced by the limited direct execution mechanism:

Restricted Operations

How do we prevent a user application from running restricted operations?

We can solve it by introducing a new processor mode: user mode; code that runs in user mode is restricted in what it can do. For example, when running in user mode, a process can’t issue I/O requests; doing so would result in the processor raising an exception; the OS would then likely kill the process.

In contrast to user mode is kernel mode, which the operating system (or kernel) runs in. In this mode, code that runs can do what it likes, including privileged operations such as issuing I/O requests and executing all types of restricted instructions.

For a user application to access privileged operations, it can perform a system call, which allows the kernel to carefully expose certain key pieces of functionality to user programs, such as accessing the file system, creating and destroying processes, communicating with other processes, and allocating more memory.

To execute a system call, a program must execute a special trap instruction, which jumps into the kernel and raises the privilege level to kernel mode. When finished, the OS calls a special return-from-trap instruction, which, as you might expect, returns into the calling user program while simultaneously reducing the privilege level back to user mode.

Switching between processes

How can the operating system regain control of the CPU so that it can switch between processes?

Approach 1: Cooperative, wait for system calls: the OS trusts the processes of the system to behave reasonably. Processes that run for too long are assumed to periodically give up the CPU so that the OS can decide to run some other task. Applications also transfer control to the OS when they do something illegal. For example, if an application divides by zero, or tries to access memory that it shouldn’t be able to access, it will generate a trap to the OS.

Approach 2: Non-cooperative, use a timer interrupt: a timer device can be programmed to raise an interrupt every so many milliseconds; when the interrupt is raised, the currently running process is halted, and a pre-configured interrupt handler in the OS runs. At this point, the OS has regained control of the CPU, and thus can do what it pleases: stop the current process, and start a different one.

scheduler: decides whether to continue running the currently-running process, or to switch to a different one.

context switch: if the OS decides to switch processes, it executes a low-level piece of code referred to as a context switch: the OS saves a few register values for the currently-executing process (onto its kernel stack, for example) and restores a few for the soon-to-be-executing process (from its kernel stack).