This document discusses microkernel operating system design. It begins by explaining the differences between kernel mode and user mode. It then describes different approaches to system structure, including monolithic, layered, and microkernel designs. The main advantages of microkernels are that they improve reliability, security, and extensibility by running most operating system services outside the kernel in user space. Specific microkernel-based operating systems mentioned include Mach, QNX, MINIX 3, and Apple MacOS Server.
Microkernel
1. Seminar
on
MICROKERNEL
BY
TUSHAR.N.TEKADE
(ME IT 9114)
SCTR’s PUNE INSTITUTE OF COMPUTER TECHNOLOGY,
PUNE
2. A computer has two modes of operation:
• Kernel Mode
• User Mode
• The OS is the most fundamental piece of software and runs in
kernel mode (supervisor mode).
• In this mode, it has complete access to all the h/w and can execute
any instruction the machine is capable of.
• Everything running in kernel mode is clearly part of the OS, but
some programs running outside it are arguably also part of it, or
closely associated with it.
3. • One distinction between the OS and user programs is that if a
user does not like a particular e-mail reader, he is free to get a
different one or write his own.
• He is not, however, free to write his own clock interrupt handler,
which is part of the OS and protected by h/w against user attempts
to modify it.
• There are, however, programs which run in user mode but help
the OS or perform privileged functions.
Ex. the password-changing program: it is not part of the OS and
does not run in kernel mode.
4. SYSTEM STRUCTURE
• Design and Implementation of OS not “solvable”, but
some approaches have proven successful
• Internal structure of different Operating Systems can
vary widely
• Start by defining goals and specifications
• Affected by choice of hardware, type of system
• User goals and System goals
– User goals – operating system should be convenient to
use, easy to learn, reliable, safe, and fast
– System goals – operating system should be easy to
design, implement, and maintain, as well as flexible,
reliable, error-free, and efficient
5. • The approach is to partition the task into small components rather
than a monolithic structure.
• Each of these modules should be a well-defined portion of the
system, with proper inputs, outputs, and functions.
• All the modules (process, main memory, file, I/O system,
secondary storage, networking, and command-interpreter
management) are interconnected and melded into a kernel.
6. SIMPLE STRUCTURE
• MS-DOS is an example of such a system, originally designed and
implemented to provide the most functionality in the least space
(limited h/w).
• Not divided into modules.
• It has some structure, but its interfaces and functionality are not
well separated.
8. UNIX
• OS initially limited by h/w functionality.
• 2 separable parts
- System Programs
- Kernel
• Consists of everything below the system-call interface
and above the physical hardware
• Provides the file system, CPU scheduling, memory
management, and other operating-system functions; a
large number of functions for one level
10. MONOLITHIC STRUCTURE
• In this approach the entire operating system runs as a
single program in kernel mode.
• The operating system is written as a collection of
procedures, linked together into a single large
executable binary program.
• This makes for an unwieldy and difficult-to-understand system.
12. The main disadvantages of monolithic structures
are:
The dependencies between system
components - a bug in a device driver might
crash the entire system.
Large kernels can become very difficult to
maintain.
13. LAYERED APPROACH
• A way of modularizing the system, in which the OS is broken
into a number of layers (levels).
• Each is built on top of lower layers; the bottom layer (layer 0) is
the h/w and the highest layer (layer N) is the user interface.
• An OS layer is an implementation of an abstract object: an
encapsulation of data and of the operations that manipulate that
data.
• It contains data structures and routines that can be invoked by
higher-level layers, and it can invoke operations on the layer
below.
15. Advantages:
Modularity.
Simplifies debugging and system verification.
Ease in isolating and fixing errors.
Each layer hides the existence of certain data structures,
operations, and h/w from higher layers.
Difficulties:
Careful definition of each layer is required.
Layered systems tend to be less efficient than other types (due to
the overhead each layer adds to a system call).
This approach was pioneered by THE, the system built at the
Technische Hogeschool Eindhoven in the Netherlands by
E. W. Dijkstra (1968) and his students.
16. MICROKERNELS
• With the layered approach, the designers have a choice of where
to draw the kernel-user boundary.
• Traditionally, all the layers were in the kernel.
• In fact, a strong case can be made for putting as little as possible
in kernel mode, because bugs in the kernel can bring down the
system instantly.
• In contrast, user processes can be set up to have less power, so
that a bug there may not be fatal.
17. • The basic idea behind the microkernel design is to achieve high
reliability by splitting the OS up into small, well-defined
modules, only one of which, THE MICROKERNEL, runs in
kernel mode; the rest run as relatively powerless ordinary user
processes.
• Another function of the microkernel is to provide a
communication facility between client programs and the various
services that are running.
BENEFITS:
Ease of extending the OS.
All new services are added in user space, so there is no need to
modify the kernel; when kernel changes are needed they tend to
be few, because the microkernel is small.
18. BENEFITS:
The OS is easier to port from one h/w design to another.
More security and reliability, since most services run as user
rather than kernel processes.
If a service fails, the rest of the OS remains untouched, in
contrast with a monolithic system.
19. • Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface
to the user; it uses a microkernel, implemented with the Mach
kernel (which maps UNIX system calls into messages to the
appropriate user-level services).
• The Apple MacOS Server OS is also based on the Mach kernel.
• QNX is an RTOS based on a microkernel design, providing
services for message passing and process scheduling.
20. MINIX 3
• MINIX 3 is a POSIX-conformant, open-source microkernel
system that is freely available.
• The kernel is only about 3200 lines of C and 800 lines of
assembler for very low-level functions such as catching
interrupts and switching processes.
• The C code manages and schedules processes, handles
interprocess communication, and offers a set of about 35 kernel
calls to allow the rest of the OS to do its work.
• These calls do things such as hooking handlers to interrupts,
moving data between address spaces, and installing new memory
maps for newly created processes.
22. • Lowest layer in user mode: the device drivers.
They do not have physical access to the I/O port space and
cannot issue I/O commands directly.
• Above the lowest layer: the servers.
These include the file server, which manages the file system, and
the process manager, which creates, destroys, and manages
processes.
One interesting server is the "reincarnation server", which checks
whether the other servers and drivers are working properly and
replaces any found faulty, making the system self-healing.
• User programs obtain OS services by sending short messages to
the servers asking for the POSIX system calls.
• Each layer has exactly the power to do its work and nothing
more, limiting the damage a buggy component can do.
24. • The idea of a minimal kernel is to put the mechanism for doing
something in the kernel, but not the policy.
• Ex. the scheduling problem:
Mechanism: look for the highest-priority process and run it.
Policy: assigning priorities to processes, which can be done by a
user-mode process.
Thus policy and mechanism can be decoupled, and the kernel can
be made smaller.
“A microkernel (also known as a μ-kernel) is the near-minimum
amount of software that can provide the mechanisms needed to
implement an operating system (OS).”