How is Linux different from UNIX, and what is a UNIX-like OS? Features of operating systems of the UNIX family.

Moreover, each of these users can run many different computing processes that share the resources of that particular computer.

The second colossal merit of Unix is its multi-platform nature. The kernel of the system is designed in such a way that it can easily be adapted to almost any microprocessor.

Unix has other characteristic features:

  • using simple text files to configure and manage the system;
  • widespread use of utilities launched from the command line;
  • interaction with the user through a virtual device - a terminal;
  • representation of physical and virtual devices and some means of interprocess communication in the form of files;
  • the use of pipelines built from several programs, each of which performs a single task (see the sketch below).
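Each item above maps onto a handful of system calls. As a minimal sketch of the last point, the C program below wires two ordinary programs into a pipeline equivalent to the shell command "ls | wc -l"; a shell does essentially the same work with pipe(), fork(), dup2() and exec(). The choice of ls and wc is purely illustrative.

/* Minimal sketch of the pipeline idea: run the equivalent of "ls | wc -l"
 * by connecting two ordinary programs with a pipe. Error handling is kept
 * to a bare minimum for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0] - read end, fd[1] - write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {              /* first child: "ls" */
        dup2(fd[1], STDOUT_FILENO); /* its stdout goes into the pipe */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls"); _exit(1);
    }
    if (fork() == 0) {              /* second child: "wc -l" */
        dup2(fd[0], STDIN_FILENO);  /* its stdin comes from the pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc"); _exit(1);
    }
    close(fd[0]); close(fd[1]);     /* the parent keeps no pipe ends open */
    while (wait(NULL) > 0) ;        /* reap both children */
    return 0;
}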

Application

Currently, Unix systems are used mainly on servers, and also as embedded systems in various kinds of equipment, including smartphones. Unix systems also dominate supercomputing: in particular, Linux runs on 100% of the TOP500 supercomputers.

The first versions of Unix were written in assembly language and had no built-in compiler for a high-level language. Around 1969, Ken Thompson, with the assistance of Dennis Ritchie, designed and implemented the B language, a version of BCPL simplified for implementation on minicomputers. Like BCPL, B was an interpreted language. The second edition of Unix, released in 1972, was rewritten in B. In 1969-1973, a compiled language based on B was developed and named C.

Split

An important reason for the split in Unix was the implementation of the TCP/IP protocol stack in 1980. Before this, machine-to-machine communication in Unix was in its infancy: the most significant means of communication was UUCP (a facility for copying files from one Unix system to another, originally working over telephone networks using modems).

Two programming interfaces for network applications were proposed: Berkeley sockets and the Transport Layer Interface (TLI).

The Berkeley sockets interface was developed at the University of California, Berkeley, and used the TCP/IP protocol stack developed there. TLI was created by AT&T according to the transport-layer definition of the OSI model and first appeared in System V Release 3. Although that release contained TLI and STREAMS, it did not originally implement TCP/IP or other network protocols; such implementations were provided by third parties.
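For context, the sockets interface treats a network connection as just another file descriptor. Below is a minimal sketch of opening a TCP connection with the Berkeley sockets API; it uses the modern POSIX getaddrinfo() resolver rather than the original 4.2BSD calls, and the host name and port are placeholders.

/* Minimal Berkeley-sockets sketch: connect to a placeholder host and port.
 * Once connected, the descriptor behaves like a file descriptor. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;     /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;   /* TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }
    int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
        perror("socket/connect");
        return 1;
    }
    /* s can now be used with read()/write(), just like an ordinary file. */
    close(s);
    freeaddrinfo(res);
    return 0;
}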

The implementation of TCP/IP was officially and definitively included in the base distribution of System V Release 4. This, along with other (mostly marketing) considerations, caused the final demarcation between the two branches of Unix - BSD (from the University of California, Berkeley) and System V (the commercial version from AT&T). Subsequently, many companies, having licensed System V from AT&T, developed their own commercial flavors of Unix, such as AIX, CLIX, HP-UX, IRIX and Solaris.

Modern implementations of Unix are generally not pure System V or pure BSD systems; they implement features from both.

Free Unix-like operating systems

At the moment, GNU/Linux and members of the BSD family are rapidly taking over the market from commercial Unix systems and simultaneously infiltrating both end-user desktops and mobile and embedded systems.

Proprietary systems

Since the split of AT&T, the Unix trademark and the rights to the original source code have changed owners several times, in particular, they belonged to Novell for a long time.

The influence of Unix on the evolution of operating systems

Unix systems are of great historical importance because they have propagated some of today's popular operating system and software concepts and approaches. Also, during the development of Unix systems, the C language was created.

The C language, originally created for the development of Unix and now widely used in systems programming, has surpassed Unix itself in popularity. C was the first "tolerant" language that did not try to force a programming style on the programmer. It was also the first high-level language that gave access to all the capabilities of the processor, such as pointers, bit shifts, increments, and the like. On the other hand, this freedom led to buffer overflow errors in standard C library functions such as gets and scanf. Many infamous vulnerabilities resulted, such as the one exploited by the famous Morris worm.
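A minimal sketch of the kind of bug being described: gets() has no way of knowing the size of the destination buffer, so long input overruns it, whereas fgets() bounds the copy. (gets() was eventually removed from the C standard for exactly this reason.)

#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[16];

    /* Unsafe: gets() keeps copying until end-of-line no matter how small
     * buf is, so input longer than 15 characters overwrites adjacent
     * memory. Shown only as a comment; modern toolchains reject it.
     *
     *   gets(buf);
     */

    /* Bounded alternative: never writes more than sizeof buf bytes. */
    if (fgets(buf, sizeof buf, stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   /* strip the trailing newline */
        printf("read: %s\n", buf);
    }
    return 0;
}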

The early developers of Unix contributed to the introduction of the principles of modular programming and reuse into engineering practice.

Unix enabled the use of TCP/IP protocols on relatively inexpensive computers, which led to the rapid growth of the Internet. This, in turn, contributed to the rapid discovery of several major vulnerabilities in Unix security, architecture, and system utilities.

Over time, leading Unix developers developed cultural norms of software development that became as important as Unix itself.

Some of the best-known examples of Unix-like OSes are macOS, Solaris, BSD, and NeXTSTEP.

Social role in the IT professional community and historical role

The original Unix ran on large multi-user computers, for which hardware manufacturers also offered their own proprietary operating systems, such as RSX-11 and its descendant VMS. Even though, in the view of some, the Unix of that time had disadvantages compared to those operating systems (for example, the lack of serious database engines), it was: a) cheaper, and sometimes free for academic institutions; b) ported from hardware to hardware, and developed in the portable C language, which "decoupled" software development from specific hardware. In addition, the user experience turned out to be "untied" from the equipment and the manufacturer: a person who worked with Unix on a VAX could easily work with it on a 68xxx machine, and so on.

Hardware manufacturers at that time were often cool toward Unix, considering it a toy and offering their proprietary OSes for serious work - above all DBMSs and the business applications built on them in commercial organizations. DEC's comments to this effect regarding its VMS are well known. Corporations listened; the academic world did not, since in Unix it had everything it needed, often did not require official support from the manufacturer, managed on its own, and appreciated the cheapness and portability of Unix. Thus, Unix was perhaps the first operating system portable across different hardware.

The second major rise of Unix came with the introduction of RISC processors around 1989. Even before that there were so-called workstations: high-powered single-user personal computers with enough memory, disk space, and a sufficiently advanced OS (multitasking, memory protection) to work with serious applications such as CAD systems. Among the manufacturers of such machines, Sun Microsystems stood out and made a name for itself with them.

Before the advent of RISC processors, these stations usually used Motorola 680x0 processors, the same as in Apple computers (albeit under a more advanced operating system than Apple's). Around 1989, commercial implementations of RISC processors appeared on the market. The logical decision of a number of companies (Sun and others) was to port Unix to these architectures, which immediately brought along the entire Unix software ecosystem.

Serious proprietary operating systems such as VMS began their decline from this very moment: even if it was possible to port the OS itself to RISC, things were much more complicated with its applications, which in those ecosystems were often developed in assembler or in proprietary languages such as BLISS. Unix became the operating system for the most powerful computers in the world.

However, around this time the ecosystem began to move toward GUIs in the form of Windows 3.0. The huge advantages of a GUI, as well as, for example, unified support for all types of printers, were appreciated by both developers and users. This greatly undermined the position of Unix in the PC market: implementations such as SCO and Interactive UNIX could not cope with supporting Windows applications. As for the GUI for Unix, X11 (there were other, much less popular implementations), it could not fully run on a regular user's PC because of its memory requirements: X11 needed 16 MB for normal operation, while Windows 3.1 ran both Word and Excel at the same time, with adequate performance, in 8 MB (the standard PC memory size at the time). With memory prices high, this was the limiting factor.

The success of Windows gave impetus to an internal Microsoft project called Windows NT, which was API-compatible with Windows but had all the architectural features of a serious OS that Unix had: multitasking, full memory protection, support for multiprocessor machines, permissions on files and directories, and a system log. Windows NT also introduced the journaling file system NTFS, which at that time exceeded in capability all the file systems shipped with Unix as standard; Unix analogues existed only as separate commercial products from Veritas and others.

Although Windows NT was not initially popular due to its high memory requirements (the same 16 MB), it allowed Microsoft to enter the market for server solutions such as DBMSs. Many at the time did not believe that Microsoft, traditionally specialized in desktop software, could be a player in the enterprise software market, where there were already big names such as Oracle and Sun. Adding to this doubt was the fact that Microsoft's DBMS, SQL Server, started out as a simplified version of Sybase SQL Server, licensed from Sybase and 99% compatible with it in all respects.

In the second half of the 1990s, Microsoft began to push Unix out of the corporate server market as well.

The combination of the above factors, as well as the collapse in prices of 3D video controllers, which turned from professional into home equipment, had essentially killed the very concept of the workstation by the early 2000s.

In addition, Microsoft systems are easier to manage, especially in typical use cases.

But at the moment, the third sharp rise of Unix has begun.

In addition, Stallman and his associates were well aware that non-corporate software could not succeed with proprietary development tools. They therefore developed a set of compilers for various programming languages (gcc), which, together with the GNU utilities developed earlier (replacements for the standard Unix utilities), constituted a necessary and quite powerful software package for developers.

FreeBSD was a serious competitor to Linux at that time, but its "cathedral" style of development management, as opposed to the "bazaar" style of Linux, along with greater technical conservatism in matters such as support for multiprocessor machines and executable file formats, greatly slowed FreeBSD's development compared to Linux and made the latter the flagship of the free software world.

In the future, Linux reached more and more heights:

  • porting serious proprietary products such as Oracle;
  • IBM's serious interest in this ecosystem as the basis for its vertical solutions;
  • the appearance of analogues of almost all familiar programs from the Windows world;
  • some hardware manufacturers dropping the mandatory pre-installation of Windows;
  • the release of netbooks with only Linux;
  • use as a kernel in Android.

At the moment, Linux is a deservedly popular OS for servers, although much less popular on desktops.

Some architectural features of the Unix OS

Unix features that distinguish this family from other operating systems are listed below.

  • The file system is tree-structured and case-sensitive in names, with very weak restrictions on the length of names and paths.
  • There is no support for structured files by the OS kernel; at the level of system calls, a file is a stream of bytes.
  • The command line is located in the address space of the process being launched, and is not retrieved by a system call from the command interpreter process (as happens, for example, in RSX-11).
  • The concept of "environment variables".
  • Processes are started by calling fork(), that is, by cloning the current process with all of its state (see the sketch after this list).
  • The concepts of stdin/stdout/stderr.
  • I/O only through file descriptors.
  • Traditionally very weak support for asynchronous I/O compared to VMS and Windows NT.
  • The command interpreter is an ordinary application that communicates with the kernel through ordinary system calls (in RSX-11 and VMS the command interpreter ran as a special application, placed in memory in a special way and using special system calls; there were also system calls that let an application access its parent command interpreter).
  • A command-line command is nothing more than the name of a program file; no special registration or special development of programs as commands is required (which was common practice in RSX-11 and RT-11).
  • The approach in which a program asks the user questions about its modes of operation is not accepted; command-line parameters are used instead (in VMS, RSX-11 and RT-11, programs also worked with the command line, but when it was absent they prompted for parameters).
  • A device namespace on disk, in the /dev directory, that can be managed by the administrator - unlike the Windows approach, where this namespace resides in kernel memory and administering it (for example, setting permissions) is extremely difficult because it is not stored permanently on disk but rebuilt at every boot.
  • Extensive use of text files for storing settings, as opposed to a binary settings database, such as in Windows.
  • Widespread use of text processing utilities to perform everyday tasks under the control of scripts.
  • "Promotion" of the OS after loading the kernel by executing scripts with a standard command interpreter.
  • Wide use
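A minimal sketch tying several of these points together: the command line arrives in the program's own address space as argv, the environment is read with getenv(), a child process is produced by cloning the current one with fork(), and the parent waits for it while the child replaces itself with another program via exec(). The choice of "ls -l" as the child program is purely illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    /* The command line lives in this process's own address space. */
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);

    /* Environment variables are inherited from the parent process. */
    const char *home = getenv("HOME");
    printf("HOME = %s\n", home ? home : "(unset)");

    pid_t pid = fork();             /* clone the current process */
    if (pid == 0) {                 /* child: become "ls -l" */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");           /* reached only if exec failed */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);       /* parent waits for the child to finish */
    fprintf(stderr, "child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}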

The history of UNIX® begins in 1969. Most modern UNIX systems are commercial versions of the original UNIX distributions. Sun's Solaris, Hewlett-Packard's HP-UX, and IBM's AIX® are the best-known representatives of UNIX, each with its own unique elements and its own fundamental design decisions. For example, Sun Solaris is UNIX, but it also contains many tools and extensions designed specifically for Sun workstations and servers.

Linux® was developed in an attempt to provide a free alternative to commercial UNIX environments. Its history goes back to 1991, or even 1983, when the GNU project was created, whose original goal was to provide a free alternative to UNIX. Linux runs on many more platforms, such as Intel®/AMD x86. Most UNIX operating systems are only capable of running on one platform.

Linux and UNIX have common historical roots, but there are also significant differences. Many tools, utilities, and free applications that come standard with Linux were originally conceived as free alternatives to UNIX programs. Linux often provides support for many options and applications, borrowing the best or most popular functionality from UNIX.

To an administrator or developer accustomed to working with Linux, a UNIX system may not seem very user-friendly. On the other hand, the foundation of a UNIX-like operating system (the tools, file system, and APIs) is quite standardized. However, some details of the systems can differ significantly. These differences are discussed later in the article.

Technical differences

Developers of commercial UNIX distributions target a specific set of clients and server platforms for their operating system. They have a good idea of what support and which application optimizations need to be implemented. UNIX vendors do their best to ensure compatibility between different versions. In addition, they have published the standards of their OS.

GNU/Linux development, on the other hand, is not platform or client focused, and GNU/Linux developers have different backgrounds and perspectives. There is no strict standard set of tools or environments in the Linux community. To solve this problem, the Linux Standards Base (LSB) project was launched, but it did not turn out to be as effective as we would like.

This lack of standardization leads to significant inconsistencies within Linux. For some developers, the ability to use the best of other operating systems is a plus, but copying UNIX elements to Linux is not always convenient, for example, when device names inside Linux can be taken from AIX, while file system tools are HP-UX oriented. Incompatibilities of this kind also occur between different Linux distributions. For example, Gentoo and RedHat implement different update methods.

By comparison, each new release of the UNIX system comes with a well-documented description of the new features and changes in UNIX. Commands, tools, and other elements rarely change, and often the same command-line arguments for applications remain the same throughout many versions of that software. When significant changes occur to these elements, vendors of commercial UNIX systems often provide the wrapper needed to ensure compatibility with earlier versions of the tool.

This compatibility means that utilities and applications can be used on new versions of operating systems without checking or changing their source code. Therefore, migrating to a new version of UNIX, which usually does not differ fundamentally from the old version, is much less effort for users or administrators than migrating from one Linux distribution to another.

Hardware architecture

Most commercial versions of UNIX are built for one or a small number of hardware architectures. HP-UX only runs on PA-RISC and Itanium platforms, Solaris on SPARC and x86, and AIX is for POWER processors only.

Because of these restrictions, UNIX vendors are relatively free to modify their code for these architectures and to exploit every advantage of their architecture. Because they know the devices they support so well, their drivers work better, and they do not have to deal with PC-specific BIOS limitations.

Linux, on the other hand, has historically been designed for maximum compatibility. Linux is available on a variety of architectures, and the number of I/O devices and other peripherals that can be used with the OS is nearly limitless. Developers cannot know in advance what specific hardware will be installed in a computer, and often cannot ensure that it is used effectively. One example is memory management on Linux. Previously, Linux used a segmented memory model originally designed for x86. It is now adapted to use paged memory, but still retains some segmented memory requirements, which causes problems if the architecture does not support segmented memory. This is not a problem for UNIX vendors. They know exactly what hardware their UNIX will run on.

Kernel

The kernel is the heart of the operating system. The source code of the kernel of a commercial UNIX distribution is the property of its developer and is not distributed outside the company. The situation with Linux is completely the opposite. The procedures for compiling and patching kernels and drivers are quite different. For Linux and other open-source operating systems, a patch can be released as source code, and the end user can install, test, and even modify it. Such patches are usually not tested as carefully as patches from commercial UNIX vendors. Because there is no complete list of applications and environments that must be tested to work correctly on Linux, Linux developers depend on end users and other developers to catch bugs.

Commercial UNIX distribution vendors release kernels only as executable code. Some releases are monolithic, while others allow you to update only a particular kernel module. But in any case, this release is provided only in the form of executable code. If an update is needed, the administrator must wait for the manufacturer to release a patch in binary, but may be comforted by the fact that the manufacturer will carefully check their patch for backwards compatibility.

All commercial versions of UNIX have evolved to some degree into a modular kernel. Drivers and specific OS features are available as separate components and can be loaded or unloaded from the kernel as needed. But the open modular architecture of Linux is much more flexible. However, the flexibility and adaptability of Linux means constant change. The Linux source code is constantly changing, and at the whim of the developer, the API can change. When a module or driver is written for a commercial version of UNIX, it will last much longer than the same driver for Linux.

File system support

One of the reasons why Linux has become such a powerful OS is its wide compatibility with other operating systems. One of the most obvious features is the abundance of available file systems. Most commercial versions of UNIX support two or three file system types; Linux, however, supports most of the modern ones. Table 1 shows which file systems are standard on which version of UNIX. Any of these file systems can be mounted on Linux, although not all of them fully support reading and writing data.

Table 1. File systems that are standard for UNIX

Most commercial versions of UNIX support journaling file systems. For example, HP-UX uses hfs as its standard file system, but it also supports the journaled vxfs file system. Solaris supports ufs and zfs. A journaling file system is an essential component of any enterprise server environment. Support for journaled filesystems was introduced late in Linux, but there are now several options, from clones of commercial filesystems (xfs, jfs) to Linux-specific filesystems (ext3, reiserfs).

Other file system features include support for quotas, file access control lists, mirroring, system snapshots, and resizing. They are supported in one form or another by Linux filesystems. Most of these features are not standard on Linux. Some features may work on one file system, while others will require a different file system. Some of these features are simply not available on certain Linux filesystems, while others require additional installation of tools, such as a specific version of LVM or disk array support (software raid package). Historically, compatibility between programming interfaces and standard tools has been difficult to achieve in Linux, so many file systems implement these features in different ways.

Since commercial UNIX systems support a limited number of file systems, their tools and techniques for working with them are more standardized. For example, since only one master file system was supported in Irix, there was only one way to set access control lists. This is much more convenient for the end user and for further support of this OS.

Application Availability

Most of the basic applications are the same on both UNIX and Linux. For example, the cp, ls, vi, and cc commands are available on both UNIX and Linux and are very similar, if not completely identical. The Linux versions of these tools are based on the GNU versions, while the UNIX versions are based on the traditional UNIX tools. These UNIX tools have a long history and rarely change.

But that doesn't mean that commercial versions of UNIX can't be used with GNU tools. In fact, many commercial UNIX OS vendors include many GNU tools in their distributions or offer them as free add-ons. GNU tools are not just standard tools. Some of these free utilities have no commercial counterparts (emacs or Perl). Most manufacturers preinstall these programs and they are either automatically installed with the system or available as an optional feature.

Free and open source applications are almost always built into all Linux distributions. There is a large amount of free software available for Linux, and many of these applications have been ported to commercial versions of the UNIX operating system.

Commercial and/or closed-source applications (CAD, financial programs, graphics editors) may have no Linux analogues. Although some vendors release Linux versions of their applications, most are slow to do so until Linux gains more popularity among users.

On the other hand, commercial versions of UNIX have historically supported a large number of enterprise-level applications, such as Oracle or SAP. Linux loses out heavily because of the difficulty of certifying large applications, while commercial versions of UNIX don't change much from release to release. Linux can change a lot, not only with each new distribution, but sometimes between releases of the same distribution. Therefore, it is very difficult for a software manufacturer to understand exactly in which environment their application will be used.

System administration

Although some Linux distributions come with a standard set of system administration tools, such as SUSE's YaST, there is no common standard for Linux system administration tools. Text files and command-line tools are available, but they can sometimes be inconvenient to use. Each commercial version of UNIX has its own system management interface. With this interface, you can manage and modify system elements. The following is an example: the System Administration Manager (SAM) for HP-UX.

This SAM contains the following modules:

  • Users or groups to manage.
  • Kernel options that can be changed.
  • Network configuration.
  • Setting up and initializing disks.
  • X server configuration.

The quality of this utility set is excellent, and it works well with text files. There is no analogue of this tool for Linux; even YaST in SUSE does not have the same functionality.

Another aspect of UNIX and Linux that seems to change with almost every version of the OS is the location of the system initialization scripts. Fortunately, /sbin/init and /etc/inittab are standard locations. But the system startup scripts live in different directories. Table 2 shows where system initialization scripts are stored in various UNIX and Linux distributions.

Table 2. Location of system initialization scripts for different versions of UNIX
HP-UX      /sbin/init.d
AIX        /etc/rc.d/init.d
Irix       /etc/init.d
Solaris    /etc/init.d
Red Hat    /etc/rc.d/init.d
SUSE       /etc/rc.d/init.d
Debian     /etc/init.d
Slackware  /etc/rc.d

Because of the large number of Linux distributions and the almost infinite number of applications available for this OS (given that each application also exists in many versions), managing programs on Linux is a difficult task. Choosing the right tool depends on which distribution you are working with. A further inconvenience stems from the fact that some distributions use the Red Hat Package Manager (RPM) file format while others use different, incompatible package formats. This division leads to a huge number of options for working with packages, and it is not always clear which system is used in a particular environment.

On the other hand, commercial distributions of UNIX contain standard package managers. Even though there are different versions of applications and specific formats for different versions of UNIX, the application management environment is the same. For example, Solaris has used the same application package management tools since its inception. And most likely the means of identifying, adding or removing software packages in Solaris will still be unchanged.

The vendors of commercial UNIX distributions also supply the hardware their OS is designed to run on, so they can introduce new devices into their OS, which is much more difficult to do for Linux. For example, recent versions of Linux have attempted to implement support for hot-swappable components (with varying degrees of success), whereas commercial versions of UNIX have had this capability for many years. Hardware monitoring is also better in commercial versions of UNIX than in Linux: manufacturers can write drivers and embed them in their operating system to monitor system health, such as the number of ECC memory errors, power settings, or any other hardware component. Such support for Linux is expected only in the distant future.

Hardware for commercial UNIX systems also has more advanced boot options. Before the operating system boots, there are many options to customize how it boots, check the health of the system, or adjust hardware settings. The BIOS of a standard PC has few, if any, of these options.

Support

One of the most significant differences between Linux and UNIX is the cost. The vendors of commercial UNIX systems have charged a high price for their UNIX, even though it can only be used with their hardware platforms. Linux distributions, on the other hand, are relatively inexpensive, if not free at all.

When you buy a commercial version of UNIX, vendors usually provide technical support. Most Linux users are not supported by the OS manufacturer. They can only get support via email, forums, and various communities of Linux users. However, these groups are not just for Linux users. Many administrators of commercial UNIX family operating systems participate in these open support groups in order to be able to both provide assistance and, if necessary, use it. Many people find such self-help groups even more useful than the support system offered by the OS manufacturer.

Conclusion

The fundamentals of UNIX and Linux are very similar. For a user or system administrator, switching from Linux to UNIX will add some inconvenience to the work, but in general the transition will be painless. Even if the filesystems and kernels are different and take some time to get used to, the tools and APIs remain the same. For the most part, these differences are no more significant than differences between major versions of UNIX. All branches of UNIX and Linux are gradually evolving and will differ slightly from each other, but due to the maturity of UNIX concepts, the fundamentals of the OS will not change very much.

Introduction

What is Unix?

Where to get free Unix?

What are the main differences between Unix and other OSes?

Why Unix?

Basic Unix Concepts

File system

Command interpreter

Manuals - man

Introduction

Writing about the Unix operating system is extremely difficult. Firstly, because a lot has been written about this system. Secondly, because the ideas and decisions of Unix have had, and continue to have, a huge influence on the development of all modern operating systems, and many of these ideas are already described in this book. Thirdly, because Unix is not one operating system but a whole family of systems, and it is not always possible to "track" their relationships to each other, while describing all the operating systems in this family is simply impossible. Nevertheless, without claiming to be exhaustive in any way, we will try to give a brief overview of the "Unix world" in those areas that seem interesting for the purposes of our tutorial.

The birth of the Unix operating system dates back to the end of the 1960s, and the story has already acquired "legends" that sometimes tell the details of the event in different ways. The Unix operating system was born at the Bell Telephone Laboratories (Bell Labs) research center, part of the AT&T corporation. Initially, this initiative project for the PDP-7 computer (later for the PDP-11) was variously a file system, a computer game, and a text-preparation system. It is important, however, that from the very beginning the project that eventually turned into an OS was conceived as a software environment for collective use. The author of the first version of Unix was Ken Thompson, but a large team of colleagues (D. Ritchie, B. Kernighan, R. Pike and others) took part in discussing the project and, later, in its implementation. In our opinion, several fortunate circumstances of Unix's birth determined the success of this system for many years to come.

For most of the people on the team where Unix was born, that OS was "the third system." There is an opinion (see, for example) that a systems programmer achieves high qualification only with his third project: the first project is still a "student" effort; in the second the developer tries to include everything that did not work out in the first, and it ends up too cumbersome; only in the third is the necessary balance of desires and possibilities achieved. It is known that before the birth of Unix the Bell Labs team participated (together with a number of other firms) in the development of the MULTICS OS. The final MULTICS product (Bell Labs did not take part in the last stages of development) bears all the hallmarks of a "second system" and was not widely adopted. It should be noted, however, that many fundamentally important ideas and decisions were born in that project, and some concepts that many consider to have been born in Unix actually originate from MULTICS.

The Unix operating system was a system made "for ourselves and for our friends." Unix did not set out to capture the market or compete with any product. The developers of the Unix operating system were themselves its users, and they themselves judged the system's suitability to their needs. Without the pressure of market conditions, such an assessment could be extremely objective.

Unix was a system made by programmers for programmers. This determined, on the one hand, the elegance and conceptual harmony of the system, and on the other, the need for the Unix user to understand the system and the sense of professional responsibility required of a programmer developing software for Unix. No subsequent attempts to make a "Unix for Dummies" have been able to rid the Unix OS of this virtue.

In 1972-73, Ken Thompson and Dennis Ritchie wrote a new version of Unix. Especially for this purpose, D. Ritchie created the C programming language, which by now needs no introduction. Over 90% of the Unix code is written in this language, and the language has become an integral part of the OS. The fact that the main part of the OS is written in a high-level language makes it possible to recompile it for any hardware platform, and this is the circumstance that determined the widespread use of Unix.

During Unix's inception, US antitrust laws prevented AT&T from entering the software market. Therefore, the Unix operating system was non-commercial and freely distributed, primarily in universities. There, its development continued, and it was most actively conducted at the University of California at Berkeley. At this university, the Berkeley Software Distribution group was created, which was engaged in the development of a separate branch of the OS - BSD Unix. Throughout subsequent history, mainstream Unix and BSD Unix have evolved in parallel, repeatedly enriching each other.

As the Unix operating system spread, commercial firms became more and more interested in it and began to release their own commercial versions of this OS. Over time, the "main" branch of Unix from AT&T became commercial, and a subsidiary, Unix System Laboratories, was created to promote it. The BSD branch of Unix in turn forked into commercial BSD and free BSD. Various commercial and free Unix-like systems were built on top of the AT&T Unix kernel, but they also included features borrowed from BSD Unix as well as original features. Despite the common source, differences between members of the Unix family accumulated and eventually made it extremely difficult to port applications from one Unix-like operating system to another. On the initiative of Unix users, a movement arose to standardize the Unix API. It was supported by the International Organization for Standardization (ISO) and led to the POSIX (Portable Operating System Interface) standard, which is still being developed and is the most authoritative standard for operating systems. However, making the POSIX specification an official standard is a slow process that cannot keep pace with the needs of software vendors, which has led to the emergence of alternative industry standards.

With the transition of AT&T Unix to Nowell, the name of this operating system changed to Unixware, and the rights to the Unix trademark were transferred to the X / Open consortium. This consortium (now the Open Group) developed its own (wider than POSIX) system specification, known as the Single Unix Specification. The second edition of this standard has recently been released, much better aligned with POSIX.

Finally, a number of firms producing their own versions of Unix formed the Open Software Foundation (OSF) consortium, which released its own version of Unix, OSF/1, based on the Mach microkernel. OSF also released the OSF/1 system specifications, which served as the basis for OSF member firms to release their own Unix systems. These systems include SunOS from Sun Microsystems, AIX from IBM, HP-UX from Hewlett-Packard, DIGITAL UNIX from DEC (later Compaq), and others.

Initially, the Unix systems of these firms were mostly based on BSD Unix, but now most modern industrial Unix systems are built using (under license) the AT&T Unix System V Release 4 (S5R4) kernel, although they also inherit some properties of BSD Unix. We do not take responsibility for comparing commercial Unix systems, since comparisons of this kind that appear periodically in print often provide completely opposite results.

Novell sold Unix to the Santa Cruz Operation (SCO), which produced its own Unix product, SCO OpenServer. SCO OpenServer was based on an earlier version of the kernel (System V Release 3) but was superbly debugged and highly stable. The Santa Cruz Operation integrated its product with AT&T Unix and released Open Unix 8, but then sold Unix to Caldera, the owner of "classic" Unix as of today (late 2001).

Sun Microsystems began its entry into the Unix world with SunOS, based on the BSD kernel, but subsequently replaced it with Solaris, based on System V Release 4. Version 8 of this OS is currently being distributed (there is also a v.9 beta). Solaris runs on the SPARC platform (RISC processors manufactured to Sun specifications) and on Intel Pentium.

Hewlett-Packard offers the HP-UX OS, v.11, on the PA-RISC platform. HP-UX is based on System V Release 4 but contains many features that betray its BSD Unix origins. Of course, HP-UX will also be available on the Intel Itanium platform.

IBM offers the AIX OS; the latest version to date is 5L (it will be discussed later). IBM has not announced the "pedigree" of AIX; it is mostly an original development, but the first versions bore signs of origin from FreeBSD Unix. Now, however, AIX is more like System V Release 4. AIX was originally available on the Intel Pentium platform, but (in line with general IBM policy) it is no longer supported on that platform. AIX currently runs on IBM RS/6000 servers and other PowerPC-based computing platforms (including IBM supercomputers).

DEC's DIGITAL UNIX was the only commercial implementation of OSF/1. The DIGITAL UNIX OS ran on DEC's Alpha RISC servers. When DEC was taken over by Compaq in 1998, Compaq acquired both Alpha and DIGITAL UNIX servers. Compaq intends to restore its presence in the market of Alpha servers and, in this regard, is intensively developing an OS for them. The current name of this OS is Tru64 Unix (current version is 5.1A), it continues to be based on the OSF/1 kernel and has many BSD Unix features.

Although most commercial Unix systems are based on a single kernel and conform to POSIX requirements, each has its own dialect of the API, and the differences between dialects accumulate. This means that porting industrial applications from one Unix system to another is difficult and requires, at a minimum, recompilation and often correction of the source code. An attempt to overcome this "confusion" and make a single Unix operating system for everyone was undertaken in 1998 by an alliance of SCO, IBM and Sequent. These firms joined in the Monterey project to create a single OS based on UnixWare (owned at the time by SCO), IBM's AIX, and Sequent's DYNIX OS. (Sequent was a leader in the production of NUMA - non-uniform memory access - multiprocessor computers, and DYNIX is the Unix for such machines.) The Monterey OS was to run on the 32-bit Intel Pentium platform, the 64-bit PowerPC platform, and the new 64-bit Intel Itanium platform. Nearly all the leaders of the hardware and middleware industry declared their support for the project. Even firms with their own Unix clones (except Sun Microsystems) announced that they would support only Monterey on Intel platforms. Work on the project appeared to be going well: Monterey was among the first to prove itself on Intel Itanium (along with Windows NT and Linux) and the only one that did not emulate the 32-bit Intel Pentium architecture. However, in the final stage of the project a fatal event occurred: SCO sold its Unix division. Even earlier, Sequent had become part of IBM. The "successor" to all the features of the Monterey OS is IBM's AIX v.5L - though not quite all of them. The Intel Pentium platform is not a strategic focus for IBM, and AIX is not available on that platform. And because the other leaders of the computer industry do not share (or do not fully share) IBM's position, the idea of a common Unix operating system never came to fruition.

If you've recently started learning Linux and getting comfortable in this vast universe, then you've probably come across the term Unix a lot. Sounds very similar to Linux, but what does it mean? You are probably wondering what is the difference between unix and linux. The answer to this question depends on what you understand by these words. After all, each of them can be interpreted in different ways. In this article, we will look at a simplified history of Linux and Unix to help you understand what they are and how they are related. As always, you can ask questions or add more information in the comments.

Unix began its history in the late 1960s and early 1970s at AT&T Bell Labs in the United States. Together with MIT and General Electric, Bell Labs began developing a new operating system. Some researchers were dissatisfied with the development of this operating system. They moved away from working on the main project and began to develop their own OS. In 1970, this system was called Unix, and two years later it was completely rewritten in the C programming language.

This allowed Unix to be distributed and ported to various devices and computing platforms.

As Unix continued to evolve, AT&T began licensing it for university use as well as for commercial purposes. This meant that not everyone could, as now, freely change and distribute the code of the Unix operating system. Soon, many editions and variants of the Unix operating system began to appear, designed to solve various problems. The most famous of these was BSD.

Linux is similar to Unix in functionality and features, but not in code base. This operating system was assembled from two projects. The first is the GNU project developed by Richard Stallman in 1983, the second is the Linux kernel written by Linus Torvalds in 1991.

The goal of the GNU project was to create a system similar to, but independent of, Unix - that is, an operating system containing no Unix code that could be freely redistributed and modified without restrictions, like free software. Since the free Linux kernel could not run on its own, it was combined with the software of the GNU project, and the Linux operating system was born.

Linux was designed under the influence of the Minix system, a descendant of Unix, but all of its code was written from scratch. Unlike Unix, which was used on servers and the large mainframes of various enterprises, Linux was designed for use on home computers with simpler hardware.

Today, Linux runs on more platforms than any other operating system, including servers, embedded systems, microcomputers, modems, and even mobile phones. Now the difference between linux and unix will be considered in more detail.

What is Unix

The term Unix can refer to such concepts:

  • The original operating system developed by AT&T Bell Labs from which other operating systems are developed.
  • Trademark, written in capital letters. UNIX is owned by The Open Group, which developed the Single UNIX Specification, a set of standards for operating systems. Only those systems that comply with the standards can legitimately be called UNIX. Certification is not free and requires developers to pay for the use of this trademark.
  • All operating systems registered under the UNIX name because they meet the aforementioned standards. These are AIX, A/UX, HP-UX, Inspur K-UX, Reliant UNIX, Solaris, IRIX, Tru64, UnixWare, z/OS and OS X - yes, even the one that runs on Apple computers.

What is Linux

The term Linux refers only to the kernel. An operating system would not be complete without a desktop environment and applications. Since most applications were developed and are now being developed under the GNU project, the full name of the operating system is GNU/Linux.

A lot of people now use the term Linux to refer to all distributions based on the Linux kernel. At the moment, the newest version of the Linux kernel is 4.4, version 4.5 is under development. The renumbering of kernel releases from 3.x to 4.x took place not so long ago.

Linux is a Unix-like operating system that behaves like Unix but does not contain its code. Unix-like operating systems are often referred to as Un*x, *NIX and *N?X, or even Unixoids. Linux has no Unix certification, and GNU stands for "GNU's Not Unix", so in that respect Mac OS X is more Unix than Linux. Nevertheless, the Linux kernel and the GNU/Linux operating system are very similar to Unix in functionality and implement most of the principles of the Unix philosophy: human-readable code, system configuration stored in separate text files, and the use of small command-line tools, a graphical shell, and a session manager.

It is important to note that not all Unix-like systems have received UNIX certification. In a certain context, all operating systems based on UNIX or its ideas are called UNIX-like, whether they have a UNIX certificate or not. In addition, they can be commercial and free.

I hope it has now become clearer how Unix differs from Linux. But let's go further and summarize.

Main differences

  • Linux is a free and open-source operating system, while the original Unix is not (except for some of its derivatives).
  • Linux is a clone of the original Unix, but it does not contain its code.
  • The main difference between unix and linux is that Linux is only a kernel, while Unix was and is a full-fledged operating system.
  • Linux was designed for personal computers, while Unix is focused primarily on large workstations and servers.
  • Today, Linux supports more platforms than Unix.
  • Linux supports more types of file systems than Unix.

As you can see, the confusion usually comes from the fact that linux vs unix can mean completely different things. Whatever the meaning, the fact remains that Unix came first and Linux came later. Linux was born from a desire for software freedom and portability, inspired by the Unix approach. It's safe to say that we are all indebted to the free software movement, because the world would be a much worse place without it.

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN

FEDERATION

FEDERAL AGENCY FOR EDUCATION

STATE EDUCATIONAL INSTITUTION

HIGHER PROFESSIONAL EDUCATION

Taganrog State Radio Engineering University

Discipline "Informatics"

"UNIX operating system"

Completed by: Orda-Zhigulina D.V., gr. E-25

Checked: Vishnevetsky V.Yu.

Taganrog 2006


Introduction

What is Unix

Where to get free Unix

Main part. (Description of Unix)

1. Basic concepts of Unix

2. File system

2.1 File types

3. Command interpreter

4. UNIX kernel

4.1 General organization of the traditional UNIX kernel

4.2 Main functions of the kernel

4.3 Principles of interaction with the kernel

4.4 Principles of interrupt handling

5. I/O control

5.1 Principles of system I/O buffering

5.2 System calls for I/O control

6. Interfaces and entry points of drivers

6.1 Block drivers

6.2 Character drivers

6.3 Stream drivers

7. Commands and utilities

7.1 Organization of commands in UNIX OS

7.2 I/O redirection and piping

7.3 Built-in, library, and user commands

7.4 Command language programming

8. GUI tools

8.1 User IDs and user groups

8.2 File protection

8.3 Promising operating systems supporting the UNIX OS environment

Conclusion

Main differences between Unix and other OS

Applications of Unix


Introduction

What is Unix

The term Unix and the not-quite-equivalent term UNIX are used with different meanings. Let's start with the second, as the simpler one. In a nutshell, UNIX (written that way) is a registered trademark, originally owned by the AT&T Corporation, which has changed hands over the years and is now the property of an organization called the Open Group. The right to use the UNIX name is earned through a kind of vetting: passing tests of compliance with the specifications of a reference OS (the Single UNIX Specification). This procedure is not only complicated but also very expensive, so only a few of the current operating systems have undergone it, and all of them are proprietary, that is, the property of certain corporations.

Among the corporations that have earned the right to the UNIX name through the sweat of their developers and testers and the blood (more precisely, the dollars) of their owners, we can name the following:

Sun with its SunOS (better known to the world as Solaris);

IBM, which developed the AIX system;

Hewlett-Packard is the owner of the HP-UX system;

IRIX is SGI's operating system.

In addition, the proper UNIX name applies to systems:

Tru64 Unix, developed by DEC, which passed to Compaq when DEC was wound up and, together with Compaq, has now become the property of the same Hewlett-Packard;

UnixWare is owned by SCO (a product of the merger of Caldera and Santa Cruz Operation).

Being proprietary, all these systems are sold for a lot of money (even by American standards). However, this is not the main obstacle to the spread of UNIX proper. Their common drawback is being tied to specific hardware platforms: AIX runs on IBM servers and workstations with Power processors, HP-UX on Hewlett-Packard's own HP-PA (Precision Architecture) machines, IRIX on graphics stations from SGI with MIPS processors, and Tru64 Unix was designed for Alpha processors (a platform that is, alas, now defunct). Only UnixWare targets the "democratic" PC platform, and Solaris exists in versions for two architectures - Sun's own SPARC and the same PC - which, however, has not done much for their prevalence because of relatively weak support for new PC peripherals.

Thus, UNIX is primarily a legal concept. The term Unix, by contrast, has a technological interpretation. It is the common name used by the IT industry for the entire family of operating systems that are either derived from the "original" UNIX of AT&T or reproduce its functions "from scratch", including free operating systems such as Linux, FreeBSD and the other BSDs, which have never been subjected to verification of conformance to the Single UNIX Specification. That is why they are often called Unix-like.

The term "POSIX-compliant systems", which is close in meaning, is also widely used, which unites a family of operating systems that correspond to the set of standards of the same name. The POSIX (Portable Operation System Interface based on uniX) standards themselves were developed on the basis of practices adopted in Unix systems, and therefore the latter are all, by definition, POSIX-compliant. However, these are not completely synonymous: compatibility with POSIX standards is claimed by operating systems that are only indirectly related to Unix (QNX, Syllable), or not related at all (up to Windows NT/2000/XP).

To clarify the question of the relationship between UNIX, Unix and POSIX, we have to delve a little into history. Actually, the history of this issue is discussed in detail in the corresponding chapter of the book "Free Unix: Linux, FreeBSD and Others" (coming soon by BHV-Petersburg) and in articles on the history of Linux and BSD systems.

The Unix operating system (more precisely, its first version) was developed by employees of Bell Labs (a division of AT&T) in 1969-1971. Its first authors, Ken Thompson and Dennis Ritchie, did it solely for their own purposes - in particular, to be able to have fun with their favorite game, Space Travel. For a number of legal reasons, the company itself could not use it as a commercial product. However, practical applications for Unix were found quite quickly. Firstly, it was used at Bell Labs to prepare various kinds of technical (including patent) documentation. Secondly, the UUCP (Unix to Unix Copy Program) communication system was based on Unix.

Another area where Unix was used in the 1970s and early 1980s turned out to be quite unusual: it was distributed in source form among scientific institutions working in the field of Computer Science. The purpose of such distribution (it was not completely free in the current sense, but in practice turned out to be very liberal) was education and research in that field.

The most famous result is the BSD Unix system, created at the University of California, Berkeley, which, gradually freeing itself from the proprietary code of the original Unix, eventually, after dramatic ups and downs (described in detail here), gave rise to the modern free BSD systems: FreeBSD, NetBSD and others.

One of the most important results of the work of the university hackers was the introduction (in 1983) of support for the TCP/IP protocol stack into Unix, on which the ARPANET of the time was based (and which became the foundation of the modern Internet). This was a prerequisite for Unix's dominance in everything related to the World Wide Web, and it became the next practical application of this family of operating systems. By that time it was no longer possible to speak of a single Unix, because, as mentioned earlier, it had split into two branches: one descended from the original UNIX (which over time received the name System V) and the one of Berkeley origin. System V, in turn, formed the basis of the various proprietary UNIXes that in fact had the legal right to claim that name.

The last circumstance - the branching of the once-single OS into several lines gradually losing compatibility - came into conflict with one of the cornerstones of the Unix ideology: the portability of the system between different platforms and of its applications from one Unix system to another. This brought to life the activities of various standards organizations, culminating in the POSIX set of standards mentioned earlier.

It was the POSIX standards that Linus Torvalds relied on when creating his operating system, Linux, "from scratch" (that is, without using pre-existing code). Linux, having quickly and successfully mastered the traditional application areas of Unix systems (software development, communications, the Internet), eventually opened up a new one for them: general-purpose desktop user platforms. This is what made it popular among the public - a popularity that surpasses that of all other Unix systems combined, both proprietary and free.

Further, we will talk about working on Unix systems in the broadest sense of the word, without regard to trademarks and other legal subtleties, although the main examples of working methods will be taken from their free implementations: Linux, to a lesser extent FreeBSD, and to an even lesser extent other BSD systems.

Where to get free Unix?

FreeBSD Database - www.freebsd.org;

You can go to www.sco.com


Main part. (Description of Unix)

1. Basic concepts of Unix

Unix is based on two basic concepts: "process" and "file". Processes are the dynamic side of the system - they are the subjects; files are the static side - the objects that processes act on. Almost the entire interface through which processes interact with the kernel and with each other looks like reading and writing files, although to this you need to add things like signals, shared memory, and semaphores.

Processes can be roughly divided into two types: tasks and daemons. A task is a process that does its work and tries to finish it as soon as possible and terminate. A daemon waits for the events it is supposed to handle, handles them, and waits again; it usually terminates on the orders of another process, most often when the user kills it with the command "kill process_number". In this sense an interactive task that processes user input actually behaves more like a daemon than a task.

2. File system

In older Unix systems a file name was limited to 14 characters; in newer ones this restriction has been removed. Besides the file name, a directory entry contains the file's inode identifier - an integer giving the number of the block in which the file's attributes are recorded. Among these attributes are:

the identifier of the user who owns the file;

the group identifier;

the number of links to the file (see below);

the date and time of creation, last modification and last access to the file;

the access attributes, which contain the file type (see below), the rights-change-on-execution attributes (see below), and the read, write and execute permissions for the owner, for members of the owner's group, and for everyone else.

The right to delete a file is determined by the right to write to the directory that contains it.
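These attributes can be examined from a program. Below is a minimal C sketch (not part of the original text) using the stat system call; the file name "example.txt" is an arbitrary placeholder.

/* Minimal sketch: print some i-node attributes of a file.
   The path "example.txt" is an arbitrary placeholder. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
    struct stat st;
    if (stat("example.txt", &st) == -1) {
        perror("stat");
        return 1;
    }
    printf("owner uid: %d, group gid: %d\n", (int)st.st_uid, (int)st.st_gid);
    printf("link count: %ld\n", (long)st.st_nlink);
    printf("permission bits: %o\n", (unsigned)(st.st_mode & 07777));
    printf("last modification: %s", ctime(&st.st_mtime));
    return 0;
}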

Each file (but not a directory) can be known under several names, but they must all be on the same partition. All links to a file are equal; the file is removed when its last link is removed. If the file is open (for reading and/or writing), its link count is increased by one more; many programs that create a temporary file exploit this by deleting it immediately after opening, so that if they crash and the operating system closes the files they had open, the temporary file is removed by the operating system itself.
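As a hedged illustration of the temporary-file trick just described, here is a minimal C sketch; the file name "tmp_scratch" is an arbitrary placeholder and error handling is abbreviated.

/* Sketch: create a temporary file and remove its only name at once.
   The data remain accessible through the open descriptor, and the
   disk space is reclaimed automatically when the descriptor is closed
   (or when the process dies). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("tmp_scratch", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd == -1) { perror("open"); return 1; }
    unlink("tmp_scratch");           /* the last name is gone ...        */
    write(fd, "scratch data\n", 13); /* ... but the file is still usable */
    close(fd);                       /* now the space is actually freed  */
    return 0;
}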

The file system has another interesting feature: if, after a file has been created, it is written not contiguously but at large intervals, no disk space is allocated for those intervals. As a result, the total size of the files in a partition can exceed the size of the partition, and when such a file is deleted, less space is freed than its nominal size.
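A minimal C sketch of such a "file with holes", assuming a writable current directory; the file name and the 1 MB offset are arbitrary choices for illustration.

/* Sketch: seeking far beyond the end and writing one byte produces a
   file whose nominal size is about 1 MB, while only a block or two of
   disk space is actually allocated. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("sparse_example", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }
    lseek(fd, 1024 * 1024, SEEK_SET);  /* skip 1 MB without writing  */
    write(fd, "x", 1);                 /* file size becomes 1 MB + 1 */
    close(fd);
    return 0;
}

On most Unix file systems a long listing will then report the nominal size of about one megabyte, while the disk-usage figure shows only the block or two actually allocated.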

2.1 File types

Files are of the following types:

regular file with direct (random) access;

directory (file containing names and identifiers of other files);

symbolic link (string with the name of another file);

block device (disk or magnetic tape);

character (sequential-access) device (terminals, serial and parallel ports; disks and tapes also have a character device interface);

named pipe (FIFO).

Special files designed for working with devices are usually located in the "/dev" directory. Here are some of them (as named in FreeBSD):

tty* - terminals, including: ttyv - virtual console;

ttyd - DialIn terminal (usually a serial port);

cuaa - DialOut line

ttyp - network pseudo-terminal;

tty - the terminal with which the task is associated;

wd* - hard drives and their subsections, including: wd - hard drive;

wds - partition of this disk (here called "slice");

wds - partition section;

fd - floppy disk;

rwd*, rfd* - the same as wd* and fd*, but accessed through the raw (character) interface;

Sometimes it is necessary for a program launched by a user to run not with the rights of the user who launched it, but with some other rights. In that case the rights-change attribute is set, so the program runs with the rights of its owner. (As an example, consider a program that reads a file of questions and answers and, based on it, tests the student who ran it: the program must be able to read the answers file, but the student who ran it must not.) This is how the passwd program works, with which a user changes his password: the user can run the passwd program, and the program can modify the system database, although the user himself cannot.

Unlike DOS, where the full file name looks like "drive:pathname", and RISC-OS, where it is "-filesystem-drive:$.pathname" (which has its advantages), Unix uses a transparent notation of the form "/path/name". The root is taken from the partition from which the Unix kernel was booted. If another partition is to be used (and the boot partition usually contains only what is needed for booting), the command `mount /dev/partitionfile dir` is used. Files and subdirectories that previously existed in that directory become inaccessible until the partition is unmounted (naturally, all sensible people use empty directories as mount points). Only the superuser has the right to mount and unmount.

On startup, every process can expect to have three files already open for it: standard input stdin on descriptor 0, standard output stdout on descriptor 1, and standard error stderr on descriptor 2. At login, after the user enters a name and password and the shell is started, all three are directed to /dev/tty; later any of them can be redirected to any file.
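A minimal C sketch of this convention (not from the original text): the three standard descriptors are simply open files that can be used with read and write directly.

/* Sketch: use the three standard descriptors directly, without stdio. */
#include <unistd.h>

int main(void)
{
    char buf[256];
    ssize_t n = read(0, buf, sizeof buf);   /* descriptor 0: standard input  */
    if (n > 0)
        write(1, buf, (size_t)n);           /* descriptor 1: standard output */
    write(2, "done\n", 5);                  /* descriptor 2: standard error  */
    return 0;
}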

3. Command interpreter

Unix almost always comes with two shells, sh (the Bourne shell) and csh (a C-like shell). Besides them there are also bash (the Bourne Again shell), ksh (the Korn shell) and others. Without going into details, here are the general principles:

All commands, except for changing the current directory, setting environment variables and the structured-programming statements, are external programs. These programs are normally located in the /bin and /usr/bin directories; system administration programs are in /sbin and /usr/sbin.

A command consists of the name of the program to be run and its arguments. Arguments are separated from the command name and from each other by spaces and tabs. Some special characters are interpreted by the shell itself; among them are " ' ` ! $ ^ * ? | & ; and a few others.

Several commands may be given on one command line. Commands can be separated by ; (sequential execution), & (asynchronous background execution), or | (a pipeline: the stdout of the first command is fed to the stdin of the second).

You can also redirect the standard streams by including "<file" (standard input is taken from the file), ">file" (standard output goes to the file, which is truncated first) or ">>file" (the output is appended to the end of the file) among the arguments.
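As a hedged sketch of what the shell does behind the scenes for a pipeline such as ls | wc -l, the following C fragment uses pipe, fork, dup2 and execlp; real shells are far more elaborate, and error handling is abbreviated here.

/* Sketch: run the equivalent of "ls | wc -l". */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* first command: writer */
        dup2(fds[1], 1);               /* its stdout goes into the pipe */
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp"); _exit(127);
    }
    if (fork() == 0) {                 /* second command: reader */
        dup2(fds[0], 0);               /* its stdin comes from the pipe */
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp"); _exit(127);
    }
    close(fds[0]); close(fds[1]);
    while (wait(NULL) > 0)             /* wait for both children */
        ;
    return 0;
}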

If you need information on any command, issue the command "man command_name". It will be displayed on the screen through the "more" program - see how to control it on your Unix with `man more`.

4. UNIX kernel

Like any other multi-user operating system that protects users from each other and protects system data from any unprivileged user, UNIX has a secure kernel that manages computer resources and provides users with a basic set of services.

The convenience and efficiency of modern versions of the UNIX OS does not mean that the entire system, including the kernel, is designed and structured in the best possible way. The UNIX OS has evolved over many years (it is the first operating system in history that continues to gain popularity at such a mature age - more than 25 years). Naturally, the capabilities of the system grew, and, as often happens in large systems, qualitative improvements in the structure of the UNIX OS did not keep pace with the growth of its capabilities.

As a result, the kernel of most modern commercial versions of the UNIX OS is a large, not very well structured monolith. For this reason, programming at the UNIX kernel level continues to be an art (except for the well-established and well-understood technology of developing external device drivers). This lack of technological discipline in the organization of the UNIX kernel satisfies few people; hence the desire to reproduce the UNIX OS environment completely while organizing the system in an entirely different way.

Because it is the most widespread, the UNIX System V kernel is discussed most often (it can be considered the traditional one).

4.1 General organization of the traditional UNIX kernel

One of the main achievements of the UNIX OS is its high degree of mobility: the entire operating system, including its kernel, is comparatively easy to port to different hardware platforms. All parts of the system outside the kernel are completely machine-independent; these components are carefully written in C, and porting them to a new platform (at least in the class of 32-bit computers) requires only recompiling the source code into the target computer's instructions.

Naturally, the greatest problems are associated with the kernel, which completely hides the specifics of the computer being used while itself depending on those specifics. Thanks to a well-thought-out separation of machine-dependent and machine-independent kernel components (apparently, from the point of view of operating system developers, the highest achievement of the developers of the traditional UNIX kernel), the main part of the kernel does not depend on the architectural features of the target platform, is written entirely in C, and needs only to be recompiled to be ported to a new platform.

However, a relatively small part of the kernel is machine dependent and is written in a mixture of C and the target processor's assembly language. When transferring a system to a new platform, this part of the kernel must be rewritten using assembly language and taking into account the specific features of the target hardware. The machine-dependent parts of the kernel are well isolated from the main machine-independent part, and with a good understanding of the purpose of each machine-dependent component, rewriting the machine-specific part is mostly a technical task (although it requires high programming skills).

The machine-specific part of the traditional UNIX kernel includes the following components:

low-level bootstrapping and initialization of the system (for now, this depends on hardware features);

primary processing of internal and external interrupts;

memory management (in the part that relates to the features of virtual memory hardware support);

process context switching between user and kernel modes;

target platform specific parts of device drivers.

4.2 Main functions of the kernel

The main functions of the UNIX OS kernel include the following:

(a) System initialization - the bootstrap and start-up function. The kernel provides a bootstrapping facility that loads the full kernel image into the computer's memory and starts the kernel.

(b) Process and thread management - the function of creating, terminating and keeping track of existing processes and threads ("processes" running on shared virtual memory). Since UNIX is a multi-process operating system, the kernel provides for the sharing of processor time (or processors in multi-processor systems) and other computer resources between running processes to give the appearance that the processes are actually running in parallel.

(c) Memory management - the function of mapping the virtually unlimited virtual memory of processes onto the computer's physical RAM, which is limited in size. The corresponding kernel component allows several processes to share the same areas of RAM, using external memory as backing store.

(d) File management - a function that implements the abstraction of the file system - hierarchies of directories and files. UNIX file systems support several types of files. Some files may contain ASCII data, others will correspond to external devices. The file system stores object files, executable files, and so on. Files are usually stored on external storage devices; access to them is provided by means of the kernel. There are several types of file system organization in the UNIX world. Modern versions of the UNIX operating system simultaneously support most types of file systems.

(e) Communication means - a function that provides the ability to exchange data between processes running inside the same computer (IPC - Inter-Process Communications), between processes running in different nodes of a local or wide area data network, as well as between processes and external device drivers.

(f) Programming interface - a function that provides access to the capabilities of the kernel from the side of user processes based on the mechanism of system calls, arranged in the form of a library of functions.

4.3 Principles of interaction with the core

In any operating system, some mechanism is supported that allows user programs to access the services of the OS kernel. In the operating systems of the most famous Soviet computer BESM-6, the corresponding means of communication with the kernel were called extracodes, in the IBM operating systems they were called system macros, and so on. On UNIX, these facilities are called system calls.

The name does not change the essence, which is that to access kernel functions, "special instructions" of the processor are used; their execution raises a special kind of internal processor interrupt that switches it into kernel mode (in most modern operating systems this kind of interrupt is called a trap). When handling such an interrupt (after decoding it), the OS kernel recognizes that the interrupt is actually a request from a user program to perform certain actions, extracts the parameters of the call and processes it, and then performs a "return from interrupt", resuming the normal execution of the user program.

It is clear that the specific mechanisms for raising internal interrupts initiated by the user program differ in different hardware architectures. Since the UNIX OS strives to provide an environment in which user programs can be fully mobile, an additional layer was required to hide the specifics of the specific mechanism for raising internal interrupts. This mechanism is provided by the so-called system call library.

To the user, the system call library is an ordinary library of pre-implemented functions of the C programming system. When programming in C, using a function from the system call library is no different from using any other library or user-defined C function. Inside, however, every function of a particular system call library contains code that is, generally speaking, specific to the given hardware platform.
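As an illustration (a sketch, not part of the original text): the write function of the standard C library is exactly such a wrapper. The second call below reaches the same kernel entry through the generic syscall interface; the syscall function and the SYS_write constant are assumed here to be available as they are on Linux and most BSD systems.

/* Sketch: two ways to reach the same kernel service. The first call
   uses the ordinary library wrapper; the second issues the trap
   through the generic syscall() entry point. Both end up in the
   kernel's write handler. */
#include <unistd.h>
#include <sys/syscall.h>   /* SYS_write */

int main(void)
{
    write(1, "via libc wrapper\n", 17);
    syscall(SYS_write, 1, "via raw system call\n", 20);
    return 0;
}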

4.4 Principles of interrupt handling

Of course, the mechanism for handling internal and external interrupts used in operating systems depends mainly on what kind of hardware support for interrupt handling is provided by a particular hardware platform. Fortunately, by now (and for quite some time now) major computer manufacturers have de facto agreed on the basic interrupt mechanisms.

Speaking somewhat loosely, the essence of the mechanism adopted today is that each possible processor interrupt (whether internal or external) corresponds to some fixed address in physical RAM. At the moment when the processor is allowed to take an interrupt and an internal or external interrupt request is present, control is transferred by the hardware to the physical RAM cell with the corresponding address; the address of this cell is usually called the "interrupt vector" (as a rule, requests for internal interrupts, i.e. requests coming directly from the processor, are serviced immediately).

It is the operating system's job to place, in the corresponding RAM cells, program code that performs the initial processing of the interrupt and initiates full processing.

Basically, the UNIX operating system follows this general approach. The interrupt vector corresponding to an external interrupt, i.e. an interrupt from some external device, contains instructions that set the processor's run level (the run level determines which external interrupts the processor should respond to immediately) and jump to the full interrupt handler in the corresponding device driver. For an internal interrupt (for example, an interrupt initiated by a user program when the required virtual memory page is missing from main memory, or when an exception occurs in the user program, and so on) or a timer interrupt, the interrupt vector contains a jump to the corresponding UNIX kernel routine.

5. I/O control

Traditionally, UNIX OS distinguishes three types of I/O organization and, accordingly, three types of drivers. Block I/O is mainly intended for working with directories and regular files of the file system, which at the basic level have a block structure. At the user level, it is now possible to work with files by directly mapping them to virtual memory segments. This feature is considered the top level of block I/O. At the lower level, block I/O is supported by block drivers. Block I/O is also supported by system buffering.

Character input/output is used for direct (without buffering) exchanges between the user's address space and the corresponding device. Kernel support common to all character drivers is to provide functions for transferring data between user and kernel address spaces.

Finally, stream I/O is similar to character I/O, but due to the possibility of including intermediate processing modules in the stream, it has much more flexibility.

5.1 Principles of System I/O Buffering

The traditional way to reduce overhead when performing exchanges with external memory devices that have a block structure is block I/O buffering. This means that any block of an external memory device is read first of all into some buffer of the main memory area, called the system cache in UNIX OS, and from there it is completely or partially (depending on the type of exchange) copied to the corresponding user space.

The principles of the traditional buffering mechanism are, first, that a copy of the block's contents is kept in the system buffer until it becomes necessary to replace it because buffers run short (a variant of the LRU algorithm is used for the replacement policy). Second, when any block of an external memory device is written, only an update (or formation and filling) of the cache buffer is actually performed. The actual exchange with the device happens either when the buffer is flushed because its contents are being replaced, or when a special sync (or fsync) system call is issued, which is supported precisely for forcibly pushing updated cache buffers out to external memory.
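A short C sketch of the forced write-back just described, with an arbitrary file name: write only updates the system buffers, while fsync pushes the updated buffers out to the device.

/* Sketch: write goes to the buffer cache; fsync forces the
   modified buffers out to the external memory device. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) { perror("open"); return 1; }
    write(fd, "record\n", 7);   /* cached in system buffers           */
    fsync(fd);                  /* now guaranteed to be on the device */
    close(fd);
    return 0;
}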

This traditional buffering scheme came into conflict with the virtual memory management tools developed in modern versions of the UNIX OS, and in particular with the mechanism for mapping files to virtual memory segments. Therefore, System V Release 4 introduced a new buffering scheme, which is currently used in parallel with the old scheme.

The essence of the new scheme is that at the kernel level, the mechanism for mapping files to virtual memory segments is actually reproduced. First, remember that the UNIX kernel does indeed run in its own virtual memory. This memory has a more complex, but fundamentally the same structure as the user's virtual memory. In other words, the virtual memory of the kernel is segment-page, and, along with the virtual memory of user processes, is supported by a common virtual memory management subsystem. It follows, secondly, that almost any function provided by the kernel to users can be provided by some components of the kernel to other components of the kernel. In particular, this also applies to the ability to map files to virtual memory segments.

The new buffering scheme in the UNIX kernel is based mainly on the fact that almost nothing special needs to be done to organize buffering. When a user process opens a file that has not been opened before, the kernel forms a new segment and connects the file being opened to that segment. From then on (regardless of whether the user process works with the file in the traditional way through the read and write system calls, or maps the file into its own virtual memory segment), all work at the kernel level is done with the kernel segment to which the file is attached. The main idea of the new approach is that the gap between virtual memory management and system-wide buffering is eliminated (this should have been done long ago, since it is obvious that the main buffering in the operating system should be performed by the virtual memory management component).
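A hedged sketch of the file-to-segment mapping on which the new scheme rests, as it looks from a user process; the file name is an arbitrary placeholder and error handling is abbreviated.

/* Sketch: connect a file to a segment of the process's virtual memory
   and read it without any explicit read calls. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, (size_t)st.st_size, stdout);  /* file contents appear as memory */

    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}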

Why not abandon the old buffering mechanism altogether? The point is that the new scheme assumes some continuous addressing inside the external memory object (there must be an isomorphism between the object being mapped and the object it is mapped onto). However, when organizing file systems, UNIX allocates external memory in a rather complicated way, which applies in particular to i-nodes. Therefore some blocks of external memory have to be treated as isolated, and for them the old buffering scheme turns out to be more profitable (although future versions of UNIX may perhaps switch entirely to the unified new scheme).

5.2 System calls for I/O control

To gain access to a file of any kind (including special files) - that is, to be able to perform subsequent I/O operations on it - a user process must first connect to the file using one of the open, creat, dup or pipe system calls.

The sequence of actions of the open (pathname, mode) system call is as follows:

the consistency of the input parameters (mainly related to the flags of the file access mode) is analyzed;

allocate or locate space for a file descriptor in the system process data area (u-area);

in the system-wide area, existing space is allocated or located to accommodate the system file descriptor (file structure);

the file system is searched for the object named "pathname", and a file-system-level file descriptor (a vnode in UNIX System V Release 4 terms) is created or found;

the vnode is bound to the previously formed file structure.

The open and creat system calls are (almost) functionally equivalent: any existing file can be opened with the creat system call, and any new file can be created with the open system call. However, with regard to creat it is important to emphasize that, in its natural use (creating a file), this system call creates a new entry in the corresponding directory (according to the given pathname) and also creates and appropriately initializes a new i-node.

Finally, the dup system call (duplicate) creates a new descriptor for an already open file. This UNIX-specific system call exists solely for the purpose of I/O redirection. Its execution consists of creating, in the u-area of the user process's system space, a new open-file descriptor containing the newly formed file descriptor number (an integer), but referring to the already existing system-wide file structure and carrying the same attributes and flags as the original open file.

Other important system calls are the read and write system calls. The read system call is executed as follows:

the descriptor of the specified file is located in the system-wide file table, and it is determined whether the access from the given process to the given file in the specified mode is legal;

for some (short) time, a synchronization lock is set on the vnode of this file (the contents of the descriptor must not change at critical moments of the read operation);

the actual read is performed using the old or new buffering mechanism, after which the data is copied to become available in the user's address space.

The write operation works in a similar way, but it changes the contents of the buffers in the buffer pool.
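A minimal C sketch tying read and write together - copying one file to another, the pattern around which most utilities are built; the file names are arbitrary placeholders.

/* Sketch: copy from one descriptor to another with read/write. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    int in  = open("input.dat",  O_RDONLY);
    int out = open("output.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in == -1 || out == -1) { perror("open"); return 1; }

    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t)n);

    close(in);
    close(out);
    return 0;
}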

The close system call tells the driver to end its interaction with the corresponding user process and, if this was the last close of the device, sets the system-wide "driver free" flag.

Finally, one more "special" system call, ioctl, is supported for special files. It is the only system call that is provided for special files and not for other kinds of files. In effect, ioctl allows the interface of any driver to be extended arbitrarily. Its parameters include an operation code and a pointer to some area of the user process's memory. All interpretation of the operation code and of the associated parameters is done by the driver.
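As a hedged example of such driver-specific requests, most Unix terminal drivers understand the TIOCGWINSZ operation code, which returns the terminal window size through a pointer supplied by the caller.

/* Sketch: ask the terminal driver for the window size via ioctl.
   TIOCGWINSZ and struct winsize come from <sys/ioctl.h> on most
   modern Unix systems. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct winsize ws;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl");
        return 1;
    }
    printf("terminal: %d rows x %d columns\n", (int)ws.ws_row, (int)ws.ws_col);
    return 0;
}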

Naturally, since drivers are primarily designed to control external devices, the driver code must contain the appropriate means for handling interrupts from the device. The call to the individual interrupt handler in the driver comes from the operating system kernel. Similarly, a driver can declare a "timeout" input that the kernel accesses when the time previously ordered by the driver elapses (such timing control is necessary when managing less intelligent devices).

The general scheme of the interface organization of drivers is shown in Figure 3.5. As this figure shows, in terms of interfaces and system-wide management, there are two types of drivers - character and block. From the point of view of internal organization, another type of drivers stands out - stream drivers. However, in terms of their external interface, stream drivers do not differ from character drivers.

6. Interfaces and entry points of drivers

6.1 Block drivers

Block drivers are designed to serve external devices with a block structure (magnetic disks, tapes, etc.) and differ from the others in that they are designed and run with system buffering. In other words, such drivers always work through the system buffer pool. As Figure 3.5 shows, any read or write access through a block driver first goes through preprocessing, which consists in trying to find a copy of the needed block in the buffer pool.

If a copy of the required block is not in the buffer pool, or if for some reason it is necessary to replace the contents of some updated buffer, the UNIX kernel calls the strategy procedure of the corresponding block driver. Strategy provides a standard interface between the kernel and the driver. Using the library subroutines intended for writing drivers, the strategy procedure can organize queues of exchanges with the device, for example in order to optimize the movement of magnetic heads on the disk. All exchanges performed by the block driver are performed with buffer memory; copying the required information into the memory of the corresponding user process is done by the kernel routines that manage the buffers.

6.2 Character drivers

Character drivers are primarily designed to serve devices that communicate character-by-character or variable-length character strings. A typical example of a character device is a simple printer that accepts one character per exchange.

Character drivers do not use system buffering. They directly copy data from user process memory when performing write operations, or into user process memory when performing read operations, using their own buffers.

It should be noted that it is possible to provide a character interface for a block device. In this case, the block driver uses the additional features of the strategy procedure, which allows the exchange to be carried out without the use of system buffering. For a driver that has both block and character interfaces, two special files are created in the file system, block and character. With each call, the driver receives information about the mode in which it is used.

6.3 Stream drivers

The main purpose of the streams mechanism is to increase the level of modularity and flexibility of drivers with complex internal logic (this applies most of all to drivers implementing advanced network protocols). The specificity of such drivers is that most of the program code does not depend on the features of the hardware device. Moreover, it is often advantageous to combine parts of the program code in different ways.

All this led to the emergence of a streaming architecture of drivers, which are a bidirectional pipeline of processing modules. At the beginning of the pipeline (closest to the user process) is the stream header, which is primarily accessed by the user. At the end of the pipeline (closest to the device) is the normal device driver. An arbitrary number of processing modules can be located in the gap, each of which is designed in accordance with the required streaming interface.

7. Commands and Utilities

When working interactively in a UNIX environment, one uses various utilities, or external commands, of the shell language. Many of these utilities are as complex as the shell itself (and, incidentally, the shell itself is one of the utilities that can be invoked from the command line).

7.1 Organization of commands in UNIX OS

To create a new command, you only need to follow the rules of C programming. Every well-formed C program begins execution with its main function. This "semi-system" function has a standard interface, which is the basis for organizing commands that can be invoked in the shell environment. External commands are executed by the shell interpreter using the fork system call together with one of the exec family of calls. The parameters of the exec system call include a set of text strings, which is passed as input to the main function of the program being run.

More precisely, the main function receives two parameters: argc (the number of text strings being passed) and argv (a pointer to an array of pointers to those text strings). A program that claims to be usable as a shell command must have a well-defined external interface (its parameters are usually entered from the terminal) and must check and correctly parse its input parameters.

In addition, to conform to shell style, such a program should not itself redefine the files corresponding to standard input, standard output and standard error. The command's I/O can then be redirected in the usual way, and the command can be included in pipelines.
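A hedged skeleton of such a well-behaved command (not from the original text): it looks only at its arguments and at the standard streams, so it can be redirected and placed in pipelines; the -n option is an arbitrary illustrative flag.

/* Sketch of a well-behaved external command: it reads standard input,
   writes standard output, and takes its options from argv. The -n flag
   (number the output lines) is just an example. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int number_lines = (argc > 1 && strcmp(argv[1], "-n") == 0);
    char line[1024];
    long count = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (number_lines)
            printf("%6ld  %s", ++count, line);
        else
            fputs(line, stdout);
    }
    return 0;
}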

7.2 I/O redirection and piping

As the last sentence of the previous paragraph suggests, nothing special needs to be done to allow I/O redirection and pipelining when programming commands. It is enough to leave the three initial file descriptors untouched and use them correctly: write output to the stdout descriptor, read input from stdin, and print error messages to stderr.

7.3 Built-in, library and user commands

Built-in commands are part of the shell program code. They run as interpreter subroutines and cannot be replaced or redefined. The syntax and semantics of built-in commands are defined in the corresponding command language.

Library commands are part of the system software. They are a set of executable programs (utilities) supplied with the operating system. Most of these programs (such as vi, emacs, grep, find, make, etc.) are extremely useful in practice, but their discussion is beyond the scope of this course (whole thick books are devoted to them).

A user command is any executable program organized in accordance with the requirements described above. Thus, any UNIX user can extend the repertoire of external commands of his command language indefinitely (for example, by writing his own command interpreter).

7.4 Command language programming

Any of the shell language variants mentioned can, in principle, be used as a programming language, and among UNIX users there are many people who write quite serious programs in the shell. For application programming, however, it is better to use proper programming languages (C, C++, Pascal, etc.) rather than command languages.


8. GUI Tools

Although many professional UNIX programmers still prefer the traditional line-oriented means of interacting with the system, the spread of relatively inexpensive high-resolution color graphics terminals has led to all modern versions of UNIX supporting graphical user interfaces, and to users being given tools for building graphical interfaces to the programs they develop. From the end user's point of view, the graphical interface tools supported in the various versions of UNIX and in other systems (for example, MS Windows or Windows NT) are roughly the same in style.

Firstly, in all cases, a multi-window mode of operation with a terminal screen is supported. At any time, the user can create a new window and associate it with the desired program that works with this window as with a separate terminal. Windows can be moved, resized, temporarily closed, etc.

Secondly, in all modern varieties of the graphical interface, mouse control is supported. In the case of UNIX, it often turns out that the normal terminal keyboard is used only when switching to the traditional line interface (although in most cases at least one terminal window is running one of the shell family shells).

Thirdly, such a spread of the "mouse" style of work is possible through the use of interface tools based on pictograms (icons) and menus. In most cases, a program running in a certain window prompts the user to select any function to be performed by it, either by displaying a set of symbolic images of possible functions (icons) in the window, or by offering a multi-level menu. In any case, for further selection, it is sufficient to control the cursor of the corresponding window with the mouse.

Finally, modern graphical interfaces are "user-friendly", providing the ability to immediately get interactive help for any occasion. (Perhaps it would be more accurate to say that good GUI programming style is one that actually provides such hints.)

After listing all these general properties of modern GUI tools, a natural question may arise: If there is such uniformity in the field of graphical interfaces, what is special about graphical interfaces in the UNIX environment? The answer is simple enough. Yes, the end user really in any today's system deals with approximately the same set of interface features, but in different systems these features are achieved in different ways. As usual, the advantage of UNIX is the availability of standardized technologies that allow you to create mobile applications with graphical interfaces.

9. Protection principles

Since the UNIX operating system from its very inception was conceived as a multi-user operating system, the problem of authorizing the access of various users to the files of the file system has always been relevant in it. Access authorization refers to system actions that allow or deny a given user access to a given file, depending on the user's access rights and access restrictions set for the file. The access authorization scheme used in the UNIX OS is so simple and convenient and at the same time so powerful that it has become the de facto standard of modern operating systems (which do not pretend to be systems with multi-level protection).

9.1 User IDs and User Groups

Each running process in UNIX has a real user identifier, an effective user identifier and a saved user identifier associated with it. All of these identifiers are set with the setuid system call, which can be executed only in superuser mode. Similarly, each process has three group identifiers associated with it: the real group ID, the effective group ID and the saved group ID. These identifiers are set with the privileged setgid system call.

When a user logs in, the login program verifies that the user is registered in the system and knows the correct password (if one is set), creates a new process and starts in it the shell required for this user. But before doing so, login sets the user and group identifiers of the newly created process using the information stored in the /etc/passwd and /etc/group files. Once user and group identifiers are associated with a process, the file access restrictions apply to it: a process can access or execute a file (if the file contains an executable program) only if the file's access restrictions allow it. The identifiers associated with a process are passed on to the processes it creates, subject to the same restrictions. However, in some cases a process can change its rights using the setuid and setgid system calls, and sometimes the system changes a process's rights automatically.

Consider, for example, the following situation. The /etc/passwd file is not writable by anyone except the superuser (the superuser can write to any file). Among other things, this file contains user passwords, and each user is allowed to change his own password. There is a special program, /bin/passwd, that changes passwords. However, the user cannot do this even with the help of this program, since writing to the /etc/passwd file is not permitted to him. In UNIX this problem is resolved as follows: an executable file may be marked so that, when it is run, the user and/or group identifiers of the process are changed. If a user requests the execution of such a program (via the exec system call), the user identifier of the corresponding process is set to the identifier of the owner of the executable file, and/or its group identifier to that owner's group identifier. In particular, when the /bin/passwd program is run, the process acquires the root identifier, and the program is able to write to the /etc/passwd file.

For both the user ID and the group ID, the real identifier is the true one, while the effective identifier is the identifier of the current execution. If the current (effective) user identifier corresponds to the superuser, then both it and the group identifier can be reset to any value with the setuid and setgid system calls. If the current user identifier differs from that of the superuser, executing the setuid or setgid system call replaces the current identifier with the true (real) identifier - user or group, respectively.
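A small C sketch of these identifiers (an illustration, not part of the original text): a set-user-id program typically sees a real identifier that differs from the effective one and can drop the extra rights with setuid when they are no longer needed.

/* Sketch: inspect the real and effective user identifiers and drop
   back to the real one - the usual move in a set-user-id program
   after the privileged part of its work is done. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("real uid: %d, effective uid: %d\n",
           (int)getuid(), (int)geteuid());

    if (setuid(getuid()) == -1) {   /* give up the effective identity */
        perror("setuid");
        return 1;
    }
    printf("after setuid: real %d, effective %d\n",
           (int)getuid(), (int)geteuid());
    return 0;
}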

9.2 Protecting files

As is customary in a multiuser operating system, UNIX maintains a uniform mechanism for controlling access to files and file system directories. Any process can access a certain file if and only if the access rights described with the file correspond to the capabilities of this process.

Protection of files from unauthorized access in UNIX rests on three facts. First, any process that creates a file (or directory) is associated with a user identifier (UID - User Identifier) that is unique within the system and that can later be treated as the identifier of the owner of the newly created file. Second, every process attempting to access a file has a pair of identifiers associated with it: its current user and group identifiers. Third, each file corresponds uniquely to its descriptor, the i-node.

Each i-node used in the file system always corresponds to one and only one file. The i-node contains quite a lot of different information (most of it is available to users through the stat and fstat system calls), and among this information there is a part that allows the file system to evaluate the right of a given process to access a given file in the requested mode.

The general protection principles are the same for all existing variants of the system. The i-node information includes the UID and GID of the current owner of the file (immediately after the file is created, these are set to the corresponding identifiers of the creating process, but they can later be changed by the chown and chgrp system calls). In addition, the file's i-node contains a permission scale indicating what the owner may do with the file, what users belonging to the same group as the owner may do with it, and what all other users may do with it. Small implementation details differ between versions of the system.

9.3 Future operating systems supporting the UNIX OS environment

A microkernel is the smallest core part of an operating system, serving as the basis for modular and portable extensions. It appears that most next-generation operating systems will have microkernels. However, there are many different opinions about how operating system services should be organized in relation to the microkernel: how to design device drivers to be as efficient as possible, but keep the driver functions as independent of the hardware as possible; whether non-kernel operations should be performed in kernel space or user space; whether it is worth keeping the programs of existing subsystems (for example, UNIX) or is it better to discard everything and start from scratch.

The concept of a microkernel was introduced into wide use by NeXT, whose operating system used the Mach microkernel. The small privileged core of this operating system, around which subsystems ran in user mode, was in theory supposed to provide unprecedented flexibility and modularity. In practice, however, this advantage was somewhat diminished by the presence of a monolithic server implementing the UNIX BSD 4.3 operating system, which NeXT chose to place on top of the Mach microkernel. Nevertheless, reliance on Mach made it possible to include message-passing facilities and a number of object-oriented services in the system, on the basis of which an elegant end-user interface was built, with graphical tools for network configuration, system administration and software development.

The next microkernel operating system was Microsoft's Windows NT, where the key advantage of using a microkernel was to be not only modularity but also portability. (Note that there is no consensus on whether NT should actually be considered a microkernel operating system.) NT was designed to be used on single and multi-processor systems based on Intel processors, Mips and Alpha (and those that come after them). Since programs written for DOS, Windows, OS/2, and Posix-compliant systems had to run on NT, Microsoft used the inherent modularity of the microkernel approach to create an overall NT structure that did not mimic any existing operating system. Each operating system is emulated as a separate module or subsystem.

More recently, microkernel operating system architectures have been announced by Novell/USL, the Open Software Foundation (OSF), IBM, Apple, and others. One of NT's main competitors in microkernel operating systems is Mach 3.0, a system created at Carnegie Mellon University that both IBM and OSF have undertaken to commercialize. (Next is currently using Mach 2.5 as the basis for NextStep, but is also looking closely at Mach 3.0.) Another competitor is Chorus Systems' Chorus 3.0 microkernel, chosen by USL as the basis for new implementations of the UNIX operating system. Some microkernel will be used in Sun's SpringOS, the object-oriented successor to Solaris (if, of course, Sun completes SpringOS). There is an obvious trend towards moving from monolithic to microkernel systems (this process is not straightforward: IBM took a step back and abandoned the transition to microkernel technology). By the way, this is not news at all for QNX Software Systems and Unisys, which have been releasing successful microkernel operating systems for several years. QNX OS is in demand in the real-time market, and Unisys' CTOS is popular in banking. Both systems successfully use the modularity inherent in microkernel operating systems.


Conclusion

The main differences between Unix and other OS

Unix consists of a kernel with built-in drivers, plus utilities (programs external to the kernel). If the configuration needs to be changed (a device added, a port or interrupt changed), the kernel is rebuilt (relinked) from object modules or (for example, in FreeBSD) from the sources. This is not entirely accurate: some parameters can be changed without rebuilding, and there are also loadable kernel modules.

In contrast to Unix, Windows (if it is not specified which, then 3.11, 95 and NT are meant) and OS/2 effectively link their drivers on the fly while booting. In that case the compactness of the assembled kernel and the reuse of common code are an order of magnitude lower than in Unix. In addition, if the system configuration is unchanged, the Unix kernel can be written into ROM and executed without being loaded into RAM (only the starting part of the BIOS needs to be changed). Compactness of the code is especially important, because the kernel and the drivers never leave physical memory and are not swapped to disk.

Unix is the most multi-platform OS. Windows NT tries to imitate it, but so far without much success: after abandoning MIPS and PowerPC, Windows NT remained on only two platforms, the traditional i*86 and DEC Alpha. Portability of programs from one version of Unix to another is limited. A sloppily written program that does not take into account the differences between Unix implementations, or that makes unfounded assumptions such as "an integer must be four bytes long", may require serious rework; but it is still many orders of magnitude easier than porting, say, from OS/2 to NT.

Applications of Unix

Unix is used both as a server and as a workstation. In the server category it competes with MS Windows NT, Novell NetWare, IBM OS/2 Warp Connect, DEC VMS and mainframe operating systems. Each system has its own application area in which it is better than the others.

Windows NT is for administrators who prefer a familiar user interface to resource savings and high performance.

NetWare is for networks where high-performance file and printer services are needed and other services are not as important; its main drawback is that it is difficult to run applications on a NetWare server.

OS/2 is good where a "light" application server is needed. It requires fewer resources than NT, is more flexible to manage (though it can be harder to set up), and its multitasking is very good. Authorization and access control are not implemented at the OS level, which is more than compensated by their implementation at the level of application servers (though other OSes often do the same). Many FIDOnet and BBS stations are based on OS/2.

VMS is a powerful application server, in no way inferior to Unix (and in many respects superior to it), but available only on DEC's VAX and Alpha platforms.

Mainframes serve a very large number of users (on the order of several thousand). But the work of these users is usually organized not as client-server interaction but as host-terminal interaction; the terminal in this pair is not so much a client as a server (Internet World, No. 3, 1996). The advantages of mainframes include higher security and fault tolerance; the disadvantage is a price to match those qualities.

Unix is good for the skilled (or willing to become skilled) administrator, because it requires an understanding of the principles behind the processes taking place in it. Real multitasking and strict memory separation ensure high reliability of the system, although Unix's file and print services are inferior in performance to NetWare's.

The limited flexibility, compared with Windows NT, in granting users access rights to files makes it difficult to organize group access to data (more precisely, to files) at the file-system level; in my opinion this is offset by simplicity of implementation and therefore lower hardware requirements. Moreover, applications such as SQL Server solve the problem of group access to data themselves, so the ability, missing in Unix, to deny access to a particular file to a particular user seems to me clearly redundant.

Almost all the protocols on which the Internet is based were developed under Unix; in particular, the TCP/IP protocol stack was invented at the University of California at Berkeley.

Unix's security, when properly administered (and when isn't it?), is in no way inferior to that of either Novell or Windows NT.

An important property of Unix that brings it closer to mainframes is its multi-terminal nature: many users can simultaneously run programs on one Unix machine. If graphics are not required, cheap text terminals (specialized, or based on cheap PCs) connected over slow lines can be used; in this only VMS competes with it. Graphical X terminals can also be used, with windows of processes running on different machines appearing on the same screen.

In the workstation category, Unix competes with MS Windows*, IBM OS/2, the Macintosh and Acorn RISC-OS.

Windows is for those who value compatibility over efficiency; for those ready to buy large amounts of memory, disk space and megahertz; for those who like to click buttons in a window without delving into the essence of things. True, sooner or later you will have to study the principles of the system and its protocols anyway, but by then it will be too late - the choice has been made. An important advantage of Windows must also be acknowledged: the ease of getting hold of a heap of (often pirated) software.

OS/2 is for OS/2 fans. :-) Although, according to some reports, OS/2 interacts with mainframes and IBM networks better than the others.

Macintosh is for graphics, publishing and music work, and for those who love a clear, beautiful interface and do not want to (or cannot) understand the details of the system.

RISC-OS, flashed into ROM, allows you not to waste time installing the operating system or restoring it after failures. In addition, almost all programs under it use resources very economically, so they need no swapping and run very fast.

Unix runs both on PCs and on powerful workstations with RISC processors; really powerful CAD and geographic information systems are written for Unix. The scalability of Unix, owing to its multi-platform nature, is, according to some authors, an order of magnitude superior to that of any other operating system.


Bibliography

1. Kuznetsov S.D. The UNIX Operating System: a textbook, 2003.

2. Polyakov A.D. UNIX 5th Edition on x86, or Don't Forget History.

3. Karpov D.Yu. UNIX, 2005.

4. Fedorchuk A.V. Unix Mastery, 2006.

5. Materials from http://www.citforum.ru/operating_systems/1-16.
