Information and Communication Technologies

Characteristics of information and communication technologies. History of the creation of computer systems. Classification of operating systems: DOS, Windows, Unix, Linux, Mac OS. Database management systems: concept, characteristics, architecture.


1. ICT in Core Sectors of Development. ICT Standardization. Definition of ICT. The subject of ICT and its purposes. Main directions of ICT development. Standardization in ICT

ICT stands for "Information and Communication Technologies" and refers to technologies that provide access to information through telecommunications. It is similar to Information Technology (IT), but focuses primarily on communication technologies. This includes the Internet, wireless networks, cell phones, and other communication mediums.

In the past few decades, information and communication technologies have provided society with a vast array of new communication capabilities. For example, people can communicate in real-time with others in different countries using technologies such as instant messaging, voice over IP (VoIP), and video-conferencing. Social networking websites like Facebook allow users from all over the world to remain in contact and communicate on a regular basis.

Modern information and communication technologies have created a "global village," in which people can communicate with others across the world as if they were living next door. For this reason, ICT is often studied in the context of how modern communication technologies affect society.

ICT has become an integral and accepted part of everyday life for many people. ICT is increasing in importance in people's lives and it is expected that this trend will continue, to the extent that ICT literacy will become a functional requirement for people's work, social, and personal lives.

ICT includes the range of hardware and software devices and programmes such as personal computers, assistive technology, scanners, digital cameras, multimedia programmes, image editing software, database and spreadsheet programmes. It also includes the communications equipment through which people seek and access information including the Internet, email and video conferencing.

The use of ICT in appropriate contexts in education can add value in teaching and learning, by enhancing the effectiveness of learning, or by adding a dimension to learning that was not previously available. ICT may also be a significant motivational factor in students' learning, and can support students' engagement with collaborative learning.

Standards are a powerful tool to facilitate access to markets and open the doors to growth and jobs in the EU. This is especially true in the information and communication technology (ICT) sector where the continuous emergence of new services, applications and products fuels the need for more interoperability between systems. The EU promotes ICT standardisation to make sure ICT markets remain open and consumers have choice.

ICT standardisation is the voluntary cooperation for the development of technical specifications that outline the agreed properties for a particular product, service, or procedure.

ICT specifications are primarily used to maximise the ability for systems to work together. In modern ICT the value of a device relies on its ability to communicate with other devices. This is known as the `network effect' and is important in almost all areas of ICT. Specifications ensure that products made by different manufacturers are interoperable, and that users have the chance to pick and mix between different suppliers, products or services. This is essential to ensure that markets remain open, allowing consumers to have the widest choice of products possible and giving manufacturers the benefit of economies of scale.

The EU supports an effective and coherent standardisation framework, which ensures that standards are developed in a way that supports EU policies and competitiveness in the global market.

Regulation 1025/2012 on European standardisation sets the legal framework in which the different actors in the standardisation system can operate. These actors are the European Commission, the European standardisation organisations, industry, small and medium-sized enterprises (SMEs) and societal stakeholders.

Article 13 of Regulation 1025/2012 allows the Commission to identify ICT technical specifications to be eligible for referencing in public procurement. This allows public authorities to make use of the full range of specifications when buying IT hardware, software and services, allowing for more competition in the field and reducing the risk of lock-in to proprietary systems.

The Commission financially supports the work of the three European standardisation organisations:

ETSI - the European Telecommunications Standards Institute

CEN - the European Committee for Standardization

CENELEC - the European Committee for Electrotechnical Standardization

EU-funded research and innovation projects also make their results available to the standardisation work of several standards-setting organisations.

2. Introduction to computer systems. Architecture of computer systems. Review of computer systems. Evolution of computer systems. Architecture and components of computer systems. The use of computer systems

A computer system is defined as the combination of hardware, software, user and data. A computer is a programmable device that can automatically perform a sequence of calculations or other operations on data without human aid. It can store, retrieve, and process data according to internal instructions. A computer may be analog, digital, or hybrid, although most today are digital. Digital computers express variables as numbers, usually in the binary system. They are used for general purposes, whereas analog computers are built for specific tasks, typically scientific or technical. The term "computer" is usually synonymous with digital computer, and computers for business are exclusively digital.

The core of any computer is its central processing unit (CPU), commonly called a processor or a chip. The typical CPU consists of an arithmetic-logic unit to carry out calculations; main memory to store data temporarily for processing; and a control unit to control the transfer between memory, input and output sources, and the arithmetic-logic unit.
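To make the relationship between these units concrete, here is a toy fetch-decode-execute sketch in Python. The instruction set, opcodes and single accumulator register are invented for illustration and do not model any real CPU.

```python
# Toy illustration of the CPU units described above: a control unit steps
# through instructions held in main memory and dispatches arithmetic to an
# arithmetic-logic unit (ALU). All opcodes here are invented.

def alu(op, a, b):
    """Arithmetic-logic unit: carries out the actual calculations."""
    return {"ADD": a + b, "SUB": a - b, "MUL": a * b}[op]

def run(program):
    memory = list(program)      # main memory holding the instructions
    acc = 0                     # accumulator register
    pc = 0                      # program counter, kept by the control unit
    while pc < len(memory):
        opcode, operand = memory[pc]        # fetch and decode
        if opcode == "LOAD":
            acc = operand
        elif opcode == "HALT":
            break
        else:                               # execute via the ALU
            acc = alu(opcode, acc, operand)
        pc += 1                             # control unit moves to next instruction
    return acc

# (LOAD 2) then (ADD 3) then (MUL 4) -> prints 20
print(run([("LOAD", 2), ("ADD", 3), ("MUL", 4), ("HALT", 0)]))
```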

In the decades since 1940, the computer industry has experienced four generations of development. Each computer generation is marked by a rapid change in the implementation of its building blocks: from relays and vacuum tubes (1940s-1950s) to discrete diodes and transistors (1950s-1960s), through small-scale and medium-scale integrated (SSI/MSI) circuits (1960s-1970s), to large-scale and very-large-scale integrated (LSI/VLSI) devices (1970s-1980s). Increases in device speed and reliability and reductions in hardware cost and physical size have greatly enhanced computer performance. However, better devices are not the sole factor contributing to high performance; the division of computer system generations is determined by the device technology, system architecture, processing mode, and languages used. By this scheme we are in the fourth generation; the fifth generation has not yet materialized, but researchers are working on it.

Computer architecture deals with the logical and physical design of a computer system. The Instruction Set Architecture (ISA) defines the set of machine-code instructions that the computer's central processing unit can execute. The microarchitecture describes the design features and circuitry of the central processing unit itself. The system architecture (with which we are chiefly concerned in this section) determines the main hardware components that make up the physical computer system (including, of course, the central processing unit) and the way in which they are interconnected. The main components required for a computer system are listed below.

Central processing unit (CPU)

Random access memory (RAM)

Read-only memory (ROM)

Input / output (I/O) ports

The system bus

A power supply unit (PSU)

In addition to these core components, in order to extend the functionality of the system and to provide a computing environment with which a human operator can more easily interact, additional components are required. These could include:

Secondary storage devices (e.g. disk drives)

Input devices (e.g. keyboard, mouse, scanner)

Output devices (e.g. display adapter, monitor, printer)

A distinction is usually made between the internal components of the system (those normally located inside the main enclosure or case) and the external components (those that connect to the internal components via an external interface). Examples of such external components, usually referred to as peripherals, include the keyboard, video display unit (monitor) and mouse. Other peripherals can include printers, scanners, external speakers, external disk drives and webcams, to name but a few. The internal components usually (though not always) include one or more disk drives for fixed or removable storage media (magnetic disk or tape, optical media, etc.), although the core computing function does not absolutely require them. The relationship between the elements that make up the core of the system is illustrated below.

The core components in a personal computer

The core system components are mounted on a backplane, more commonly referred to as a mainboard (or motherboard). The mainboard is a relatively large printed circuit board that provides the electronic channels (buses) that carry data and control signals between the various components, as well as the necessary interfaces (in the form of slots or sockets) that allow the CPU, memory modules and other components to be plugged into the system. In most cases the ROM chip is built into the mainboard, and the CPU and RAM must be compatible with the mainboard in terms of their physical format and electronic configuration. Internal I/O ports are provided on the mainboard for devices such as internal disk drives and optical drives.

Exploded view of personal computer system

Some of the external I/O ports found on a typical IBM PC

External I/O ports are also provided on the mainboard to enable the system to be connected to external peripheral devices such as the keyboard, mouse, video display unit, and audio speakers. Both the video adaptor and audio card may be provided "on-board" (i.e. built into the mainboard), or as separate plug-in circuit boards that are mounted in an appropriate slot on the mainboard. The mainboard also provides much of the control circuitry required by the various system components, allowing the CPU to concentrate on its main role, which is to execute programs. We will look at the individual system components in detail in later sections.

Computers have become an essential part of modern human life. Since the invention of the computer, they have evolved in terms of increased computing power and decreased size. Owing to the widespread use of computers in every sphere, life in today's world would be unimaginable without them. They have made human lives better and happier. There are many computer uses in different fields of work: engineers, architects, jewelers, and filmmakers all use computers to design things; teachers, writers, and most office workers use computers for research, word processing and email; small businesses can use computers as a point of sale and for general record keeping.

Computers play a dominant role in education, where they can significantly enhance learning. Even distance learning is made productive and effective through the Internet and video-based classes. Researchers make heavy use of computers in their work, from the start of a project through to the end of their scholarly work.

Most medical information can now be digitized, from prescriptions to reports. Computation in the field of medicine allows us to offer a wide variety of therapies to patients; ECGs and radiotherapy would not be possible without computers.

We know well that computers are used by financial institutions such as banks for many purposes. The most important is storing information about account holders in a database so that it is available at any time, keeping records of cash flow, and providing customers with information regarding their accounts.

Computers are now major entertainment and pastime machines. We can use computers for playing games, watching movies, listening to music, and drawing pictures.

With the Internet on computers, we can find the details of the buses, trains or flights available to a desired destination, including timings and updates on delays, and we can book tickets online. Staff of the transport system keep track of passengers, train or flight details, and departure and arrival timings by using computers.

Any information shared can be recorded using a computer, and official deals and transactions are increasingly conducted online. We use email to exchange information. Computers have wide uses in marketing, stock exchanges and banking; even a department store cannot run effectively without them.

Electronic mail is a revolutionary service offered by computers, and video conferencing is another major advantage. Online shopping benefits both purchasers and merchants. Electronic banking is now at hand: every bank offers online support for monetary transactions, so you can easily transfer money anywhere, even from home.

Computers also aid in designing buildings, magazines, prints, newspapers, books and much else. Construction layouts are designed on computers using various tools and software.

3. Computer Software. Operating systems. Desktop applications

The evolution of operating systems is directly tied to the development of computer systems and how users use them. Here is a quick tour of computing systems over the past fifty years, in timeline form.

Early Evolution:

1945: ENIAC, Moore School of Engineering, University of Pennsylvania.

1949: EDSAC and EDVAC

1949: BINAC, a successor to the ENIAC

1951: UNIVAC by Remington Rand

1952: IBM 701

1956: The interrupt

1954-1957: FORTRAN was developed

Operating Systems by the late 1950s:

By the late 1950s, operating systems had improved considerably and supported the following:

Single-stream batch processing

Common, standardized input/output routines for device access

Program transition capabilities to reduce the overhead of starting a new job

Error recovery to clean up after a job terminated abnormally

Job control languages that allowed users to specify the job definition and resource requirements

Operating systems in the 1960s:

1961: The dawn of minicomputers

1962: Compatible Time-Sharing System (CTSS) from MIT

1963: Burroughs Master Control Program (MCP) for the B5000 system

1964: IBM System/360

1960s: Disks become mainstream

1966: Minicomputers get cheaper, more powerful, and really useful

1967-1968: The mouse

1964 and onward: Multics

1969: The UNIX Time-Sharing System from Bell Telephone Laboratories

Accomplishments after 1970:

1971: Intel announces the microprocessor

1972: IBM comes out with VM: the Virtual Machine Operating System

1973: UNIX 4th Edition is published

1973: Ethernet

1974: The Personal Computer Age begins

1974: Gates and Allen wrote BASIC for the Altair

1976: Apple II

August 12, 1981: IBM introduces the IBM PC

1983: Microsoft begins work on MS-Windows

1984: Apple Macintosh comes out

1990: Microsoft Windows 3.0 comes out

1991: GNU/Linux

1992: The first Windows virus comes out

1993: Windows NT

2007: iOS

2008: Android OS

And the research and development work still goes on: new operating systems are being developed and existing ones improved, making operating systems faster and more efficient than they have ever been before.

4. Classification of operating systems

Multiuser OS:

In a multiuser OS, more than one user can use the same system at the same time through multiple I/O terminals or through the network.

For example: Windows, Linux, Mac OS, etc. A multiuser OS uses timesharing to support multiple users.

Multiprocessing OS:

A multiprocessing OS can support the execution of multiple processes at the same time by using multiple CPUs. It is more expensive and more complex in its execution, but processing is faster. Operating systems such as Unix, 64-bit editions of Windows, and server editions of Windows are multiprocessing.

Multiprogramming OS:

In a multiprogramming OS, more than one program can be in use at the same time. It may or may not be multiprocessing: in a single-CPU system, multiple programs are executed one after another by dividing CPU time into small time slices. Examples: Windows, Mac OS, Linux, etc.

Multitasking OS:

In a multitasking system, more than one task can be performed at the same time, but on a single CPU they are executed one after another through time sharing. For example: Windows, Linux, Mac OS, Unix, etc. Multitasking OSs are of two types: a) pre-emptive multitasking and b) co-operative multitasking (a minimal scheduling sketch follows the two descriptions below).

In pre-emptive multitasking, the OS allots a CPU time slice to each program; after each time slice, the CPU switches to another task. Example: Windows XP.

In co-operative multitasking, a task can hold the CPU for as long as it requires, but it frees the CPU to execute another program when it no longer needs it. Examples: Windows 3.x, MultiFinder, etc.
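As a rough sketch of how pre-emptive time slicing works, the following Python fragment simulates a scheduler that gives each program a fixed time slice and then forcibly switches to the next. The task names and amounts of work are invented for illustration.

```python
# Minimal round-robin sketch of pre-emptive multitasking on a single CPU:
# each task runs for at most one time slice, then is moved to the back of
# the ready queue until its work is done.
from collections import deque

def round_robin(tasks, time_slice=2):
    """tasks: dict mapping task name -> remaining units of work."""
    queue = deque(tasks.items())
    while queue:
        name, remaining = queue.popleft()
        executed = min(time_slice, remaining)
        print(f"{name} runs for {executed} unit(s)")
        remaining -= executed
        if remaining > 0:            # pre-empted: back of the queue
            queue.append((name, remaining))

round_robin({"editor": 3, "player": 5, "printer": 1})
```

In co-operative multitasking there would be no fixed `time_slice`; each task would run until it voluntarily yielded the CPU.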

Multithreading:

A program in execution is known as a process. A process can be further divided into multiple sub-processes known as threads. A multithreading OS can divide a process into threads and execute those threads, which increases operating speed but also increases complexity. For example: Unix, server editions of Linux and Windows.
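Here is a minimal sketch of one process divided into several threads, using Python's standard threading module. The worker function and its workload are invented for illustration.

```python
# One process, four threads: each thread performs a sub-task of the
# overall process, and the main thread waits for all of them to finish.
import threading

def worker(thread_id, results):
    # each thread computes its own partial result
    results[thread_id] = sum(range(thread_id * 1000))

results = {}
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for every thread to complete
print(results)
```

Note that in CPython the global interpreter lock means these threads interleave rather than run in true parallel, so the sketch illustrates the structure of multithreading rather than a speed-up.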

Batch Processing:

Batch processing is a group-processing approach in which all the required input for all processing tasks is provided up front; the results of all tasks are delivered after all the processing completes. Its main characteristics are:

Multiple tasks are processed together

The user cannot provide input during processing

It is appropriate only when all inputs are known in advance

It requires large memory

CPU idle time is low

A printer is the appropriate output device

It is an old processing technique, rarely used at present

Online Processing:

Online processing is an individual processing approach in which each task is processed as soon as it is provided by the user. Its features are:

An individual task is processed at a time

The user can provide input during processing

It is appropriate when all inputs are not known in advance

It does not require large memory

CPU idle time is higher

A monitor is the appropriate output device

It is the modern processing technique, mostly used at present

Today's mobile devices are multifunctional devices capable of hosting a broad range of applications for both business and consumer use. Smartphones and tablets enable people to use their mobile device to access the Internet for email, instant messaging, text messaging and Web browsing, as well as work documents, contact lists and more.

Mobile devices are often seen as an extension to your own PC or laptop, and in some cases newer, more powerful mobile devices can even completely replace PCs. And when the devices are used together, work done remotely on a mobile device can be synchronized with PCs to reflect changes and new information while away from the computer.

5. Basics of database management systems: concept, characteristics, architecture. Data models

A database is a collection of information that is organized so that it can easily be accessed, managed, and updated. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images.

In computing, databases are sometimes classified according to their organizational approach. The most prevalent approach is the relational database, a tabular database in which data is defined so that it can be reorganized and accessed in a number of different ways. A distributed database is one that can be dispersed or replicated among different points in a network. An object-oriented programming database is one that is congruent with the data defined in object classes and subclasses.

Database architecture is logically divided into two types:

Logical two-tier Client / Server architecture

Logical three-tier Client / Server architecture

Two-tier Client / Server Architecture

Two-tier Client / Server architecture is used for user interface programs and application programs that run on the client side. An interface called ODBC (Open Database Connectivity) provides an API that allows client-side programs to call the DBMS; most DBMS vendors provide ODBC drivers, and a client program may connect to several DBMSs. In this architecture some variation on the client side is also possible: in some DBMSs, more functionality is transferred to the client, including the data dictionary, optimization, etc. Such clients are called data servers.
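A sketch of this two-tier arrangement, under stated assumptions: the client-side program talks to the DBMS directly through an ODBC driver. The DSN name, credentials, table and columns below are hypothetical, and a configured ODBC data source plus the third-party pyodbc package (pip install pyodbc) are assumed.

```python
# Two-tier client: the application program on the client machine calls the
# DBMS directly through ODBC.
import pyodbc

conn = pyodbc.connect("DSN=SalesDB;UID=user;PWD=secret")  # hypothetical DSN
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM customers")          # hypothetical table
for row in cursor.fetchall():
    print(row.id, row.name)
conn.close()
```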

Three-tier Client / Server Architecture

Three-tier Client / Server database architecture is the architecture commonly used for web applications. An intermediate layer, called the application server or web server, stores the web connectivity software and the business logic (constraints) part of the application, and uses it to access the right amount of data from the database server. This layer acts as a medium for sending partially processed data between the database server and the client.
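A minimal sketch of that intermediate tier, assuming the third-party Flask framework (pip install flask) and a hypothetical orders.db SQLite database: the application server holds the business logic and returns only the partially processed data the client needs.

```python
# Middle tier of a three-tier architecture: web clients call this server,
# which applies business logic and queries the database tier.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<int:customer_id>")
def orders(customer_id):
    # business-logic constraint: a client may only see its own orders
    conn = sqlite3.connect("orders.db")                    # database tier
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ?",
        (customer_id,),
    ).fetchall()
    conn.close()
    # partially processed data sent back to the client tier
    return jsonify([{"id": r[0], "total": r[1]} for r in rows])

if __name__ == "__main__":
    app.run()
```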

Data models define how the logical structure of a database is modeled. Data models are fundamental entities for introducing abstraction in a DBMS. They define how data elements are connected to one another and how they are processed and stored inside the system.

The very first data models were flat data models, in which all the data used was kept in the same plane. These earlier data models were not very scientific, and were therefore prone to extensive duplication and update anomalies.

Entity-Relationship Model

Entity-Relationship (ER) Model is based on the notion of real-world entities and relationships among them. While formulating real-world scenario into the database model, the ER Model creates entity set, relationship set, general attributes and constraints.

ER Model is best used for the conceptual design of a database.

The ER Model is based on:

Entities and their attributes.

Relationships among entities.

These concepts are explained below.

Entity - An entity in an ER Model is a real-world object having properties called attributes. Every attribute is defined by its set of values, called its domain. For example, in a school database a student is considered an entity; a student has various attributes like name, age, class, etc.

Relationship - The logical association among entities is called a relationship. Relationships are mapped with entities in various ways; mapping cardinalities define the number of associations between two entities.

Mapping cardinalities (a minimal sketch follows this list):

one to one

one to many

many to one

many to many
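As referenced above, here is a small sketch of how the school example and a one-to-many cardinality (one class, many students) translate into tables, using Python's standard sqlite3 module; the table and column names are invented for illustration.

```python
# The "student" entity with its attributes, and a one-to-many relationship
# from class to student expressed as a foreign key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE class   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, age INTEGER,
                          class_id INTEGER REFERENCES class(id));
""")
conn.execute("INSERT INTO class VALUES (1, 'Grade 5')")
conn.executemany("INSERT INTO student VALUES (?, ?, ?, ?)",
                 [(1, "Aida", 11, 1), (2, "Bek", 10, 1)])  # many students, one class
for row in conn.execute("""SELECT student.name, class.name
                           FROM student JOIN class ON class.id = student.class_id"""):
    print(row)
```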

6. Introduction to the analysis of data and data management. Methods of data collection, classification and forecasting. Handling of big data arrays

Data analysis is important for every organization that wants to survive in this competitive world. In recent years, everyone wants to make use of data to understand their business and take effective decisions.

Approach to do Data Analysis

Generally, we follow the approach below while analyzing data:

Understanding the Problem and Data: It is important to understand the business question or requirement, and also the data itself, so that we can identify the important variables to use in our analysis.

Data Collection: If historical data exists, we can go straight to the next step; otherwise we collect data first. For example, to analyse how a particular supermarket has performed over the last two years, we can study historical sales data to draw conclusions. By contrast, to study how satisfied customers are with a service, we have to collect the data (by asking questions face to face, or by launching surveys) before we can analyse it.

Cleansing and Formatting the Data:

Once our data is ready, the next step is cleaning and formatting it. We cannot use the raw data we received directly as input; we have to study the data for missing or wrong data points, and format it as required for the analysis. There are many approaches to cleaning data: for example, we can generate simple frequency tables to see what the data looks like, or plot charts (generally scatter charts) to check for outliers (a minimal sketch follows).
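A minimal sketch of this cleansing step, assuming the third-party pandas library (pip install pandas); the toy sales figures, the missing value and the obvious outlier are invented for illustration.

```python
# Inspect, then clean: frequency table, summary statistics, drop missing
# values, and filter an extreme outlier.
import pandas as pd

df = pd.DataFrame({"month": ["Jan", "Feb", "Feb", "Mar"],
                   "sales": [120.0, None, 135.0, 9999.0]})  # toy data

print(df["month"].value_counts())     # simple frequency table
print(df["sales"].describe())         # quick look reveals the 9999.0 outlier

df = df.dropna(subset=["sales"])                    # drop missing data points
df = df[df["sales"] < df["sales"].quantile(0.99)]   # crude outlier filter
print(df)
```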

Tabulation and Statistical Modelling: Once data cleansing is complete, we tabulate the variables and study the data to draw basic observations. Based on the requirement, we can then apply statistical techniques to understand and interpret the data. We will look at these techniques in detail later.

Interpreting and Recommendations: Based on the outputs generated in the previous step, we analyse the data and write up our recommendations (generally as a presentation or dashboard), stating our assumptions, and send them to the executives who take the decisions that solve the business problem.

Data collection is the process of gathering and measuring information on targeted variables in an established systematic fashion, which then enables one to answer relevant questions and evaluate outcomes. The data collection component of research is common to all fields of study, including the physical and social sciences, humanities and business, and it lets us capture the main points of the gathered information. While methods vary by discipline, the emphasis on ensuring accurate and honest collection remains the same. The goal of all data collection is to capture quality evidence that translates into rich data analysis and allows the building of a convincing and credible answer to the questions that have been posed.

Regardless of the field of study or preference for defining data (quantitative or qualitative), accurate data collection is essential to maintaining the integrity of research. Both the selection of appropriate data collection instruments (existing, modified, or newly developed) and clearly delineated instructions for their correct use reduce the likelihood of errors occurring.

A formal data collection process is necessary as it ensures that data gathered are both defined and accurate and that subsequent decisions based on arguments embodied in the findings are valid.[2] The process provides both a baseline from which to measure and in certain cases a target on what to improve.

Data mining is an interdisciplinary subfield of computer science. It is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.

The term is a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence, machine learning, and business intelligence. The book Data mining: Practical machine learning tools and techniques with Java[7] (which covers mostly machine learning material) was originally to be named just Practical machine learning, and the term data mining was only added for marketing reasons. Often the more general terms (large scale) data analysis and analytics - or, when referring to actual methods, artificial intelligence and machine learning - are more appropriate.

The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, but do belong to the overall KDD process as additional steps.
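As a toy illustration of the cluster-analysis task just mentioned, the following sketch assumes the third-party scikit-learn library (pip install scikit-learn); the six two-dimensional records are invented.

```python
# Cluster analysis: automatically discover groups of similar data records.
from sklearn.cluster import KMeans

points = [[1, 1], [1.2, 0.9], [0.8, 1.1],    # one group of records
          [8, 8], [8.1, 7.9], [7.9, 8.2]]    # another group
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)           # which cluster each record belongs to
print(model.cluster_centers_)  # a kind of "summary" of the input data
```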

7. Networking and telecommunications. Basic concepts and LAN components. Types of networks. Network protocols and standards

Computer networking is an engineering discipline that aims to study and analyze the communication process among various computing devices or computer systems that are linked, or networked, together to exchange information and share resources.

Computer networking depends on the theoretical application and practical implementation of fields like computer engineering, computer sciences, information technology and telecommunication.

The components used to establish a local area network (LAN) have a variety of functions. The common unifying theme among them is that they facilitate communication between two or more computers. LAN components are configurable in a variety of ways, but a LAN always requires the same basic components.

Local Area Network (LAN)

This is one of the original categories of network, and one of the simplest. LAN networks connect computers together over relatively small distances, such as within a single building or within a small group of buildings.

Homes often have LAN networks too, especially if there is more than one device in the home. Often they do not contain more than one subnet, if any, and are usually controlled by a single administrator. They do not have to be connected to the internet to work, although they can be.

Other Types of Network

Metropolitan Area Network - This is a network which is larger than a LAN but smaller than a WAN, and incorporates elements of both. It typically spans a town or city and is owned by a single organisation, such as a local council or a large company.

Campus Area Network - This is a network which is larger than a LAN but smaller than a MAN. It is typical in areas such as a university, large school or small business, and is typically spread over a collection of buildings which are reasonably close to each other. It may have an internal Ethernet as well as the capability of connecting to the internet.

Wireless Local Area Network - This is a LAN which works using wireless network technology such as Wi-Fi. This type of network is becoming more popular as wireless technology is further developed and is used more in the home and by small businesses. It means devices do not need to rely on physical cables and wires as much and can organise their spaces more effectively.

A protocol is a predefined set of rules that dictates how network devices (such as routers, computers, or switches) communicate and exchange data on the network.

Application Protocols:

Application protocols are built on top of the TCP/IP protocol suite. They include the following:

Simple Network Management Protocol (SNMP)

The Simple Network Management Protocol (SNMP) is an application-layer protocol designed to manage complex communication networks. SNMP works by sending messages, called protocol data units (PDUs), to different parts of a network. SNMP-compliant devices, called agents, store data about themselves in Management Information Bases (MIBs) and return this data to the SNMP servers.

There are three main versions of SNMP: SNMPv1, SNMPv2c, and SNMPv3.

File Transfer Protocol (FTP)

FTP is a client/server protocol used for copying files between an FTP server and a client computer over a TCP/IP network. FTP is commonly used to communicate with web servers to upload or download files.

FTP, the File Transfer Protocol, documented in RFC 959, is one of the oldest Internet protocols still in widespread use. FTP uses TCP for communication and is capable of transferring both binary and text files. Popular FTP clients include FileZilla and CuteFTP.

FTP uses TCP port number 21 for its control connection.
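A short sketch of an FTP download using Python's standard ftplib, which connects to TCP port 21 by default; the server name, credentials and file name are hypothetical.

```python
# Client side of an FTP session: connect, log in, download a binary file.
from ftplib import FTP

ftp = FTP("ftp.example.com")              # hypothetical server, control port 21
ftp.login("user", "secret")               # anonymous login also possible: ftp.login()
with open("report.pdf", "wb") as f:
    ftp.retrbinary("RETR report.pdf", f.write)   # binary transfer
ftp.quit()
```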

Trivial File Transfer Protocol (TFTP)

TFTP stands for Trivial File Transfer Protocol. TFTP is very similar to FTP, but uses the UDP protocol for file transfer. UDP, as discussed elsewhere, is considered an unreliable protocol, so TFTP is not frequently used for normal file transfer applications.

Simple Mail Transfer Protocol (SMTP)

SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used for sending e-mail messages between servers, and also for sending email messages from a client machine to a server. An email client such as MS Outlook Express uses SMTP for sending emails and either POP3 or IMAP for receiving messages from the local (or ISP) server. SMTP is usually implemented to operate over Transmission Control Protocol port 25.
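A sketch of sending a message over TCP port 25 with Python's standard smtplib; the mail server name and the addresses are hypothetical.

```python
# Client-to-server submission of one message via SMTP.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello via SMTP"
msg.set_content("Sent with the Simple Mail Transfer Protocol.")

with smtplib.SMTP("mail.example.com", 25) as server:  # hypothetical server
    server.send_message(msg)
```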

Post Office Protocol (POP3)

POP3 stands for Post Office Protocol version 3. It is used for fetching messages from an email server. Commonly used POP3 client programs include Outlook Express and Mozilla Thunderbird.

Internet Message Access Protocol (IMAP)

The Internet Message Access Protocol (commonly known as IMAP or IMAP4) allows a local client to access e-mail on a remote server. The current version, IMAP version 4, is defined by RFC 3501. IMAP4 and POP3 are the two most prevalent Internet standard protocols for e-mail retrieval.

Network File System (NFS)

Network File System is a distributed file system which allows a computer to transparently access files over a network.

Telnet

The Telnet service provides a remote login capability. This lets a user on one machine log into another machine and act as if they were sitting directly in front of it. The connection can be anywhere on the local network, or on another network anywhere in the world, as long as the user has permission to log into the remote system. Telnet uses TCP to maintain a connection between the two machines, using port number 23.

Hypertext Transfer Protocol (HTTP)

HTTP is the protocol used to transfer hypertext pages across the World Wide Web. It defines how messages are formatted and transmitted, and what actions web servers and browsers should take in response to various commands. For example, when you enter a URL in your browser, this actually sends an HTTP command to the web server directing it to fetch and transmit the requested web page. (Note that HTML, by contrast, deals with how web pages are formatted and displayed in a browser.)

HTTP is called a stateless protocol because each command is executed independently, without any knowledge of the commands that came before it.
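To make the request/response exchange visible, here is a sketch that speaks HTTP over a raw socket; each such request is self-contained, which is exactly the statelessness described above. The target host is the reserved example.com domain.

```python
# One complete, independent HTTP request: send a GET, read the response.
import socket

host = "example.com"
with socket.create_connection((host, 80)) as s:
    s.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
              .encode("ascii"))
    response = b""
    while chunk := s.recv(4096):
        response += chunk
print(response.split(b"\r\n")[0].decode())   # status line, e.g. HTTP/1.1 200 OK
```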

Standards are necessary in almost every business and public service entity.

The primary reason for standards is to ensure that hardware and software produced by different vendors can work together. Without networking standards, it would be difficult--if not impossible--to develop networks that easily share information. Standards also mean that customers are not locked into one vendor. They can buy hardware and software from any vendor whose equipment meets the standard. In this way, standards help to promote more competition and hold down prices.

The use of standards makes it much easier to develop software and hardware that link different networks because software and hardware can be developed one layer at a time.

8. Cyber Security, Ethics and Trust. Information security threats. Security classification for information. Anti-virus programs


Sometimes referred to as computer security, information technology security is information security applied to technology (most often some form of computer system). It is worth noting that a computer does not necessarily mean a home desktop: a computer is any device with a processor and some memory. Such devices range from non-networked standalone devices as simple as calculators to networked mobile computing devices such as smartphones and tablet computers. IT security specialists are almost always found in any major enterprise or establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the company's technology secure from malicious cyber attacks that often attempt to breach critical private information or gain control of internal systems.

Information assurance

Information assurance is the act of providing trust in information, ensuring that its confidentiality, integrity and availability (CIA) are not violated. The relevant issues include, but are not limited to: natural disasters, computer/server malfunction, and physical theft. Since most information is stored on computers in our modern era, information assurance is typically handled by IT security specialists. A common method of providing information assurance is to keep an off-site backup of the data in case one of the mentioned issues arises.

Threats

Information security threats come in many different forms. Some of the most common threats today are software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage, and information extortion. Most people have experienced software attacks of some sort. Viruses, worms, phishing attacks, and Trojan horses are a few common examples of software attacks. The theft of intellectual property has also been an extensive issue for many businesses in the IT field. Identity theft is the attempt to act as someone else usually to obtain that person's personal information or to take advantage of their access to vital information. Theft of equipment or information is becoming more prevalent today due to the fact that most devices today are mobile.

Cell phones are prone to theft, and have also become far more desirable as their data capacity increases. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property to its owner, as with ransomware. There are many ways to protect yourself from some of these attacks, but one of the most functional precautions is user carefulness.

An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification.

The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.

Some factors that influence which classification information should be assigned include how much value that information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information.

Cryptography or cryptology (from Greek κρυπτός kryptós, "hidden, secret", and γράφειν graphein, "writing", or -λογία -logia, "study", respectively) is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.

Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message (Alice) shared the decoding technique needed to recover the original information only with intended recipients (Bob), thereby precluding unwanted persons (Eve) from doing the same. The cryptography literature often uses Alice ("A") for the sender, Bob ("B") for the intended recipient, and Eve ("eavesdropper") for the adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, the methods used to carry out cryptology have become increasingly complex and its application more widespread.

Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power--an example is the one-time pad--but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
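A worked sketch of the one-time pad just mentioned, in Python: XOR-ing the message with a random key of equal length. The message text is invented, and the information-theoretic guarantee holds only if the key is truly random, kept secret, and never reused.

```python
# One-time pad: ciphertext = message XOR key; decryption is the same XOR.
import secrets

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # one random key byte per message byte

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())       # looks like random noise without the key
assert recovered == message   # XOR with the same key restores the plaintext
```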

The growth of cryptographic technology has raised a number of legal issues in the information age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement of digital media.

Antivirus or anti-virus software (often abbreviated as AV), sometimes known as anti-malware software, is computer software used to prevent, detect and remove malicious software.

Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other kinds of malware, antivirus software started to provide protection from other computer threats. In particular, modern antivirus software can protect from: malicious browser helper objects (BHOs), browser hijackers, ransomware, keyloggers, backdoors, rootkits, trojan horses, worms, malicious LSPs, dialers, fraudtools, adware and spyware. Some products also include protection from other computer threats, such as infected and malicious URLs, spam, scam and phishing attacks, online identity (privacy), online banking attacks, social engineering techniques, advanced persistent threat (APT) and botnet DDoS attacks.
