Transfer of information through technical communication channels. Lesson summary on the topic "Transfer of information. Integer representation".


Lesson 2
Work in the local network of a computer class in the file sharing mode

Transfer of information via technical communication channels


Shannon's scheme

The American scientist Claude Shannon, one of the founders of information theory, proposed a scheme of the process of transmitting information through technical communication channels (Fig. 1.3).

Fig. 1.3. Scheme of a technical information transmission system

The operation of such a scheme can be explained by the familiar process of talking on the phone. The source of information is the speaking person. The encoder is the handset microphone, which converts sound waves (speech) into electrical signals. The communication channel is the telephone network (the wires and the switches of telephone nodes through which the signal passes). The decoder is the handset (earphone) of the listening person, who is the receiver of information. Here the incoming electrical signal is converted back into sound.

Here, information is transmitted in the form of a continuous electrical signal. This is analog communication.

Encoding and decoding information

Coding is understood as any transformation of information coming from a source into a form suitable for its transmission over a communication channel.

At the dawn of the era of radio communication, Morse code was used. The text was converted into a sequence of dots and dashes (short and long signals) and broadcast. A person receiving such a transmission by ear had to be able to decode it back into text. Even earlier, Morse code was used in telegraph communications. The transmission of information using Morse code is an example of discrete communication.

Currently, digital communication is widely used, when the transmitted information is encoded in binary form (0 and 1 are binary digits), and then decoded into text, image, sound. Digital communication, obviously, is also discrete.

Noise and noise protection. Shannon coding theory

Information is transmitted through communication channels by means of signals of various physical nature: electrical, electromagnetic, light, acoustic. The information content of a signal consists in the value, or in the change of the value, of some physical quantity (current strength, light brightness, etc.). The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference arises primarily for technical reasons: poor quality of communication lines and poor isolation from each other of the different information flows transmitted over the same channels. Often, when talking on the phone, we hear noise and crackling that make it difficult to understand the interlocutor, or another conversation is superimposed on ours. In such cases, noise protection is necessary.

First of all, technical methods are used to protect communication channels from the effects of noise. Such methods are very diverse, sometimes simple, sometimes very complex: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from the noise.

K. Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant. Due to this, the loss of some part of the information during transmission can be compensated. For example, if the other person can barely hear you on the phone, repeating each word twice gives a better chance that you will be understood correctly.

However, the redundancy cannot be made too large: this would lead to delays and higher communication costs. Shannon's coding theory allows one to obtain a code that is optimal. In this case, the redundancy of the transmitted information is the minimum possible, and the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is divided into portions called packets. For each packet, a checksum (the sum of its binary digits) is calculated and transmitted along with the packet. At the receiving end, the checksum of the received packet is recalculated, and if it does not match the original, the transmission of this packet is repeated. This continues until the initial and final checksums match.
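The packet-and-checksum technique described above can be sketched in Python. This is a toy model: the popcount checksum, the packet size and the random error model are illustrative assumptions, not a real protocol.

```python
import random

def checksum(packet: bytes) -> int:
    """Checksum as described in the text: the sum of the packet's binary digits."""
    return sum(bin(byte).count("1") for byte in packet)

def noisy_channel(packet: bytes, error_rate: float = 0.3) -> bytes:
    """Toy noise model: with some probability, flip one bit of the packet."""
    if packet and random.random() < error_rate:
        corrupted = bytearray(packet)
        corrupted[random.randrange(len(packet))] ^= 1 << random.randrange(8)
        return bytes(corrupted)
    return packet

def transmit(message: bytes, packet_size: int = 4) -> bytes:
    """Split the message into packets; resend each one until checksums match."""
    received = bytearray()
    for start in range(0, len(message), packet_size):
        packet = message[start:start + packet_size]
        expected = checksum(packet)
        candidate = noisy_channel(packet)
        while checksum(candidate) != expected:  # mismatch: repeat the transmission
            candidate = noisy_channel(packet)
        received.extend(candidate)
    return bytes(received)

random.seed(0)
print(transmit(b"HELLO, CHANNEL") == b"HELLO, CHANNEL")  # → True
```

A single bit flip always changes the number of ones by exactly one, so in this toy model a corrupted packet can never pass the check and the loop always ends with the correct packet.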

Briefly about the main

Any technical information transmission system consists of a source, a receiver, encoding and decoding devices, and a communication channel.

Coding is the transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the reverse transformation.

Noise is interference that leads to the loss of information.

Coding theory has developed methods of representing transmitted information that reduce its loss under the influence of noise.

Questions and tasks

1. Name the main elements of the information transfer scheme proposed by K. Shannon.

2. What is encoding and decoding in the transmission of information?

3. What is noise? What are its implications for the transmission of information?

4. What are the ways to deal with noise?

EC CER: Part 2, conclusion, addition to chapter 1, § 1.1. COR No. 1.

Information transfer scheme. Information transfer channel. Information transfer rate.

There are three types of information processes: storage, transmission, processing.

Data storage:

· Information carriers.

· Types of memory.

· Storage of information.

· Basic properties of information storages.

The following concepts are associated with the storage of information: storage medium (memory), internal memory, external memory, information storage.

A storage medium is a physical medium that directly stores information. Human memory can be called RAM: learned knowledge is reproduced by a person instantly. We can also call our own memory internal memory, because its carrier, the brain, is inside us.

All other types of information carriers can be called external (in relation to a person): wood, papyrus, paper, etc. An information storage is information organized in a certain way on external media and intended for long-term storage and permanent use (for example, document archives, libraries, card files). The main information unit of the storage is a certain physical document: a questionnaire, a book, etc. The organization of the storage means the presence of a certain structure, i.e. the orderliness and classification of the stored documents for the convenience of working with them. The main properties of an information storage are: the amount of stored information, storage reliability, access time (i.e. the time needed to find the necessary information), and the availability of information protection.

Information stored on computer memory devices is called data. Organized data stores on the external memory devices of computers are called databases and data banks.

Data processing:

· The general scheme of the information processing process.

· Statement of the task of processing.

· Processing executor.

· Processing algorithm.

· Typical tasks of information processing.

Information processing scheme:

Initial information - processing performer - final information.

In the process of information processing, some information problem is solved, which can be preliminarily set in the traditional form: a certain set of initial data is given, it is required to obtain some results. The very process of transition from the source data to the result is the process of processing. The object or subject that performs the processing is called the processing performer.

To successfully perform information processing, the performer (person or device) must know the processing algorithm, i.e. sequence of steps to be followed in order to achieve the desired result.

There are two types of information processing. The first type of processing: processing associated with obtaining new information, new content of knowledge (solving mathematical problems, analyzing the situation, etc.). The second type of processing: processing associated with a change in form, but not changing the content (for example, translating text from one language to another).

An important type of information processing is coding - the transformation of information into a symbolic form that is convenient for its storage, transmission, processing. Coding is actively used in technical means of working with information (telegraph, radio, computers). Another type of information processing is data structuring (introducing a certain order into the information storage, classification, cataloging of data).

Another type of information processing is the search in some information store for the necessary data that satisfies certain search conditions (request). The search algorithm depends on the way information is organized.
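How the search algorithm depends on the way the storage is organized can be illustrated with a small sketch: unordered storage forces a record-by-record scan, while an ordered catalog admits binary search. The mini-catalog below is invented for illustration.

```python
from bisect import bisect_left

def linear_search(items, key):
    """Unordered storage: examine records one by one until the key is found."""
    for i, item in enumerate(items):
        if item == key:
            return i
    return -1

def binary_search(sorted_items, key):
    """Ordered storage (a catalog): halve the search range at every step."""
    i = bisect_left(sorted_items, key)
    return i if i < len(sorted_items) and sorted_items[i] == key else -1

# An invented mini-catalog, kept in alphabetical order
catalog = ["atlas", "dictionary", "encyclopedia", "novel", "textbook"]
print(binary_search(catalog, "novel"))  # → 3
print(linear_search(catalog, "poem"))   # → -1 (not in the storage)
```

On an ordered catalog the binary search inspects only about log2(n) records, which is why classification and cataloging of data speed up retrieval.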

Information transfer:

· Source and receiver of information.

· Information channels.

· The role of the sense organs in the process of human perception of information.

· Structure of technical communication systems.

· What is encoding and decoding.

· The concept of noise; noise protection techniques.

· Information transfer rate and channel capacity.

Information transfer scheme:

Information source - information channel - information receiver.

Information is presented and transmitted in the form of a sequence of signals, symbols. From the source to the receiver, the message is transmitted through some material medium. If technical means of communication are used in the transmission process, then they are called information transmission channels (information channels). These include telephone, radio, TV. Human sense organs play the role of biological information channels.

The process of transmitting information through technical communication channels proceeds according to the following scheme (according to Shannon):

The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference arises primarily for technical reasons: poor quality of communication lines and poor isolation from each other of the different information flows transmitted over the same channels. Different methods are used for noise protection, for example, various kinds of filters that separate the useful signal from the noise.

Claude Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant; due to this, the loss of some part of the information during transmission can be compensated. However, the redundancy cannot be made too large, since this leads to delays and higher communication costs.

When discussing the measurement of the speed of information transfer, an analogy can be used: the process of pumping water through water pipes. Here the pipes are the channel for transmitting water, and the intensity (speed) of this process is characterized by the water flow rate, i.e. the number of liters pumped per unit of time. In the process of transmitting information, the channels are technical communication lines. By analogy with the water pipe, we can speak of an information flow transmitted through channels. The information transfer rate is the information volume of a message transmitted per unit of time. Hence the units of measurement of the speed of the information flow: bit/s, byte/s, etc.
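The definition above (information volume per unit of time) can be checked with a short calculation; the file size and channel rate below are illustrative values, not figures from the text.

```python
def transfer_time(volume_bytes: int, rate_bps: float) -> float:
    """Seconds needed to send a message of the given volume over a channel
    with the given information transfer rate (bits per second)."""
    return volume_bytes * 8 / rate_bps

# Illustrative values: a 3 MB file over a 2 Mbit/s channel
size_bytes = 3 * 1024 * 1024
rate = 2_000_000
print(round(transfer_time(size_bytes, rate), 2))  # → 12.58
```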

Another concept, the bandwidth of information channels, can also be explained using the plumbing analogy. You can increase the flow of water through the pipes by increasing the pressure, but this path is not endless: at too high a pressure the pipe may burst. Therefore, there is a maximum water flow rate, which can be called the capacity of the water supply system. Technical data communication lines have a similar limit on the data transfer rate, and its reasons are likewise physical.

1. Classification and characteristics of the communication channel
A communication channel is a set of means intended for the transmission of signals (messages).
To analyze the information processes in a communication channel, one can use its generalized scheme shown in Fig. 1.

[Fig. 1. Generalized scheme of a communication channel: AI → P → LS → P → PI, with interference f acting on the communication line LS]
In Fig. 1 the following designations are adopted: X, Y, Z, W are signals (messages); f is interference; LS is the communication line; AI and PI are the source and receiver of information; P are converters (coding, modulation, decoding, demodulation).
There are various types of channels that can be classified according to various criteria:
1. By type of communication line: wired; cable; fiber-optic; power lines; radio channels, etc.
2. By the nature of the signals: continuous; discrete; discrete-continuous (the signals at the input of the system are discrete and at the output continuous, or vice versa).
3. By noise immunity: channels without interference; channels with interference.
Communication channels are characterized by:
1. Channel capacity (volume), defined as the product of the channel usage time T_k, the width of the frequency spectrum transmitted by the channel F_k, and the dynamic range D_k, which characterizes the channel's ability to transmit different signal levels:
V_k = T_k · F_k · D_k. (1)
The condition for matching the signal with the channel:
V_s ≤ V_k; T_s ≤ T_k; F_s ≤ F_k; D_s ≤ D_k.
2. Information transfer rate - the average amount of information transmitted per unit of time.
3. Redundancy - ensures the reliability of the transmitted information (R = 0…1).
One of the tasks of information theory is to determine the dependence of the information transfer rate and communication channel capacity on the channel parameters and the characteristics of signals and interference.
A communication channel can be figuratively compared to a road. A narrow road has low capacity but is cheap; a wide road has good traffic capacity but is expensive. Throughput is determined by the bottleneck.
The data transfer rate largely depends on the transmission medium in the communication channels, which are various types of communication lines.
Wired lines:
1. Wire pair - twisted pair (which partially suppresses electromagnetic radiation from other sources). Transfer rate up to 1 Mbit/s. Used in telephone networks and for data transmission.
2. Coaxial cable. Transfer rate 10-100 Mbit/s; used in local networks, cable television, etc.
3. Optical fiber. Transfer rate up to 1 Gbit/s.
In media 1-3, the attenuation in dB grows linearly with distance, i.e. the power drops exponentially. Therefore, after a certain distance, it is necessary to install regenerators (amplifiers) of the signal.
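The "linear in dB, exponential in power" relationship, and the resulting regenerator spacing, can be sketched numerically. The attenuation figure and loss budget below are hypothetical, not taken from the text.

```python
def received_power(p_in_mw: float, atten_db_per_km: float, distance_km: float) -> float:
    """Attenuation in dB grows linearly with distance, so the power
    falls exponentially: P_out = P_in * 10^(-loss_dB / 10)."""
    loss_db = atten_db_per_km * distance_km
    return p_in_mw * 10 ** (-loss_db / 10)

def max_span_km(atten_db_per_km: float, budget_db: float) -> float:
    """Distance after which a regenerator is needed, for a given loss budget."""
    return budget_db / atten_db_per_km

print(received_power(1.0, 0.2, 50))  # 1 mW input, 50 km of 0.2 dB/km fiber → 0.1 mW
print(max_span_km(0.2, 20))          # 20 dB budget → a regenerator every 100 km
```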
Radio links:
1. Radio channel. Transfer rate 100-400 Kbit/s. Uses radio frequencies up to 1000 MHz. Below 30 MHz, propagation of electromagnetic waves beyond the line of sight is possible due to reflection from the ionosphere, but this range is very noisy (for example, from amateur radio). From 30 to 1000 MHz the ionosphere is transparent, and line of sight is required. Antennas are installed at a height (sometimes regenerators are installed as well). Used in radio and television.
2. Microwave lines. Transfer rates up to 1 Gbit/s. They use radio frequencies above 1000 MHz, which requires line of sight and highly directional parabolic antennas. The distance between regenerators is 10-200 km. Used for telephone, television and data transmission.
3. Satellite connection. Microwave frequencies are used, and the satellite serves as a regenerator (and for many stations). The characteristics are the same as for microwave lines.
2. Bandwidth of a discrete communication channel
A discrete channel is a set of means designed to transmit discrete signals.
Communication channel capacity - the highest theoretically achievable information transfer rate, provided that the error does not exceed a given value. Information transfer rate - the average amount of information transmitted per unit of time. Let us define expressions for calculating the information transfer rate and throughput of a discrete communication channel.
During the transmission of each symbol, on average, the following amount of information passes through the communication channel:
I(Y, X) = I(X, Y) = H(X) − H(X/Y) = H(Y) − H(Y/X), (2)
where I(Y, X) is the mutual information, i.e. the amount of information contained in Y relative to X; H(X) is the entropy of the message source; H(X/Y) is the conditional entropy, which determines the loss of information per symbol associated with the presence of noise and distortion.
When transmitting a message X_T of duration T, consisting of n elementary symbols, the average amount of transmitted information, taking into account the symmetry of mutual information, is:
I(Y_T, X_T) = H(X_T) − H(X_T/Y_T) = H(Y_T) − H(Y_T/X_T) = n·I(Y, X). (4)
The information transfer rate depends on the statistical properties of the source, the coding method and the properties of the channel.
The bandwidth of a discrete communication channel is
C = max over p(x) of [V · I(Y, X)], (5)
i.e. the maximum of the functional is sought over the entire set of probability distribution functions p(x).
The throughput depends on the technical characteristics of the channel (the speed of the equipment, the type of modulation, the level of interference and distortion, etc.). The units of channel capacity are bit/s, Kbit/s, Mbit/s, Gbit/s.
2.1 Discrete communication channel without interference
If there is no interference in the communication channel, then the input and output signals of the channel are connected by an unambiguous, functional dependence.
In this case, the conditional entropy is equal to zero, and the unconditional entropies of the source and receiver are equal, i.e. the average amount of information in the received symbol relative to the transmitted one is
I (X, Y) = H(X) = H(Y); H(X/Y) = 0.
If X_T is the number of symbols transmitted in time T, then the information transfer rate for a discrete communication channel without interference is
C = V · H(X), (6)
where V = 1/τ is the average transmission rate of one symbol and τ is the symbol duration.
The bandwidth of a discrete communication channel without interference is
C = V · H_max(X). (7)
Since the maximum entropy corresponds to equiprobable symbols, the bandwidth for a uniform distribution and statistical independence of the transmitted symbols is
C = V · log2 m, (8)
where m is the number of symbols in the alphabet.
Shannon's first theorem for a channel: if the information flow generated by the source is sufficiently close to the bandwidth of the communication channel, i.e.
H'(X) = C − ε, where ε is an arbitrarily small value,
then it is always possible to find a coding method that ensures the transmission of all source messages, with an information transfer rate very close to the channel capacity.
The theorem does not answer the question of how to encode.
Example 1. A source generates three messages with probabilities p1 = 0.1, p2 = 0.2 and p3 = 0.7.
The messages are independent and are transmitted in a uniform binary code (m = 2) with a symbol duration of 1 ms. Determine the rate of information transfer over a communication channel without interference.
Solution: The entropy of the source is
H(X) = −(0.1·log2 0.1 + 0.2·log2 0.2 + 0.7·log2 0.7) ≈ 1.16 bits per message.
To transmit 3 messages with a uniform code, two bits are required, so the duration of a code combination is 2τ.
The average signal rate is V = 1/(2τ) = 500 code combinations per second.
The information transfer rate is
C = V·H = 500 × 1.16 = 580 [bps].
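The calculation in Example 1 can be reproduced numerically (a sketch following the text's formulas; note the exact product is about 578.4 bps, which the text rounds to 580 via H ≈ 1.16):

```python
from math import log2

def entropy(probs):
    """Source entropy H(X) = -sum(p_i * log2(p_i)), in bits per message."""
    return -sum(p * log2(p) for p in probs)

H = entropy([0.1, 0.2, 0.7])       # ≈ 1.157 bits per message
tau = 0.001                        # symbol duration: 1 ms
v = 1 / (2 * tau)                  # 500 two-bit code combinations per second
print(round(H, 3))                 # → 1.157
print(round(v * H, 1))             # → 578.4
```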
2.2 Discrete communication channel with noise
We will consider discrete communication channels without memory.
A channel without memory is a channel in which each transmitted signal symbol is affected by interference independently of which signals were transmitted previously; that is, the interference does not create additional correlative links between symbols. The name "without memory" means that during the next transmission the channel does not, as it were, remember the results of previous transmissions.
In the presence of interference, the average amount of information in a received message symbol Y relative to a transmitted symbol X equals:
I(Y, X) = H(X) − H(X/Y) = H(Y) − H(Y/X).
For a message X_T of duration T, consisting of n elementary symbols, the average amount of information in the received message Y_T relative to the transmitted X_T equals:
I(Y_T, X_T) = H(X_T) − H(X_T/Y_T) = H(Y_T) − H(Y_T/X_T) = n·I(Y, X); in the numerical example this amounts to 2320 bps.
The capacity of a continuous channel with noise is determined by Shannon's formula
C = F·log2(1 + P_s/P_n),
where F is the channel bandwidth and P_s/P_n is the signal-to-noise power ratio; in the numerical example C = 2322 bps.
Let us show that the information capacity of a continuous memoryless channel with additive Gaussian noise under a peak-power constraint is no greater than the information capacity of the same channel under the same average-power constraint.
The mathematical expectation of a symmetric uniform distribution on [-a, a] is
M[x] = 0.
The mean square of a symmetric uniform distribution is
M[x^2] = a^2/3.
The variance of a symmetric uniform distribution is
D[x] = M[x^2] − M^2[x] = a^2/3.
At the same time, for a uniformly distributed process the peak power a^2 equals 3·D[x], i.e. three times the average power.
The differential entropy of a signal with a uniform distribution is
h(x) = log2(2a) = log2(2·sqrt(3)·σ).
The difference between the differential entropies of a normal and a uniformly distributed process does not depend on the value of the variance:
(1/2)·log2(2·π·e·σ^2) − log2(2·sqrt(3)·σ) = (1/2)·log2(π·e/6) ≈ 0.25 bits per sample.
Thus, the throughput and the capacity of the communication channel are higher for a process with a normal distribution than for a uniform one.
Let us determine the capacity (volume) of the communication channel:
V_k = T_k·C_k = 10 × 60 × 2322 = 1.3932 Mbit.
This is also the amount of information that can be transmitted over the channel in 10 minutes: 10 × 60 × 2322 = 1,393,200 bits ≈ 1.3932 Mbit.
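The channel-volume arithmetic can be checked with a short sketch. The standard Shannon formula C = F·log2(1 + Ps/Pn) is included for illustration; the bandwidth and signal-to-noise ratio behind the 2322 bps figure are not given in the text, so the example below only reproduces the final numbers.

```python
from math import log2

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon formula for a noisy continuous channel: C = F * log2(1 + Ps/Pn)."""
    return bandwidth_hz * log2(1 + snr)

def channel_volume(capacity_bps: float, seconds: float) -> float:
    """Amount of information the channel can carry in the given time, in bits."""
    return capacity_bps * seconds

# Reproduce the text's figure: a 2322 bps channel used for 10 minutes
print(channel_volume(2322, 10 * 60))        # → 1393200.0 bits, i.e. ≈ 1.3932 Mbit
print(round(shannon_capacity(1000, 3), 1))  # hypothetical: F = 1000 Hz, SNR = 3
```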
Tasks

Using Internet resources, find answers to questions:

Exercise 1

1. What is the information transfer process?

The transfer of information is a physical process by which information is moved in space. For example, information recorded on a disk and carried to another room has been transferred. This process is characterized by the presence of the following components:


2. General information transfer scheme

3. List the communication channels you know

A communication channel (English: channel, data line) is a system of technical means and a signal propagation medium for transmitting messages (not only data) from a source to a recipient (and vice versa). A communication channel understood in the narrow sense (a communication path) represents only the physical propagation medium, for example, a physical communication line.

According to the type of propagation medium, communication channels are divided into:

4. What is telecommunications and computer telecommunications?

Telecommunications (from the Greek tele, "far away", and the Latin communicatio, "communication") is the transmission and reception of any information (sound, image, data, text) over a distance through various electromagnetic systems (cable and fiber-optic channels, radio channels and other wired and wireless communication channels).

A telecommunications network is a system of technical means through which telecommunications are carried out.

Telecommunication networks include:
1. Computer networks (for data transmission)
2. Telephone networks (transmission of voice information)
3. Radio networks (transmission of voice information - broadcast services)
4. Television networks (voice and image transmission - broadcast services)

Computer telecommunications - telecommunications, the terminal devices of which are computers.

The direct transfer of information from computer to computer is called synchronous communication; transfer through an intermediate computer, which makes it possible to accumulate messages and pass them on to personal computers as requested by the user, is called asynchronous.

Computer telecommunications are beginning to take root in education. In higher education they are used to coordinate scientific research, exchange information rapidly between project participants, and support distance learning and consultations. In school education they are used to increase the effectiveness of students' independent activities related to various types of creative work, including educational activities, based on the widespread use of research methods, free access to databases, and the exchange of information with partners both at home and abroad.

5. What is the bandwidth of the information transmission channel?
Bandwidth is a metric characteristic showing the ratio of the maximum number of passing units (of information, objects, or volume) per unit of time through a channel, system, or node.
In computer science, the definition of bandwidth is usually applied to a communication channel, where it is defined as the maximum amount of information transmitted or received per unit of time.
Bandwidth is one of the most important factors from the user's point of view. It is estimated by the amount of data that the network can, in the limit, transfer per unit of time from one device connected to it to another.

The speed of information transfer depends largely on the speed of its creation (source performance) and on the encoding and decoding methods. The highest possible information transfer rate in a given channel is called its bandwidth. The channel capacity, by definition, is the information transfer rate obtained when using the "best" (optimal) source, encoder and decoder for the given channel, and therefore it characterizes only the channel itself.

>>Informatics: Informatics Grade 9. Addendum to Chapter 1

Addendum to Chapter 1

1.1. Transfer of information via technical communication channels

The main topics of the paragraph:

♦ scheme of K. Shannon;
♦ encoding and decoding information;
♦ noise and noise protection. Coding theory by K. Shannon.

K. Shannon's scheme

The American scientist Claude Shannon, one of the founders of information theory, proposed a scheme of the process of transmitting information through technical communication channels, shown in Fig. 1.3.

The operation of such a scheme can be explained by the familiar process of talking on the phone. The source of information is the speaking person. An encoder is a handset microphone that converts sound waves (speech) into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoding device is a handset (headphone) of the listening person - the receiver of information. Here the incoming electrical signal is converted into sound.

Communication in which the transmission takes place in the form of a continuous electrical signal is called analog communication.

Encoding and decoding information

Encoding is understood as any transformation of information coming from a source into a form suitable for its transmission over a communication channel.

At the dawn of the radio era, Morse code was used. The text was converted into a sequence of dots and dashes (short and long signals) and broadcast. A person who received such a transmission by ear should have been able to decode the code back into text. Even earlier, Morse code was used in telegraph communications. The transmission of information using Morse code is an example of discrete communication.

At present, digital communication is widely used, in which the transmitted information is encoded in binary form (0 and 1 are binary digits) and then decoded into text, image and sound. Digital communication, obviously, is also discrete.

Noise and noise protection. Coding theory by K. Shannon

The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference primarily occurs due to technical reasons: poor quality of communication lines, insecurity from each other of various information flows transmitted over the same channels. Often, when talking on the phone, we hear noise, crackling, which make it difficult to understand the interlocutor, or the conversation of other people is superimposed on our conversation. In such cases noise protection is necessary.

First of all, technical methods are used to protect communication channels from the effects of noise. Such methods are very different, sometimes simple, sometimes very complex. For example, using shielded cable instead of bare wire; the use of various kinds of filters that separate the useful signal from noise, etc.

Claude Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant. Due to this, the loss of some part of the information during transmission can be compensated. For example, if the other person can barely hear you on the phone, repeating each word twice gives a better chance that you will be understood correctly.

However, the redundancy cannot be made too large: this would lead to delays and higher communication costs. K. Shannon's coding theory allows one to obtain a code that is optimal. In this case, the redundancy of the transmitted information is the minimum possible, and the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is divided into portions called packets. For each packet, a checksum (the sum of its binary digits) is calculated and transmitted along with the packet. At the receiving end, the checksum of the received packet is recalculated, and if it does not match the original, the transmission of this packet is repeated. This continues until the initial and final checksums match.

Briefly about the main

Any technical information transmission system consists of a source, a receiver, encoding and decoding devices, and a communication channel.

Encoding is understood as the transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the inverse transformation.

Noise is interference that leads to the loss of information.

In coding theory, methods have been developed for representing transmitted information in order to reduce its loss under the influence of noise.

Questions and tasks

1. Name the main elements of the information transfer scheme proposed by K. Shannon.
2. What is encoding and decoding when transmitting information?
3. What is noise? What are its implications for the transmission of information?
4. What are the ways to deal with noise?

1.2. Zipping and unzipping files

The main topics of the paragraph:

♦ data compression problem;
♦ compression algorithm using a variable length code;
♦ compression algorithm using repetition factor;
♦ archiving programs.

Data compression problem

You already know that with the help of the global Internet, the user gets access to huge information resources. On the net you can find a rare book, an essay on almost any topic, photographs and music, a computer game, and much more. When transferring this data over the network, problems may arise due to its large volume. The capacity of communication channels is still quite limited. Therefore, the transmission time may be too long, and this is associated with additional financial costs. In addition, for large files, there may not be enough free disk space.

The solution to the problem is data compression, which reduces the amount of data while retaining the content encoded in it. Programs that perform such compression are called archivers. The first archivers appeared in the mid-1980s of the XX century. The main purpose of their use was to save space on disks, the information volume of which at that time was much less than the volume of modern disks.

Data compression (file archiving) occurs according to special algorithms. These algorithms most often use two fundamentally different ideas.

Compression algorithm using variable length code

First idea: using variable length code. The data being compressed is divided into parts in a special way (strings of characters, “words”). Note that a single character (ASCII code) can also be a “word”. For each “word”, the frequency of occurrence is found: the ratio of the number of repetitions of this “word” to the total number of “words” in the data array. The idea of ​​the information compression algorithm is to encode the most frequently occurring "words" with codes of a shorter length than the rarely occurring "words". This can significantly reduce the size of the file.

This approach has long been known. It is used in Morse code, where characters are encoded by various sequences of dots and dashes, and more frequently occurring characters have shorter codes. For example, the commonly used letter "A" is encoded as · −, while the rare letter "Ж" is encoded as · · · −. Unlike codes of equal length, in this case there is the problem of separating the letter codes from one another. In Morse code it is solved with the help of a "pause" (space), which is in effect a third character of the Morse alphabet; that is, the Morse alphabet consists not of two but of three characters.

Information in computer memory is stored using a two-character alphabet, and there is no special separator character. Nevertheless, a way was found to compress data with variable-length "word" codes that requires no separator: Huffman's algorithm (first published by D. Huffman in 1952). All universal archivers work on algorithms similar to the Huffman algorithm.
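As an illustration, here is a minimal Python sketch of Huffman's idea for single-character "words" (the function and variable names are our own; real archivers use far more elaborate variants of this scheme):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free variable-length code: frequent characters
    get shorter codes than rare ones."""
    freq = Counter(text)
    # Each heap entry: (frequency, tiebreaker, tree), where a tree is
    # either a single character or a pair (left_subtree, right_subtree).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct character
        return {heap[0][2]: "0"}
    n = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent subtrees.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        n += 1
        heapq.heappush(heap, (f1 + f2, n, (t1, t2)))
    # Walk the tree: a left edge appends "0", a right edge appends "1".
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
# The frequent 'a' receives a shorter code than the rare 'c',
# and no code is a prefix of another, so no separator is needed.
```

Because the code is prefix-free, a decoder can read the bit stream left to right and always knows where one character's code ends and the next begins.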

Compression algorithm using repetition factor

Second idea: use a repetition factor. The meaning of an algorithm based on this idea is as follows: if a chain of repeating groups of characters occurs in the data array being compressed, it is replaced by a pair: the number (coefficient) of repetitions and the repeated group of characters. For long repeating chains, the memory gain from such compression can be very large. This method is most effective when packing graphic information.
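The simplest form of this idea, run-length encoding of single characters, can be sketched in a few lines of Python (an illustrative toy; production formats encode the count-character pairs more compactly):

```python
def rle_compress(data):
    """Replace each run of a repeated character with a (count, character) pair."""
    if not data:
        return []
    result = []
    count, current = 1, data[0]
    for ch in data[1:]:
        if ch == current:
            count += 1
        else:
            result.append((count, current))
            count, current = 1, ch
    result.append((count, current))
    return result

def rle_decompress(pairs):
    """Expand each (count, character) pair back into a run."""
    return "".join(ch * count for count, ch in pairs)

packed = rle_compress("WWWWBBBWWWW")
# -> [(4, 'W'), (3, 'B'), (4, 'W')]
```

A scan line of a black-and-white image is exactly such a string of long runs, which is why the method suits graphics so well.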

Archiving programs

Archiving programs create archive files (archives). An archive is a file that stores one or more files in compressed form. To use archived files, they must be extracted from the archive, that is, unzipped. All archiving programs usually provide the following features:

adding files to the archive;
extracting files from the archive;
deleting files from the archive;
viewing the contents of the archive.

Currently, the most popular archivers are WinRar and WinZip. WinRar has more features than WinZip. In particular, it makes it possible to create a multi-volume archive (convenient if the archive must be copied to a floppy disk and its size exceeds 1.44 MB) and a self-extracting archive (in which case the archiver itself is not needed to extract the data).

Let us give an example of the benefit of using archivers when transferring data over a network. The text document containing the paragraph you are now reading is 31 KB in size. If it is archived with WinRar, the archive file is only 6 KB. As they say, the benefit is obvious.
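WinRar's formats are its own; as a rough stand-in, Python's standard zlib module (whose DEFLATE method combines dictionary compression with a Huffman-style variable-length code) shows the same kind of gain on repetitive text:

```python
import zlib

# Repetitive text compresses dramatically, just like the 31 KB -> 6 KB
# document example; the sentence below is an arbitrary illustration.
text = ("Data compression reduces volume while preserving content. " * 50).encode("utf-8")

packed = zlib.compress(text, 9)       # 9 = maximum compression level
restored = zlib.decompress(packed)    # unpacking recovers the data exactly

print(len(text), len(packed))
```

The compressed size here is a small fraction of the original, and decompression restores the data byte for byte, which is the defining property of lossless archiving.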

Using archiving programs is very simple. To create an archive, first select the files you want to include in it, then set the necessary parameters (archiving method, archive format, volume size if the archive is multi-volume), and finally issue the CREATE ARCHIVE command. The reverse action, extracting files from the archive (unpacking), proceeds similarly: first select the files to be extracted, then determine where they should be placed, and finally issue the EXTRACT FILES FROM THE ARCHIVE command. You will learn more about working with archiving programs in practical classes.

Briefly about the main

Information is compressed with the help of special archiving programs.

The two most commonly used methods in compression algorithms are the use of a variable length code and the use of a character group repetition factor.

Questions and tasks

1. What is the difference between constant and variable length codes?
2. What are the capabilities of archiving programs?
3. What is the reason for the wide use of archiving software?
4. Do you know any other archivers besides those listed in this paragraph?

I. Semakin, L. Zalogova, S. Rusakov, L. Shestakova, Informatics, Grade 9

The transfer of information proceeds from a source to a recipient (receiver) of information. The source of information can be anything: any object or phenomenon of living or inanimate nature. The process of information transfer takes place in some material environment separating the source and the recipient, which is called the information transfer channel. Information is transmitted through the channel in the form of a certain sequence of signals, symbols, or signs, which is called a message. The recipient of information is an object that receives the message and, as a result, undergoes certain changes in its state. All of the above is shown schematically in the figure.

Transfer of information

A person receives information from everything around him through the senses: hearing, sight, smell, touch, and taste. The greatest amount of information a person receives through hearing and sight. Sound messages, acoustic signals in a continuous medium (most often air), are perceived by ear. Vision perceives light signals that carry the images of objects.

Not every message is informative for a person. For example, a message in an incomprehensible language, although transmitted to a person, does not contain information for him and cannot cause adequate changes in his state.

An information channel can either be of a natural nature (atmospheric air through which sound waves are transmitted, sunlight reflected from observed objects), or be artificially created. In the latter case, we are talking about technical means of communication.

Technical information transmission systems

The first technical means of transmitting information over a distance was the telegraph, invented in 1837 by the American Samuel Morse. In 1876 the American A. Bell invented the telephone. Based on the discovery of electromagnetic waves by the German physicist Heinrich Hertz (1886), radio was invented by A. S. Popov in Russia in 1895 and, almost simultaneously, by G. Marconi in Italy in 1896. Television and the Internet appeared in the twentieth century.

All of the listed technical means of information communication are based on transmitting a physical (electrical or electromagnetic) signal over a distance and are subject to certain general laws. These laws are studied by communication theory, which emerged in the 1920s. Its mathematical apparatus, the mathematical theory of communication, was developed by the American scientist Claude Shannon.

Claude Elwood Shannon (1916–2001), USA

Claude Shannon proposed a model for the process of transmitting information through technical communication channels, represented by a diagram.

Technical information transmission system

Encoding here means any transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding - inverse transformation of the signal sequence.

The operation of such a scheme can be explained by the familiar process of talking on the phone. The source of information is the speaking person. An encoder is a handset microphone that converts sound waves (speech) into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoding device is a handset (headphone) of the listening person - the receiver of information. Here the incoming electrical signal is converted into sound.

Modern computer systems for transmitting information, computer networks, operate on the same principle. There is an encoding process that converts binary computer code into a physical signal of the type transmitted over the communication channel, and decoding, the reverse transformation of the transmitted signal into computer code. For example, when telephone lines are used in computer networks, the functions of encoding and decoding are performed by a device called a modem.

Channel capacity and information transfer rate

Developers of technical information transmission systems have to solve two interrelated problems: how to achieve the highest possible information transfer rate, and how to reduce the loss of information during transmission. Claude Shannon was the first scientist to take on these problems, creating a science that was new at the time: information theory.

K. Shannon determined a method of measuring the amount of information transmitted over communication channels. He introduced the concept of channel capacity (bandwidth) as the maximum possible information transfer rate. This rate is measured in bits per second (as well as kilobits per second and megabits per second).

The throughput of a communication channel depends on its technical implementation. For example, computer networks use the following means of communication:

telephone lines;
electrical cable connections;
fiber optic cables;
radio links.

The throughput of telephone lines is tens to hundreds of kilobits per second; the throughput of fiber optic lines and radio links is measured in tens and hundreds of megabits per second.
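Transmission time follows directly from these figures: time = data volume / channel throughput. A small sketch (the 31 KB document comes from the archiving example earlier; the 56 Kbit/s telephone-line speed is an illustrative value):

```python
def transfer_time_seconds(file_size_bytes, bandwidth_bits_per_sec):
    """Time to send a file: its volume in bits divided by the channel rate."""
    return file_size_bytes * 8 / bandwidth_bits_per_sec  # 1 byte = 8 bits

# The 31 KB document over a 56 Kbit/s telephone line:
t_plain = transfer_time_seconds(31 * 1024, 56_000)   # roughly 4.5 s

# The same document archived down to 6 KB:
t_packed = transfer_time_seconds(6 * 1024, 56_000)   # under 1 s
```

The comparison also shows why compression and channel capacity are two sides of the same problem: shrinking the data has the same effect on transfer time as speeding up the channel.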

Noise, noise protection

The term "noise" refers to interference of various kinds that distorts the transmitted signal and leads to loss of information. Such interference arises primarily for technical reasons: poor quality of communication lines, inadequate isolation of the different information flows transmitted over the same channels. Sometimes, while talking on the phone, we hear noise and crackling that make it difficult to understand the interlocutor, or the conversation of completely different people is superimposed on ours.

The presence of noise leads to the loss of transmitted information. In such cases, noise protection is necessary.

First of all, technical methods are used to protect communication channels from the effects of noise: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from the noise.

Claude Shannon developed a coding theory that provides methods for dealing with noise. One of its important ideas is that the code transmitted over the communication line must be redundant; thanks to this, the loss of some part of the information during transmission can be compensated. For example, if you are hard to hear over the phone, repeating each word twice gives you a better chance that the interlocutor will understand you correctly.
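The "repeat each word" idea can be turned into a toy error-correcting scheme: send every bit three times and take a majority vote on reception. This is a deliberately simple illustration of redundancy, not one of Shannon's optimal codes:

```python
def encode_repetition(bits, n=3):
    """Add redundancy: transmit every bit n times."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    """Recover each bit by majority vote within its group of n copies."""
    out = []
    for i in range(0, len(received), n):
        group = received[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out

sent = encode_repetition([1, 0, 1])  # [1,1,1, 0,0,0, 1,1,1]
noisy = sent[:]
noisy[1] = 0                         # noise flips one transmitted bit
# Majority voting still recovers the original message.
```

The price of this protection is exactly the redundancy discussed above: three times as many bits travel over the channel, which is why coding theory looks for codes with far less overhead.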

However, the redundancy must not be made too large, since that leads to delays and higher communication costs. Coding theory makes it possible to obtain an optimal code, one in which the redundancy of the transmitted information is the minimum possible and the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat loss of information during transmission. The whole message is divided into portions, called packets. For each packet a checksum (for example, the sum of its binary digits) is calculated and transmitted together with the packet. At the receiving end, the checksum of the received packet is recalculated; if it does not match the transmitted sum, transmission of that packet is repeated. This continues until the transmitted and recalculated checksums match.
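A toy model of this packet-plus-checksum technique can be sketched as follows (the modulo-256 byte sum is an illustrative simplification; real protocols use stronger checksums such as CRCs):

```python
def checksum(packet):
    """A toy checksum: the sum of the packet's bytes, modulo 256."""
    return sum(packet) % 256

def send(packet):
    """The sender transmits the packet together with its checksum."""
    return packet, checksum(packet)

def accept(packet, received_sum):
    """The receiver recomputes the checksum; a mismatch means the packet
    was corrupted and must be retransmitted."""
    return checksum(packet) == received_sum

data = b"hello"
pkt, s = send(data)
ok_intact = accept(pkt, s)                        # intact packet: accepted
corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]      # noise flips one bit
ok_corrupted = accept(corrupted, s)               # mismatch: retransmit
```

In a real protocol the receiver would keep requesting retransmission until `accept` returns true, which is exactly the repeat-until-match loop described in the text.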

When the transfer of information is considered in propaedeutic and basic computer science courses, the topic should first of all be discussed from the position of a person as a recipient of information. The ability to receive information from the surrounding world is the most important condition of human existence. The human sense organs are the information channels of the human body, connecting a person with the external environment. On this basis, information is divided into visual, auditory, olfactory, tactile, and gustatory. The rationale for the claim that taste, smell, and touch carry information to a person is as follows: we remember the smells of familiar objects and the taste of familiar food, and we recognize familiar objects by touch. And the content of our memory is stored information.

Students should be told that in the animal world the informational role of the senses differs from the human one. The sense of smell performs an important informational function for animals: the heightened sense of smell of service dogs is used by law enforcement agencies to search for criminals, detect drugs, and so on. The visual and auditory perception of animals also differs from that of humans: for example, bats hear ultrasound, and cats see in the dark (from a human point of view).

Within the framework of this topic, students should be able to give concrete examples of the information transfer process and, for each example, identify the source, the receiver of information, and the information transmission channels used.

When studying computer science in high school, students should be introduced to the basic propositions of the technical theory of communication: the concepts of coding, decoding, information transfer rate, channel capacity, noise, and noise protection. These issues can be considered under the topic "Technical means of computer networks".