Human pose estimation for shadow boxing

Shadow boxing is a type of training. This work explores the automation and gamification of boxing practice: recognizing the action a person performs using recurrent convolutional networks for visual recognition, correcting the performed action in software, and estimating human pose for boxing.


FEDERAL STATE AUTONOMOUS EDUCATIONAL INSTITUTION FOR HIGHER PROFESSIONAL EDUCATION NATIONAL RESEARCH UNIVERSITY

«HIGHER SCHOOL OF ECONOMICS»

Faculty of Computer Science

Qualification paper - Master of Science Dissertation

Field of study 01.04.02 «Applied Mathematics and Informatics»

Program: Data Science

Human pose estimation for shadow boxing

Student Stanislav Petrov

Supervisor

Ilya Makarov

Moscow, 2019

  • Plan
  • 1. Introduction
  • 2. Shadow boxing
  • 3. Related work
    • 3.1 One-stream network
      • 3.1.1 Single Frame
      • 3.1.2 Early fusion
      • 3.1.3 Late fusion
      • 3.1.4 Slow fusion
    • 3.2 Two-stream Networks
    • 3.3 Long-term Recurrent Convolutional Networks
    • 3.4 Convolution3D
      • 3.4.1 Architecture
    • 3.5 Convolution 3D & Attention
      • 3.5.1 Architecture
    • 3.6 Two Stream Fusion
    • 3.7 Temporal-segment-networks
    • 3.8 ActionVLAD
    • 3.9 Hidden Two Stream
    • 3.10 Inception 3D
      • 3.10.1 Multi-stream Inception 3D
    • 3.11 The R(2+1)D
  • 4. Dataset
    • 4.1 Hardware
    • 4.2 Software
    • 4.3 Data collection
    • 4.4 Data preparation
    • 4.5 Data augmentation
  • 5. Model and Results
    • 5.1 Training process
      • 5.2 Experiments and Results
  • Conclusion and future work
  • Bibliography

1. Introduction

Keywords: shadow boxing, automation, neural networks

Nowadays, sport plays a big role in our lives, and people spend a lot of money on smart gadgets that help them train. Computer vision and action recognition still play a small role in this process, although some efforts have been made.

We want to concentrate on using action recognition in an applied domain; the domain of this work is boxing. Shadow boxing is a type of training in which a sportsman fights an invisible opponent while a coach tells him what he is doing wrong or right. In an era of total automation and gamification, this is a large area for automation. Our main goal is to automate the human-to-human interaction with the trainer. The trainer's function can be broken into two parts:

· Investigating the action the person is performing

· Correcting the performed action

The first function could be replaced by the action recognition techniques that have evolved over the last few years. To achieve this, we collect a hand-crafted dataset for action recognition in boxing.

2. Shadow boxing

Boxers usually use shadow boxing in training: it is a good way to warm up. For a regular boxer it takes from 15 to 60 minutes.

During such training the boxer should not get tired. Its main goals are practicing punches, working out fighting strategies against different opponents, and developing muscle memory.

In this work, as an initial step towards automating punch practice, we try to automate punch recognition using computer vision techniques.

3. Related work

In recent years, the field of action recognition and video processing has been developing very quickly. Previous works fall into several categories:

· Long-term Recurrent Convolutional Networks

· One stream convolutional networks

· Two stream convolutional networks

· Temporal Segment Networks

3.1 One-stream network

In this work (Karpathy, et al., 2014) the authors investigate different ways to merge temporal information from videos using 2D pre-trained convolutions.

Figure 1 Fusion Ideas (Karpathy, et al., 2014)

As can be seen in Figure 1, consecutive frames are presented to the network in all setups.

3.1.1 Single Frame

A single frame was used to determine the contribution of the static view to the classification. In fact, this network is close to the ImageNet challenge winning model, but with a smaller input resolution.

3.1.2 Early fusion

Early fusion merges information across the video at the pixel level. This is done by convolving over 10 frames in the first layer.

3.1.3 Late fusion

Late fusion uses two separate single-frame networks with shared parameters. It takes two frames 15 frames apart and combines the predictions at the end, in the first fully connected layer.

3.1.4 Slow fusion

Slow fusion combines the two approaches in a balance between early and late fusion. This is done by increasing the temporal connectivity of all convolutional layers and performing temporal as well as spatial convolutions to compute activations. For final predictions, multiple clips are sampled from the video and their prediction scores are averaged.
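To make the difference between the variants concrete, here is a minimal sketch of how the first layers could look in PyTorch. The channel counts, kernel sizes and strides are illustrative placeholders, not the exact configuration of (Karpathy, et al., 2014).

    import torch
    import torch.nn as nn

    # Illustrative first layers for the fusion variants; sizes are placeholders.
    clip = torch.randn(1, 3, 10, 170, 170)            # (batch, RGB, time, H, W)

    # Single frame: a plain 2D convolution on one frame, no temporal information.
    single = nn.Conv2d(3, 96, kernel_size=11, stride=4)
    out_single = single(clip[:, :, 0])                # drop the time axis

    # Early fusion: the first layer convolves over all 10 frames at once.
    early = nn.Conv3d(3, 96, kernel_size=(10, 11, 11), stride=(1, 4, 4))
    out_early = early(clip)                           # temporal extent collapses to 1

    # Slow fusion: a small temporal extent per layer; deeper layers keep fusing.
    slow = nn.Conv3d(3, 96, kernel_size=(4, 11, 11), stride=(2, 4, 4))
    out_slow = slow(clip)                             # 4 time steps remain to fuse

    # Late fusion: two towers with shared weights on frames far apart
    # (here the first and last frame of the clip stand in for frames 15 apart),
    # concatenated before the first fully connected layer.
    tower = nn.Conv2d(3, 96, kernel_size=11, stride=4)
    merged = torch.cat([tower(clip[:, :, 0]), tower(clip[:, :, -1])], dim=1)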

Unfortunately, the results were significantly worse compared to state-of-the-art algorithms based on hand-crafted features. There were reasons for this failure:

· the learned spatiotemporal features could not capture action features

· learning detailed features from a less diverse dataset was hard

3.2 Two-stream Networks

In this pioneering work (Simonyan, et al., 2014) the authors build on the failures of the previous work by (Karpathy, et al., 2014). Given how hard it is for deep architectures to extract motion features, the authors create the motion features explicitly by stacking optical flow vectors. Instead of one single network that deals with the whole video, they propose an architecture with two separate networks: one for spatial context and one for motion context. The input to the spatial network is a single frame of the video. The authors found that the best input for the temporal net is bi-directional optical flow stacked across 10 consecutive frames. The streams were trained separately and combined using an SVM; prediction scores from the streams were averaged for the final prediction.

Figure 2 Two stream architecture (Simonyan , et al., 2014)
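As an illustration of the temporal-stream input described above, the following sketch stacks the horizontal and vertical flow components over consecutive frames with OpenCV's Farnebäck estimator. The parameter values are common defaults, not the exact settings of (Simonyan, et al., 2014), and only the forward direction is shown.

    import cv2
    import numpy as np

    def stacked_flow(gray_frames):
        # gray_frames: list of L+1 grayscale frames -> (H, W, 2L) input volume
        channels = []
        for prev, nxt in zip(gray_frames[:-1], gray_frames[1:]):
            # Farneback dense flow with common default parameters
            flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            channels.append(flow[..., 0])   # horizontal displacement
            channels.append(flow[..., 1])   # vertical displacement
        return np.stack(channels, axis=-1)

For the bi-directional variant, the same stacking would be repeated with the frame order reversed and the two volumes concatenated.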

Although this method improved performance compared to the single-stream network, there were still a few drawbacks:

1. Long-range temporal information was still missing from the learnt features

2. False label assignment: because training clips are sampled uniformly from the entire video, a short clip's ground truth may differ from its actual content

3. Optical flow has to be pre-computed

4. The two streams are trained separately

3.3 Long-term Recurrent Convolutional Networks

The authors (Donahue, et al., 2014) build on the idea of an encoder-decoder architecture. They build the encoder from convolutional blocks and the decoder from LSTM blocks. They also compared optical flow and RGB inputs and concluded that a weighted combination of predictions based on both optical flow and RGB is best.

Figure 3 LRCN for action recognition (Donahue , et al., 2014)

During training, clips of 16 frames are sampled from the video. Training is performed end to end with optical flow and RGB inputs. The prediction for each clip is the average of the predictions across its time steps, and the final prediction is the average of the predictions from all clips.
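A minimal LRCN-style model could look as follows; the placeholder CNN encoder and layer sizes are illustrative and much smaller than in (Donahue, et al., 2014).

    import torch
    import torch.nn as nn

    class LRCN(nn.Module):
        """CNN encoder per frame, LSTM over time, scores averaged over steps."""
        def __init__(self, num_classes=4, feat_dim=256):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.lstm = nn.LSTM(64, feat_dim, batch_first=True)
            self.head = nn.Linear(feat_dim, num_classes)

        def forward(self, clip):                  # clip: (B, T, 3, H, W)
            b, t = clip.shape[:2]
            feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
            hidden, _ = self.lstm(feats)          # decode features over time
            return self.head(hidden).mean(dim=1)  # average across time steps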

Though the authors proposed end-to-end training, there are still a few drawbacks:

1. False label assignment, as in (Simonyan, et al., 2014)

2. Inability to capture long-range temporal information

3. Optical flow has to be precomputed

3.4 Convolution3D

In (Tran, et al., 2014) the authors built upon (Karpathy, et al., 2014). Instead of using 2D convolutions across frames, they used 3D convolutions on video frames. The main idea was to train this huge network on Sports1M and then use it as a feature extractor for other datasets.

3.4.1 Architecture

On top of an ensemble of the generated features they used a simple linear classifier (an SVM) and found that it performs better than state-of-the-art algorithms.

Figure 4 Differences in C3D paper and single stream paper (Tran, et al., 2014)

The other key feature of the paper was the use of deconvolutional layers to interpret the predictions. The purpose of the deconvolutional layers is to visualize the activity maps of each layer for different inputs. This helped the authors understand which object categories are responsible for the activations in a given feature map.

In the training phase, 5 random two-second clips are sampled, with the entire video's action as the ground truth. During testing, 10 random clips are extracted and the predictions across them are averaged.

Drawbacks:

· Long-range temporal modeling is still a problem

· Training such a huge network is computationally expensive

· False label assignment

3.5 Convolution 3D & Attention

In this work (Yao, et al., 2015) the authors do not deal directly with action recognition, but the paper is a reference point in video representation. The authors propose a 3D CNN + LSTM for the video description task. This paper introduced an attention mechanism for video representations for the first time.

3.5.1 Architecture

The architecture is the encoder-decoder architecture described in LRCN, with three key points:

· 3D CNN features are stacked with concatenated 2D CNN feature maps.

· The 2D CNN and 3D CNN are pretrained, and the encoder-decoder is not trained end-to-end.

· A weighted average across all frames is used to combine the temporal features; the attention weights are decided based on the LSTM output at every time step (see the sketch below).

Figure 5 Attention mechanism for action recognition (Yao, et al., 2015)
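A sketch of the attention-weighted temporal pooling from the last point above; the scoring network here is a placeholder, simplified relative to the mechanism in (Yao, et al., 2015).

    import torch
    import torch.nn as nn

    class TemporalAttention(nn.Module):
        """Weighted average over frame features, conditioned on the LSTM state."""
        def __init__(self, feat_dim, state_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim + state_dim, 1)  # placeholder scorer

        def forward(self, feats, state):
            # feats: (B, T, feat_dim); state: (B, state_dim), current LSTM output
            t = feats.size(1)
            expanded = state.unsqueeze(1).expand(-1, t, -1)
            logits = self.score(torch.cat([feats, expanded], -1)).squeeze(-1)
            weights = torch.softmax(logits, dim=1)           # attention over time
            return (weights.unsqueeze(-1) * feats).sum(dim=1)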

3.6 Two Stream Fusion

In this work, (Feichtenhofer, et al., 2016) proposed two novel approaches for working with two-stream networks that gain better performance without a significant increase in the size of the model.

The spatial net can capture the presence of an object in a video, while the temporal net captures the periodic motion of each object. The authors suggest fusing the spatial and temporal streams at an early level, so that pixel positions are put in correspondence, rather than fusing them at the end.

Figure 6 Possible strategies for fusing spatial and temporal streams (Feichtenhofer, et al., 2016)

Secondly, the authors propose to combine the temporal net output across time frames, so that long-term dependency is also modeled.

Figure 7 Two stream fusion architecture (Feichtenhofer, et al., 2016)

3.7 Temporal-segment-networks

Introduced by (Wang, et al., 2016), TSN is one of the most recent and influential works in the context of untrimmed videos. Temporal segment networks work on a sequence of short fragments sparsely sampled from the action video instead of working with a single frame or a frame stack. A prediction is then produced for every fragment, and the predictions are aggregated through a segmental consensus function into the final classification at the video level.

The authors also used as input to their models not only RGB video but two extra modalities: RGB difference and warped optical flow fields.

The RGB difference between two frames characterizes the change in the view, which may correspond to the region of movement.

In two-stream ConvNets, the optical flow field is presented as one of the input streams to capture motion information; but in practice there is often camera motion, which makes raw flow a poor representation of the human action. Following (Wang, et al., 2013), the authors compute warped optical flow by first estimating a homography matrix and then compensating for the camera motion (a simplified sketch of this compensation follows the figure below). Thus, the authors suppress the camera motion and extract the actor's movement.

Figure 8 Temporal segment network scheme (Wang, et al., 2016)
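A much-simplified sketch of the camera-motion compensation: track feature points between frames, fit a homography with RANSAC, and warp the next frame onto the previous one, so the remaining motion is mostly the actor's. The real method of (Wang, et al., 2013) additionally removes matches on the human body before fitting.

    import cv2
    import numpy as np

    def compensate_camera_motion(prev_gray, next_gray):
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
        good = status.ravel() == 1
        # Homography that maps the previous frame onto the next one
        H, _ = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
        h, w = prev_gray.shape
        # Undo the camera motion; flow against this warp is the "warped flow"
        return cv2.warpPerspective(next_gray, np.linalg.inv(H), (w, h))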

3.8 ActionVLAD

In (Ramanan, et al., 2017) the authors propose a novel approach to feature aggregation instead of max pooling or average pooling. The technique is close to bag-of-words in linguistics: there is a set of learned anchor points (a vocabulary) representing action-related spatiotemporal features. The output is encoded in terms of these visual words; each feature represents the difference from the corresponding anchor point at a given spatial or temporal location.

Figure 9 ActionVLAD (Ramanan, et al., 2017)

3.9 Hidden Two Stream

In this work (Zhu, et al., 2017) the authors propose a network architecture that generates the optical flow input on the fly. This solves the problem of precomputing optical flow, which is computationally expensive.

The authors investigated a number of architectures and strategies for generating optical flow. The final architecture uses a MotionNet whose generated optical flow is stacked into the temporal stream of a two-stream architecture.

The authors also outperform their own network by using TSN-based fusion instead of the regular two-stream architecture.

Figure 10 HiddenTwoStream (Zhu, et al., 2017)

3.10 Inception 3D

In (Carreira, et al., 2017) the authors explore the idea of taking advantage of pre-trained 2D models. They simply repeat the 2D pre-trained weights along a third dimension (a sketch of this inflation follows the figure below). The input to the spatial stream then becomes 4-dimensional, with time as the 4th dimension.

Figure 11 The Inflated Inception-V1 architecture (left) and its detailed inception submodule (right) (Carreira, et al., 2017)
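A sketch of the weight inflation: the pretrained 2D kernel is repeated along the time axis and rescaled so that the filter response on a static video matches the 2D response, which is the bootstrapping trick described in (Carreira, et al., 2017).

    import torch.nn as nn

    def inflate_conv(conv2d: nn.Conv2d, time_dim: int) -> nn.Conv3d:
        """Turn a pretrained 2D convolution into a 3D one by repeating in time."""
        kh, kw = conv2d.kernel_size
        conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                           kernel_size=(time_dim, kh, kw),
                           stride=(1, *conv2d.stride),
                           padding=(time_dim // 2, *conv2d.padding),
                           bias=conv2d.bias is not None)
        weight = conv2d.weight.data.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
        conv3d.weight.data.copy_(weight / time_dim)   # keep response magnitude
        if conv2d.bias is not None:
            conv3d.bias.data.copy_(conv2d.bias.data)
        return conv3d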

3.10.1 Multi-stream Inception 3D

In this paper (Hong, et al., 2019) the authors add two more streams to the classic I3D architecture, a pose stream and a pairwise stream, to improve the accuracy of action classification. They use Mask R-CNN to obtain the pose and pairwise streams for videos.

The pairwise stream is presented in two different ways: as a mask and as a bounding box. The authors found that the mask outperforms the bounding box in two different setups: fused with other layers and as a single pairwise layer.

Figure 12 Multi streams I3D (Hong, et al., 2019)

3.11 The R(2+1)D

The main idea is to replace 3D convolutions by 2D convolutions followed by 1D convolutions. A full 3D convolution uses a filter of size t × d × d; (2+1)D splits it into a 2D spatial convolution of size 1 × d × d followed by a 1D temporal convolution of size t × 1 × 1 (see the sketch after the list below). By doing that we:

· increase the number of nonlinearities in the model, because there are two blocks instead of one 3D convolution

· make optimization easier than for a 3D CNN with the same number of blocks.
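A sketch of one (2+1)D block follows. In the original paper the intermediate channel count is chosen so that the factorized block has roughly the same number of parameters as the full 3D convolution; here it is simply an argument.

    import torch.nn as nn

    class R2Plus1DBlock(nn.Module):
        """2D spatial convolution, extra nonlinearity, 1D temporal convolution."""
        def __init__(self, in_ch, out_ch, mid_ch, t=3, d=3):
            super().__init__()
            self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, d, d),
                                     padding=(0, d // 2, d // 2))
            self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(t, 1, 1),
                                      padding=(t // 2, 0, 0))
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):                 # x: (B, C, T, H, W)
            return self.relu(self.temporal(self.relu(self.spatial(x))))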

4. Dataset

One of the main goals of this work is to create a neural network that can capture actions that differ only slightly from each other. Moreover, the domain of our work, boxing, is a very narrow area for action recognition. Thus, there is no existing dataset of different boxing punches in which the boxer stands right in front of the camera.

Our objective in this work is to create a domain-specific network that can help boxers in their training. Given that, we can assume some limitations on the model usage:

· Only one person is present in the video

· This person is 2-3 meters away in front of the camera

· Boxing is performed towards the camera

In this work we wanted to create a dataset with as many features as possible. There are several ways of doing that:

1. Record plain video and extract pose, mask, etc. using different neural networks

2. Record video using a special camera that writes depth, IR, pose and mask data

The first approach is cheap, straightforward and accurate: from plain video, modern neural networks can extract the pose estimate and the mask of the human, but this is computationally expensive (Hong, et al., 2019).

The second approach is more expensive than the first, but it gives the human pose, the mask of the person and a depth map in near real time, and it does not require the sportsman to wear an additional sensor.

Balancing these considerations for our task, we settled on the second approach.

4.1 Hardware

For dataset collection purposes, we used an Orbbec Astra Pro. It can capture RGB image data and depth images, both at 30 fps. It uses USB 2.0 to connect to a computer.

4.2 Software

Orbbec has developed the Astra SDK for body tracking. It was used to develop a program in which separate threads read the sensor and write to the corresponding files (a sketch of this recording loop follows the list below):

· RGB video of the action

· Depth video

· Mask of the person's body together with the mask of the floor

· Pose keypoints
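A sketch of the recording loop, with one writer thread per modality fed through a queue. The grabber callables stand in for the Astra SDK calls; the real SDK API and file formats differ.

    import threading, queue, time

    def writer_thread(path, frames):
        """Drain a queue of samples into a file until a None sentinel arrives."""
        def run():
            with open(path, "w") as f:
                while True:
                    sample = frames.get()
                    if sample is None:
                        break
                    f.write(repr(sample) + "\n")
        return threading.Thread(target=run)

    def record(grabbers, seconds=60.0):
        # grabbers: dict name -> zero-argument callable returning one sample,
        # e.g. {"rgb": ..., "depth": ..., "mask": ..., "pose": ...}
        queues = {name: queue.Queue() for name in grabbers}
        writers = [writer_thread(name + ".log", q) for name, q in queues.items()]
        for w in writers:
            w.start()
        end = time.time() + seconds
        while time.time() < end:
            for name, grab in grabbers.items():
                queues[name].put(grab())
        for q in queues.values():
            q.put(None)                        # tell every writer to finish
        for w in writers:
            w.join()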

4.3 Data collection

For this work we collected video recordings of four basic boxing actions:

· Defense

· Cross punch

· Hook

· Uppercut

We recorded several videos of 1-3 minutes in length, each containing one punch performed repeatedly. Then for each punch a starting point was annotated manually: the number of the frame in which the person stands in the defence position. Each class contains 200 samples of the author boxing 2-3 meters away in front of the camera. Actions were captured in one location; different clothes and different techniques were used (with wrist rotation and without it).

All punches were performed with the right hand.

4.4 Data preparation

Data preparation can be split into 3 stages (a sketch of the cropping and resizing follows the list):

1. The input video is split manually into 32-frame pieces, each representing one boxing punch. If the end point of a punch lies outside the piece, the video is cut; if it lies inside the piece, the video continues to the 32nd frame

2. Each piece is cropped from the right and left sides

3. The cropped video is resized to the network input resolution
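A sketch of stages 2 and 3 with OpenCV; the crop margins and the output resolution are parameters here, since the exact pixel values are omitted in the text above.

    import cv2

    def prepare_clip(frames, crop_left, crop_right, out_size):
        """Crop each frame from the left and right, then resize for the network."""
        prepared = []
        for frame in frames:
            h, w = frame.shape[:2]
            cropped = frame[:, crop_left:w - crop_right]
            prepared.append(cv2.resize(cropped, (out_size, out_size)))
        return prepared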

4.5 Data augmentation

Three types of data augmentation were performed (a sketch of the temporal variant follows the list):

1. In the second step of data preparation, the video was randomly cropped from the right, in a range between 60 and 100 pixels

2. Temporal augmentation: for each 32-frame clip a bias was randomly drawn from the range -8 to -4 or 4 to 8, and another 32-frame clip was produced starting from the annotated starting point plus this bias

3. The first and second approaches combined
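A sketch of the temporal augmentation: shift the annotated starting point by a random bias from [-8, -4] or [4, 8] and take the 32 frames that follow.

    import random

    def temporal_shift(start_frame, n_frames=32):
        """Return the frame indices of a randomly shifted 32-frame window."""
        bias = random.choice([-1, 1]) * random.randint(4, 8)
        shifted = start_frame + bias
        return list(range(shifted, shifted + n_frames))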

In fact, the Astra camera recorded video with occasional freezes, and not all samples were captured at 30 fps; but since the actions themselves were performed at different speeds, this had no effect on the usefulness of the collected data.

5. Model and Results

In this work we settled on the multi-stream I3D network, for multiple reasons:

· I3D is a good performer on the Kinetics dataset

· It is not as enormous as C3D

· It does not need to sample sparse frames from the video, as the TSN approach does

There are two key differences between our approach and multi-stream I3D:

· We do not use optical flow as an input

· We use depth flow as input

Also, we do not load weights pre-trained on ImageNet, because of the specific application domain.

Figure 13 Proposed architecture

5.1 Training process

Training was performed with batches of shape 4 × 2 × 32 × H × W, where 4 is the number of streams, 2 is the number of actions in a batch, 32 is the number of frames for each video, and the rest is the frame size. For training, all video filenames were randomly shuffled and then split into train, test and validation sets in the proportion 3/1/1. For optimization we picked the Stochastic Gradient Descent optimizer with a learning rate of 0.001. The learning rate and batch size were fixed during training.

Training was performed end to end with cross-entropy loss on the sum of the logits of all streams (a minimal sketch of the training step follows). It took five and a half hours in total and required 11 GB of GPU memory.
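A minimal sketch of one training epoch under these settings, assuming the model returns one logits tensor per stream; the loader and the exact stream packing are placeholders.

    import torch
    import torch.nn as nn

    def train_epoch(model, loader, device="cuda"):
        optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # fixed LR
        criterion = nn.CrossEntropyLoss()
        model.train()
        for stream_inputs, labels in loader:   # one clip tensor per stream
            stream_inputs = [x.to(device) for x in stream_inputs]
            labels = labels.to(device)
            logits_per_stream = model(stream_inputs)
            # cross-entropy on the sum of the logits of all streams
            loss = criterion(sum(logits_per_stream), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()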

5.2 Experiments and Results

We trained the network in 3 setups:

· All four streams

· Depth, pose and mask streams

· Depth and pose streams

We obtained the following results on the validation dataset:

Table 1 Results

Setup                            Accuracy
RGB + DEPTH + TEXTURE + POSE     0.92
DEPTH + TEXTURE + POSE           0.85
TEXTURE + POSE                   0.83

Figure 14 Confusion matrices

As we can see, the network successfully captured the dependencies in the actions. The high accuracy of the defence class can be explained by the fact that in defence-class videos no action is performed. One can also see that the network sometimes confuses hook and cross; these punches can look very similar even to a human observer when performed by a non-professional. The length of the action also plays a role: indeed, one video clip may contain part of another action. The difference between the 4-stream and 3-stream setups could be explained by the colour of the clothes on the actor. The average inference time for a 32-frame piece is 121 ms, well below the roughly 1.07 s duration of the fragment itself at 30 fps.

We also performed a test on untrimmed video and got the following results:

Table 2 Results on untrimmed video

Setup                            Accuracy
RGB + DEPTH + TEXTURE + POSE     0.70
DEPTH + TEXTURE + POSE           0.65
TEXTURE + POSE                   0.62

The experiment was performed with a sliding window of 32 frames and a step of 4 frames (a sketch of the window placement is given below). In untrimmed video the network receives input clips that start not at the beginning of an action but at a random point within it, which today is one of the most challenging problems in action recognition.
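A sketch of the window placement used in this experiment:

    def sliding_windows(n_frames, window=32, step=4):
        """Yield (start, end) frame indices of 32-frame windows, 4 frames apart."""
        for start in range(0, n_frames - window + 1, step):
            yield start, start + window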

Conclusion and future work

In this work, we proposed a new dataset for action recognition in the boxing domain and built a model that captures the differences between such actions. The model successfully classifies actions in trimmed videos. We also proposed a data augmentation technique for video action recognition. Future work is to expand the dataset not only with different actions, but with different persons, locations and clothes.

On the neural network side, there could be several improvements:

· Adding a fast optical flow approximation and an optical flow stream to the network architecture

· Changing the fusion of the streams in the network to make it weighted

· Changing the place in the network where the streams are fused

· Adapting the network to untrimmed videos

· Decreasing the inference time to work in near real time

Bibliography

1. Carreira, Joao and Zisserman, Andrew. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset // CoRR. DBLP, 2017.

2. Donahue, Jeff et al. Long-term Recurrent Convolutional Networks for Visual Recognition // CoRR. DBLP, 2014.

3. Feichtenhofer, Christoph, Pinz, Axel and Zisserman, Andrew. Convolutional Two-Stream Network Fusion for Video Action Recognition. DBLP, 2016.

4. Hong, Jongkwang et al. Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition // Sensors. MDPI AG, 2019. No. 6.

5. Karpathy, Andrej et al. Large-Scale Video Classification with Convolutional Neural Networks // 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE, 2014.

6. Ramanan, Deva et al. ActionVLAD: Learning Spatio-temporal Aggregation for Action Classification // CoRR. DBLP, 2017.

7. Simonyan, Karen and Zisserman, Andrew. Two-Stream Convolutional Networks for Action Recognition in Videos // CoRR. DBLP, 2014.

8. Tran, Du et al. Learning Spatiotemporal Features with 3D Convolutional Networks // CoRR. DBLP, 2014.

9. Wang, Heng and Schmid, Cordelia. Action Recognition with Improved Trajectories // 2013 IEEE International Conference on Computer Vision. Sydney, NSW, Australia: IEEE, 2013.

10. Wang, Limin et al. Temporal Segment Networks: Towards Good Practices for Deep Action Recognition // CoRR, 2016.

11. Yao, Li et al. Describing Videos by Exploiting Temporal Structure // The IEEE International Conference on Computer Vision (ICCV). IEEE, 2015.

12. Zhu, Yi et al. Hidden Two-Stream Convolutional Networks for Action Recognition // CoRR. DBLP, 2017.
