Evaluation of the effect of operational and design parameters on performance of steam reforming of methane using fluidized catalytic bed reactor using artificial neural networks
Aljajan Yamen, Ph.D. student
First year, Faculty of Chemical Technology and Ecology
Gubkin Russian State University of Oil and Gas
Russia, Moscow
Abstract
The use of artificial intelligence (AI), represented by artificial neural networks (ANN), is one of the most recent modeling approaches in the field of research and development. In this study, five artificial neural networks were developed to estimate the major parameters of the catalytic process of steam reforming of methane in fluidized catalytic bed reactors over a nickel-based catalyst, based on operational and design factors. Recently, research interest in artificial neural networks has expanded owing to their applicability across distinct problems, their ability to solve complex problems efficiently, their reliability, and their low cost compared with other complex modeling and simulation approaches. ANN proves to be a useful performance evaluation tool for chemical engineering researchers in forecasting operational and design parameters for both academic and practical applications.
Keywords: Artificial Neural Networks, Steam Reforming, Methane, Nickel-based Catalysts, Fluidized Bed Reactors, MATLAB.
Abstract

Aljajan Yamen, Ph.D. student
First year, Faculty of Chemical Technology and Ecology
Gubkin Russian State University of Oil and Gas
Russia, Moscow

EVALUATION OF THE EFFECT OF OPERATIONAL AND DESIGN PARAMETERS ON THE PERFORMANCE OF STEAM REFORMING OF METHANE IN A FLUIDIZED CATALYTIC BED REACTOR USING ARTIFICIAL NEURAL NETWORKS

The use of artificial intelligence (AI), represented by artificial neural networks (ANN), is one of the most recent modeling approaches in the field of research and development.

Recently, research interest in artificial neural networks has expanded owing to their applicability across distinct problems, their ability to solve complex problems efficiently, their reliability, and their low cost compared with other complex modeling and simulation approaches.

ANN is argued to be a useful performance evaluation tool for chemical engineering researchers in forecasting operational and design parameters for both academic and practical applications.

In this study, five artificial neural networks were developed to estimate the main parameters of the catalytic process of steam reforming of methane in fluidized catalytic bed reactors over a nickel-based catalyst, taking operational and design factors into account.

Keywords: Artificial Neural Networks, Steam Reforming, Methane, Nickel-based Catalysts, Fluidized Bed Reactors, MATLAB.
Introduction
Methods of studying chemical industrial processes have changed greatly over the history of scientific research. Early studies of processes took place in simple laboratory units based on the principle of trial and error. Then computer technologies were integrated into the field of chemical engineering, which resulted in modeling and simulation applications that allowed researchers to conduct more complex experiments closer to reality, where more complex (non-linear) systems need to be dealt with.
One of the most recent applications of modeling in the field of chemical engineering is the use of artificial intelligence (AI), represented by artificial neural networks (ANN). Catalysts are used extensively in the chemical industries: more than 90% of industrial chemical operations worldwide are catalytic, owing to the importance of catalysts in guiding production in terms of both quality and economics.
Recently, research interest in artificial neural networks has grown owing to their applicability across distinct problems, their ability to resolve complex issues quickly, their reliability, and their low cost compared with other complex modeling and simulation techniques [1]. Accordingly, applications of artificial neural networks have a promising future in the study of catalytic chemical processes, including data classification, error detection, prediction of chemical product properties, data correction, and modeling and simulation of process control systems [2]. The purpose of this article is to evaluate and analyze the effect of operational and design parameters on the performance of fluidized catalytic bed reactors used in steam reforming of methane by means of artificial neural networks, using the MATLAB environment.
1. Artificial Neural Networks (ANN)
The concept of ANN
The term "artificial neurons" refers to the computational structures that perform particular arithmetic operations on input values and together form an artificial neural network: interconnected neurons that convert input values into an output expressing a specific function of the network. The mechanism is similar to the way brain neurons convert sensory signals (image, smell, touch, etc.) into a mental state (happiness, sadness, fear, etc.) or a kinesthetic act (running, hitting, etc.). The operation of a single neuron can be summarized as follows: it receives one or more input values, which may be initial inputs (in the input layer) or outputs of the previous layer (in the hidden layers); each input is multiplied by a numerical value specific to the connection, called a "weight"; another value specific to the neuron, called a "bias", is added; and the result of this arithmetic as a whole is passed through the so-called "activation function", which helps represent non-linear systems [2,3].
Figure 1. Configurational comparison between cerebral neuron (A) and artificial neuron (B)
Usually, an artificial neural network consists of several neurons, an input layer, an output layer, and hidden layers (usually between one and three layers). The complexity of the network structure depends on the nature of the study, the number of inputs and outputs, the algorithm used, and the type of learning process.
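The single-neuron computation described above can be illustrated with a short sketch (the input values, weights, bias and the choice of a sigmoid activation are arbitrary assumptions made for the example, not values from this study):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias ...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ... passed through a sigmoid activation, which introduces non-linearity
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical neuron with two inputs
y = neuron([0.5, -1.2], weights=[0.8, 0.3], bias=0.1)
print(y)  # a value in (0, 1)
```

Replacing the sigmoid with the identity function would reduce the neuron to plain linear regression, which is why the activation function matters for non-linear systems.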
Types of ANN learning
Artificial neural network learning means finding the optimal values of the weights through a feedback loop; the learning process is called "network training". As a result of the learning process, the weights take on different values. Large numerical values indicate the importance of the related neuron; on the other hand, weights with small numerical values indicate that the related neuron can be neglected, simplifying the structure of the network [4]. There are three types of learning process according to the learning style: supervised, unsupervised and reinforcement learning.
Supervised ANN learning
Supervised learning is the process of training the artificial neural network and modifying its structure and values with the intervention of the researcher (Fig. 2). Supervised learning includes the following procedure:
1. Assign random initial values to the weights.
2. Enter training input values into the network and calculate output values.
3. Calculate the error between the output values obtained from the network and the correct output values.
4. Modify the weights to reduce the error value.
5. Repeat the training and modification process until a minimum error value is reached.
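The five steps above can be sketched for a single linear neuron on a toy dataset (the target relation y = 2x, the learning rate and the epoch count are assumptions for illustration only):

```python
import random

random.seed(0)
w, b = random.uniform(-1, 1), random.uniform(-1, 1)   # step 1: random initial values

# Toy training set with known correct outputs (here the relation y = 2x)
data = [(x, 2.0 * x) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

lr = 0.05                          # learning rate
for epoch in range(500):           # step 5: repeat many times
    for x, target in data:
        out = w * x + b            # step 2: compute the output
        err = out - target         # step 3: error vs. the correct value
        w -= lr * err * x          # step 4: modify weights to reduce the error
        b -= lr * err

print(round(w, 2), round(b, 2))    # approaches 2.0 and 0.0
```

Gradient descent here plays the role that the Levenberg-Marquardt algorithm plays in the study itself: both adjust the weights in the direction that reduces the error.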
Unsupervised ANN learning
In contrast to supervised learning, in unsupervised learning the artificial neural network relies on itself to determine the optimal values of the weights by recognizing new patterns. Unsupervised learning is a more difficult teaching method, but it allows the study of more complex phenomena [5,6].
Reinforcement ANN learning
The reinforcement learning style means training the artificial neural network so that the network system takes the optimal behavior or path under specific conditions; that is, the training set contains, in addition to the inputs, some random output values and reinforcement values for these outputs that determine the behavior of the network. Learning does not take place through training data alone, but rather through experience reinforced with specific values [7].
Table 1. Comparison of characteristics between supervised, unsupervised and reinforcement learning

| | Supervised Learning | Unsupervised Learning | Reinforcement Learning |
|---|---|---|---|
| Concept | Repeatedly reducing error values between computed outputs and known correct values | Training the network with undirected data without correct values | Dependence on the interaction of the artificial neural network with the input data |
| Dataset | Inputs, correct outputs | Inputs | Inputs, some outputs, reinforcement values |
| Complexity | Simple | Complex | Complex |
| Algorithms | Linear Regression, Naive Bayes, Decision Tree, Support Vector Machines, Logistic Regression, etc. | k-means clustering, c-means clustering, Association Rules | Q-learning, Temporal Difference (TD), Deep Adversarial Networks, State-action-reward-state-action (SARSA) |
| Target | Finding out the outputs | Detecting patterns and characteristics of input data | Finding the optimum behavior |
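As a minimal sketch of the reinforcement idea summarized in Table 1, the Q-learning update rule can be written in a few lines (the two-state, two-action problem, the reward and the coefficients are illustrative assumptions, not part of the reforming study):

```python
# Q-table for a toy problem with two states and two actions
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma = 0.5, 0.9  # learning rate and discount factor

def update(state, action, reward, next_state):
    # Move Q(state, action) toward reward + discounted best future value
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# The environment returns reinforcement value 1.0 for taking action 1 in state 0
update(0, 1, reward=1.0, next_state=1)
print(Q[(0, 1)])  # 0.5
```

The reinforcement value enters the update directly, rather than as a labeled training pair, which is the distinction the table draws between this style and supervised learning.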
2. Methodology
Data collection
In this work, the data of Chen et al. [8] were used. The data were selected and separated into five groups. The first group contains data on the change in reaction rates, concentrations of the products, reaction yield and selectivity as functions of the reaction temperature. The second group includes data for studying the change in reaction yield, selectivity and methane conversion ratio as functions of the reaction temperature and the steam-to-methane ratio. The third group includes data for studying the change in reaction yield, selectivity and methane conversion as functions of the reaction temperature and the inlet velocity of the reactants. The fourth group includes data for studying the change in the molar fractions of the products and the reaction yield as functions of the inlet velocity of the reactants and the reactor length. The fifth group includes data for studying the change in reaction yield, selectivity and methane conversion as functions of the reaction temperature and the preheating temperature of the reactants.
Artificial Neural Networks development
Five artificial neural networks were developed to study the effect of operational parameters (temperature, steam-to-carbon ratio, preheating temperature) and design parameters (inlet velocity, reactor length) on process outputs such as product concentrations, yield and molar fractions.
The Levenberg-Marquardt algorithm was chosen as the training algorithm for the networks because of its simplicity and shorter training time, despite its larger memory requirements. Under this algorithm, training of the neural network stops automatically when generalization stops improving, as indicated by an increase in the mean squared error (MSE) of the validation data [9].
The first network studies the effect of reaction temperature on reaction rate, carbon monoxide, carbon dioxide and hydrogen concentrations, hydrogen production yield, carbon monoxide selectivity, and methane conversion ratio. It has been suggested that there are 3 neurons in the hidden layer of the network.
Figure 3. The first developed network and its parts
The second network studies the combined effect of reaction temperature and steam-to-carbon ratio on hydrogen production yield, carbon monoxide selectivity and methane conversion ratio; 5 neurons were proposed for its hidden layer. The third network studies the combined effect of the reaction temperature and the inlet velocity of the reactants on hydrogen production yield, carbon monoxide selectivity and methane conversion; 4 neurons were proposed for its hidden layer. The fourth network studies the effect of the inlet velocity and the reactor length on hydrogen production yield and the molar fractions of carbon monoxide and carbon dioxide in the product stream; 7 neurons were proposed for its hidden layer. The last network studies the effect of the preheating temperature of the reactant stream and the reaction temperature on hydrogen production yield, carbon monoxide selectivity and methane conversion ratio; 3 neurons were proposed for its hidden layer. The structure of these networks, including the number of neurons in the hidden layer, was selected by choosing the network with the lowest MSE value.
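Selecting the hidden-layer size by the lowest MSE amounts to a simple search. In the sketch below the validation MSE values are invented placeholders standing in for real training runs (in practice each candidate network would be trained and validated):

```python
def validation_mse(n_hidden):
    """Stand-in for training an ANN with n_hidden neurons and returning its
    validation MSE; these numbers are hypothetical, not results of this study."""
    placeholder = {1: 2.1e-2, 2: 9.0e-3, 3: 1.1e-3, 4: 3.5e-3, 5: 6.0e-3}
    return placeholder[n_hidden]

# Keep the hidden-layer size with the lowest validation MSE
best = min(range(1, 6), key=validation_mse)
print(best)  # 3
```

The same loop, run once per network with real training in place of the placeholder, yields the topologies reported in Table 2.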
Figure 4. The developed networks: (a) the second, (b) the third, (c) the fourth, (d) the fifth ANNs.
The validation data were set at 15% of the dataset, the test data at 15%, and the training data at 70% of the studied dataset.
The performance of the networks was evaluated using the mean squared error:

MSE = (1/n) · Σᵢ₌₁ⁿ (ŷᵢ − yᵢ)²

where:
• i represents the index of the sample in the dataset,
• ŷᵢ is the predicted outcome,
• yᵢ is the actual value, and
• n is the number of samples in the dataset.
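The 70/15/15 split and the MSE metric can be sketched as follows (the sequential split is a simplification; in MATLAB the division is typically random):

```python
def mse(targets, outputs):
    # Mean squared error over n samples
    n = len(targets)
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / n

def split(dataset, train_pct=70, val_pct=15):
    # Sequential 70/15/15 split into training, validation and test subsets
    n = len(dataset)
    i = n * train_pct // 100
    j = i + n * val_pct // 100
    return dataset[:i], dataset[i:j], dataset[j:]

train, val, test = split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
print(mse([1.0, 2.0], [1.5, 2.5]))      # 0.25
```

Integer arithmetic is used for the split boundaries so that the subset sizes come out exact rather than drifting with floating-point rounding.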
Performance of the ANNs
Generally, when analyzing the performance of the network during training, the mean squared error decreases as the number of epochs increases, but sometimes the error on the validation or test dataset begins to increase as a result of overfitting the training data; therefore, in the evaluation, the epoch with the lowest validation error is chosen as the best. In machine learning, the term "epoch" denotes the number of complete passes through the entire training dataset that the algorithm has made [10].
Figure 5 shows the performance of the developed networks. The optimal validation performance of the first ANN is 1.14×10⁻² at epoch 82, and the process ended at epoch 88. The second ANN has an optimal validation performance of 0.45534 at epoch 14, ending at epoch 20. The third ANN has an optimal validation performance of 3.17×10⁻² at epoch 12, ending at epoch 18. The fourth ANN has an optimal validation performance of 1.66×10⁻⁶ at epoch 93, ending at epoch 99. The last ANN has an optimal validation performance of 1.23×10⁻² at epoch 101, ending at epoch 107.
Error Histograms of the ANNs
The histogram shows the distribution of the computed (target − output) errors after training the network. The yellow line (the zero line) separates positive and negative error values. Positive error values indicate that the target value is greater than the output, and vice versa.
From Figure 6 it can be seen that the calculated error in the first network is centered at the zero line, with 34 instances in the training set, along with a number of negative errors. For the second network, most of the errors are concentrated in the zero-line area, in the range from −0.07015 to 0.07916. In the third network, most of the calculated errors lie in the range from −0.2351 to 0.1082, with between 8 and 27 instances. The same applies to the fourth network, whose errors are concentrated mostly in the range from −0.0096 to 0.001337. In the last network, the errors are concentrated at the value 0.00289, with 38 instances in the training set, and to a lesser extent toward the positive range.
Figure 5. Change of the network error during the ANNs training process: (a) the first, (b) the second, (c) the third, (d) the fourth, (e) the fifth ANNs.
Figure 6. Error histograms of the ANNs training process: (a) the first, (b) the second, (c) the third, (d) the fourth, (e) the fifth ANNs.
Training state of the ANNs
The direction and magnitude of the error gradient are determined during the training of a neural network and are used to update the network weights in the correct direction and by the appropriate amount. The lower the gradient value, the better the training and testing of the networks [10,11]. Figure 7-a shows that the final gradient value for the first network is 0.0026605 at epoch 88, and its mu value is very small, equal to 1×10⁻⁵. It can be noted that 5 validation errors occurred between epochs 0 and 10, and 6 between epochs 80 and 88.
It should be noted here that mu is a control parameter of the ANN training algorithm; variation of mu affects the error convergence. Fig. 7-b shows a final gradient value of 0.010754 at epoch 20 for the second network, an acceptable mu value of 0.001, and 6 validation errors after epoch 14. Fig. 8-a shows a final gradient value of 0.037606 at epoch 18 for the third network, a mu value of 0.0001, 6 validation errors after epoch 12, and a single error at epoch 2 at the beginning. Fig. 8-b indicates a smooth drop of the gradient to 1.20×10⁻⁶ at epoch 99 for the fourth network and a mu value of 1.00×10⁻⁸, with 6 errors at the end between epochs 90 and 99, 3 errors between epochs 70 and 80, and a single error at the beginning between epochs 0 and 10. Fig. 9 indicates that the gradient of the last network reached 0.011789 at epoch 107 after oscillatory behavior; its mu value reached 0.001 at the end of training, with several errors reaching a maximum of 6 between epochs 100 and 107.
Table 2. The basic information about the developed networks.

| Studied parameters | Network topology | MSE | R | Data set (Training / Validation / Testing) |
|---|---|---|---|---|
| f(T) = {r_CO, r_CO2, C_CO, C_CO2, C_H2, Y_H2, S_CO, X_CH4} | 1-3-8 | 3.56×10⁻⁴ | 1.000 | 5 / 1 / 1 |
| f(T, S/C) = {Y_H2, S_CO, X_CH4} | 2-5-3 | 7.65×10⁻¹ | 0.999 | 25 / 5 / 5 |
| f(T, v) = {Y_H2, S_CO, X_CH4} | 2-4-3 | 2.78×10⁻⁴ | 1.000 | 17 / 4 / 4 |
| f(v, L) = {y_CO2, y_H2, y_CO} | 2-7-3 | 1.23×10⁻⁶ | 0.999 | 73 / 16 / 16 |
| f(T0, T) = {Y_H2, S_CO, X_CH4} | 2-3-3 | 1.33×10⁻³ | 1.000 | 17 / 4 / 4 |
Figure 7. Training state plot of (a) first ANN and (b) second ANN
Figure 8. Training state plot of (a) third ANN and (b) fourth ANN
Figure 9. Training state plot of the fifth ANN
Regression of the ANNs
Figures 10-14 illustrate the regression coefficients for the training, validation and test procedures, that is, the agreement between the target and response variables. R-values are a statistical measure of how closely the data fit the regression model. In the regression plots, the "Target" values reflect the measured values, while the "Output" values indicate the predicted values. The R-values in the regression charts indicate the model's acceptable level of accuracy in both the training and validation procedures.
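The R-value reported in these plots is the correlation between targets and outputs; it can be computed directly (the target/output pairs below are hypothetical values chosen to lie close to the y = x line, not data from this study):

```python
import math

def pearson_r(targets, outputs):
    # Correlation coefficient between measured (target) and predicted (output) values
    n = len(targets)
    mt, mo = sum(targets) / n, sum(outputs) / n
    cov = sum((t - mt) * (o - mo) for t, o in zip(targets, outputs))
    st = math.sqrt(sum((t - mt) ** 2 for t in targets))
    so = math.sqrt(sum((o - mo) ** 2 for o in outputs))
    return cov / (st * so)

targets = [0.10, 0.40, 0.55, 0.80, 0.95]   # "measured" values (hypothetical)
outputs = [0.11, 0.39, 0.56, 0.79, 0.96]   # "predicted" values (hypothetical)
r = pearson_r(targets, outputs)
print(round(r, 3))  # close to 1
```

An R-value near 1 with a fitted line close to y = x is exactly what Tables 2 and 3 report for the developed networks.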
Figure 10. First ANN regression
Figure 11. Second ANN regression
Figure 12. Third ANN regression
Figure 13. Fourth ANN regression
Figure 14. Fifth ANN regression
Table 3. Linear regression functions of the developed ANNs.
| ANN | Training | Validation | Test | All |
|---|---|---|---|---|
| 1 | y = x + 9.7×10⁻⁵, R = 1.00 | y = x + 0.0042, R = 0.99 | y = 0.99x + 0.03, R = 0.99 | y = x + 0.005, R = 0.99 |
| 2 | y = x + 0.0028, R = 1.00 | y = x + 0.014, R = 0.99 | y = x + 0.064, R = 0.99 | y = x + 0.0033, R = 0.99 |
| 3 | y = x + 0.0037, R = 0.99 | y = x + 0.053, R = 0.99 | y = 0.99x + 0.076, R = 0.99 | y = x + 0.00046, R = 0.99 |
| 4 | y = x + 1.0×10⁻⁵, R = 0.99 | y = x + 0.00024, R = 0.99 | y = x + 0.00015, R = 0.99 | y = x + 6.7×10⁻⁵, R = 0.99 |
| 5 | y = x + 0.0014, R = 1.00 | y = x + 0.0076, R = 0.99 | y = x + 0.0024, R = 0.99 | y = x + 0.0012, R = 0.99 |
Conclusion
Artificial neural networks were developed to simulate the process of steam reforming of methane in fluidized catalytic bed systems over a nickel-based catalyst, using operational and design parameters to predict the main parameters of the catalytic process.
The results of this analysis revealed a strong correlation (R-value) between the original measured data and the forecasted output variables, with values ranging from 0.999 up to 1.000. The model proposed in this study therefore has adequate applicability and reliability. ANN thus proves to be a beneficial performance evaluation tool for chemical engineering researchers in predicting operational and design parameters for both academic and practical purposes.
References
1. Vo N.D. et al. Combined approach using mathematical modelling and artificial neural network for chemical industries: steam methane reformer // Applied Energy. 2019. Vol. 255. P. 113809.
2. Cavalcanti F.M. et al. A catalyst selection method for hydrogen production through Water-Gas Shift Reaction using artificial neural networks // Journal of Environmental Management. 2019. Vol. 237. P. 585-594.
3. Baughman D.R., Liu Y.A. Neural networks in bioprocessing and chemical engineering. Academic Press, 2014.
4. Kim P. MATLAB deep learning: with machine learning, neural networks and artificial intelligence. 2017.
5. Graupe D. Principles of artificial neural networks. World Scientific, 2013. Vol. 7.
6. Gurney K. An introduction to neural networks. CRC Press, 1997.
7. Sutton R.S., Barto A.G. Reinforcement learning: an introduction. MIT Press, 2018.
8. Chen K. et al. The intrinsic kinetics of methane steam reforming over a nickel-based catalyst in a micro fluidized bed reaction system // International Journal of Hydrogen Energy. 2020. Vol. 45, No. 3. P. 1615-1628.
9. Arcotumapathy V. Artificial neural networks assisted catalyst design and optimisation of methane steam reforming: PhD thesis. UNSW Sydney, 2013.
10. Muravyova E.A., Timerbaev R.R. Application of artificial neural networks in the process of catalytic cracking // Optical Memory and Neural Networks. 2018. Vol. 27. P. 203-208.
11. Rahman P.A., Muraveva E.A., Sharipov M.I. Reliability model of fault-tolerant dual-disk redundant array // Key Engineering Materials. 2016. Vol. 685. P. 805-810.