
Federal State Autonomous Educational Institution for Higher Education National Research University Higher School of Economics

Faculty of Computer Science

Educational Program "Applied Mathematics and Information Science"

Bachelor's Thesis

Bot detection in social media

Sumekenov Akhmad

Moscow, 2020

Contents

  • Introduction
  • Literature review
  • Dataset
  • Methods
  • Experiments
  • Conclusion
  • References
  • Appendix

Abstract

The purpose of this work is to classify and detect bots in the social network VKontakte. Existing methods are compared, and classification algorithms that use, among other features, the text of comments are implemented and tested. The implemented method achieves high accuracy.

Introduction

Motivation

Social networks have become a popular venue for people to spend their free time. In Russia, the most popular social network is VKontakte, with 97 million monthly active users. One common activity on social networks is discussing news and other events. A social network gives people the opportunity to express their opinions and to see the opinions of others, which in turn shapes the opinions of its users.

In recent years, it has become known that many comments on social media posts discussing social and political news are written not by real users but by "bots" or "trolls", i.e. people with fake accounts organized to promote a certain point of view among social media users. These fake accounts are operated by people hired by certain organizations, who write comments with the intent to influence public opinion.

Thus, the ability to identify whether a comment and the user who created it are fake is very important for drawing conclusions from social media discussions.

My work has the purpose of classifying a comment and its user using two kinds of "indicators": the text itself and the account metadata, namely the online activity of the comment's author in social network groups related to news and politics, together with their user ID.

Tasks

Formally, the task of classifying a comment and its user is defined as follows:

· There is a set of all comments $C = \{c_1, \dots, c_N\}$. Each comment is identified by a unique ID (comment_id).

· Each comment has its own creator, and each creator has one or more comments. The set of creators, or the set of users, is denoted $U = \{u_1, \dots, u_M\}$. Each user is identified by a unique user_id.

· Each user is either a bot or a real user; we encode this with a label $y(u_i) \in \{0, 1\}$. For convenience we define the function $u : C \to U$ that maps a comment to its creator.

· If a comment is fake, it is automatically deduced that its user is fake too, and vice versa. The two sets of labels are therefore connected in the following way: $y(c_j) = y(u(c_j))$.

· The task is to infer $y$ from the available data.

The task is to create a model that automatically assigns a user to one of the classes 0 or 1, i.e. an ordinary user or a bot, given data on the user's activity or the comment text. We consider three approaches to this classification:

· The first approach is based on the text of the comment. Here we do not look at the user who created the comment: the model takes $c_j$ as input and outputs a predicted label $\hat{y}(c_j)$.

· The second approach is based on user activity and metadata, where activity refers to the number of comments left by the user in different groups related to news and politics. (These groups are fixed and listed below.)

· The third approach combines the two above: we build a model that considers both the text of the comment and the activity and metadata of its user.

Literature review

There is a large body of literature on detecting bots on social networking sites, especially Twitter and Facebook. Many articles describe how to identify fake accounts created to influence public opinion, and thereby the public and political life of the respective countries. For example, foreign interference in the U.S. elections in 2016 and 2018 has been widely investigated.

Machine learning has also been applied in these investigations. For example, the U.S. media company NBC, together with the Committee investigating foreign interference in the U.S. Congress, released a dataset on Kaggle, a machine learning competition platform, containing "tweets" created by supposedly fake accounts sponsored by a foreign government.

NBC website screenshot

The first work I would like to consider is the article [1] about Twitter account classification. This work dealt with three issues:

1. Is it possible to create an algorithm that would recognize such "fake accounts" accurately enough?

2. Is it possible to accurately generate such "fake accounts" to increase the sample for training subsequent algorithms?

3. The third goal is to implement the "synthetic minority oversampling" technique (SMOTE), which is essentially an oversampling method in which minority-class samples are synthesized rather than drawn from the existing dataset.

The dataset on which the authors of the article conduct experiments was taken from the work [2] and consists of a combination of different datasets composed of tweets from fake accounts and tweets from real accounts (also called "genuine tweets"). In that paper, the authors used two "levels" of classification: account-based and tweet-based. The same classification levels will be used in our work, with some changes. For example, in that paper the account-based approach means classification based on the data in the profile description on Twitter. In our task, this approach would be unsuccessful, because most accounts on VKontakte are "closed", so it is impossible to view user profile metadata. Our approach instead uses user activity in a specific set of VKontakte groups and the user's ID as features in the account-based classification.

The authors used many different classification algorithms for the account-based approach and an LSTM model for the tweet-based approach. The best classification algorithms reached 99.81% accuracy for the account-based approach and 96.43% for the tweet-based approach.

Another work [3] is centered around a behavior-enhanced deep model. The authors of the article treat the dataset of tweets not as a set of texts but as a time series consisting of these texts. They use this time series as features and build various deep neural networks on top of them. The dataset for these experiments was taken from [4]. The authors turn the plain set of texts into a time series as follows: two additional features are added to the plain-text features, the timestamp of the tweet and the posting type, and the concatenated features are fed into the deep neural models as input. The best result among all models was an F1-score of 87.32%. The score of the best model in this work is lower than in the previous one because the datasets, and the way they were created, are fundamentally different. In the first work, the dataset consists of samples whose labels were all manually confirmed. In the second, the authors used a dataset of Twitter accounts drawn from Twitter's ban history; Twitter follows a "precision first" policy, so there is a subset of accounts which are bots but are not labelled as such.

Dataset

Data collecting

To start experimenting with classification methods, we had to collect data, i.e. comments on posts in social network groups. In total, we collected data from 43 VK groups related to politics and news. Some of them are listed below:

· «Медуза», vk.com/meduzaproject

· «Новая Газета», vk.com/novayagazeta

· «Агентство ТАСС», vk.com/tassagency

· «Известия IZ.RU», vk.com/izvestia

· «Лентач», vk.com/lentach

In this work we treat such collections as sets of entities; in addition to the sets of users and comments, we have a set of groups, which we call $G$. Each group is identified by a unique ID (group_id).

To label the dataset, we used an external source that provides a dataset of bot IDs. That is, we were given a subset $B \subset U$ of users known to be bots.

We also had information from an external source on the percentage of bot activity in different groups. We used this information to make our sample representative of the real situation.

The VK API was used for data collection. With its help, the unique IDs (post_id) of the latest posts at the time of data collection (April 2020) were obtained. To get the comments themselves through the VK API, one must specify the post ID and the group ID, i.e. post_id and group_id. With this information, we went through the 50,000 latest posts in the 43 groups and received about 369,000 comments. Labelling a comment's user as fake or real is very simple: it is determined by whether the comment's creator belongs to the set of bots, $y(u_i) = 1 \Leftrightarrow u_i \in B$.
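As an illustration, here is a minimal sketch of the collection loop using the VK API methods wall.get and wall.getComments (the token is a placeholder, and pagination and rate-limit handling are omitted):

```python
import requests

API = "https://api.vk.com/method"
TOKEN = "YOUR_TOKEN"  # placeholder: a valid VK access token is assumed
V = "5.103"           # API version current in early 2020

def recent_post_ids(group_id, count=100):
    """IDs of the latest posts on a group wall (group walls use negative owner IDs)."""
    r = requests.get(f"{API}/wall.get", params={
        "owner_id": -group_id, "count": count,
        "access_token": TOKEN, "v": V,
    }).json()
    return [post["id"] for post in r["response"]["items"]]

def post_comments(group_id, post_id, count=100):
    """(author user_id, text) pairs for comments under one post."""
    r = requests.get(f"{API}/wall.getComments", params={
        "owner_id": -group_id, "post_id": post_id, "count": count,
        "access_token": TOKEN, "v": V,
    }).json()
    return [(c["from_id"], c.get("text", "")) for c in r["response"]["items"]]
```

A comment is then labelled by checking whether its author's user_id belongs to the bot set $B$.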

Recall that if a user is a bot, then by definition all of their comments are also considered bot comments. As a result, only 4.1% of the 369,000 comments turned out to be bot comments, i.e. the dataset is unbalanced.

The following preprocessing was performed over the dataset:

1. All numbers and delimiters were removed from the text.

2. Comments with fewer than 4 words were eliminated. This threshold was chosen without any systematic analysis and can be changed in either direction.
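A minimal sketch of these two steps (the exact delimiter set is not specified above, so the regular expressions below are an assumption):

```python
import re

def preprocess(comments, min_words=4):
    """Strip numbers/delimiters, then drop comments shorter than min_words."""
    cleaned = []
    for text in comments:
        text = re.sub(r"\d+", " ", text)        # remove numbers
        text = re.sub(r"[^\w\s]+", " ", text)   # remove delimiters/punctuation
        text = re.sub(r"\s+", " ", text).strip()
        if len(text.split()) >= min_words:
            cleaned.append(text)
    return cleaned
```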

After analyzing the text, word clouds were constructed for comments from bots and comments from real accounts, excluding overly frequent words:

Bots word cloud

Real accounts word cloud

An analysis of the number of words per comment shows no visible difference between the two sets:

One more indicator of the difference between the two sets of comments is the relative difference in how frequently the same words are used. We took words used more than 10 times in both subsets and compared their usage by dividing the number of times a word was used in the first set by the number of times it was used in the other. Sorting this ratio yields the words whose usage differs most. Below we show the results; the number given after each word indicates how much more often it is used in comments from real accounts. Consequently, a number far below one means the word is used far more frequently in comments from bots:
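A sketch of this computation (raw counts are divided as described above; normalizing by subset size would be a natural refinement):

```python
from collections import Counter

def frequency_ratios(real_texts, bot_texts, min_count=10):
    """For words frequent in both subsets: count in real comments / count in bot comments."""
    real = Counter(w for t in real_texts for w in t.split())
    bot = Counter(w for t in bot_texts for w in t.split())
    shared = [w for w in real if real[w] > min_count and bot[w] > min_count]
    ratios = {w: real[w] / bot[w] for w in shared}
    # ascending order: ratios far below one are words used far more often by bots
    return sorted(ratios.items(), key=lambda kv: kv[1])
```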

Methods

In our work we used several algorithms for text-based classification. The first set consists of the classical SVM, LogisticRegression, XGBoost and RandomForestClassifier models on top of the TF-IDF text vectorization algorithm.

The Bag of Words (BOW) algorithm is used to represent text as a numerical vector for subsequent use in machine learning algorithms. Each component of the vector is the number of times a particular word was used in the text.

An example of BOW representation
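A quick illustration of such a representation using sklearn's CountVectorizer (an assumed implementation choice):

```python
from sklearn.feature_extraction.text import CountVectorizer

texts = ["the cat sat on the mat", "the dog sat"]
bow = CountVectorizer()
X = bow.fit_transform(texts)          # sparse count matrix
print(bow.get_feature_names_out())    # ['cat' 'dog' 'mat' 'on' 'sat' 'the']
print(X.toarray())                    # [[1 0 1 1 1 2], [0 1 0 0 1 1]]
```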

TF-IDF is an advanced version of the BOW algorithm. The TF-IDF method weighs the importance of an individual word in a given comment against the frequency of occurrence of this word across all comments:

$$\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \log \frac{N}{\mathrm{df}(t)},$$

where $\mathrm{tf}(t, d)$ is the number of occurrences of term $t$ in comment $d$, $\mathrm{df}(t)$ is the number of comments containing $t$, and $N$ is the total number of comments.

The TF-IDF algorithm is one of the mainstream ways to assign a weighting factor to words depending on their relative frequency. It is often used in various NLP fields such as text mining, text classification and document analysis. There are multiple variations of TF-IDF in practice, but we will use the one provided by the sklearn Python package.

Sklearn's TF-IDF implementation has multiple hyperparameters. Leaving all others at their default values, we set lowercasing to False and consider only words with an overall occurrence count of more than 3.

After vectorizing the text, these vectors can be used in machine learning classification models. There are two important factors to consider when using TF-IDF vectors as word embeddings. First, the TF-IDF vectorizer should be fitted on the training dataset, and the test dataset should be transformed with that same vectorizer. Second, the resulting dataset is a sparse matrix, which is stored and manipulated in a specific way to avoid excessive memory and computation. The dataset has size $N \times D$, where $N$ is the number of samples and $D$ is the size of the vocabulary derived from the training dataset.
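A sketch of this setup; mapping the "more than 3 occurrences" cutoff to min_df (which counts documents rather than total occurrences) is an assumption:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(
    lowercase=False,  # keep the original casing
    min_df=4,         # assumed encoding of the occurrence cutoff
)
X_train = vectorizer.fit_transform(train_texts)  # fit on the training set only
X_test = vectorizer.transform(test_texts)        # reuse the same vocabulary and IDF weights
# X_train is a sparse N x D matrix with D = len(vectorizer.vocabulary_)
```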

It should be noted that in our work TF-IDF vectors are used only in the non-neural-network models; the neural network models use pre-trained embeddings of considerably smaller size, described in the neural network models section.

Support Vector Machine.

SVM is usually used as a non-probabilistic binary classifier, or as a multi-label classifier in a one-vs-all or one-vs-one scheme. An SVM takes a set of labelled points in an $n$-dimensional space and finds a hyperplane of dimension $n-1$ that minimizes a certain loss. In the separable case it finds the hyperplane whose minimal distance to any point (the margin) is maximal. In our case, the space is the space of comments after the TF-IDF transformation. Advantages include the simplicity of the training and inference process and the interpretability of results. As for disadvantages, growth in dimensionality dramatically increases training time, and because of the underlying nature of the algorithm it has limited capacity outside the linear domain. Mathematically, the SVM classifier solves the following problem:

$$\min_{w,\,b}\; \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{N} \max\bigl(0,\; 1 - y_i(w^\top x_i + b)\bigr),$$

where $w$ are the weights, $x_i$ are the features and $y_i \in \{-1, +1\}$ are the targets; $C$ is a regularization parameter.

Example of a separating hyperplane on a randomly generated dataset
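For reference, a minimal sketch of how such a figure can be produced with sklearn (synthetic data; plotting code omitted):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # two random clusters
clf = LinearSVC(C=1.0).fit(X, y)

# The separating hyperplane is w·x + b = 0; in 2D it can be drawn as
# x2 = -(w[0] * x1 + b) / w[1]
w, b = clf.coef_[0], clf.intercept_[0]
```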

Logistic Regression.

Logistic regression is by nature a probabilistic classifier which decides the class of a sample by applying a threshold to the logistic function:

$$P(y = 1 \mid x) = \frac{1}{1 + e^{-(w^\top x + b)}}.$$

Logistic regression is by far one of the most popular machine learning algorithms for classification. Not only can it be constructed on top of the original features, it is also often used as a classification algorithm over features from more complex models such as neural networks and various boosting models (XGBoost, CatBoost). Its downsides are the assumption of a linear relationship between features and target, and its tendency to overfit in high-dimensional spaces.

Example of 3-class logistic regression on the Iris dataset

Random Forest Classifier.

The Random Forest algorithm is an ensemble method based on decision trees. While individual decision trees lack stability and tend to overfit aggressively, random forests provide a way to tackle these problems. The algorithm fits multiple trees on random subsets of the training data and then averages (or otherwise weights) their predictions. It differs from the usual bagging technique in that each tree is also fitted on a random subset of the features, which is sometimes called "feature bagging". It is often used in scientific articles because of its relative simplicity and interpretability.

XGBoost Classifier.

The XGBoost classifier is a parallel tree boosting algorithm optimized to run in a distributed and efficient way. It is most often used on tabular datasets, but also as a classification method on top of the last-layer features of neural networks. Gradient boosting is a powerful algorithm, used in many top solutions on the Kaggle competition platform. Simply put, it is an ensemble method in which decision trees, called weak learners, are fitted to the errors of the ensemble of previous weak learners. On the other hand, the XGBoost classifier requires tuning many hyperparameters, which is a disadvantage compared to the previous algorithms.

Text-level classification

After TF-IDF vectorization, the proposed set of algorithms involves the following steps:

· To balance the dataset, undersampling is performed: the size of the dataset is reduced to about 30,000 samples.

· Then all 4 algorithms (SVM, LogisticRegression, RandomForestClassifier, XGBoost) are trained and tested with 5-fold cross-validation.

The second set of algorithms includes deep neural networks trained on pre-trained BERT embeddings. These pre-trained Russian word embeddings were taken from the DeepPavlov project.

· The first neural network includes a pre-trained embedding layer, several LSTM (Long Short-Term Memory) layers and a fully connected layer at the end. Using a tool for working with graphs and their visualization, we built a visualization of the network:

· The second neural network includes a pre-trained embedding layer, several convolutional layers followed by a MaxPool layer, and a fully connected layer at the end.
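A minimal PyTorch sketch of the first (recurrent) architecture; the hidden size and layer count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Embedding -> LSTM -> fully connected head producing a bot/real logit."""
    def __init__(self, embeddings: torch.Tensor, hidden: int = 128, layers: int = 2):
        super().__init__()
        # pre-trained embedding matrix (e.g. derived from Conversational RuBERT), kept frozen
        self.emb = nn.Embedding.from_pretrained(embeddings, freeze=True)
        self.lstm = nn.LSTM(embeddings.size(1), hidden,
                            num_layers=layers, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.emb(token_ids)              # (batch, seq_len, emb_dim)
        _, (h, _) = self.lstm(x)             # h: (layers, batch, hidden)
        return self.fc(h[-1])                # (batch, 1) logit
```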

Classification with user activity and metadata

It should be noted that the size of the dataset is significantly reduced when classifying based on user activity, because there are many more unique comments than unique users. Moreover, due to undersampling, the size of our dataset shrinks to 1268 samples. The feature vector in our models is the number of comments the user left in each group of our set, i.e. the vector has size 43. The only metadata we have access to is the user ID. We tried 4 algorithms: RandomForestClassifier, SVM, LogisticRegression and XGBoost.
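A sketch of how such activity vectors can be assembled (the comments DataFrame schema and the group_ids list are assumptions):

```python
import pandas as pd

# comments: DataFrame with columns [user_id, group_id], one row per comment
activity = (comments.groupby(["user_id", "group_id"]).size()
                    .unstack(fill_value=0)                       # rows: users, columns: groups
                    .reindex(columns=group_ids, fill_value=0))   # fixed order of the 43 groups
X = activity.values  # shape (n_users, 43)
```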

Classification with combined features

After the separate experiments, we combine the two kinds of features in order to improve model quality relative to both the models based on user activity and metadata and the models based on text features.

Experiments

Classification with text features. Non-neural network algorithms

First, we divided all comments into train and test datasets. We fitted the TF-IDF algorithm on the train dataset and trained each classification algorithm with 5-fold cross-validation.
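The evaluation loop is standard; a sketch with one of the models (X_train, y_train are the TF-IDF features and labels from the split above):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X_train, y_train, cv=5, scoring="accuracy")
print(scores.mean())  # the "Mean Accuracy" reported below
```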

Model Name               Mean Accuracy
LinearSVC                0.763254
LogisticRegression       0.765488
XGBoost                  0.702159
RandomForestClassifier   0.756309

As can be seen, the two best algorithms for this task are LinearSVC and LogisticRegression. Their similar accuracies are explained by the fundamentally linear nature of both classifiers. Having chosen the logistic regression algorithm, we checked additional metrics on the test sample, such as precision, recall and F1-score:

As a result, we see that the model has not been overfitted, reaching 77% accuracy on the test sample.

Classification with text features. Neural Networks

For training neural network models, we took two simple architectures.

The first neural network consists of the following basic layers: an embedding layer, an LSTM layer and a fully connected layer at the end. The second neural network also consists of an embedding layer at the beginning and a fully connected layer at the end, but the "core" is changed from the LSTM layer to a series of convolutional layers followed by a MaxPool layer.

To improve the quality of the classification, we took pre-trained embeddings from the open DeepPavlov project, namely Conversational RuBERT, because online comments are presumably closer to the "free" style of the Russian language than to the official style of, say, Russian Wikipedia. These embeddings were used in both architectures.

Having trained both architectures, we got the following model performance on the test sample:

Model                           Accuracy
RNN: Embedding + LSTM + FC      78.61%
CNN: Embedding + [Conv] + FC    76.01%

As we can see, the recurrent model scored higher than the convolutional and classical models. This can be explained by the fact that text is a sequence, so a recurrent model is better suited to this type of problem.

Classification with user activity

We conducted experiments with the 4 classification algorithms on user activity features without any preprocessing, training the models with 5-fold cross-validation. The results are shown below:

Model Name               Mean Accuracy
LinearSVC                0.737384
LogisticRegression       0.739759
XGBoost                  0.714892
RandomForestClassifier   0.777601

Note that the highest accuracy is achieved by the RandomForest model, which is explained by the nature of the features: the trees analyze the comment counts and decide on the type of profile. After training RandomForest on the training dataset, let us look at the resulting metrics of this model on the test sample:

We see that the model based on profile activity has roughly the same accuracy as the neural network model trained on text features.

Classification with user activity and their ID

Besides user activity, we can use the metadata of the user account. Unfortunately, a considerable share of accounts in the VK social network are closed from view (also called "private" accounts), and their metadata consists only of the account ID. To our advantage, IDs carry valuable latent information about the account's registration date. It is known that in the VK social network the user ID is strictly increasing with respect to registration time. While we cannot obtain the explicit date of registration, the ID tells us whether one account was registered before or after another. Therefore, this meta-feature is expected to be of great value to a bot detection algorithm.

We propose the following method of using this feature (a sketch follows the list):

1. Fit bucketizer on training data IDs, then transform both training and test data IDs.

2. Concatenate new features with account activity vectors.

3. Train the 4 classification algorithms used previously on the concatenated features with 5-fold cross-validation; the bucketizer is refitted on the corresponding folds each time.
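A sketch of this procedure using sklearn's KBinsDiscretizer as the bucketizer (the bin count, encoding, and the activity_train/activity_test variables are illustrative assumptions):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

# 1. Fit the bucketizer on training IDs, transform both splits
bucketizer = KBinsDiscretizer(n_bins=20, encode="onehot-dense", strategy="quantile")
id_train = bucketizer.fit_transform(np.asarray(train_ids).reshape(-1, 1))
id_test = bucketizer.transform(np.asarray(test_ids).reshape(-1, 1))

# 2. Concatenate with the 43-dimensional activity vectors
X_train = np.hstack([activity_train, id_train])
X_test = np.hstack([activity_test, id_test])
```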

Results are given below:

Model Name               Mean Accuracy
LinearSVC                0.854894
LogisticRegression       0.860393
XGBoost                  0.873225
RandomForestClassifier   0.880891

As seen from the table, the accuracies are significantly higher than before. Training the RandomForest classifier on the whole training dataset, we get the following metrics on the test dataset:

Combining account-level features and text features

The next logical step is to combine account-level and text features to create a more accurate classifier for comments. A sample from the dataset is now constructed by concatenating the TF-IDF embedding of the comment text with the account-level features of the user who left the comment. The peculiarity of this situation is that the account label (bot or real) automatically yields the comment label and vice versa. Thus, if we split the dataset randomly without any additional care, comments by the same account may end up in both the training and test datasets, which contradicts the principle of the train-test split, because those comments share the same label a priori. We therefore suggest a more elaborate train-test split, presented in the diagram below.

The main idea is:

1. Split account IDs into training and test sets

2. The train dataset then consists of comments left by users whose IDs are in the training set; the test dataset is constructed in the same way.
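A sketch of this grouped split using sklearn's GroupShuffleSplit (the test fraction and the X, y, user_ids variables are illustrative assumptions):

```python
from sklearn.model_selection import GroupShuffleSplit

# groups = the author user_id of each comment: no author appears in both splits
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=user_ids))
```

With this split, the models achieve the following accuracies: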

Model Name               Accuracy
LinearSVC                0.871657
LogisticRegression       0.841488
XGBoost                  0.871035
RandomForestClassifier   0.920891

The best model in this case, as in the previous ones, is the RandomForest classifier. Other metrics for this model are provided below:

Notably, the model has high precision for class 1 and high recall for class 0. This means that samples the model labels 1 (i.e. comments it decides are bot-originated) are almost certainly bot-originated, but the model fails to catch some other bot-originated comments.

Since the RandomForest classifier can yield class probabilities, we are able to change the decision threshold. By lowering the decision threshold, we can make the model reach higher accuracy at the expense of precision for class 1. The first model (with the default threshold) has the advantage when a "precision-first" policy is in place; as mentioned before, such a policy is used by Twitter when banning suspicious accounts. The second model (with the lowered threshold) is more relevant when accuracy is the main metric; its metrics are shown below.
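For completeness, the thresholding step itself is minimal (the threshold value below is an illustrative assumption, not the value used in the experiments):

```python
proba = rf.predict_proba(X_test)[:, 1]  # rf: the fitted RandomForestClassifier
threshold = 0.3                          # lowered from the default 0.5
pred = (proba >= threshold).astype(int)  # 1 = bot-originated comment
```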

Conclusion

In the course of this work, the task of classifying comments at the text and user levels was considered. Methods used in related research were studied and applied. Various methods of text vectorization were employed. As a result, several comment classification models were obtained:

1. Text level classification model with TF-IDF embeddings and non-neural network models

2. Text level classification model with BERT embeddings and neural network architecture: RNN-based and CNN-based

3. Account level classification with non-neural network models. Features include activity of account and its ID

4. Model based on concatenated features. These features combine text-level and account-level features.

During this work, the dataset of VK commentaries was created from scratch and can be used for continued research on this topic.

As for future research, the next steps include treating the comments of each user as a time series and including timestamps and time differences between comments of the same user as new features. Also, in this work we have not used comment attributes such as attached photos, attached links, and whether the comment was an original comment or a reply to someone else. Another area of research is the "likes" attribute, which provides the IDs of the users who "liked" a comment. There is a strong hypothesis that these attributes may significantly improve model performance.

References

[1] Kudugunta, S., & Ferrara, E. (2018). Deep neural networks for bot detection. Information Sciences, 467, 312-322.

[2] Cresci, S., Di Pietro, R., Petrocchi, M., Spognardi, A., & Tesconi, M. (2017). The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race. Proceedings of the 26th International Conference on World Wide Web Companion.

[3] Cai, C., Li, L., & Zeng, D. (2017). Behavior enhanced deep bot detection in social media. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI).

[4] Morstatter, F., Wu, L., Nazer, T. H., Carley, K. M., & Liu, H. (2016). A new approach to bot detection: Striking the balance between precision and recall. 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).

Appendix

1. Part of the code on GitHub: https://github.com/ahmados/RussianBotDetection

2. External source with bot IDs: https://gosvon.ru/
