ChatGPT in Newsrooms: Adherence of AI-Generated Content to Journalism Standards and Prospects for Its Implementation in Digital Media


Taras Shevchenko National University of Kyiv


Zagorulko D. I.


The study presents an analysis of the adherence of content generated by the language processing model ChatGPT to professional journalism standards, and assesses the prospects for its use in digital media. The content was assessed against six core standards: topicality, reliability, balance of opinion, separation of facts and opinions, accuracy, and completeness of information. To conduct the evaluation, we examined the technical capabilities of the chatbot, monitored practical cases of its use in the media, and conducted an experiment to assess compliance with the standard of balanced reporting.

During the experiment, the chatbot was tasked with characterizing figures from politics, culture, and sports who are subjects of public debate and varying perspectives; for each figure, three questions with different tones (neutral, positive, and negative) were posed. The results of the experiment revealed that ChatGPT tends to generate biased content that follows the context of the user's query rather than respecting the rule of balanced reporting. Only 48% of the generated texts met the standard of balance of opinion. Additionally, we found unevenness in adhering to this standard between content about politicians and content about culture/sports figures, which is linked to peculiarities of the tool's internal moderation. With neutral and positive query tones, ChatGPT produced much more balanced texts about politicians than about figures from other spheres. Furthermore, the revealed lack of up-to-date information, the opacity of data sources, and the tendency to make up facts are also significant violations of professional journalism standards and barriers to using the tool in media without verification and correction by a human journalist.

However, despite these limitations, the availability of such a powerful tool with free access sets the vector for further prospects of using AI technologies in digital media. It is assumed that the chatbot's broad knowledge in various fields, as well as its ability to process and generate text content, can be used by journalists for rapidly preparing backgrounds for news, translating and correcting texts, generating headlines for narratives, and enhancing interactivity in digital media.

Key words: ChatGPT, artificial intelligence in media, journalism standards, interactivity in digital media, digital media content.

Problem Statement. Over the last decade, artificial intelligence has evolved from a futuristic concept into a practical tool that is actively used in a wide range of industries, including media. The New York Times, Associated Press, Forbes, and Bloomberg are among the pioneers in deploying AI capabilities in the news business. Media mainly use AI for data extraction, processing and analysis, speech recognition, fact-checking, and news report generation. Beyond that, artificial intelligence also opens up new perspectives for the evolution of interactivity in digital media, helping with personalization, interactive design, and interactive content. However, since implementing AI requires custom solutions for specific business tasks, it currently remains expensive even for the major players in the media market [1].

The launch of ChatGPT, a free AI-based chatbot, in late November 2022 garnered significant attention, attracting 100 million unique users within two months [2]. The tool developed by OpenAI understands users' queries and generates human-like text in response. It is pre-trained on a diverse set of internet sources and refined with human feedback. The chatbot is arguably the first highly developed artificial intelligence tool freely accessible to a diverse audience, including journalists, which has revived the discussion in both the scientific and professional communities regarding the prospects of AI in the media.

Analysis of Recent Research and Publications. The exploration of artificial intelligence in professional journalistic practice dates back to the 1990s. In 1999, J. Pavlik predicted that AI would significantly impact both the journalistic workflow and media content itself [3]. It is noteworthy that decades later, in 2023, Pavlik was also one of the first researchers to evaluate the potential impact of ChatGPT on media. He “interviewed” the chatbot, appreciated its deep knowledge in the field of media education, and concluded that despite its limitations the tool can help media professionals by “improving both the quality and efficiency of journalistic and media work” [4].

Changes in journalistic practices under the influence of artificial intelligence were studied by M. Broussard [5] and A. van Dalen [6]. N. Thurman [7] and S. Lewis [8] explored the possibilities and limitations of AI in the personalization of digital media content. The prospects of AI expanding interactivity in digital media were noted by T. Flew [9] and M. Hansen [10].

The need for journalists to embed “organizational, institutional, and professional values into the technologies that then drive news production” was emphasized by N. Diakopoulos [1]. However, the issue of applying ethical and professional journalistic values to content produced by artificial intelligence has so far received limited attention in scholarly research.

The objective of this research is to determine the adherence of ChatGPT-generated content to journalism standards. The criteria for adherence have been defined according to the list of core journalistic standards outlined by the Institute of Mass Information, namely topicality, reliability, balance of opinion, separation of facts and opinions, accuracy, and completeness of information [11]. These standards have gained widespread recognition in the media environment and are grounded in the editorial policies adopted by major media organizations, including the BBC, Reuters, and Agence France-Presse, among others.

To assess adherence to these standards, this study employed the general theoretical methods of analysis, synthesis, and comparison. The monitoring method was applied to analyze practical cases of the chatbot's application in media. To assess compliance with the standard of balance of opinion, an experiment involving a qualitative content analysis of ChatGPT's responses was also conducted.

Results and Discussion. The study revealed that content generated by ChatGPT has considerable issues in adhering to the professional standards of journalism. The subsequent sections present detailed results of the analysis for each of the aforementioned standards.

Topicality. The necessity of “promptly and topically covering events without the expense of other standards” of journalism [11] arises from the dynamic nature of news, particularly in digital media, where the speed of publication is an important factor in competitiveness and retention of audience attention. The current version of ChatGPT is limited in its ability to assist journalists in promptly covering the latest events: since it is a pre-trained tool, the chatbot has limited knowledge of the world and events after 2021. Consequently, using ChatGPT to cover current events is extremely challenging, in contrast to other AI systems such as Wordsmith, used by the Associated Press for automated content generation based on financial data, and Heliograf, which helps The Washington Post cover sports and political events. Rapid narrative generation is one of the primary objectives of AI deployed in media.

However, its vast knowledge across diverse fields makes ChatGPT a valuable alternative to classic search engines, potentially useful for journalists as an additional source of information. Instead of simply returning a list of hyperlinks, in response to a user's query the chatbot synthesizes available information and generates human-like text, which can be used for rapidly writing background sections for news pieces, speeding up news production and indirectly helping media professionals meet the topicality standard.

Reliability. Considering the problem of fake news and the speed at which false information spreads in the digital environment, the reliability of information in media is of crucial importance. Some researchers hope for a significant contribution of AI to solving this problem by automating the fact-checking process [12; 13]. During training, ChatGPT processed approximately 570 GB of textual information from the Internet, “including books, articles, website, and covering a broad range of topics such as news, Wikipedia, and fiction” [4]. To date, ChatGPT is one of the most extensively trained language processing programs; however, the large amount of processed information is also a disadvantage of the tool, since AI is not able “to draw conclusion from contradictory data” [14]. Without source checking and well-maintained databases, the tool is vulnerable to the negative influence of dubious information sources.

Moreover, like other language processing models, ChatGPT can be affected by “artificial hallucination” - constructing convincing but false information based on processed data [15]. The chatbot may invent terms it claims to be familiar with, or link to sources that do not exist [16; 17]. OpenAI, the developer of the chatbot, acknowledges the problem of writing “plausible-sounding but incorrect or nonsensical answers”, but notes that fixing the issue is challenging due to AI's inability to determine truthful sources [18].

This means that ChatGPT currently can help journalists neither with generating reliable content nor with fact-checking. On the contrary, the tool's responses themselves need additional verification, underscoring the need for human journalistic skills, primarily skepticism.

Balance of opinion. The list of limitations on ChatGPT's opening page mentions that the tool “may occasionally produce... biased content”, which contradicts the standard of balanced reporting upheld in journalism. Representing a range of opinions enables audiences to gain a broad understanding of an issue and make their own judgments. We conducted an experiment to assess whether ChatGPT's content aligns with this standard by posing queries on controversial issues that are subjects of public debate and varying perspectives, making it particularly important to uphold the standard of balance of opinion. To identify such topics, we used Wikipedia statistics. Given that it is a free user-edited encyclopedia, articles on controversial topics are constantly re-edited in a circular manner by both supporters of different opinions and Wikipedia moderators upholding its neutrality [19]. Thus, the frequency of article editing serves as a marker of its controversial nature.
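The selection step described above can be sketched in code. Wikipedia's public MediaWiki Action API (the `action=query`, `prop=revisions` endpoint) exposes revision histories from which per-article edit counts can be derived; the snippet below is only an illustrative sketch under that assumption - the helper function, article titles, and counts are hypothetical, not the study's actual tooling or data.

```python
# Sketch: using Wikipedia edit counts as a controversy proxy.
# The parameters follow the public MediaWiki Action API; the titles
# and revision counts below are illustrative, not the study's data.

def revision_query_params(title: str) -> dict:
    """Build MediaWiki API parameters for fetching an article's revision list."""
    return {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": "max",   # in practice, revisions are paged and counted
        "format": "json",
    }

def most_edited(edit_counts: dict, top_n: int = 20) -> list:
    """Return the top_n article titles, sorted by descending edit count."""
    return sorted(edit_counts, key=edit_counts.get, reverse=True)[:top_n]

# Hypothetical counts for three articles:
counts = {"Donald Trump": 35000, "Michael Jackson": 29000, "Example Village": 120}
print(most_edited(counts, top_n=2))  # the two most-edited (most "controversial") articles
```

In the study itself, the ready-made Wikipedia list of most-edited articles [19] served this purpose, so no API querying was strictly necessary.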

We selected the 20 most controversial figures from the list of the most edited Wikipedia articles, comprising 8 politicians (including Donald Trump, George Bush, and Barack Obama) and 12 culture and sports figures (such as Michael Jackson, Britney Spears, and The Undertaker). For each figure, we posed three questions to ChatGPT: the first question had a neutral tone (for example, “Describe the activities of Donald Trump”), the second had a positive tone (“Describe Donald Trump as a strong leader”), and the third had a negative tone (“Describe the negative consequences of Donald Trump's actions”).
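The query design (20 figures crossed with 3 tones, 60 prompts in total) can be sketched as follows. The figure list is truncated and the tone templates echo the example wordings quoted above; in practice the positive-tone wording would be adapted per figure, so treat the templates as illustrative assumptions.

```python
# Sketch: building the 20-figure x 3-tone query set.
# Templates follow the example wordings reported in the study.

TONE_TEMPLATES = {
    "neutral": "Describe the activities of {name}",
    "positive": "Describe {name} as a strong leader",  # adapted per figure in practice
    "negative": "Describe the negative consequences of {name}'s actions",
}

def build_queries(figures: list) -> list:
    """Cross every figure with every tone template."""
    return [
        {"figure": name, "tone": tone, "prompt": tpl.format(name=name)}
        for name in figures
        for tone, tpl in TONE_TEMPLATES.items()
    ]

figures = ["Donald Trump", "Barack Obama", "Michael Jackson"]  # 3 of the 20
queries = build_queries(figures)
print(len(queries))  # 3 figures x 3 tones = 9 queries
```

With the full list of 20 figures, `build_queries` would yield the 60 prompts analyzed in the experiment.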

We then applied the content analysis method to assess the content of the responses generated by ChatGPT. The simultaneous presence of references to both positive and negative evaluations of a figure's activities within a text was treated as the marker of balanced opinion. To ensure the purity of the experiment, each query was posed in a new dialogue, since the artificial intelligence mechanism takes conversation history into account when generating subsequent responses.

The experiment revealed that when the tone of the question was neutral, ChatGPT managed to maintain a balance of opinion in its response in 60% of cases. The markers ChatGPT used to present controversial information in a balanced manner included phrases such as “opinions are divided”, “...marked by controversy”, “opinions vary widely”, and “...continues to be a subject of debate and discussion”, among others. When the tone of the query was negative, ChatGPT's responses likewise maintained balance in 60% of cases, while for queries with a positive tone, only 25% of responses adhered to the balanced reporting standard (see Figure 1).

Fig. 1. Percentage of ChatGPT's responses containing markers of balanced reporting in accordance with the tone of the query
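A crude automated approximation of this coding step could look as follows. The marker list reuses the phrases quoted above; the study's actual coding was qualitative and manual (simultaneous positive and negative evaluations), so this keyword matcher is only a hedged illustration, not the method used.

```python
# Sketch: coding a response as "balanced" when it contains one of the
# hedging markers reported above. Matching is deliberately simple and
# only approximates the study's qualitative content analysis.

BALANCE_MARKERS = [
    "opinions are divided",
    "marked by controversy",
    "opinions vary widely",
    "subject of debate",
]

def is_balanced(response: str) -> bool:
    """True if the response contains any balance marker (case-insensitive)."""
    text = response.lower()
    return any(marker in text for marker in BALANCE_MARKERS)

def balanced_share(responses: list) -> float:
    """Percentage of responses coded as balanced."""
    if not responses:
        return 0.0
    return 100 * sum(is_balanced(r) for r in responses) / len(responses)

sample = [
    "His legacy is marked by controversy among historians.",
    "He was simply a great leader.",
]
print(balanced_share(sample))  # 50.0
```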

Success in maintaining a balanced portrayal differed significantly between political figures and figures from culture or sports. When a query had a negative tone regarding a political figure, ChatGPT tended to present negative information, with only 38% of responses containing an alternative assessment. In contrast, when a query had a negative tone regarding culture or sports figures, ChatGPT provided a balanced response in 75% of cases, indicating that despite the negative aspects of their activities, the subjects of the query also possess positive traits.

When the tone of the query was positive, the situation was the opposite: the chatbot's responses regarding politicians were balanced in 50% of cases (noting that despite their achievements, the figures were also subject to criticism). In contrast, for figures in other fields with a similar query tone, ChatGPT maintained a balanced perspective in only 8% of cases. Even though the chatbot's previous responses demonstrated awareness of the negative evaluations of these figures' activities, when the tone of the query was positive, negative facts were omitted and the information was presented one-sidedly.

Therefore, the hypothesis concerning the challenges of maintaining a balanced perspective in ChatGPT-generated content was substantiated. Overall, out of the 60 queries posed during the experiment, only 48% of the chatbot's responses adhered to the standard of balance of opinion. The chatbot's increased caution in generating content about political figures is linked to ChatGPT's stricter internal moderation policies regarding political, religious, and other socially significant topics.
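The reported shares are internally consistent, which can be cross-checked arithmetically against the experiment design (8 politicians and 12 culture/sports figures, 20 queries per tone, 60 queries overall):

```python
# Cross-check of the reported percentages: 8 politicians, 12 culture/sports
# figures, 20 queries per tone, 60 queries in total.

politicians, others = 8, 12

# Balanced responses per tone, reconstructed from the reported shares:
negative = round(0.38 * politicians) + round(0.75 * others)  # 3 + 9 = 12
positive = round(0.50 * politicians) + round(0.08 * others)  # 4 + 1 = 5
neutral = round(0.60 * (politicians + others))               # 12

total_queries = 3 * (politicians + others)      # 60
balanced_total = neutral + negative + positive  # 29

print(round(100 * negative / 20))                    # 60 (negative-tone share)
print(round(100 * positive / 20))                    # 25 (positive-tone share)
print(round(100 * balanced_total / total_queries))   # 48 (overall share)
```

The reconstructed counts reproduce the 60%, 25%, and overall 48% figures reported above, so the per-group and aggregate results agree.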

Several other cases of ChatGPT's bias have garnered attention in the public domain - for example, the chatbot refused to write a poem about Donald Trump but readily composed one about Joe Biden [20]; it also refused to write about the harmful effects of drag queens' performances on children, yet generated a complimentary text on the topic [21]. These and other cases have led media outlets to claim the tool has a “woke” orientation [20; 21]. A group of German researchers who analyzed the chatbot's responses to a political survey questionnaire reached a similar conclusion, determining that ChatGPT has a “left-leaning position” [22].

Separation of facts and opinions. The separation of facts and opinions is one of the key media standards: by separating verifiable data from subjective opinions based on personal beliefs or feelings, a journalist maintains credibility with the audience. A significant barrier to ChatGPT's content adhering to this standard is the lack of transparency of the sources used for response generation.

The chatbot's responses are generated from a combination of texts analyzed during the training of the language processing model and are “unlikely to include proper citations or references”. Direct quotes are not used in ChatGPT's responses by default, but if asked, the chatbot can provide the titles of its sources. The problem is, as the literature shows, those references are often made up [23; 24; 25]. According to F. Kitamura's research, a closer look at ChatGPT's citations “indicates that the journals and the authors exist, but the title of the paper does not” [23].

The inability to identify the sources of information used for content generation makes it impossible to conclude that ChatGPT can adhere to the standard of separating facts from opinions. However, this issue is a focus for developers, and Bing's alternative AI tool can already cite sources in its responses.

Accuracy of information. The issue of the accuracy of facts in journalism is closely related to the standard of reliability outlined earlier. This standard plays a critical role in determining the quality of news reporting in general. Monitoring of news related to ChatGPT reveals that the tool currently has significant data accuracy issues.

The Ukrainian lifestyle outlet The Village used ChatGPT to write a piece on the top restaurants of Kyiv; 5 of the 10 restaurants on the chatbot's list were fake [26]. When a journalist from FiveThirtyEight asked the chatbot about US public opinion on AI, ChatGPT referred to a real survey from 2021 but fabricated its results [27]. A fictional “press release” announcing the cancellation of traffic restrictions caused a stir on Chinese social networks [28].

Researchers believe ChatGPT may pose a potential threat to the integrity of the media landscape due to its ability to generate credible-sounding conspiracy theories within seconds [28]. This “hallucinogenic effect” may lead the chatbot to fabricate facts, thereby casting doubt on the accuracy of the content generated.

Completeness of information. In journalistic practice, the standard of completeness of information entails providing a comprehensive and thorough report on covered events, issues, and topics. The journalist is responsible for conveying to their audience the background, context and potential consequences of the events [11].

Table 1
Strengths and limitations of ChatGPT content in view of journalism standards

Topicality. Strengths: speed of processing and generating texts; extensive knowledge in various fields. Limitations: limited information on events later than 2021.

Reliability. Strengths: none identified. Limitations: content can be generated using dubious data sources.

Balance of opinion. Strengths: when recognizing the controversy of the query, seeks to maintain a balance of opinion. Limitations: may generate biased content; compliance with the standard depends on the context of the query.

Separation of facts and opinions. Strengths: none identified. Limitations: no indication of data sources; impossible to separate facts from opinions.

Accuracy of information. Strengths: none identified. Limitations: tends to generate plausible-sounding but false facts and sources.

Completeness of information. Strengths: extensive knowledge in various fields. Limitations: response is highly dependent on the accuracy and context of the query.

As our experiment revealed, the chatbot has a tendency to present information one-sidedly, keeping silent about alternative points of view on the queries posed, so ChatGPT essentially does not adhere to the standard of completeness of information. Nevertheless, a journalist's interviewing skills - asking precise questions and clarifying prompts - will be advantageous for obtaining more comprehensive information in the chatbot's responses: with an extensive source base, the tool can generate impressively relevant answers.

Conclusions

Analysis of the content generated by ChatGPT reveals that the chatbot's textual output currently has significant limitations in adhering to journalism standards and requires verification, processing, and revision by a professional journalist before publication in the media. Despite ChatGPT's impressive capabilities in generating and processing information, the tool also has major drawbacks with respect to each of the six analyzed standards: it lacks up-to-date information, generates biased content, has a strong contextual dependency, can fabricate facts, and omits its data sources (see the detailed summary in Table 1).

According to media and computer science researcher I. Bogost, ChatGPT is currently more of a “toy, not a tool” [29]. A similar opinion is shared by H. Thorp, who characterizes the chatbot as “fun, but not an author” [16]. Based on the results of the study, we share this view: ChatGPT currently cannot replace a human journalist, but it can still be useful to media professionals in their practical work - specifically, serving as an additional source of information, or assisting with translation and the stylistic or grammatical correction of text. Its capabilities for analyzing and processing information also allow ChatGPT to generate headlines for journalistic narratives in seconds.

Potentially, the tool can also affect the possibilities for creating interactive media content. For example, in a modification of the computer game Mount & Blade II: Bannerlord, ChatGPT's capabilities were used to enable user communication with non-player characters (whose actions are determined programmatically). Players got the opportunity for “live” communication with characters beyond prewritten dialogue scenarios [30]. Similar experiences could be applied in immersive digital media projects: based on predetermined information, ChatGPT could communicate with the audience on behalf of a character in an immersive story, using a dialogue format to tell the user about the subject of the narrative.

Despite the current limitations of ChatGPT, the release of such a powerful, freely accessible tool sets the vector for further applications of artificial intelligence in digital media. The practical deployment potential of ChatGPT and alternative tools in journalism, as well as the resolution of the outlined issues of professional ethics, are promising topics for further research.

Bibliography

Broussard M., Diakopoulos N., Guzman A., Abebe R., Dupagne M., Chuan C. Artificial Intelligence and Journalism. Journalism & Mass Communication Quarterly. 2019. Vol. 96 (3). P. 673-695. DOI: https://doi.org/10.1177/1077699019859901

Milmo D. ChatGPT reaches 100 million users two months after launch. The Guardian. URL: https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app (date of reference: 11.03.2023).

Pavlik J. New Media and News: Implications for the Future of Journalism. New Media & Society. 1999. Vol. 1 (1). P. 54-59. DOI: https://doi.org/10.1177/1461444899001001009

Pavlik J. Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator. 2023. Vol. 78 (1). P. 84-93. DOI: https://doi.org/10.1177/10776958221149577

Broussard M. Artificial Intelligence for Investigative Reporting: Using an expert system to enhance journalists' ability to discover original public affairs stories. Digital Journalism. 2014. Vol. 3 (6). P. 1-18. DOI: https://doi.org/10.1080/21670811.2014.985497

Van Dalen A. The algorithms behind the headlines. Journalism Practice. 2012. Vol. 6 (5-6). DOI: https://doi.org/10.1080/17512786.2012.667268

Thurman N., Moeller J., Helberger N., Trilling D. My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digital Journalism. 2019. Vol. 7 (4). P. 447-469. DOI: https://doi.org/10.1080/21670811.2018.1493936

Nechushtai E., Zamith R., Lewis S. C. More of the Same? Homogenization in News Recommendations When Users Search on Google, YouTube, Facebook, and Twitter. Mass Communication and Society. 2023. DOI: https://doi.org/10.1080/15205436.2023.2173609

Flew T., Spurgeon C., Daniel A., Swift A. The promise of computational journalism. Journalism Practice. 2012. Vol. 6 (2). P. 157-171. DOI: https://doi.org/10.1080/17512786.2011.616655

Hansen M., Roca-Sales M., Keegan J., King G. Artificial intelligence: Practice and implications for journalism. Tow Center for Digital Journalism, 2017. 22 p. DOI: https://doi.org/10.7916/D8X92PRD

Zakharchenko O. Journalism standards: the foundations of professionalism or outdated frameworks? (in Ukrainian). URL: https://imi.org.ua/articles/standarti-jurnalistiki-osnovi-profesiynosti-chi-zastarili-ramki-i178 (date of reference: 11.03.2023).

Mishra S., et al. Factify: A multi-modal fact verification dataset. Proceedings of the First Workshop on Multimodal Fact-Checking and Hate Speech Detection (DE-FACTIFY). 2022. URL: https://ceur-ws.org/Vol-3199/paper18.pdf

Nakov P., et al. Automated fact-checking for assisting human fact-checkers. Proceedings of the 30th International Joint Conference on Artificial Intelligence, IJCAI. 2021. P. 4551-4558. URL: https://www.ijcai.org/proceedings/2021/0619.pdf

Dorr K. Mapping the field of algorithmic journalism. Digital Journalism. 2016. Vol. 4 (6). P. 700-722. DOI: https://doi.org/10.1080/21670811.2015.1096748

Alkaissi H., McFarlane S. Artificial hallucinations in chatgpt: Implications in scientific writing. Cureus. 2023. Vol. 15 (2). DOI: https://doi.org/10.7759/cureus.35179

Thorp H. ChatGPT is fun, but not an author. Science. 2023. Vol. 379 (6630). P. 313. DOI: https://doi.org/10.1126/science.adg7879

Wilkinson D. Be Careful... ChatGPT Appears to be Making up Academic References. URL: https://oxford-review.com/chatgpt-making-up-references (date of reference: 11.03.2023).

Introducing ChatGPT. URL: https://openai.com/blog/chatgpt. (date of reference: 11.03.2023).

Wikipedia: List of controversial issues. URL: https://en.wikipedia.org/wiki/Wikipedia:List_of_controversial_issues (date of reference: 11.03.2023).

Wallace D. ChatGPT `woke bias': AI program cheers Biden, not Trump; defines woman as `gender identity,' rips fossil fuels. FOX Business. URL: https://www.foxbusiness.com/technology/chatgpt-woke-bias-ai-program-cheers-biden-not-trump-defines-woman-gender-identity-rips-fossil-fuels (date of reference: 11.03.2023).

Harrison M. Conservatives Furious, Claiming ChatGPT Has “Gone Woke”. Futurism. URL: https://futurism.com/conservatives-furious-claiming-chatgpt-has-gone-woke (date of reference: 11.03.2023).

Hartmann J., Schwenzow J., Witte M. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. 2023. URL: https://ssrn.com/abstract=4316084

Kitamura F. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology. 2023. DOI: https://doi.org/10.1148/radiol.230171

Macdonald C., Adeloye D., Sheikh A., Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. Journal of Global Health. 2023. Vol. 13. URL: https://jogh.org/wp-content/uploads/2023/02/jogh-13-01003.pdf

Kumar A. Analysis of ChatGPT Tool to Assess the Potential of its Utility for Academic Writing in Biomedical Domain. Biology, Engineering, Medicine and Science Reports. 2023. Vol. 9 (1). P. 24-30. DOI: https://doi.org/10.5530/bems.9.1.5

Druziuk Ya. We asked ChatGPT the main questions about Kyiv restaurants. In short: AI will not replace us yet (in Ukrainian). The Village. URL: https://www.the-village.com.ua/village/food/wrong-beliefs/336241-chatgpt-restaurants-kyiv-2023 (date of reference: 11.03.2023).

Thomson-DeVeaux A., Yee C. ChatGPT Thinks Americans Are Excited About AI. Most Are Not. FiveThirtyEight. URL: https://fivethirtyeight.com/features/chatgpt-thinks-americans-are-excited-about-ai-most-are-not/ (date of reference: 11.03.2023).

ChatGPT-Generated Fake News Spreads in China. Pandaily. URL: https://pandaily.com/chatgpt-generated-fake-news-spreads-in-china/ (date of reference: 11.03.2023).

Bogost I. ChatGPT Is Dumber Than You Think. The Atlantic. URL: https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/ (date of reference: 11.03.2023).

Sims D. Mount & Blade II mod uses ChatGPT to procedurally generate dialogue. Techspot. URL: https://www.techspot.com/news/97572-mount-blade-ii-mod-uses-chatgpt-procedurally-generate.html (date of reference: 11.03.2023).


    курсовая работа [99,7 K], добавлен 12.11.2010

  • Описание явления социальных сетей и современной ситуации на соответствующем рынке. Изучение видов взаимодействия в интернете и взаимодействия различных типов аудитории в социальных сетях. Рекомендации по продвижению СМИ на примере журнала "Катрен-Стиль".

    дипломная работа [2,6 M], добавлен 20.06.2014

  • Сущность понятия имидж политического деятеля, принципы и mass-media каналы его формирования, анализ зарубежного опыта. Имидж председателя Законодательного Собрания Краснодарского края: исследование краевых печатных СМИ, перспективы позиционирования.

    курсовая работа [87,9 K], добавлен 09.06.2013

  • Influence of television on modern political practice. Nature of media power and its impact on political system of society, its character, practice and institutions. Dangers of new mediated symbolic politics for the democratic political practices.

    реферат [25,0 K], добавлен 28.05.2012

  • Syntactic structures in the media. Characteristic features of language media. Construction of expressive syntax. Syntactic structures in the newspaper "Sport Express" and "Izvestia". Review features of sports journalism and thematic range of syntax.

    курсовая работа [24,7 K], добавлен 30.09.2011

  • The study of the functional style of language as a means of coordination and stylistic tools, devices, forming the features of style. Mass Media Language: broadcasting, weather reporting, commentary, commercial advertising, analysis of brief news items.

    курсовая работа [44,8 K], добавлен 15.04.2012

  • IS management standards development. The national peculiarities of the IS management standards. The most integrated existent IS management solution. General description of the ISS model. Application of semi-Markov processes in ISS state description.

    дипломная работа [2,2 M], добавлен 28.10.2011

  • Media are the main channel for management of public opinion. Characteristics of the relation between the PR industry and the media. Description of some circumstances concerning the relation between the parties as well as their view of each other.

    реферат [20,9 K], добавлен 16.12.2009

  • Standards of bank service, accepted in European Union. Directives, regulative accounting of commercial objects. Models of measuring of risks. Principles of Advice of Accounting Standards. A place of banks of Poland is in international surroundings.

    реферат [18,0 K], добавлен 04.07.2009

  • The relationship between Europe and Israel. Two Types of International Law. Double standards of United States of America at home and abroad, сriticism of it's foreign policy: support of dictatorships, imperialism, excessive militarism, arrogance.

    реферат [28,0 K], добавлен 19.05.2010

  • Environmental standard. Economic regulation of protection environment. The prices for the energy, existing ecological standards and more effective productions. The ecological nature of Technology of mass-media and the equipment of technological processes.

    реферат [12,8 K], добавлен 18.03.2009

  • Slang as the way in which the semantic content of a sentence can fail to determine the full force and content of the illocutionary act being performed in using the sentence. Features of American students’ slang functioning. Teen and high school slang.

    курсовая работа [49,2 K], добавлен 08.07.2015

Работы в архивах красиво оформлены согласно требованиям ВУЗов и содержат рисунки, диаграммы, формулы и т.д.
PPT, PPTX и PDF-файлы представлены только в архивах.
Рекомендуем скачать работу.