

Scientific Institute of Public Law

Analysis Center of Air and Space Law

Legal Issues and Risks of the Artificial Intelligence Use in Space Activity

Larysa Soroka

Doctor of Law, Professor,

Anna Danylenko

PhD in Law, Researcher,

Maksym Sokiran

PhD in Law, Lawyer

Kyiv, Ukraine

Abstract

Since the introduction of artificial intelligence (AI) in the space sector, there have been progressive changes that bring both benefits and risks. AI technologies have begun to influence human rights and freedoms and relationships with public authorities and the private sector. The use of AI therefore entails legal obligations arising from the influence of, and the consequences of dependence on, these technologies. This indicates the need to investigate the legal problems and risks of using AI in space activity in order to identify, describe and explain legal issues and dilemmas in general and in the space sector in particular. The authors have formulated theoretical definitions in the defined area through a logical-semantic approach. The comparative method has allowed a comparative analysis of the legal regulation of AI in the space sector in the USA, China and the EU. The authors rethink the issues of action, cause-effect relationships and legal personality in matters of liability for damage caused by autonomous space objects, as well as the problem of reconciling existing space law, with its state-oriented responsibility, with the space activity of private players using autonomous objects.

Keywords: artificial intelligence, legal responsibility, space activity, law, autonomy, legal personality, globalization.

Introduction

While AI is still in its infancy, it is already accomplishing things that will forever change our interaction with the space sector. This applies, first of all, to the use of autonomous robots in space missions. Since 1997, when the Sojourner rover took part in the Mars Pathfinder mission (Mars, 2020), such robots have been used in space expeditions for various purposes. For example, the European Space Agency (ESA) has announced that it is working on a project that will use AI to send robotic elements into space. This project is still in its initial phase, but it could be used for anything from exploring new planets to studying the effect of microgravity on humans (Artificial, 2021a).

Automation of many processes in space has always been relevant, since space conditions impede constant supervision of all functions performed during a mission. Increasing the level of autonomy and automation using AI technologies allows a wider range of space exploration. In some cases, autonomy and automation are critical to mission success. For example, deep space exploration may require greater autonomy of the spacecraft, since communications with ground operators will involve significant delays, or there may be no connection at all.
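To gauge the scale of these delays, a standard one-way light-time calculation suffices (the Earth-Mars distances below are approximate reference figures, not drawn from the article):

$$t = \frac{d}{c}, \qquad c \approx 3\times10^{5}\ \text{km/s}$$

At closest approach ($d \approx 5.5\times10^{7}$ km) the one-way delay is $t \approx 183$ s, about 3 minutes; near maximum separation ($d \approx 4.0\times10^{8}$ km) it is $t \approx 1{,}333$ s, about 22 minutes, and any command-response exchange takes twice that. Interactive control from the ground is therefore infeasible for much of a deep-space mission, which is precisely why onboard autonomy matters.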

AI is also used to process the huge amounts of data obtained during Earth observation, as well as telemetry data from spacecraft (Gal et al., 2020). AI determines which information requires further analysis, saving time and money. Another application of AI is geospatial intelligence, which extracts and analyzes images and information about objects and events on and around Earth (Walker, 2018). For example, AI was successfully used to capture and analyze the first image of a black hole, which was reconstructed using CHIRP (Bouman et al., 2016), an algorithm that uses AI to reconstruct low-resolution input data into high-resolution images (The Role, 2021). Some rovers, which have even been taught to navigate on their own, also transmit their data using AI (Artificial, 2021b). Thus, AI development, coupled with space data, can improve the production, storage, access and dissemination of data in space and on Earth.
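As a rough illustration of the kind of inverse problem CHIRP addresses, the following minimal Python sketch recovers an unknown signal from fewer measurements than unknowns by adding a smoothness prior. This is a toy example of regularized reconstruction under our own simplifying assumptions, not the actual CHIRP algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "image" (flattened): a smooth 1-D signal standing in for pixel values.
n = 100
x_true = np.sin(np.linspace(0, 3 * np.pi, n))

# Incomplete measurements: far fewer observations than unknowns, loosely
# analogous to a telescope array sampling only part of the data it needs.
m = 40
A = rng.normal(size=(m, n))
y = A @ x_true

# Regularized least squares: among the infinitely many signals consistent
# with y, prefer a smooth one (penalize differences between neighbours).
D = np.diff(np.eye(n), axis=0)   # finite-difference operator, shape (n-1, n)
lam = 10.0                       # strength of the smoothness prior
K = np.vstack([A, lam * D])
b = np.concatenate([y, np.zeros(n - 1)])
x_hat, *_ = np.linalg.lstsq(K, b, rcond=None)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

CHIRP itself operates on radio-interferometer visibility data and uses learned image priors; the toy above shares only the general idea of combining incomplete measurements with prior assumptions about the image.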

The German Aerospace Center (DLR) has been developing AI techniques for space and terrestrial applications for many years. In 2018, DLR created an AI-powered robotic assistant, CIMON, intended to support astronauts in their daily tasks aboard the International Space Station (ISS). Fully voice-controlled, CIMON can see, speak, hear, understand, and even fly (Artificial, 2021b).

The use of AI in space activities can be conveniently classified into three types of operations: predictable, unpredictable, and real-time response (Jonsson et al., 2007). Complex situations can arise even during conditionally predictable operations, where the application of AI in automatic mode helps people handle the situation and supports decision-making, not to mention unpredictable situations or those handled in real time (Chien & Morris, 2014). Two further types of autonomy can also be distinguished: autonomy for unmanned missions and autonomy in support of human space missions (Jonsson et al., 2007).

However, there are currently several unresolved issues concerning the use of AI; if they are not solved on Earth, they will be carried into space. These issues are weighted heavily towards ethical and legal problems. AI has been given the ability to learn and to act autonomously to some extent, and these abilities have created situations in which traditional legal postulates can be challenged. First of all, this concerns the legal personality of AI, for example in copyright (Guadamuz, 2017), or the grounds of legal liability for harm caused by autonomous systems (Pepito et al., 2019).

There is already a sufficient number of recorded examples where issues of AI legal regulation have arisen: from the scandal over the misuse of users' personal data by Facebook and Cambridge Analytica (Facebook, 2019) (privacy issues) to accidents involving cars (Boudette, 2021; Bateman, 2021) and aircraft (Coeckelbergh, 2020) operated in autonomous mode (legal liability issues). Since AI increasingly controls the work of mechanisms, devices and robots, the ability to determine who is responsible for possible harm is a necessary issue when considering disputes in courts.

These benefits and risks become even more acute in the use of AI in space activity. First, commercial space activities, which account for about 70% of space-related activities (Inter-Agency, 2018), have an increasingly ubiquitous impact on Earth through the benefits that space products and technologies provide in solving the problems and needs of humankind (Soroka, 2020; Gal et al., 2020). Second, new actors, together with emerging technologies such as AI, develop new demand-driven global business models, such as satellite constellations, tourism, asteroid and lunar mining, in-situ resource utilization (ISRU), 5G, in-orbit servicing (IoS), 3D printing of satellite parts (e.g., solar panels) and commercial space stations (Gal et al., 2020). Third, the use of AI is realized without a global regulator and global rules. So how can a legal framework be established if AI technologies are distinguished by spontaneity, constant development and transformation, and the space conditions in which this technology will be used are generally difficult to predict? Some authors have noted that the regulatory framework governing this sphere should be universal; after all, constantly amending legislation in response to changes in the field of technology can be a difficult procedure (Cerka et al., 2015). Legal regulations must ensure a regulatory balance in the interaction of robotic and virtual technologies with humans in space missions. AI capabilities should not go beyond their intended purpose: it is essential that AI meets human needs and values rather than requiring humans to adapt to it (Del Castillo, 2017).

All of the above determines the relevance and prospects of research into the legal aspects and risks of AI use in the space industry.

Theoretical approaches and directions in the legal definition of Artificial Intelligence

The ambitious short- and long-term goals set by various national space agencies require groundbreaking advances in new space technologies, including the development of space AI agents. Recently there has been an upsurge of interest in AI across the aerospace community (Soroka & Kurkova, 2019). Such interest determines the need for adequate and timely development of appropriate legislative regulation in this area. Industry, government, academics and civil society advocates suggest using ethical or technical approaches when discussing the establishment of the necessary regulatory framework. However, setting such a framework will be challenging without a genuine understanding and adequate interpretation of the term “Artificial Intelligence” (Morhat, 2017). Accordingly, the analysis of the concept of Artificial Intelligence is of great importance within the framework of this study.

It is believed that the term “Artificial Intelligence” was introduced into professional and scientific practice by John McCarthy in 1956 at the Dartmouth Artificial Intelligence (AI) conference (Newell et al., 1959; Smith et al., 2006). John McCarthy pointed out that one of the defining characteristics of AI is the ability to replicate human intelligence (Wienrich & Latoschik, 2021). The majority of academics have taken this formulation as a postulate and begun to state in their works that AI is an imitation of human intelligence by machines or computer systems (The Role, 2021).

Nils Nilsson has provided an interesting historical overview of the definition of AI. In “The Quest for Artificial Intelligence: A History of Ideas and Achievements,” while researching the use of AI in space missions, he gives the example of NASA's use of autonomous systems in the Remote Agent (RA) project of 1999 (Nayak et al., 1999). Nilsson specifies that one of the unique characteristics of the RA, and its key difference from traditional spacecraft control, is that ground operators can operate the RA by setting goals for it rather than by issuing detailed, sequential, synchronized commands. The RA itself devises a plan to achieve these goals and implements that plan by issuing commands to the spacecraft (Nilsson, 2010).
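To make the contrast concrete, the following minimal Python sketch shows the difference between commanding a spacecraft step by step and handing it a goal that an onboard planner expands into commands. It is illustrative only; the goal and command names are invented, and this is not the actual Remote Agent architecture:

```python
from typing import Dict, List

# An invented goal-to-plan table standing in for an onboard planner.
PLAN_LIBRARY: Dict[str, List[str]] = {
    "image_target": ["warm_up_camera", "slew_to_target",
                     "capture_image", "downlink_data"],
}

def traditional_control(commands: List[str]) -> None:
    """Ground operators uplink every command in exact order."""
    for cmd in commands:
        print("executing (ground-sequenced):", cmd)

def goal_directed_control(goal: str) -> None:
    """Operators uplink only the goal; the spacecraft plans the steps."""
    for cmd in PLAN_LIBRARY[goal]:
        print("executing (onboard-planned):", cmd)

# Traditional: the full sequence travels over the (delayed) uplink.
traditional_control(["warm_up_camera", "slew_to_target",
                     "capture_image", "downlink_data"])
# Goal-directed: only "image_target" travels; planning happens onboard.
goal_directed_control("image_target")
```

The operational difference is what crosses the communication link: a brittle, fully specified sequence versus a compact goal that the spacecraft can replan around if conditions change.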

Nils Nilsson specified that AI technology “is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson, 2010). Thus, if we interpret Nilsson's definition, AI is the activity of making machines endowed with intelligence, i.e., intelligent machines; this brings us back to Turing's theory (Turing, 1950) and his statement about the possibility of mechanizing intelligence, or to more ancient artifacts: in Proverbs 6:6-8, King Solomon says, “Go to the ant, thou sluggard; consider her ways and be wise” (King, 2021). While his advice was intended to warn against laziness, it may also motivate us to look to biology for clues on how to create or improve something based on knowledge gained by observing nature (Nilsson, 2010).

The Russian researcher Petr Morkhat followed a different course in his monograph “Artificial Intelligence: Legal View” while defining Artificial Intelligence; the author does not focus on intelligence and, at the same time, considers the possible options for the technology's application and properties. By artificial intelligence, the author understands a “fully or partially autonomous self-organizing computer-software virtual or cyber-physical (also bio-cybernetic) system (unit) with the following preset capabilities: 1) anthropomorphic-cognitive thinking and actions, such as image recognition, symbolic systems and languages, reflection, reasoning, modelling, imaginative (meaning-generating and sense-perceiving) thinking, analysis and evaluation; 2) self-reference, self-regulation, self-adaptation to changing conditions, self-restraint; 3) self-maintenance in homeostasis; 4) genetic algorithm (a heuristic search algorithm that preserves important aspects of “parental information” for “subsequent generations” of information), accumulation of information and experience; 5) learning and self-learning (based on their own mistakes and experience), self-development and self-application of self-homologation algorithms; 6) independent development of tests for their own testing, independent self-testing and testing of computer and, if possible, physical reality; 7) anthropomorphic-intelligent independent (including creative) decision-making and problem-solving” (Morhat, 2017: 69-70).

However, in our opinion, the use of such a complex terminological construction in a regulatory legal act would be quite problematic. AI changes every day and new processes and technologies are being created; therefore, when forming a legal definition of Artificial Intelligence, it is not necessary to list all the types and areas of its application. Besides, the author does not state that the system (unit) is endowed with intelligence; he bypasses the controversies in the formulation and indicates that it is endowed with abilities and capabilities. At the same time, Morkhat describes various types and forms of intelligence, using phrases such as “intelligent thinking and cognitive actions” and “reasonable independent creative decision-making,” which are signs of intelligence, since it is generally believed that intelligence is reason, the human ability to think and cogitate (Large, 2001). By “artificial” we shall mean something created by humans that imitates natural phenomena and processes, including intelligence. In this context, Matthew Scherer pointed out that the difficulty in defining AI lies not in the concept of artificiality but rather in the conceptual ambiguity of intelligence. As humans are the only entities that are universally recognized (at least among humans) as possessing intelligence, it is hardly surprising that definitions of intelligence tend to be tied to human characteristics (Scherer, 2016).

Some researchers consider it incorrect to emphasize, from the outset, machines' exclusive role as imitators of human intelligence. For example, Stuart Russell and Peter Norvig distinguish AI as a separate area of research without associating it with human intelligence (Russell & Norvig, 2020). These authors defined AI in terms of artificial agents that receive percepts from the environment and perform actions; each agent implements a function that maps percept sequences to actions. Among such agents the authors include reactive agents, real-time planners, decision-theoretic systems and deep learning systems (Russell & Norvig, 2020). We therefore support the position of the authors of the study “RoboLaw: Towards a European framework for robotics regulation” (Palmerini et al., 2014), who focus on applications rather than single technologies in forming the concept of artificial intelligence. The authors specify two reasons for such an approach: the impossibility of isolating a technology or system from its context of use, including the operative environment and its users, and the impossibility of focusing on a single technology, since the majority of technologies do not work in isolation but rather as components of technological systems (Palmerini et al., 2014).
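The Russell & Norvig notion of an agent as a mapping from percept sequences to actions can be made concrete in a few lines. The Python sketch below is only an illustration of the concept, with invented percept and action names, not a reproduction of any system discussed here:

```python
from typing import List

class Agent:
    """An agent maps the sequence of percepts seen so far to an action."""

    def __init__(self) -> None:
        self.percepts: List[str] = []   # full percept history

    def act(self, percept: str) -> str:
        self.percepts.append(percept)
        # A trivial policy for illustration: react to the most recent
        # percept only (a "reactive agent" in Russell & Norvig's terms).
        if percept == "obstacle_ahead":
            return "turn_left"
        return "move_forward"

agent = Agent()
for p in ["clear", "clear", "obstacle_ahead"]:
    print(p, "->", agent.act(p))
```

The definition deliberately says nothing about human-like intelligence: anything implementing such a percept-to-action function, from a thermostat to a deep learning planner, counts as an agent.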

In 2018, a group of high-level European AI experts released a document, “A definition of AI: Main capabilities and scientific disciplines,” which described and defined the capabilities of AI and outlined ethical guidelines for the policy of AI use. According to the document: “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions - with some degree of autonomy - to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g., voice assistants, image analysis software, search engines, speech and face recognition systems), or AI can be embedded in hardware devices (e.g., advanced robots, autonomous cars, drones, or Internet of Things applications)” (A Definition, 2018). The authors explained that, under their definition, AI should be understood not only as a computer program but as both a technology and a scientific discipline. As a technology, AI refers to systems created by humans to act in the physical or digital world to find efficient solutions to complex problems. As a scientific discipline, AI includes several approaches and techniques, such as (1) machine learning; (2) machine reasoning; and (3) robotics.

In a more recent variant, the above definition was expanded and updated as follows: “Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)” (A Definition, 2019).

Thus, in our research, by artificial intelligence we understand a system designed by humans (a computer program, an algorithm) which, by analyzing the environment, is capable of acting (without active control or supervision by an individual) or remaining idle autonomously to achieve a specific goal.

Although there are many scientific developments in the analysed area, researchers note that there is still no agreed (generally recognized, conventionally accepted by all theorists and practitioners), universal, comprehensive, clear and unambiguous definition of the concept of artificial intelligence (Nilsson, 2010; Gal et al., 2020). Therefore, we shall consider current legal definitions of the analysed concept.

Legal regulation of Artificial Intelligence

Until recently, the development of artificial intelligence in general, and in the space sector in particular, took place in a kind of regulatory vacuum (though not an absolute one), except for the existing national regulations of some countries covering, basically, autonomous vehicles and drones. Very few legal provisions specifically address the unique challenges raised by AI, and there is practically no case law on this issue (Scherer, 2016).

On this issue, the position of Doctor of Law, Professor Mykola Karchevsky is quite interesting. In “The main problems of legal regulation of socialization of artificial intelligence” (Karchevsky, 2017), he identified three key issues of legal regulation of AI use and gave the following answers to them:

Should the development of artificial intelligence be banned or regulated? Despite the risks, an absolute ban on the development of artificial intelligence systems is impossible; legal regulation in this sector should provide incentives for socially efficient use of technologies and minimize the risks of technology abuse.

What will be the legal regulation in the field of robotics? The classic “developer-owner-user” scheme is relevant and sufficient for the current level of technology. The increasing complexity of technology will require the transition to a new, more complex scheme of legal regulation. Most likely, the legal regulation of AI socialization will move from considering the robot as an object of relations to endowing it with rights and responsibilities.

If robots get rights and responsibilities, how will the justice system change? Complementary to traditional justice, we can discuss the emergence of two new types, notionally named “hybrid justice” and “AI justice.” The operation of the latter will counteract robots that pose risks to social development and stability. Most likely, AI justice will itself be built on robots. The establishment of such a system involves generalizing, into clear algorithms, the experience gained through the existence of traditional justice (Karchevsky, 2017).

The first attempt to develop a universal law to regulate autonomous systems, namely the robotics sector, was made by the science fiction writer Isaac Asimov, who proposed the Three Laws of Robotics: “1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law” (Anderson, 2017). Eventually, Isaac Asimov added the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

Alternative laws have been suggested to update Asimov's, as follows: 1) a human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics; 2) a robot must respond to humans as appropriate for their roles; and 3) a robot must be endowed with sufficient situated autonomy to protect its own existence, as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws (Pepito et al., 2019).
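Asimov-style laws are, in effect, a strict priority ordering over behavioural constraints. The Python sketch below (a toy illustration with invented predicates, not a serious safety mechanism) shows how such an ordering can be encoded so that a lower-priority rule never overrides a higher-priority one:

```python
from typing import Callable, List, Optional, Tuple

# Each rule is (description, predicate); a predicate returns True if the
# proposed action VIOLATES the rule. List order encodes priority.
Rule = Tuple[str, Callable[[dict], bool]]

LAWS: List[Rule] = [
    ("First Law: do not harm a human",
     lambda action: action.get("harms_human", False)),
    ("Second Law: obey human orders",
     lambda action: action.get("disobeys_order", False)),
    ("Third Law: protect own existence",
     lambda action: action.get("self_destructive", False)),
]

def first_violation(action: dict) -> Optional[str]:
    """Return the highest-priority law the action violates, if any."""
    for description, violates in LAWS:
        if violates(action):
            return description
    return None

# Disobeying an order trips the Second Law, but an order that would harm
# a human is blocked by the higher-priority First Law.
print(first_violation({"disobeys_order": True}))
print(first_violation({"harms_human": True, "disobeys_order": False}))
```

The legal difficulty discussed in this article begins exactly where such toys end: real predicates like “harms a human” are not computable checks but contested factual and normative judgments.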

However, legislative acts, subordinate legislation and official documents remain among the relevant sources. At the moment it is difficult to provide many examples of an AI definition used at the legislative level; for now, such cases are rare. Nonetheless, some examples can be cited.

In 2011, the UN adopted a landmark document, the Guiding Principles on Business and Human Rights, which calls on industry to respect and protect human rights (Guiding, 2011). These principles set out general rules for technology companies designing AI-based products to ensure that they do not violate fundamental human rights. Although the UN Guiding Principles are an important milestone in business and human rights, they are only a starting point for respect for human rights in the technology sector. Moreover, this regulation does not define what AI is; it states that the production and use of technology should be under state control and that digital institutions should not restrict natural rights and human rights (Soroka, 2020).

According to the “TES analysis of AI Worldwide Ecosystem in 2009-2018” (Samoili et al., 2020), the EU, China and the USA are the leading players in AI. Therefore, the analysis of the legal and regulatory framework in our study will focus on these jurisdictions.

The USA

An analysis of the US legislative framework makes it possible to state that, as of June 2021, there is no national federal law on AI. Most legal provisions are based on the cross-application of rules and regulations governing traditional areas such as product liability, data confidentiality, intellectual property, discrimination and rights in the workplace. The first federal document regulating legal relationships in the AI field is Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence” (DCPD-201900073, 2019). The document determined that “the policy of the United States Government [is] to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI” (DCPD-201900073, 2019).

Following the adoption of the Order, the American Artificial Intelligence Initiative was launched at the federal level. It is guided by five principles: (1) driving technological breakthroughs; (2) driving the development of appropriate technical standards; (3) training workers with the skills to develop and apply AI technologies; (4) protecting American values, including civil liberties and privacy, and fostering public trust and confidence in AI technologies; and (5) protecting the US technological advantage in AI while promoting an international environment that supports innovation (Chae, 2020).

Then, in early 2020, the Office of Management and Budget released the Memorandum for the Heads of Executive Departments and Agencies (Russell, 2020). This Guidance for Regulation of Artificial Intelligence Applications offered direction on developing normative and non-normative approaches to AI technology and identified potential ways to reduce barriers to the use of AI in promoting innovation in the private sector. The Memorandum contains a set of principles that should be considered when formulating normative and non-normative approaches. It also states that if existing regulations are sufficient, or if the costs of new regulations outweigh the benefits, the agencies concerned may pursue alternative approaches (Russell, 2020). It is believed that this guidance for the regulation of AI is, or will become, the actual set of US regulatory principles (Zhu & Lehot, 2021).

In April 2020, the Federal Trade Commission published further guidance on the commercial use of AI technology, recognizing that while AI technology has significant positive potential, it also presents negative risks, such as the risk of unfair or discriminatory outcomes or the entrenchment of existing disparities (Zhu & Lehot, 2021). The FTC has emphasized that the use of AI tools should be: (1) transparent; (2) explainable; (3) fair; (4) empirically sound; and (5) accountable. Besides, commercial companies were encouraged to take responsibility for compliance, ethics, fairness and non-discrimination (Smith, 2020).

Thus, the USA, being a leader in AI, has not yet passed a national law on AI at the federal level and has not standardized the rules in the AI sector. At the same time, at the executive level, certain aspects of AI are regulated mainly in the field of technical standards through the adoption of directives and rules.

The People's Republic of China

China ranks second after the USA in terms of the number of AI players (Samoili et al., 2020). However, while in the USA AI is represented by commercial enterprises and start-ups, in China it is represented by government agencies and research institutes. The legal and regulatory framework is based on strategic policy documents aimed at stimulating the development of the AI industry. These include “Made in China 2025” (Explanation, 2020), the Action Plan for Promoting the Development of Big Data (Circular, 2015) and the New Generation Artificial Intelligence Development Plan (Notice, 2017).

As for the last-mentioned document, it refers to AI as a revolutionary technology that can affect governance, economic security and social stability, and even global governance. However, all of this can lead to problems associated with changes in the employment structure, impacts on law and social ethics, violations of personal privacy and challenges to international relations. While vigorously developing AI, the Chinese government attaches great importance to its potential safety threats (Notice, 2017). Recommendations are being developed to prevent, limit and minimize risks so as to ensure safe, reliable and controlled AI development. According to the New Generation Artificial Intelligence Development Plan, China's legal and regulatory framework should begin to be created from 2025 through the adoption of laws and regulations. While the Plan lacks specific details, its ambitious agenda and selected policy targets signal important sectoral, legal and regulatory changes in the near future (Karch, 2021).

Based on the national plans, local governments also began to adopt AI development plans. For example, the leadership of the city of Shenzhen (known as the Silicon Valley of China) announced the Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations (Draft) on 28 June 2021 (Announcement, 2021). The draft was submitted to the People's Congress, and its main goal is to regulate and support the development of the city's AI industry; this made Shenzhen the first local government in China to establish targeted policies for the sector (Koty, 2021). Concerning governing principles and measures, the draft regulation provides that the city's AI industry shall follow the principles of harmony, fairness and justice, tolerance and sharing, and respect for privacy. Besides, the draft regulation stipulates that AI enterprises must incorporate compliance with ethical norms into their professional requirements and include ethical safety risk education in the content of their induction and on-the-job training (Shenzhen, 2021). It also defines artificial intelligence: Article 2 indicates that the term “artificial intelligence” in the Regulations refers to the use of computers, or equipment controlled by them, employing AI methods and technologies to study and analyze collected external data, perceive the environment and, through knowledge and deduction, research and develop theories, methods, techniques and applications for modelling and extending human intelligence (Announcement, 2021).

Thus, through political decisions at the national level and administrative documents at the local level, the Chinese government develops regulations and national standards that promote the implementation of innovation policy in China.

The EU

The first European project that brought together different countries and research institutes to address the legal issues of AI use was launched in March 2012 and was called RoboLaw. It focused on the legal “status” of robotics, nanotechnologies, neuroprostheses and brain-computer interfaces, areas in which very little work had been done at the time (RoboLaw, 2019). The radical novelty of these technological applications and tools required an original and more complex study, characterized by an interdisciplinary method and a comparative analysis of the diverse approaches adopted in different legal systems. Several research institutes worldwide have studied aspects of the regulatory and legal consequences of developments in robotics; the researchers took North American and Eastern approaches as examples for analysis. The outcome of this project was a set of regulatory guidelines (D6.2, “Guidelines on Regulating Robotics”) addressed to European policymakers (RoboLaw, 2019). The Guidelines were to become an ethically and legally sound basis for future robotics developments and the use of AI. However, the settlement of this issue (both in Europe and abroad) is still sketchy.

In the absence of global regulation of these issues, technologically advanced countries have independently begun to establish legal frameworks for AI technologies.

In 2018, the European Commission established the High-Level Expert Group on AI with the general objective of supporting the implementation of the European Strategy on AI, including the elaboration of recommendations on future policy development and on ethical, legal and societal issues (Neri et al., 2020).

In April 2021, the European Commission published its final Regulatory framework proposal on artificial intelligence (hereinafter the EU Proposal) (Regulatory, 2021) to promote robust AI in Europe. It proposed classifying AI into “high-risk” and “low-risk” systems. Risk and its level are determined by a set of AI characteristics concerning the quality of the datasets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and reliability, accuracy and cybersecurity. High-risk AI systems will have to undergo compliance testing by national AI certification bodies to ensure conformity with the new EU regime; this will entail the disclosure of algorithms and datasets to the certification bodies. High-risk AI systems that pose potential harm to health and safety, or that negatively affect fundamental rights, must undergo certification. The EU Proposal does not contain similarly detailed requirements for low-risk AI systems, but codes of conduct and transparency rules will apply to them.

Thus, following the EU Proposal, users will need to be notified about: (1) AI systems designed to interact with individuals (such as chatbots); (2) emotion recognition or biometric categorization systems; and (3) certain machine-generated content, such as images or videos that resemble real people or objects (deepfakes). The proposed AI regulation, or rather the legal requirements for AI systems, is the result of two years of preparation based on the ethical principles of the HLEG (Regulatory, 2021).

In proposing new rules and actions to turn Europe into a global hub for robust AI, the following documents have been adopted: (1) a Communication on Fostering a European Approach to Artificial Intelligence; (2) the Coordinated Plan with the Member States: 2021 update; and (3) a proposal for an AI Regulation laying down harmonized rules for the EU (the Artificial Intelligence Act) (A European, 2021).

For the first time at the legislative level, the Artificial Intelligence Act determined that: “artificial intelligence system (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Part 1, Article 3) (Proposal, 2021).

This framework European law paved the way for the legal regulation of the use of AI, if not at the global level, then at least at the European one.

As for the legal regulation of AI in the space sector, we have found no such specialized regulatory legal acts. The space treaties do not address the use of AI, nor does any international treaty govern the use of AI in space. This means that domestic legislation should serve as the main source of substantive law regarding the use of AI in space (Gal et al., 2020). As for responsibility for damage caused by space activities involving AI, in this work we consider the existing regulatory legal acts of space law and attempt to compare the current norms of space law with the realities of the normative vacuum in the AI sector.

Legal and ethical problems require solutions that cannot be defined and implemented under current national laws and declarations. Therefore, it is essential to develop legal procedures that will link AI and AI-based services with the applicable system of rules. Existing legal tools for regulating AI do not fully make space law effective and compatible with this technology, or with the risks that this technology can bring.

Issues of legal responsibility and legal personality of Artificial Intelligence

Before answering the question “when and how can legal liability arise for harm caused by AI technologies in space activities?”, it is necessary to determine what legal responsibility is and under what conditions it arises.

There is still no consensus in legal science on the definition of “legal responsibility.” Some authors associate it with punishment (Hart, 2000); others associate legal responsibility with stigmatization and public condemnation (Khachaturov & Lipinsky, 2007). Regarding the conditions under which legal liability is possible, it should be noted that even Aristotle, in the Nicomachean Ethics (Complete, 1984: 1109b30-1111b5), argued that an action should come from an individual and that what they do cannot be ignored. Thus, responsibility arises (1) if there is an action or inaction (where there was a need to act), i.e., there is an agent who has a sufficient degree of control over the action (inaction), and (2) if the agent of the action (inaction) knows what he is doing and understands the nature of his action (Fischer & Ravizza, 1998; Rudy-Hiller, 2018; Neri et al., 2020; Prus, 2015). That is, since Aristotle's time there have been two generally accepted conditions under which liability for an action (inaction) may be imposed: it must be volitional (free) and conscious (intentional).

Indeed, the entire history of the institution of legal responsibility, starting with the works of the ancient Greek philosophers, has revolved around the postulate of a causal relationship involving an action that must be volitional, conscious, and harmful.

After all, when people act and make decisions, freedom of action is usually associated with responsibility. One influences the world and others, and so is responsible for one's deeds and decisions. However, it is not always obvious who should be held liable. It may be unclear who exactly caused the corresponding consequences (for example, harm, though it can also be a benefit), and even if it is clear who did it, that person possibly did not act voluntarily or did not know what he was doing. So how, to whom, and when can people and society meaningfully attribute responsibility? And what does this mean for establishing accountability for AI? (Coeckelbergh, 2020).

As for legal liability for harm caused by AI: in current legal systems, which are designed and created for human beings, liability applies only to cases where the actions or inaction of AI can be traced back to a specific individual, such as an operator, manufacturer, owner or user, and where this human agent could foresee and therefore avoid the negative consequences (Asaro, 2015). It is essential to understand that when AI makes autonomous decisions, traditional rules will not be enough to establish legal liability for the damage caused, since it becomes impossible to identify the liable party and demand from that party compensation for the damage caused by the autonomous AI (Kingston, 2018). It remains difficult to determine who is liable when an AI capable of making autonomous decisions causes harm to human beings or damages property, as such decisions disturb the chain of causation (Pepito et al., 2019).

Some autonomous robots can be unpredictable in theory, and many are in practice. A critical element of current legal approaches to liability is foreseeability. Within traditional product liability, the manufacturer is responsible for the product working as designed and for anticipating the likely problems or harms it may cause. Determining what is “foreseeable” is often up to the courts, but the legal standards used are whether the manufacturer knew of the potential hazard, whether a reasonable person should have foreseen it, or whether there is an industry standard of practice that would have revealed it (Palmerini et al., 2014).

In any case, no matter how events unfold with the use of AI, it will be necessary to revise the concept of legal personality, which is in itself quite controversial and, in any case, very difficult. Therefore, policymakers will most likely face the task of adopting a law establishing regulatory rules for the design and use of AI, as well as procedures for imposing legal responsibility for the damage caused.

Liability under the space law treaty regime is based on Article VII of the Outer Space Treaty, which states: “Each State Party to the Treaty that launches or procures the launching of an object into outer space, including the Moon and other celestial bodies, and each State Party from whose territory or facility an object is launched, is internationally liable for damage by such objects” (United, 2008). Article VIII of the same treaty provides that the State Party on whose registry an object launched into outer space is carried shall retain jurisdiction and control over such an object.

The use of AI in space activities raises the following questions: how can a state retain control over a space object that it launches or that is carried on its registry, and how can the use of artificial intelligence in space activities be balanced with the state's obligations under the Outer Space Treaty (Long, 2018)?

Article II of the Convention on International Liability for Damage Caused by Space Objects (1971) specifies that, for damage caused by a space object on the surface of the Earth or to an aircraft in flight, a launching State shall be absolutely liable to pay compensation to the victims. Where damage is caused elsewhere than on the surface of the Earth, a launching State is liable only if the damage is due to its fault or the fault of the persons for whom it is responsible (Article III) (United, 2008). These liability rules can also apply to spacecraft that use AI technology. As is evident from the above norms, space law establishes state-oriented responsibility.

However, as we have already indicated, a large number of people take part in creating AI, and it is extremely problematic to prove the guilt of a particular person (or persons) in causing damage in order to bring the launching state to justice. This situation is sometimes referred to as “the problem of many hands.”

Dennis Thompson, who was probably the first to use the concept of “the problem of many hands” in an article on the responsibility of public officials, describes it as follows: “Because many different officials contribute in many ways to decisions and policies of the government, it is difficult in principle to identify who is morally responsible for political outcomes” (Thompson, 1980: 905). In a more recent article, Helen Nissenbaum discusses the problem of many hands as one of the barriers to attributing accountability in what she calls a “computerized society.” She characterizes the problem as follows: “Where a mishap is the work of ‘many hands,’ it may not be obvious who is to blame... The conditions for blame, therefore, are not satisfied in a way normally satisfied when a single individual is held blameworthy for a harm” (Nissenbaum, 1996; van de Poel et al., 2012).

Since fault liability under Article III of the Liability Convention is premised on the fault of a State or the fault of persons, a decision by an intelligent space object will, in all likelihood, not be the fault of persons (Gal et al., 2020). It is therefore difficult to trace the cause-and-effect relationship between the action of a particular person in the chain of AI creation, maintenance, use, et cetera, and the harm caused by an autonomous spacecraft.

Therefore, some authors suggest applying so-called joint tort liability where harm is caused by autonomous machines (Scherer, 2016). Essentially, the design of AI systems would be certified by an appropriate government body, such as an agency. This would create a liability system under which the designers, manufacturers and sellers of agency-certified AI programs would be subject to limited liability, while uncertified programs offered for commercial sale or use would be subject to strict joint and several liability (Scherer, 2016).

Likewise, Article XVIII of the Liability Convention provides that the “Claims Commission shall decide the merits of the claim for compensation and determine the amount of compensation payable, if any” (United, 2008). However, neither Article XVIII nor any other provision of the Liability Convention specifies what substantive law is to be used to decide the merits and determine the amount of compensation.

An analysis of the Space Liability Convention suggests that the relevant substantive law is the domestic (national) law of: 1) the launching State of the space object causing the damage; 2) the State of registry of the space object causing the damage; 3) the State that owned, or whose national owned, the damaged space object; 4) the State of registry of the damaged space object; 5) the home State of the developer of the AI software used by the space object that caused the damage (Gal et al., 2020).

Thus, the lack of (1) a conventional legal regime at the international level, (2) a single global administrative body, and (3) pluralism of liability, which remains state-oriented only, limits the ability of the space law treaty regime to establish a harmonious or uniform legal standard for deciding claims relating to damage connected with space activities using AI.

Conclusions

AI is a technology of the future which, even now, expands the boundaries of human perception and enhances human capabilities where, only a few years ago, there seemed to be no prospect of full and comprehensive study. Moreover, one can hardly imagine the development of the space industry without AI, although some issues remain from the point of view of legal problems and risks.

Increasing the level of autonomy and automation using AI technologies in space activities has significant benefits, from simplifying the implementation of space missions to the production, storage, access and dissemination of data in space and on Earth. However, the unpredictability of AI decisions in specific situations prompts reflection on the issues of legal personality, precautions, and the global postulates of protecting human rights. These issues have gained relevance as AI increasingly controls mechanisms, devices and robots, and therefore the main concern is to determine who is responsible for its actions. At the same time, it should be understood that the solution of these issues is directly related both to the accuracy of the legal definition of the concept of Artificial Intelligence and to the establishment of a clear legal framework for its functioning at the global level.

Analyzing the numerous opinions of researchers, including European experts, it is prudent to consider that AI should be understood as a system designed by humans which, by analyzing the environment, is capable of acting (without active control or supervision by an individual) or remaining idle autonomously to achieve a specific goal. From the point of view of AI legal regulation, the main concern is that the definition of AI currently cannot be specified at the global level, and its normative definition is rare. Still, there are some examples, like the Artificial Intelligence Act, which laid the foundation for the legal regulation of AI use, if not at the global level, then at least at the European one. This issue is indirectly raised in the Guiding Principles on Business and Human Rights, but it mostly has a national background.

At the moment, the use of AI in space takes place in a legal vacuum. Existing outer space law can no longer satisfy the needs of the space sphere. It is essential to develop legal procedures that will link AI and AI-based services with the system of applicable rules. Modern legal tools for regulating AI do not fully make space law effective and compatible with this technology, or with the risks that the technology can bring.

References

1. Anderson, Mark Robert (2017) After 75 years, Isaac Asimov's Three Laws of Robotics need updating. The Conversation.

2. Announcement on Public Consultation on the “Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations (Draft)” (2021) Shenzhen People's Congress.

3. Artificial Intelligence in Space: How AI is Literally Taking Over our Planet (2021a) Incus Services.

4. Artificial Intelligence in Space (2021b) ESA.

5. Asaro, Peter (2015) The Liability Problem for Autonomous Artificial Agents. Association for the Advancement of Artificial Intelligence.

6. A Definition of AI: Main Capabilities and Scientific Disciplines (2018) Definition developed for the purpose of the deliverables of the High-Level Expert Group on AI. European Commission.

7. A Definition of AI: Main Capabilities and Scientific Disciplines (2019) Definition developed for the purpose of the AI HLEG's deliverables. Independent High-Level Expert Group on Artificial Intelligence. European Commission.

8. A European approach to artificial intelligence (2021) European Commission.

9. Bateman, Tom (2021) Tesla Autopilot crash investigation expands as Ford, BMW and 10 other carmakers asked for data. Euronews.

10. Boudette, Neal E. (2021) “It Happened So Fast”: Inside a Fatal Tesla Autopilot Accident. The New York Times.

11. Bouman, Katherine L., Michael D. Johnson, Daniel Zoran, Vincent L. Fish, Sheperd S. Doeleman, and William T. Freeman (2016) Computational Imaging for VLBI Image Reconstruction. Instrumentation and Methods for Astrophysics.

12. Chae, Yoon (2020) US AI Regulation Guide: Legislative Overview and Practical Considerations. The Journal of Robotics, Artificial Intelligence & Law, Vol. 3, No 1, 17-40.

13. Chien, Steve and Robert Morris (2014) Space Applications of Artificial Intelligence. AI Magazine, Vol. 35, No 4, 3-6.

14. Circular of the State Council on Printing and Distributing the Action Outline for Promoting the Development of Big Data No 50 (2015) State Council Document.

15. Coeckelbergh, Mark (2020) Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Science and Engineering Ethics, 26, 2051-2068.

16. Complete Works of Aristotle, Volume 2 (1984) The Revised Oxford Translation. Edited by Jonathan Barnes. Princeton: Princeton University Press.

17. Cerka, Paulius, Jurgita Grigiene, and Gintare Sirbikyte (2015) Liability for Damages Caused by Artificial Intelligence. Computer Law & Security Review, Vol. 31 (3), 376-389.

...
