Artificial Intelligence and Global Security

Future Trends, Threats and Considerations

Table of contents (13 chapters)
Abstract

Advances in Artificial Intelligence (AI) technologies and Autonomous Unmanned Vehicles are shaping our daily lives and society, and will continue to transform how we fight future wars. These advances have fueled an explosion of interest in the military and political domains. As AI technologies evolve, there will be increased reliance on these systems to maintain global security. For the individual and society, AI presents challenges related to surveillance, personal freedom, and privacy. For the military, advances in AI technologies will need to be exploited to support the warfighter and ensure global security. Integrating AI technologies into the battlespace presents advantages, costs, and risks. This chapter examines the issues raised by advances in AI technologies, weighing the benefits, costs, and risks associated with integrating AI and autonomous systems into society and the future battlespace.

Abstract

The diffusion and adoption (D&A) of innovation propels today's technological landscape. Crisis situations, real or perceived, motivate communities of people to take action to adopt and diffuse innovation. The D&A of innovation is an inherently human activity; yet, artificially intelligent techniques can assist humans in six different ways, especially when operating in fifth generation ecosystems that are emergent, complex, and adaptive in nature.

Humans can use artificial intelligence (AI) to match solutions to problems, design for diffusion, identify key roles in social networks, reveal unintended consequences, recommend pathways for scaling that include the effects of policy, and identify trends for fast-follower strategies. The stability of the data that artificially intelligent systems rely upon will challenge performance; nevertheless, research in this area has identified several promising techniques through which classically narrow AI systems can assist humans. As a result, human and machine interaction can accelerate the D&A of technological innovation in response to crisis situations.

Abstract

This chapter presents reflections and considerations regarding artificial intelligence (AI) and contemporary and future warfare. As “an evolving collection of computational techniques for solving problems,” AI holds great potential for national defense endeavors (Rubin, Stafford, Mertoguno, & Lukos, 2018). Though decades old, AI is becoming an integral instrument of war for contemporary warfighters. But there are also challenges and uncertainties. Johannsen, Solka, and Rigsby (2018), scientists who work with AI and national defense, ask, “are we moving too quickly with a technology we still don't fully understand?” Their concern is not whether AI should be used, but whether research and development of it, and the pursuit of its use, are following a course that will reap the desired rewards. Although they have long-term optimism, they ask: “Until theory can catch up with practice, is a system whose outputs we can neither predict nor explain really all that desirable?” Time (speed of development) is a factor, but so too are research and development priorities, guidelines, and strong accountability mechanisms.

Abstract

New technologies, including artificial intelligence (AI), have helped us begin to take our first steps off Earth and into outer space. Conflicts, however, inevitably will arise and, in the absence of settled governance, may be resolved by force, as is typical for new frontiers. The terrestrial assumptions behind the ethics of war will need to be rethought when the context radically changes, and both the environment of space and the advent of robotic warfighters with superhuman capabilities constitute such a radical change. This essay examines how new autonomous technologies, especially dual-use technologies, and the challenges to human existence in space will force us to rethink the ethics of war, both from space to Earth and in space itself.

Abstract

Discussions of ethics and Artificial Intelligence (AI) usually revolve around the ethical implications of the use of AI in multiple domains, ranging from whether machine-learning-trained algorithms may encode discriminatory standards for face recognition, to discussions of the implications of using AI as a substitute for human intelligence in warfare. In this chapter, I will focus on one particular strand of ethics and AI that is often neglected: whether we can use the methods of AI to build or train a system that can reason about moral issues and act on them. Here, I discuss (1) what an “artificial conscience” consists of and what it would do, (2) why we collectively should build one soon given the increasing use of AI in multiple areas, (3) how we might build one in both architecture and content, and (4) concerns about building an artificial conscience and my rejoinders. Given the increasing importance of artificially intelligent semi- or fully autonomous systems and platforms for contemporary warfare, I conclude that building an artificial conscience is not only possible but also morally required if our autonomous teammates are to collaborate fully with human soldiers on the battlefield.

Abstract

This chapter explores how data-driven methods such as Artificial Intelligence pose real concerns for individual privacy. The current paradigm of collecting data from those using online applications and services is reinforced by significant potential profits that the private sector stands to realize by delivering a broad range of services to users faster and more conveniently. Terms of use and privacy agreements are a common source of confusion, and are written in a way that dulls their impact and dupes most into automatically accepting a certain level of risk in exchange for convenience and “free” access. Third parties, including the government, gain access to these data in numerous ways. If the erosion of individual protections of privacy and the potential dangers this poses to our autonomy and democratic ideals were not alarming enough, the digital surrogate product of “you” that is created from this paradigm might one day freely share your thoughts, buying habits, and pattern of life with whoever owns these data. We use an ethical framework to assess key factors in these issues and discuss some of the dilemmas posed by Artificial Intelligence methods, the current norm of sharing one's data, and what can be done to remind individuals to value privacy. Will our digital surrogate one day need protections too?

Abstract

It is no longer merely far-fetched science fiction to think that robots will be the chief combatants, waging wars in place of humans. Or is it? While artificial intelligence (AI) has made remarkable strides, tempting us to personify the machines “making decisions” and “choosing targets”, a more careful analysis reveals that even the most sophisticated AI can only be an instrument rather than an agent of war. After establishing the layered existential nature of war, we lay out the prerequisites for being a (moral) agent of war. We then argue that present AI falls short of this bar, and we have strong reason to think this will not change soon. With that in mind, we put forth a second argument against robots as agents: there is a continuum with other clearly nonagential tools of war, like swords and chariots. Lastly, we unpack what this all means: if AI does not add another moral player to the battlefield, how (if at all) should AI change the way we think about war?

Abstract

Weapons systems and platforms guided by Artificial Intelligence can be designed for greater autonomous decision-making with less real-time human control. Their performance will depend upon independent assessments about the relative benefits, burdens, threats, and risks involved with possible action or inaction. An ethical dimension to autonomous Artificial Intelligence (aAI) is therefore inescapable. The actual performance of aAI can be morally evaluated, and the guiding heuristics to aAI decision-making could incorporate adherence to ethical norms. Who shall be rightly held responsible for what happens if and when aAI commits immoral or illegal actions? Faulting aAI after misdeeds occur is not the same as holding it morally responsible, but that does not mean that a measure of moral responsibility cannot be programmed. We propose that aAI include a “Cooperating System” for participating in the communal ethos within NSID/military organizations.

Abstract

Constant transformation plays a crucial role in the future success of the NATO Alliance. In the contemporary security environment, those who can get the latest technology to the warfighter faster will tend to enjoy a comparative advantage, unless that technology in turn blinds the organization to alternatives. The author therefore lays out a strategic vision for AI-enabled transformation of the Alliance, detailing NATO's ability to adapt throughout its history, introducing contemporary efforts toward AI-enabled technological solutions for NATO, and pointing out the necessity of organizational learning.

His conclusion is that in an age of exponential technological development, the Alliance appears increasingly unable to deal with the problems created by such disruption. Complex contexts require a different mindset, and NATO has to look for new AI-enabled tools to face the increasing number of wicked problems. Most importantly, he points out that building a platform of AI-enabled technological solutions is only one side of the coin; an organizational platform is also needed, one that connects the different components and creates interoperability within the Alliance. As NATO incorporates new AI solutions, it must introduce radically new training and education solutions and create a framework for what the author calls Mission Command 2.0.

Abstract

Harnessing the power and potential of Artificial Intelligence (AI) continues a centuries-old trajectory of the application of science and knowledge for the benefit of humanity. Such an endeavor has great promise, but also the possibility of creating conflict and disorder. This chapter draws upon the strengths of the previous chapters to provide readers with a purposeful assessment of the current AI security landscape, concluding with four key considerations for a globally secure future.

DOI: 10.1108/9781789738117
Publication date: 2020-07-15
Editor:
ISBN: 978-1-78973-812-4
eISBN: 978-1-78973-811-7