The United States lacks a deliberate theory of artificial intelligence (AI) warfare, which contributes to the scarcity of discussion about the implications of AI at the operational level of war. AI is typically defined through a technological lens devoid of implications for operational art.
The focus of this research is to examine the relationship between target discrimination (TD) and command responsibility (CR) as the primary barrier to the lawful use of autonomous lethal weapons under jus in bello. This paper begins with a thesis followed by three main points regarding the relationship and dependencies between TD and CR in the context of autonomous lethal weapons.
Proponents and opponents of furthering autonomous weapon systems (AWS) development are often actually debating different points. They may, in other words, each be voicing legitimate issues for discussion but framing them imprecisely, or worse, inaccurately, preventing the effective comparison of positions and the achievement of conceptual clarity, much less consensus, on the legal issues.
Uninformed beliefs and biases continue to skew the discourse regarding unmanned systems. These systems do not constitute a fundamental change in the nature or character of warfare. Policymakers, strategists, and operators who attempt to use unmanned systems in place of human prudence will be profoundly disappointed with the results.
The United States government needs to develop and employ lethal autonomous weapon systems (LAWS) on the battlefield. This paper explores two main arguments: first, that robots are potentially more proficient than humans on the battlefield; and second, that the United States needs to employ LAWS because other countries already are, and the U.S. needs to set the international example.
The Department of Defense (DoD) is making significant strides to develop and deploy unmanned vehicles in a variety of environments. Specifically, the Secretary of the Navy is sponsoring a new program, Consortium for Robotics and Unmanned Systems Education and Research ("CRUSER"), at the Naval Postgraduate School to enhance the ability to address unmanned vehicle research in a systematic manner.
Humanity's quest to find innovative ways to deal with difficult, monotonous, and dangerous activities has been an ever-evolving and unending endeavor. The current proliferation of robotic technology is just the next step in this evolutionary sequence.
This monograph analyzes and compares three case studies to determine some of the critical factors behind models of successful and unsuccessful innovation. The case studies are the German Army's and the French Army's mechanized doctrine development from 1919 to 1939, and the U.S. Army's autonomous robotic doctrine development.