This paper reviews previous research on what we denote ‘goal-management’, that is, how to set, apply and evaluate goals when planning military operations. We aim to explain and answer the following question:
We suggest a guideline (a planning tool) for how to conduct goal-management when planning military operations and illustrate it with two fictitious examples concerning the development of an Operational Advice and an Appreciation of Rules of Engagement. The paper concludes that the application of decision theory and ethics, i.e. important parts of philosophy, can contribute to military operations planning by focusing on three perspectives: an axiomatic, an ethical and a deliberative perspective.
Two concepts are central in the debate regarding lethal autonomous weapon systems: autonomy and dignity. Autonomy is crucial when considering responsibility, particularly as autonomous systems become more advanced, and there is an open question whether such systems can be held responsible for their actions. Even if they cannot, they may affect the responsibility of humans in the decision chain. The other concept, dignity, figures in the debate over whether autonomous systems should be allowed to make decisions about killing. The argument is that autonomous systems should not be allowed to kill, since they are unable to respect human dignity. My point is that these concepts need further discussion in anticipation of more advanced robots and autonomous weapon systems.
Robots can be funny. At the same time, it is hard to imagine that they could be programmed to develop a sense for the absurdities of existence. The question is whether a robot can have a sense of humor.
Two categories of ethical questions surrounding military autonomous systems are discussed in this article. The first concerns ethical issues raised by the use of military autonomous systems in the air and in the water. These issues are systematized with the Laws of Armed Conflict (LOAC) as a backdrop. The second concerns whether autonomous systems may affect the ethical interpretation of LOAC. It is argued that some terms in LOAC are vague and can be interpreted differently depending on which normative ethical theory is applied, a problem that may grow with the introduction of autonomous systems. The impact of Unmanned Aerial Vehicles (UAVs) on the laws of war is discussed and compared with that of Maritime Autonomous Systems (MAS). The conclusion is that LOAC needs revision with respect to autonomous systems, and that the greatest ethically relevant difference between UAVs and MAS concerns issues connected to jus ad bellum – particularly lowering the threshold for starting war – but also the sense of unfairness, violation of integrity, and the potential for secret wars.