This paper discusses and reviews previous research on what we denote 'goal-management', that is, how to set, apply and evaluate goals when planning military operations. We aim to explain and answer the question of how goal-management should be conducted in military operations planning.
We suggest a guideline (a planning tool) for conducting goal-management when planning military operations and illustrate it with two fictitious examples concerning the development of an Operational Advice and an Appreciation of Rules of Engagement. The paper concludes that the application of decision theory and ethics, i.e. important parts of philosophy, can contribute to military operations planning by focusing on three perspectives: an axiomatic, an ethical and a deliberative perspective.
The purpose of this article is to examine argument mapping in intelligence analysis and to suggest improvements in analytic rigor and clarity, as well as in justification when there is time to evaluate the boxes. Argument mapping is described in broadly similar ways within the intelligence literature, but somewhat differently from the philosophical literature, and some aspects are questionable or in need of clarification. It is also unclear what role, for instance, analysis of competing hypotheses (ACH) or Bayesian analysis should play. The point of argument mapping is clarity of structure. Therefore, there should be a main claim or main hypothesis at the top, which is not an argument for anything else in the tree, and which is argued for in the tree. ACH and Bayesian analysis should be performed before the argument mapping, in order to find the main hypotheses for separate trees. Even if it might be possible to put numbers on some boxes in the tree, putting them on all boxes might produce misleading results, depending on what the boxes contain. The argument tree should be kept as clean as possible. Without numbers and likelihoods, we might use the notion of justified belief when investigating the tentative judgments so common in intelligence analysis.
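As a minimal sketch of the intended structure (the names, the example claims, and the optional `likelihood` field are our own illustrations, not drawn from the paper), an argument map can be modeled as a tree whose root is the main hypothesis and whose children are the claims that argue for it; leaving the likelihood unset on tentative boxes reflects the point that numbering every box may mislead:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A box in the argument map: a claim plus the claims that support it."""
    claim: str
    supports: List["Node"] = field(default_factory=list)
    # Deliberately optional: putting numbers on every box can mislead,
    # so tentative judgments are left without a likelihood.
    likelihood: Optional[float] = None

def print_map(node: Node, depth: int = 0) -> None:
    """Render the tree with the main hypothesis at the top."""
    tag = "" if node.likelihood is None else f" (p~{node.likelihood})"
    print("  " * depth + node.claim + tag)
    for child in node.supports:
        print_map(child, depth + 1)

# One tree per main hypothesis, with the hypotheses chosen beforehand
# (e.g. via ACH or Bayesian analysis):
root = Node("Main hypothesis H1", supports=[
    Node("Reported troop movements support H1", likelihood=0.7),
    Node("Source X corroborates the movements"),  # tentative: no number
])
print_map(root)
```

The root node argues for nothing above it, matching the requirement that the main claim sit at the top and be supported, not supporting.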
Two concepts are central in the debate regarding lethal autonomous weapon systems: autonomy and dignity. Autonomy is crucial when assessing responsibility, particularly as autonomous systems become more advanced, and there is also the question of whether such systems can be held responsible for their actions. Even if they cannot, they may affect the responsibility of the humans in the decision chain. The other concept, dignity, is used in the debate on whether autonomous systems should be allowed to make decisions about killing. The argument is that autonomous systems should not be allowed to kill, since they are unable to respect human dignity. My point is that both concepts need further discussion in anticipation of more advanced robots and autonomous weapon systems.
Robots can be funny. At the same time, it is hard to imagine that they could be programmed to develop a feel for the absurdities of existence. The question is whether a robot can have a sense of humor.
Two categories of ethical questions surrounding military autonomous systems are discussed in this article. The first category concerns ethical issues raised by the use of military autonomous systems in the air and in the water; these issues are systematized with the Laws of Armed Conflict (LOAC) as a backdrop. The second category concerns whether autonomous systems may affect the ethical interpretation of LOAC. It is argued that some terms in LOAC are vague and can be interpreted differently depending on which normative ethical theory is applied, an interpretive leeway that may grow with autonomous systems. The impact of Unmanned Aerial Vehicles (UAVs) on the laws of war is discussed and compared with that of Maritime Autonomous Systems (MAS). The conclusion is that LOAC needs revision with regard to autonomous systems, and that the greatest ethically relevant difference between UAVs and MAS concerns jus ad bellum, particularly the lowering of the threshold for starting war, but also the sense of unfairness, the violation of integrity, and the potential for secret wars.