The development and application of artificial intelligence (AI) for military purposes are increasing rapidly in many parts of the world. Military powers are pursuing programs aimed at securing the advantages that AI can generate, while at the same time ethical questions arise concerning autonomous military systems. This study aims to clarify how future Swedish officers with different backgrounds within the profession relate to the ethical issues that accompany the use of autonomous weapon systems. Respondents are presented with two fictitious scenarios, based on the principles of distinction and proportionality, describing ethically problematic attacks that affect civilians. In each scenario, respondents are asked to take a stance on attacks carried out with different degrees of autonomy. The results show that future officers consider an attack less ethically defensible the greater the degree of autonomy in the weapon system used.