Two concepts are central to the debate on lethal autonomous weapon systems: autonomy and dignity. Autonomy matters for questions of responsibility: as autonomous systems become more advanced, the question arises whether they can be held responsible for their actions. Even if they cannot, they may still affect the responsibility of the humans in the decision chain. Dignity, in turn, figures in the debate over whether autonomous systems should be allowed to make decisions about killing; the argument is that autonomous systems should not be permitted to kill because they are unable to respect human dignity. My point is that both concepts need further discussion in anticipation of more advanced robots and autonomous weapon systems.