How can we make sure future AI systems behave ethically? What would happen if AI systems had to make independent decisions, including ones that could mean life or death for humans?