As autonomous systems become responsible for more complex decisions, it is crucial to consider how these systems will respond when they must make potentially controversial decisions without user input. While previous literature has suggested that users prefer machinelike systems that act to promote the greater good, little research has examined how an agent's humanlikeness influences how its moral decisions are perceived. We ran two online studies in which participants and an automated agent each made a decision in an adapted trolley problem. Our results conflicted with previous literature: they did not support the idea that humanlike agents are trusted in moral dilemmas in a manner analogous to humans. However, our study did support the importance to trust of a shared moral view between users and systems. Further investigation is needed to clarify how humanlikeness and moral view interact to shape impressions of trust in a system.