Before long, Artificial Intelligence (AI) will be deeply embedded in everyday life. Our main proposition is that AI-robots should have multiple interaction styles depending on context: robots should behave differently when interacting with humans than when interacting with one another. Depending on the goals they must accomplish, the risks involved, and the social environment in which they operate, AI-robots need to adapt their interaction styles and behaviors in order to be effective and efficient. They need to understand different contexts so that they can adapt their interactions accordingly and thus perform optimally. Therefore, we suggest studying the normative context in which robots operate and the social processes between humans and agentic robotic systems (as well as those between robots themselves). If AI-robots are designed to act as super-efficient agents with infinite information, their interactions will lead to outcomes unfavorable to humans. In contrast, if AI-robots are designed to always act “human”, there will be no significant benefit from employing them in all-robot environments. One could also argue that it is morally and normatively justified for robots to adopt a more “human” interaction style when dealing with humans. To research this topic, we suggest an interdisciplinary approach.