Developing autonomous agents that can reason about the perspectives of their (human or artificial) peers is paramount for realistically modeling a variety of real-world domains. Awareness of the state of mind of others is a key aspect in several fields, e.g., legal reasoning, business negotiation, ethical AI, and explainable AI. In particular, in the area of Multi-Agent Epistemic Planning (MEP), agents must reach their goals while taking into account the knowledge and beliefs of other agents. Although the literature offers an ample spectrum of approaches for planning in this scenario, they often come with limitations. This paper extends a previous formalization of MEP to enable representing and reasoning in the presence of inconsistent agent beliefs, trust relations, and lies. The paper explores the syntax and semantics of the extended MEP framework, along with an implementation of the framework in the solver Epistemic Forward Planner (EFP). The paper reports formal properties of the newly introduced epistemic state update, which have also been empirically validated via an actual implementation of the solver.