This report aims to further explore how AI technologies, as they currently stand, affect peace and conflict, and what methods might mitigate their adverse effects, both through the development of better tools and through the inclusion of peace and conflict considerations in AI governance frameworks. The report proposes the following recommendations:

1. More funding and support should be provided to civil society organizations' efforts to expand media literacy and fact-checking initiatives, using AI tools to enhance their capabilities.
2. Governments need to work with civil society to develop and implement comprehensive, transparent legal frameworks for combating disinformation. These legislative measures need to support digital and media literacy campaigns and fact-checking organizations.
3. Social media companies need to expand investment and research into understanding local information environments, so that they can better identify and respond to instances of disinformation in all contexts in which they operate, and to enhance transparency.
4. Peacebuilding organizations need to carefully consider local media ecosystems and information environments when conducting conflict analyses, and factor these dynamics into their project frameworks.