AIRET 2025

The First International Workshop on
Agentic Intelligence: Risks, Ethics, and Trust

Co-located with IEEE CogMI 2025

November 2025 in Pittsburgh, PA

Overview

Artificial intelligence (AI) has entered a new phase marked by the rise of agentic systems—autonomous entities capable of planning, adapting, and acting over time toward goals. Unlike conventional AI models that operate within fixed boundaries or reactive paradigms, agentic AI embodies dynamic, proactive behavior that can reshape digital and physical environments. This shift demands a fundamental rethinking of risk and threat models, ethical frameworks, socio-technical solutions, and governance strategies.

The Workshop on Agentic Intelligence: Risks, Ethics, and Trust (AIRET) is motivated by the urgent need to address the complexities introduced by agentic AI. These systems challenge existing assumptions about controllability, oversight, and accountability. Risks such as instrumental convergence, emergent behavior, and self-preservation, as well as intended and unintended harms to individuals and society, are no longer speculative but are becoming practical concerns. Similarly, ethical questions about manipulation, responsibility, and human autonomy gain new urgency when intelligent agents act on our behalf—or against our interests—without direct supervision.

This workshop invites a cross-disciplinary audience, including AI researchers, ethicists, legal scholars, cybersecurity and privacy experts, and policy makers. Our goal is to foster a shared vocabulary and critical perspective on how agentic AI redefines the landscape of AI safety and ethics. We aim to bridge socio-technical insights with philosophical and regulatory foresight, charting a course toward systems that are not only powerful but also principled, accountable, and aligned with human and societal values.

Themes & Topics of Interest

We encourage submissions addressing the risks, ethical implications, technical architectures, and governance of agentic AI systems. Topics of interest include, but are not limited to:

Risks and Harms

  • Instrumental convergence and power-seeking behavior
  • Goal misalignment and reward hacking in long-horizon agents
  • Irreversibility and loss of oversight in autonomous deployment
  • Emergent behaviors in multi-agent ecosystems
  • Unintended generalization of capabilities
  • Threat models, risk frameworks, and understanding of intended and unintended harms

Ethics

  • Moral responsibility and accountability in autonomous decisions
  • Value alignment across dynamic and uncertain contexts
  • Human manipulation, persuasion, or deception by agents
  • The impact of over-delegation on human autonomy and critical thinking
  • Long-term ethical risks beyond bias and fairness

Policy and Governance

  • Legal liability and accountability frameworks for agentic AI
  • Thresholds for safe deployment and escalation control
  • Dual-use concerns and malicious applications (e.g., cyberwarfare, finance)
  • Auditability and explainability of autonomous behavior over time
  • International governance and standards for agentic AI oversight

Cybersecurity, Privacy, and Trust

  • Resilience and containment of autonomous and adaptive systems
  • Strategic manipulation of information and infrastructure
  • Cybersecurity and privacy risks from multi-agent interactions (e.g., collusion, conflict escalation)
  • Threat modeling and defenses
  • New paradigms for agent containment and monitoring
  • Frameworks and methods for trust and trustworthiness in agentic AI
  • Accountability and transparency frameworks

Technology and Architectures

  • Planning, memory, and goal management in open environments
  • Tool use, API chaining, and real-world actuation
  • Episodic and semantic memory for long-term autonomy
  • Self-modification and dynamic learning strategies
  • Multi-agent coordination, competition, and negotiation
  • Benchmarking and simulation of persistent agent behavior
  • Scalable human-in-the-loop oversight mechanisms
  • Privacy-enhancing technologies and security solutions for end-to-end protection

Submission Instructions

We welcome three types of contributions:

  • Regular Technical Papers (up to 10 pages)
  • Extended Abstracts (2–4 pages)
  • Position Papers (1–10 pages)

All submissions must follow the same submission guidelines and instructions as the main conference (IEEE CogMI), using the IEEE two-column conference format. Templates are available from the IEEE website.

Submissions must be made through EasyChair.
Select the track: "Workshop on Agentic Intelligence: Risks, Ethics, and Trust (AIRET)"

Each submission will be reviewed by the workshop's Program Committee. Accepted papers will be included in the CogMI 2025 Workshop Proceedings, published by IEEE and indexed in IEEE Xplore. At least one author of each accepted paper must register for the conference and attend the workshop to present the work.

Important Dates

  • Submission deadline: Sep 7, 2025
  • Acceptance notification: Sep 25, 2025
  • Final version due: Oct 2, 2025