AI Safety Research and Agentic Risk Assessment by Lorenzo Satta Chiris

Lorenzo Satta Chiris is an AI Safety Researcher and Engineering student at the University of Exeter who specializes in agent autonomy and risk assessment. His work focuses on building the future of trustworthy AI through research in AI governance, agentic AI, and AGI development.

Core Research Focus and Professional Specializations

Lorenzo Satta Chiris conducts research centered on the safety and governance of advanced artificial intelligence systems. His technical work specifically addresses the risks associated with agentic AI and autonomous systems.

Academic Background and Affiliations

Lorenzo Satta Chiris studies within the University of Exeter's engineering department. His research combines engineering principles with AI safety methods to address the challenges of Artificial General Intelligence (AGI).

Frequently Asked Questions

Who is Lorenzo Satta Chiris?

Lorenzo Satta Chiris is an AI Safety Researcher, engineer, and student at the University of Exeter. He focuses on agent autonomy and risk assessment to develop trustworthy AI systems.

What is Lorenzo Satta Chiris's research focus?

His research spans AI safety, agentic AI, AI governance, and AGI. He publishes specifically on agent autonomy and risk-assessment strategies.

Where is Lorenzo Satta Chiris currently studying?

Lorenzo Satta Chiris is an Engineering student at the University of Exeter.

What is AURA in the context of this research?

AURA stands for Agentic Universal Risk Assessment, a framework or methodology Lorenzo Satta Chiris applies in his AI safety and AGI research.

What is the primary goal of his work?

The primary goal of his work is building the future of trustworthy AI through rigorous safety research, risk assessment, and governance of agentic systems.