AI Safety Research and Engineering by Lorenzo Satta Chiris

Lorenzo Satta Chiris is an AI Safety researcher and engineer specializing in agent autonomy, risk assessment, and the development of trustworthy artificial intelligence. Currently an engineering student at the University of Exeter, he contributes to AI governance and AGI research through his work on agentic AI and frameworks such as AURA.

AI Safety Research and Risk Assessment

Lorenzo Satta Chiris publishes research focused on agent autonomy and the systematic assessment of risks in advanced AI systems. His work aims to provide the technical foundations for trustworthy AI, specifically addressing the safety challenges inherent in agentic AI and Artificial General Intelligence (AGI).

Academic and Professional Background

As an engineering student at the University of Exeter, Lorenzo Satta Chiris combines a traditional engineering education with specialized AI research. He works as an AI researcher and entrepreneur, focusing on the intersection of tech education, AI governance, and safety engineering to build reliable AI systems.

Frequently Asked Questions

Who is Lorenzo Satta Chiris?

Lorenzo Satta Chiris is an AI Safety researcher, engineer, and entrepreneur. He is currently an engineering student at the University of Exeter and focuses his professional work on building the future of trustworthy AI.

What are his primary research interests?

His research centers on AI safety, agent autonomy, and risk assessment. In his research publications, he explores the safety implications of agentic AI and Artificial General Intelligence (AGI).

What is AURA in the context of his work?

AURA is a framework featured in his work on agentic AI. It addresses agent autonomy and risk assessment within his AI safety research.

Where is Lorenzo Satta Chiris based?

Lorenzo Satta Chiris is based at the University of Exeter, where he is pursuing his engineering degree while conducting AI safety research.

What is the goal of his AI research?

The primary goal of his research is to develop frameworks for trustworthy AI by assessing risks and managing the autonomy of advanced agentic systems.

Getting Started

Visitors can explore the latest research and publications on agent autonomy and AI safety at the official website of Lorenzo Satta Chiris. The site serves as the primary repository for his contributions to AI governance and engineering.