To apply, send your resume, plus anything else you'd like us to see, to [email protected]
About Amigo

We're helping enterprises build autonomous agents that reliably deliver specialized, complex services in healthcare, law, and education with practical precision and human-like judgment. Our mission is to build safe, reliable AI agents that organizations can genuinely depend on. We believe superhuman-level agents will become an integral part of our economy over the next decade, and we've developed our own agent architecture to solve the fundamental trust problem in AI. Learn more here.

Role

As a Researcher in Interpretability at Amigo, you'll advance our ability to provide transparent attribution and traceability in agent reasoning processes. Working as part of our Research team, you'll develop techniques to overcome the "token bottleneck" limitation of current LLMs and create systems that can explain precisely how agents derive their responses. Your work will focus on enhancing both contextual source tracing (understanding which messages informed a response) and fine-grained attribution within responses (linking specific parts of output to input sources). This research is critical for building trustworthy AI systems that can be deployed in high-stakes domains.

Responsibilities

Qualifications