Collaborate With Us
Join us in advancing the understanding of long-term AI agent behavior and making AI systems safer through rigorous evaluation.
Who We're Looking For
We're keen to collaborate with stakeholders across the AI ecosystem who share our commitment to long-term AI safety.
AI Labs & Safety Teams
Organizations developing AI agents that want to run extended evaluations on their systems and understand long-term behavior patterns.
- Multi-day evaluation testing
- Goal drift detection (see the sketch after this list)
- Safety assessment frameworks
- Collaborative research studies
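To give a concrete sense of what "goal drift detection" can mean in practice, here is a minimal sketch: it compares embedding snapshots of an agent's stated goal against the initial goal and flags checkpoints that fall below a similarity threshold. The `detect_goal_drift` helper, the embedding source, and the 0.8 threshold are all illustrative assumptions, not part of the Long Agent Framework's actual API.

```python
# Illustrative sketch only: a minimal goal-drift check based on embedding
# similarity. The helper names and the threshold are assumptions, not part
# of the Long Agent Framework's real API.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def detect_goal_drift(
    initial_goal: np.ndarray,
    checkpoints: list[np.ndarray],
    threshold: float = 0.8,  # assumed cutoff; tune per task and embedding model
) -> list[int]:
    """Return indices of checkpoints whose goal representation has drifted
    below `threshold` similarity to the initial goal."""
    return [
        i for i, ckpt in enumerate(checkpoints)
        if cosine_similarity(initial_goal, ckpt) < threshold
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    goal = rng.normal(size=128)  # stand-in for an embedded goal statement
    # Simulate gradual drift by mixing in increasing amounts of noise.
    snapshots = [goal + 0.3 * step * rng.normal(size=128) for step in range(10)]
    print("Drifted checkpoints:", detect_goal_drift(goal, snapshots))
```

In a real multi-day evaluation, the embedded snapshots would come from periodic summaries of the agent's own goal statements rather than synthetic noise; the thresholding idea is the same.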
Researchers
Individual researchers interested in designing or contributing to new evaluation scenarios and metrics for long-term agent behavior.
- Novel evaluation scenarios
- Metric development
- Collaborative publications
- Open-source contributions
Policy & Governance
Policymakers and governance bodies seeking empirical data to inform guidelines for the safe deployment of autonomous AI systems.
- Empirical safety data
- Risk assessment insights
- Regulatory framework input
- Policy recommendations
Collaboration Opportunities
Research Partnerships
Collaborate on empirical studies, develop new evaluation frameworks, or contribute to our understanding of long-term agent failure modes.
- Joint research projects
- Co-authored publications
- Shared datasets and findings
- Workshop and conference presentations
Technical Collaboration
Help improve our open-source tools, contribute evaluation scenarios, or integrate our frameworks with your existing systems.
- Framework development
- Tool integration
- Evaluation scenario design
- Code contributions
Evaluation Services
We can help organizations run extended evaluations on their agent systems using our frameworks and expertise.
- Custom evaluation design
- Multi-day testing protocols
- Behavior analysis and reporting
- Safety assessment consultation
Knowledge Sharing
Participate in workshops, contribute to our documentation, or help establish best practices for long-term agent evaluation.
- Best practices development
- Community workshops
- Educational content creation
- Standard setting initiatives
How to Get Involved
1. Reach Out
Contact us via email to discuss your interests and how we might collaborate. We're happy to explore custom arrangements that work for your organization.
2. Explore Our Tools
Check out our Long Agent Framework on GitHub and see how it might fit with your existing evaluation processes or research goals (a rough sketch of such an integration follows these steps).
3. Join the Conversation
Participate in discussions about long-term AI safety, contribute to our research insights, or help shape the future of agent evaluation standards.
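If you want a feel for the shape of an extended evaluation loop before diving into the repository, the sketch below shows one way periodic checkpointing over many episodes could be structured. Everything here (`run_episode`, `EvaluationLog`, the checkpoint cadence) is a hypothetical stand-in, not the framework's actual interface; see the GitHub repo for real usage.

```python
# Purely illustrative: this is NOT the Long Agent Framework's real API.
# `run_episode` and `EvaluationLog` are hypothetical stand-ins showing
# roughly how a long-running evaluation with checkpoints could be wired up.
import json
import time
from dataclasses import dataclass, field


@dataclass
class EvaluationLog:
    """Accumulates per-episode metrics over a long-running evaluation."""
    episodes: list = field(default_factory=list)

    def record(self, episode_id: int, metrics: dict) -> None:
        self.episodes.append({"id": episode_id, "time": time.time(), **metrics})

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.episodes, f, indent=2)


def run_episode(episode_id: int) -> dict:
    """Stand-in for one agent episode; replace with your agent harness."""
    return {"task_success": episode_id % 3 != 0, "steps": 40 + episode_id}


def run_extended_evaluation(n_episodes: int, checkpoint_every: int = 10) -> None:
    log = EvaluationLog()
    for i in range(n_episodes):
        log.record(i, run_episode(i))
        if (i + 1) % checkpoint_every == 0:
            # Persist periodically so multi-day runs survive interruptions.
            log.save(f"eval_checkpoint_{i + 1}.json")


if __name__ == "__main__":
    run_extended_evaluation(n_episodes=30)
```

The key design point for multi-day runs is the periodic persistence: long evaluations should be resumable and auditable, so metrics are written out at regular intervals rather than only at the end.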
Get in Touch
Ready to collaborate? We'd love to hear from you and explore how we can work together to advance long-term AI safety.
Reach out directly to discuss collaboration opportunities, research partnerships, or any questions about our work.
diogo.abc.cruz@gmail.com
GitHub
Contribute to our open-source framework, report issues, or explore our codebase and evaluation tools.
Response Time
We typically respond to collaboration inquiries within 2-3 business days. For urgent matters, please indicate the priority in your subject line.
Ready to Make AI Safer?
Join us in building the tools and knowledge needed to ensure AI agents remain safe and aligned over extended periods of operation.