About the job
About Telnyx
At Telnyx, we are not just envisioning the future of global connectivity; we are actively building it. Our private, global, multi-cloud IP network enables seamless interconnection between people, devices, and applications through innovative solutions, including hyperlocal edge technology delivered via intuitive APIs.
Our aim is to revolutionize outdated systems, automate manual processes, and address real-world challenges through cutting-edge connectivity solutions. We take pride in our financial stability and profitability, which allows us to invest in pioneering technologies while nurturing an environment for continuous learning and career growth.
Join us in our vision of a world where borderless connectivity drives limitless innovation. We are eager to welcome passionate individuals who are excited about contributing to an industry-leading company while enhancing their own skills and careers.
The Opportunity
Our AI teams are at the forefront of developing reliable, low-latency microservices using advanced, cloud-native technologies. This position sits within our larger AI organization, focusing on the platforms, services, and tools that underpin AI-driven products, spanning inference, embeddings, APIs, data flows, and observability.
In this fast-paced role, you will prioritize reliability, performance, and usability alongside innovation. Some squads concentrate on inference pipelines and LLM tooling, while others focus on foundational platforms, troubleshooting, reliability, data systems, or observability, each critical to deploying AI solutions at scale.