About the job
Your Opportunity
Join our innovative team at LM Studio as a Software Engineer specializing in AI/ML Systems. We seek a dynamic individual who thrives at the crossroads of machine learning and systems engineering, enjoys tackling complex challenges, and is dedicated to crafting exceptional products.
In this role, you will be responsible for developing and enhancing on-device inference engines and integrations for local large language models (LLMs) and advanced AI technologies within LM Studio. You will design and maintain the runtime that facilitates on-device model inference, provide immediate support for new models on launch day, and optimize performance across diverse hardware setups.
Collaboration is key here. You will engage closely with model authors and upstream open-source communities (such as llama.cpp, MLX, and others) to ensure that LM Studio users have access to cutting-edge AI experiences.
If you're passionate about bridging the gap between models and systems and care about delivering outstanding local AI-powered experiences for both users and developers, this position could be your next great adventure!