About GenPeach AI

GenPeach AI is a pioneering research lab dedicated to developing vertical multimodal foundation models aimed at creating hyper-realistic human representations in images and videos. Our mission is to empower human creativity through advanced AI tools rather than replace it.

We build our models from the ground up, utilizing proprietary datasets at an expansive scale, innovative architectures and training methodologies, extensive GPU resources, and seamless product integration to expedite the delivery of our research to end users.

Our team consists of approximately 10 highly skilled professionals, guided by advisors from Google DeepMind and supported by prominent AI-focused investors and advisors from OpenAI, Meta AI, Microsoft AI, Project Prometheus, and Fal. Collectively, our team and advisors have contributed significantly to groundbreaking models such as Meta’s Imagine/MovieGen, OpenAI’s Sora, Google’s Veo, and Gemini.

About the Team

You will become a key member of the research team, focused on advancing image/video generation and multimodal understanding. Collaborating closely with fellow Research Engineers, Scientists, and Founders, you will transform innovative research into scalable training processes, robust evaluations, and production-ready systems.

About the Role

We are seeking an AI Research Engineer to contribute to the end-to-end development and scaling of GenPeach’s foundational models.
Your responsibilities will include implementing new model concepts and training methodologies, managing critical aspects of the training stack that influence quality and efficiency, and navigating production constraints. This role is hands-on with a high degree of ownership: you will write research-quality code that is vital for production.

In this role, you will:

- Develop and refine image/video generative model concepts (architecture, loss functions, conditioning, sampling, distillation, post-training adjustments)
- Oversee training performance comprehensively (distributed training, throughput, memory management, stability, debugging scaling issues)
- Establish and enhance the experimental workflow (evaluations, ablation studies, reproducibility tooling, reporting, decision-making processes)
- Create and optimize VLMs for image/video captioning (data preparation, training strategies, model variations, evaluation)
- Conduct high-frequency research: review literature as needed, implement concepts, and validate findings empirically
Feb 4, 2026