About the job

About XYZ Reality

XYZ Reality is an innovative, award-winning Series-A startup approaching our next funding round. Our goal is to enhance our platform's features, performance, and scalability while transforming the construction industry.

As a multi-disciplinary organization, we operate across fields such as cloud development, data governance, data processing pipelines, electronics, embedded software/hardware, mechanical design/manufacturing, AI & computer vision, and data science, all contributing to the efficacy of our Building Information Modeling (BIM) Platform.

To propel our mission, we are searching for a Senior Data Engineer with extensive experience in data modeling, database management, and data pipeline development. The ideal candidate will play a pivotal role in maintaining our existing tech stack and developing new features with an emphasis on performance and scalability. Close collaboration with our API/backend development and data pipeline teams will be crucial to creating robust, efficient solutions.

Key Responsibilities
· Architect, develop, and sustain Dagster-based data pipelines using Python.
· Create and implement efficient data models tailored to our product features and application needs.
· Write SQL queries and procedures for data pipeline and API services.
· Troubleshoot, maintain, and enhance our current codebase.
· Develop and run unit and integration tests to guarantee software reliability.
· Execute performance profiling and stress testing to optimize system responsiveness.
· Keep well-organized documentation for data models and ETL/ELT pipelines.
· Collaborate with cross-functional teams, including client applications and cloud teams.
· Remain flexible in learning new technologies and contributing across technical areas as required.

Essential Skills & Experience
· Bachelor's degree in Computer Science or a related field, or equivalent experience in database and data pipeline development.
· Proven experience with relational databases and SQL, particularly PostgreSQL.
· Strong programming skills in Python and experience with object-oriented programming (OOP).
· Expertise in data modeling for both transactional and analytical systems.
· Proficiency in debugging, troubleshooting, and performance optimization.
· Familiarity with Git, including active involvement in code reviews.
· Strong communication and collaboration abilities.