About the job
Role: Data Engineer with Advanced SQL Skills
Location: Remote
Duration: Long-term
Position Overview
As a Data Engineer at Jupitor Consulting, you will play a pivotal role in architecting and implementing data pipelines that drive our analytical capabilities. You will work extensively with Snowflake, dbt, and Apache Airflow to ensure data integrity and performance for analytical dashboards.
Your Responsibilities:
• Design, document, and implement robust data pipelines for efficient data transformation and modeling.
• Ensure data accuracy and completeness for reporting and analytics.
• Monitor and troubleshoot critical issues in data engineering processes.
• Collaborate with stakeholders to evaluate technical solutions and develop MVPs or PoCs.
• Maintain awareness of data security issues and industry trends.
• Provide feedback and support for team members' growth and development.
• Implement data performance and security policies in line with governance and regulatory standards.
What You Bring:
• Expertise in data warehousing, data modeling, and engineering pipelines.
• Proficiency in ETL/ELT methods using tools or scripting.
• Strong analytical skills for working with unstructured and semi-structured datasets.
• Experience in collaborating with product owners for effective requirement gathering.
• Advanced SQL skills with hands-on experience in SQL database design, alongside Python proficiency.
• Familiarity with Git-based version control workflows (GitHub & Bitbucket).
• Knowledge of dbt, Apache Airflow, and Snowflake is essential.
• Continuous learning mindset to stay updated with data engineering advancements.
• Experience with AWS cloud is a plus.
Qualifications:
• Bachelor's degree in Information Science, Data Management, Computer Science, or a related field is preferred.
• Minimum of 4 years of IT experience focused on data warehousing and database projects.
