Experience Level
Manager
Qualifications
- Strong proficiency in data engineering technologies including SQL, Python, and ETL processes.
- Experience with big data tools such as Hadoop, Spark, and Kafka.
- Proven leadership skills with experience managing teams and projects.
- Ability to work collaboratively in a fast-paced environment.
About the job
Lead Data Engineer
As a Lead Data Engineer at Brillio, you will play a pivotal role in designing, implementing, and managing robust data pipelines and architectures that drive our data-driven decision-making processes. You will lead a team of talented engineers and collaborate with cross-functional teams to deliver innovative data solutions that empower our clients and enhance their operational efficiencies.
About Brillio
Brillio is a global technology consulting and services firm that specializes in digital transformation and data analytics solutions. We empower organizations by leveraging cutting-edge technologies to enhance their operational capabilities and drive growth.
Join Our Team as a Senior Lead Data Engineer
Are you passionate about data and technology? At Brillio, we are seeking a highly skilled Senior Lead Data Engineer to guide our data engineering initiatives. You will play a pivotal role in driving data strategy, architecture, and implementation across our projects, ensuring that we leverage data to deliver except…
Join our dynamic team at Squircle IT Consulting Services Pvt Ltd as a Data Scientist and Data Engineer. In this exciting role, you will leverage your expertise in data science and engineering to drive innovative solutions and enhance data-driven decision-making across various projects.
Role overview
Forbes Advisor seeks a Lead Data Engineer based in Chennai. The position centers on designing and building data infrastructure to support business strategy and deliver insights. Maintaining data integrity and ensuring that information is available to those who need it are key priorities.
Main responsibilities
- Direct the architecture and implementation of data systems
- Convert raw data into insights that inform business decisions
- Collaborate with teams across the company to understand and meet their data needs
- Create solutions that enable better decision-making through data
Collaboration
This role works closely with cross-functional teams. Gathering input, clarifying requirements, and delivering tailored data solutions are part of daily work to address real business challenges.
Lead Data Engineer
Join our innovative team at Brillio as a Lead Data Engineer, where you will spearhead data engineering initiatives and lead a talented group of engineers. Your expertise will guide our projects, ensuring the development of robust data solutions that drive business decisions. Collaborate with AI and data engineering experts, and leverage cutting-edge technologies to transform data into actionable insights.
Join Infotel India as a seasoned ETL, Python, and Visualization Lead in our vibrant team. In this pivotal role, you'll spearhead the design and execution of robust ETL processes, craft innovative Python solutions, and develop insightful data visualizations to drive strategic business decisions. As a leader, you will work in synergy with stakeholders, data engineers, and analysts to establish efficient data workflows that yield impactful insights. This is a remarkable opportunity for a technical leader to significantly influence our data initiatives and enhance client outcomes.
Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines.
- Manage and optimize large datasets to guarantee data quality and integrity.
- Create data processing solutions utilizing Python.
- Develop interactive dashboards and reports leveraging data visualization tools such as Power BI, Tableau, or equivalents.
- Mentor and guide junior team members.
- Collaborate with cross-functional teams to foster data-driven decision-making.
- Enhance data workflows and boost system performance.
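For readers unfamiliar with the ETL work such roles describe, here is a minimal, purely illustrative sketch of the extract-transform-load pattern in plain Python. The record fields, cleaning rules, and in-memory "warehouse" are invented for the example; a real pipeline would read from and write to actual data stores.

```python
# Minimal ETL sketch: extract raw records, clean them, load into a "warehouse".
# All field names and data-quality rules here are hypothetical.

def extract():
    # Stand-in for reading from a source system (database, API, files).
    return [
        {"id": "1", "revenue": " 1200 ", "region": "south"},
        {"id": "2", "revenue": "950", "region": "NORTH"},
        {"id": "3", "revenue": "", "region": "east"},  # missing value
    ]

def transform(rows):
    # Clean and normalize: cast types, trim whitespace, drop incomplete rows.
    cleaned = []
    for row in rows:
        revenue = row["revenue"].strip()
        if not revenue:
            continue  # data-quality rule: skip rows with missing revenue
        cleaned.append({
            "id": int(row["id"]),
            "revenue": float(revenue),
            "region": row["region"].strip().lower(),
        })
    return cleaned

def load(rows, warehouse):
    # Stand-in for writing to a target table; here a dict keyed by id.
    for row in rows:
        warehouse[row["id"]] = row
    return warehouse

warehouse = load(transform(extract()), {})
print(len(warehouse))          # 2 rows survive cleaning
print(warehouse[2]["region"])  # "north"
```

The same three-stage shape scales up when the functions are replaced by connectors to real sources and targets and the transforms run on a distributed engine.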
Join our dynamic team at Squircle IT Consulting Services Pvt Ltd as a Data Engineer! We are looking for skilled professionals who are passionate about data management and analytics. In this role, you will be responsible for designing and implementing data pipelines, ensuring data accuracy, and optimizing data processes.
Join our innovative team at Moving Walls India Pvt Ltd as a Data Engineer, where you will play a crucial role in driving data solutions and analytics to support our business objectives. We are looking for a passionate individual who thrives in a fast-paced environment and is eager to tackle complex data challenges.
About Gen Digital
Gen Digital is a global company focused on digital freedom and security. Our brands include Norton, Avast, LifeLock, and MoneyLion, serving nearly 500 million users in over 150 countries. We provide cybersecurity, online privacy, identity protection, and financial wellness products. Our mission centers on helping people manage and secure their digital and financial lives. We value diverse experiences and ideas, and we see AI as a partner for innovation. Gen Digital encourages autonomy, supports career growth, and offers flexible work options, generous time off, competitive pay, and wellness programs. The company culture emphasizes customer satisfaction, open discussion, experimentation, and continuous learning. Team members collaborate in an environment that respects and values differences as strengths.
Senior Staff Data Engineer – Role Overview
The Senior Staff Data Engineer will serve as a senior technical leader within the organization. This role focuses on designing and implementing large-scale data solutions that support Gen Digital’s cybersecurity platform strategy. The position combines deep technical skill with organizational influence. Key responsibilities include:
- Designing complex data architectures for enterprise-scale needs
- Implementing solutions that support a multi-petabyte data infrastructure
- Mentoring and guiding engineering teams
- Shaping the technical vision for data systems serving millions of users
Location: Chennai, India
Join our dynamic team at Squircle IT Consulting Services Pvt Ltd as a Big Data Hadoop Engineer. In this pivotal role, you will leverage your expertise in big data technologies to design, implement, and maintain robust data processing systems that drive data-driven decision-making. You will work closely with cross-functional teams to develop scalable solutions that enhance our data capabilities.
About Us: BigID is a pioneering tech startup specializing in cutting-edge solutions for data security, compliance, privacy, and AI data management. We are at the forefront of the data landscape, empowering our customers to mitigate risks, foster business innovation, achieve compliance, build trust, make informed decisions, and maximize the value of their data. We are committed to building a global team united by a passion for innovation and advanced technology. BigID has received numerous accolades, including:
- Named a Hot Company in Artificial Intelligence and Machine Learning at the Global InfoSec Awards
- Listed in Citizens JMP Cyber 66 as one of the Hottest Privately Held Cybersecurity Companies
- Recognized on the CRN 100 list as one of the 20 Coolest Identity Access Management and Data Protection Companies for three consecutive years
- Ranked among the DUNS 100 Best Tech Companies to Work For
- Featured as a Top 3 Big Data and AI Vendor to Watch in the 2023 BigDATAwire Readers' and Editors' Choice Awards
- Included in the 2024 Inc. 5000 list for the fourth consecutive year
- Shortlisted for the 2024 AI Awards in the Best Use of AI in Cybersecurity category
At BigID, our team is the cornerstone of our success. Join our dynamic, people-centric culture where you’ll have the opportunity to collaborate with some of the most talented professionals in the industry who prioritize innovation, diversity, integrity, and teamwork.
Who We Are Looking For: We are on the hunt for a Senior Data Platform Engineer to enhance our Data Platform team. The ideal candidate will possess substantial experience in data engineering, particularly with Kafka and Elasticsearch, to design and maintain our robust data platforms.
You will collaborate closely with cross-functional teams to ensure the scalability and reliability of our data solutions.
Role Overview: As a Senior Data Platform Engineer, you will be instrumental in the design, development, maintenance, troubleshooting, and implementation of our big data architecture. Your proficiency in Elastic, Kafka, and Node.js will play a vital role in ensuring the scalability and performance of our data systems.
Key Responsibilities:
- Develop data processing pipelines utilizing Kafka for real-time data streaming.
- Enhance and manage search functionalities leveraging Elastic technologies.
- Work alongside product managers, data analysts, and stakeholders to gather requirements and translate them into technical specifications.
- Lead code reviews and promote best practices in coding and data handling.
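The consume-transform-index pattern behind Kafka-plus-Elasticsearch roles like this one can be sketched in a few lines. This is a hedged illustration only: a plain generator stands in for a Kafka consumer, a list of batches stands in for an Elasticsearch bulk request, and the event shape and field names are invented.

```python
# Sketch of a real-time indexing pipeline: consume events, map them to search
# documents, and group them into batches as a bulk indexing client would.
# No real Kafka or Elasticsearch is involved; everything here is a stand-in.

def consume_events():
    # In production this would poll a Kafka topic; here we yield fixed events.
    yield {"user": "alice", "action": "login", "ts": 1}
    yield {"user": "bob", "action": "upload", "ts": 2}
    yield {"user": "alice", "action": "logout", "ts": 3}

def to_document(event):
    # Map a raw event onto the (hypothetical) schema the search index expects.
    return {"_id": f'{event["user"]}-{event["ts"]}', "body": event}

def run_pipeline(batch_size=2):
    # Accumulate documents into fixed-size batches, flushing the remainder.
    batches, current = [], []
    for event in consume_events():
        current.append(to_document(event))
        if len(current) == batch_size:
            batches.append(current)
            current = []
    if current:
        batches.append(current)  # flush the final partial batch
    return batches

batches = run_pipeline()
print(len(batches))          # 2 batches (sizes 2 and 1)
print(batches[0][0]["_id"])  # "alice-1"
```

In a real deployment the batching step matters because bulk indexing amortizes network and indexing overhead across many documents.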
Join the dynamic team at Minderacraft as a Team Lead Data Engineer. We are looking for a motivated and talented individual who possesses extensive experience in AWS cloud services, Databricks, Apache Spark, Python, and SQL. Your role will be pivotal in designing, developing, and optimizing data pipelines and analytical solutions that drive key business initiatives.
Primary Responsibilities:
- Architect and maintain robust data pipelines and transformation workflows utilizing Databricks and Spark.
- Develop and enhance ETL/ELT processes to efficiently manage large datasets.
- Leverage AWS services such as S3, Lambda, IAM, ECS, and CloudWatch to meet data architecture and operational requirements.
- Collaborate with data analysts, scientists, and business stakeholders to gather requirements and convert them into effective technical solutions.
- Ensure the integrity, reliability, and performance of all data systems.
- Advocate for and implement best practices in coding, version control, continuous integration/deployment, and environment management.
- Monitor, troubleshoot, and ensure the high availability of data pipelines.
- Contribute to architectural design decisions, documentation, and process enhancements.
Qualifications:
- Proven experience with AWS Cloud (including S3, Lambda, IAM, EC2, or similar).
- In-depth knowledge of Databricks, Delta Lake, and Unity Catalog.
- Strong proficiency in Apache Spark (preferably PySpark).
- Excellent programming skills in Python.
- Advanced SQL skills, with experience in performance tuning and managing large datasets.
- Adept at thriving in fast-paced, agile environments.
- Strong analytical and problem-solving capabilities, with a proactive mindset to drive improvements.
- Exceptional communication and stakeholder management skills, capable of engaging with diverse teams.
- Familiarity with data governance tools and frameworks (e.g., DataHub, Soda).
- Experience with CI/CD tools like GitHub Actions.
Availability: Must accommodate meetings and calls in US Pacific Time (PT).
Personal Attributes:
- Self-motivated, accountable, and proactive.
- Strong ownership mentality with the ability to work independently.
- Adept at engaging with both technical and non-technical stakeholders.
- Passionate about building reliable, scalable, and high-quality data systems.
About the Role
Join our innovative team at Brillio as a Lead Data Engineer specializing in Microsoft Fabric. We are looking for a seasoned professional with over 7 years of experience in data engineering. In this pivotal role, you will spearhead the design, development, and deployment of data pipelines, optimizing data movement and integration within the Microsoft Fabric ecosystem.
Key Responsibilities
- Data Pipeline Development: Create, develop, and implement data pipelines using Microsoft Fabric, incorporating OneLake, Data Factory, and Apache Spark for efficient, scalable, and secure data operations.
- ETL Architecture: Design and execute ETL workflows tailored for Fabric’s integrated data platform to enhance data ingestion, transformation, and storage processes.
- Data Integration: Develop and maintain solutions that consolidate both structured and unstructured data sources into Fabric’s OneLake environment, utilizing SQL, Python, Scala, and R for advanced data handling.
- Fabric OneLake & Synapse: Utilize OneLake as the central data repository to facilitate enterprise-level analytics, seamlessly working with Synapse Data Warehousing for comprehensive big data processing and reporting.
- Cross-functional Collaboration: Collaborate with Data Scientists, Analysts, and BI Engineers to ensure that Fabric’s data infrastructure effectively supports Power BI, AI workloads, and advanced analytics.
- Performance Optimization: Oversee, troubleshoot, and enhance Fabric pipelines to ensure high availability, rapid query performance, and minimized downtime.
- Data Governance & Security: Enforce governance and compliance frameworks within Fabric, ensuring data lineage, privacy, and security across the unified platform.
- Leadership & Mentorship: Lead and mentor a talented team of engineers, overseeing Fabric workspace design, code reviews, and the implementation of new Fabric features.
- Automation & Monitoring: Automate workflows using Fabric Data Factory, Azure DevOps, and Airflow to ensure operational efficiency.
- Documentation & Standards: Thoroughly document Fabric pipeline architecture, data models, and ETL processes while contributing to engineering best practices and enterprise guidelines.
- Innovation: Stay abreast of Fabric’s evolving functionalities (such as Real-Time Analytics and AI integration) and foster a culture of innovation within the team.
Join our dynamic team as a Technology Engineer specializing in DevOps, containerization, and big data technologies. In this pivotal role, you will drive enterprise-level digital and data platform initiatives, ensuring the design, implementation, and optimization of scalable infrastructure and data solutions.
Key Responsibilities
DevOps & CI/CD
- Design, implement, and maintain robust CI/CD pipelines leveraging tools such as Jenkins and GitOps.
- Automate build, deployment, and release processes to enhance operational efficiency and reliability.
Containerization & Orchestration
- Deploy and manage containerized applications utilizing Kubernetes and OpenShift.
- Ensure high availability, scalability, and resilience of applications.
Infrastructure as Code (IaC)
- Develop and manage infrastructure using Terraform, Ansible, or comparable tools.
- Maintain version-controlled infrastructure to promote consistency and scalability.
Big Data Engineering
- Architect and implement data solutions with Hadoop, Spark, and Kafka.
- Manage large-scale data processing and streaming pipelines.
Distributed Systems
- Design and oversee distributed data architectures.
- Optimize data storage and processing performance across various systems.
Collaboration
- Engage closely with engineering, DevOps, and data teams to deliver comprehensive solutions.
- Translate business and technical requirements into scalable implementations.
Monitoring & Performance Optimization
- Implement monitoring, logging, and alerting solutions.
- Continuously enhance system performance, reliability, and cost-efficiency.
Security & Compliance
- Ensure that infrastructure and data platforms adhere to security best practices.
- Maintain compliance with enterprise and regulatory standards.
Join our dynamic team at minderacraft as a Senior Data Engineer, where your expertise will be pivotal in shaping our data infrastructure. We are seeking a highly skilled individual with a deep understanding of big data technologies, ETL/ELT processes, and data modeling methodologies. Your primary focus will be to design, optimize, and maintain robust data pipelines, ensuring the integrity of our data and supporting our analytics initiatives.
ValGenesis builds digital validation platforms for life sciences organizations. Its products support pharmaceutical and biotech companies as they move toward digital processes, maintain regulatory compliance, and ensure manufacturing quality throughout the product lifecycle. Thirty of the top fifty global firms in this industry use ValGenesis solutions. More details about the company's work in paperless validation are available at valgenesis.com/about.
Role overview
The Senior Software Engineer - Data Engineering position is based in Chennai. This role focuses on developing and maintaining data engineering solutions to support ValGenesis platforms. Work will center on building systems that help life sciences clients manage and analyze data for compliance and quality throughout their operations.
About ValGenesis ValGenesis stands at the forefront of digital validation solutions tailored for life sciences. Our innovative platform is embraced by 30 of the top 50 pharmaceutical and biotech companies globally, enabling them to drive digital transformation, ensure total compliance, and achieve manufacturing excellence throughout their product lifecycle. Discover more about the exceptional work environment at ValGenesis, recognized as the industry standard for paperless validation in Life Sciences: valgenesis.com/about
Vultr is expanding its global cloud infrastructure with a new data center in Chennai. The Data Center Operations Engineer will play a central role in preparing this facility for launch and ensuring it meets Vultr’s operational standards from day one. This position acts as the on-site operational lead during the initial ramp-up phase, before permanent staff are in place. The engineer will coordinate readiness activities, make deployment decisions, and serve as Vultr’s technical and operational point of contact at the new site. The role offers significant autonomy and direct influence on how the Chennai data center comes online and supports Vultr’s growth in new markets.
What you will do
- Act as the primary operational owner for the Chennai data center during launch and ramp-up phases
- Evaluate site conditions to apply Vultr’s infrastructure standards and operational procedures
- Plan, coordinate, and sequence deployment activities to bring systems online safely and efficiently
- Track and report on operational performance metrics throughout the launch process
Employee benefits
- Annual medical insurance stipend
- 9 paid company holidays
- Generous leave policy, including a 1-month paid sabbatical every 5 years and an anniversary bonus each year
- Professional development reimbursement
- Fitness membership reimbursement
- Company-sponsored Wellable subscription
Join our dynamic Data Team at Mindera, where as a Senior Data Engineer, you will play a pivotal role in creating the data pipelines and tables that drive our business-critical dashboards, empower self-service analytics, and support advanced machine learning models and real-time data products. Utilizing state-of-the-art tools such as DBT, Spark, and Airflow, you will convert high-volume raw event data into user-friendly, impactful datasets.
You will collaborate cross-functionally with Machine Learning Engineers, Data Scientists, and BI Developers to facilitate data-driven decision-making throughout the organization. Our engineers benefit from a culture of autonomy, innovation, and continuous learning, supported by structured career progression paths and access to training resources.
As a Senior Data Engineer, your responsibilities will include:
- Designing and constructing scalable data pipelines, models, and feature stores to support analytics and machine learning workloads.
- Deploying and managing cloud-native data applications on AWS, leveraging CI/CD pipelines to automate builds, tests, and releases.
- Ensuring the technical quality, performance, and reliability of production-grade data pipelines through robust observability and engineering best practices.
Join our dynamic team at gsstech-group as a Data Engineer specializing in real-time streaming and event-driven architectures. We seek a talented individual who will take charge of creating scalable data pipelines, enhancing streaming systems, and achieving optimal performance in distributed environments.
Key Responsibilities
- Design and implement real-time data streaming pipelines utilizing technologies such as Apache Flink, Kafka, and Java
- Build and sustain event-driven architectures for extensive distributed systems
- Conduct JVM tuning and performance optimization for streaming applications
- Develop and deploy applications utilizing containerization tools (Docker, Kubernetes)
- Utilize the Cloudera platform for data engineering and pipeline orchestration
- Apply robust design patterns while upholding high coding standards
- Troubleshoot and resolve challenges within a distributed systems ecosystem
- Collaborate with DevOps teams to manage CI/CD pipelines (GitHub, Jenkins)
- Operate on Linux-based systems, including configuration and shell scripting
- Enhance data processing through caching mechanisms (e.g., Redis; considered a plus)
Required Skills & Experience
- Extensive hands-on experience in Real-Time Streaming (Flink / Kafka / Java)
- Comprehensive understanding of event-driven architecture
- Experience with JVM performance tuning
- Proficiency in Docker and Kubernetes
- Strong background in Linux OS and shell scripting
- Knowledge of design patterns and scalable system design
- Familiarity with CI/CD tools such as GitHub and Jenkins
- Practical troubleshooting experience in distributed systems
Nice to Have
- Familiarity with Redis or other caching systems
- Exposure to Cloudera Data Platform engineering
- Prior experience in the banking or financial domain is advantageous
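A core concept in the streaming roles above is windowed aggregation: grouping an unbounded event stream into fixed time windows and aggregating each window, which is what a Flink tumbling-window job does. As a rough, non-authoritative sketch of that idea (in plain Python rather than Flink, with invented events and an assumed 10-second window):

```python
# Illustrative tumbling-window aggregation: assign each event to the fixed
# window containing its timestamp, then count events per (window, key).
# The events and the 10-second window size are made up for the example.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    counts = defaultdict(int)
    for ts, key in events:
        # Integer division snaps the timestamp to its window's start time.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1, "click"), (4, "click"), (9, "view"), (12, "click"), (19, "view")]
result = tumbling_window_counts(events, 10)
print(result[(0, "click")])   # 2 clicks in window [0, 10)
print(result[(10, "view")])   # 1 view in window [10, 20)
```

A real streaming engine adds what this sketch omits: out-of-order event handling via watermarks, state that survives restarts, and windows that fire incrementally rather than after the whole stream is read.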