Software Engineer For Data Infrastructure Acquisition jobs in Seoul – Browse 421 openings on RoboApply Jobs

Software Engineer For Data Infrastructure Acquisition jobs in Seoul

Open roles matching “Software Engineer For Data Infrastructure Acquisition” with location signals for Seoul. 421 active listings on RoboApply Jobs.

Speechify
Full-time|On-site|Seoul, South Korea

Join Speechify as a Software Engineer focused on Data Infrastructure & Acquisition, where you will play a pivotal role in designing and implementing robust data systems that enhance our product capabilities. This is an exciting opportunity to work with cutting-edge technologies in a dynamic environment.

Apr 30, 2026
daangn
Full-time|On-site|Seoul

Welcome to the Journey of Joining the Daangn Team!

At Daangn, we strive to create an environment where individuals can grow alongside the company's growth. The Daangn recruitment team is here to help facilitate those moments of thoughtful collaboration with wonderful colleagues.

Introducing the Data Value Team

The Daangn team is dedicated to uncovering valuable information within local neighborhoods and resolving inconveniences in regional living. To create user value, it's essential to provide trustworthy information that users can easily access and incorporate into their decision-making. While Daangn already utilizes extensive data for decision-making, maximizing the value of our data requires significant changes. The vision of the Data Value Team is to make decisions for users through data every day. To realize this vision, we proactively tackle challenges in data value realization and lead the way in solving them.

About the Data Software Engineer Role

The Data Software Engineer plays a crucial role in addressing the challenges that arise during the process of data value realization through software engineering. In alignment with Daangn's rapid growth, you will design data systems that will not become bottlenecks in the future. You will ensure data reliability through automated testing and system observability. Additionally, you will solve technical problems that arise as Daangn members seek to understand users through data, thereby exponentially enhancing data-informed decision-making through data products (indicator platforms, experiment platforms, etc.). The mission of the Data Value Team's engineers is to facilitate a seamless flow of high-quality data at Daangn, enabling the creation of value without bottlenecks.

Further reading:
Discover the Journey of the Data Value Team Growing with Daangn (Google Data Webinar)
Learn about Daangn's Indicator Platform, KarrotMetrics
Seven Challenges Daangn Faced in Implementing DBT and Airflow
Tips for Easy Modeling with DBT from Daangn's Data Engineer (2024 Data Conference)
Creating a Data Map at Daangn: Building Column Level Lineage
No Need to Always Fetch Everything? Daangn's MongoDB CDC Build

Mar 16, 2026
daangn
Full-time|On-site|Seoul

Welcome to the Journey of Joining the Daangn Team!

At Daangn, we are committed to fostering an environment where individuals can grow alongside the company's success. Our recruitment team is here to assist you in achieving those joyful moments of collaboration with amazing colleagues.

Introducing the ML Infrastructure Team

The ML Infrastructure Team within our Infrastructure Department is responsible for developing a robust and scalable machine learning infrastructure that ensures effective service delivery and efficient operation of Daangn's machine learning-based services. Machine learning is extensively utilized at Daangn to enhance service quality and improve user convenience across various domains, including feed recommendations, ad recommendations, and service operations. The ML Infrastructure Team handles everything from data processing, model training, and model serving to the deployment processes necessary for machine learning service development.

Further reading:
Daangn's GenAI Platform
Building Serverless ML Training Infrastructure: Vertex AI Pipelines & TFX
ML Infrastructure with GCP | 2025 Daangn GCP Meetup

Your Responsibilities
Develop and manage model servers and serving systems for efficient deployment of various machine learning models.
Develop and maintain ML infrastructure SDKs, frameworks, and training systems used across the organization.
Create specialized monitoring systems for machine learning services to detect quality changes early.
Implement various optimization methods across the machine learning infrastructure to enhance development iteration speed and resource efficiency.

We Are Looking For
A proficient user of one or more programming languages such as Python or C++.
Strong understanding of the infrastructure required for training and serving machine learning models.
Over 7 years of experience in backend service or machine learning service development/operations.
A desire to improve machine learning infrastructure through solid software engineering skills.
Experience in developing and operating GPU clusters.

Preferred Qualifications
Familiarity with cloud services like AWS and GCP, with practical experience.
A deep understanding of the machine learning ecosystem and contributions to open source projects (e.g., TensorFlow, PyTorch, TensorFlow Extended).
A keen interest in new technological trends and a willingness to learn.

Additional Information
For full-time hires, there is a 3-month probation period.
We prioritize individuals with disabilities and veterans according to the Employment Promotion and Vocational Rehabilitation Act and the Act on the Honorable Treatment and Support of Veterans.

Application Process
1. Document Screening → 2. Video Interview → 3. Technical Interview → 4. Culture Fit Interview and Reference Check → 5. Compensation Negotiation → 6. Final Acceptance and Onboarding

Go to Daangn Joining Journey Guide

Dec 22, 2025
Full-time|On-site|Seoul, South Korea

Who We Are

Join us in setting global standards for video understanding AI! Twelve Labs is dedicated to developing cutting-edge AI models specifically for video content, enabling efficient processing of vast amounts of video data. Our technology offers advanced capabilities for search, analysis, summarization, and generating insights from video.

Our models are utilized by the largest sports leagues worldwide, quickly and accurately selecting highlights from extensive game footage and providing a hyper-personalized viewing experience. In South Korea, integrated control centers partner with us to efficiently analyze CCTV footage for rapid crisis response. Major broadcasters and studios across the globe leverage our models to create content for billions of viewers.

Headquartered in San Francisco with an office in Seoul, Twelve Labs is a deep tech startup recognized for four consecutive years as one of the Top 100 AI Startups by CB Insights. We have secured over $110 million in funding from leading venture capital firms and corporations, including NVIDIA, NEA, Index Ventures, Databricks, and Snowflake. Our AI models are uniquely available through Amazon Bedrock, and we thrive on innovation and collaboration with exceptional colleagues worldwide.

At Twelve Labs, we operate on core values that include:
Honesty and reflection about ourselves and our teams
Resilience and humility, embracing failure and feedback
A commitment to continuous learning and enhancing team capabilities

If you enjoy tackling challenging problems and growing through the journey, the opportunity awaits you here at Twelve Labs.

About the Team

Our ML Data team operates on the belief that data determines AI model performance. We build high-quality data for training and evaluating multimodal AI models end-to-end. This includes gathering, filtering, processing, and labeling various types of multimodal data such as video, images, and audio. We collaborate with diverse teams to design datasets that unlock new model capabilities and develop evaluation datasets that reflect real user experiences. We also develop and continually enhance internal tools to perform these processes efficiently. The ML Data team plays a pivotal role in the development of Twelve Labs' world-class video understanding models through a meticulously designed data pipeline.

About the Role

As a Software Engineer specializing in Data, you will design and develop pipelines for multimodal (video, image, audio) data that fundamentally enhance model performance through data quality. If you have experience designing and operating distributed systems for handling unstructured multimodal datasets, you can make a significant impact in this position. Rigorously refined and accurately labeled data forms the foundation of all model development at Twelve Labs, and you will have the opportunity to influence model quality more than any other engineering role. We are looking for someone to help us build data infrastructure that elevates our video understanding technology to the next level.

In this Role, You Will
Build data engines capable of collecting, preprocessing, refining, filtering, and labeling large multimodal (video, image, audio) datasets for LLM/VLM training.
Design and develop data systems that efficiently manage and visualize petabyte-scale video, image, and audio data.
Create libraries and services that deliver tangible impact beyond just eye-catching features.
Collaborate closely with various teams to define project priorities and goals, leading technical initiatives from planning through development and operations.

Jun 9, 2025
tosscareers
Full-time|On-site|Seoul

About the Role

tosscareers is hiring an Infrastructure Automation Engineer in Seoul. This role focuses on building and maintaining automated systems that improve the efficiency and reliability of IT services. The position involves hands-on work designing, implementing, and supporting infrastructure automation.

What You Will Do
Design automated solutions for managing infrastructure
Implement and maintain automation tools and workflows
Support the stability and scalability of IT services through automation

Who We're Looking For
Experience or strong interest in cloud technologies
Familiarity with automation tools and systems
Motivation to improve IT operations through automation

Apr 16, 2026
DevOps Software Engineer

Normal Computing

Full-time|On-site|Seoul

Join Normal Computing | Unlock Exciting Opportunities

At Normal Computing, we create cutting-edge software and hardware solutions that drive technological advancements, particularly in the semiconductor sector, AI infrastructure, and the systems that shape our future. Our collaborative team spans New York, San Francisco, Copenhagen, Seoul, and London, working together to achieve remarkable goals.

Your Impact:

We are on the lookout for a passionate DevOps Software Engineer to architect and implement high-performance cloud infrastructure and services. This position presents a unique chance to engage directly in pioneering AI projects within one of the most critical industries: semiconductors. While previous experience in AI engineering or the semiconductor field is appreciated, it is not a prerequisite.

Crafting, developing, and managing infrastructure for Normal's innovative products, automating the development lifecycle, and ensuring system reliability within our Linux environment.
Partnering with product and AI teams to establish and enforce DevOps best practices.
Deploying and overseeing Normal's product stack in client environments.
Engineering and executing Continuous Integration/Continuous Deployment (CI/CD) pipelines for software and data workflows, incorporating security measures and policy compliance.
Implementing best practices for logging, monitoring, and alerting, managing credentials, and ensuring adherence to regulatory standards (e.g., SOC 2).
Executing ongoing improvements, maintenance, and upgrades across all of Normal's infrastructure.

Jan 28, 2026
Toss Securities
Full-time|On-site|Seoul

Toss Securities is seeking an IDC Infrastructure Engineer (Network & System) to help build and maintain the backbone of our data center operations in Seoul. This position focuses on ensuring the stability, efficiency, and continuous improvement of our large-scale infrastructure, covering everything from network design to physical equipment and operational processes. The role is primarily based at our modern data center (IDC), working closely with a team of network and system engineers.

What you will do
Design hardware architectures for network and server infrastructure, including equipment selection and implementation standards.
Develop data center network architectures (such as Spine-Leaf and redundancy) and server/storage configurations.
Define technical standards for bandwidth, NIC specifications (10G/25G/100G/400G), cabling (OM, LR/SR, MPO), and switch port setups.
Plan physical infrastructure, taking into account rack space, power, and environmental needs.
Set and manage organization-wide standards for equipment models, firmware, and configuration templates.
Install, configure, and operate servers, storage, and network equipment on site, driving ongoing improvements.
Oversee equipment transport, installation, and relocation, ensuring quality and safety throughout the process.
Coordinate with contractors for cabling and equipment setup, maintaining technical standards and quality control.
Monitor the IDC environment (power, temperature, network, equipment status) and respond proactively to anomalies.
Lead first-level incident response and root cause analysis, working to prevent recurrence and resolve complex issues across network, system, and physical layers.
Manage the lifecycle of IDC assets, ensuring data integrity and optimizing long-term operational plans for capacity, power, and space.
Collaborate with colocation service providers and partners to manage schedules, quality, and costs.
Automate repetitive operational tasks, such as firmware upgrades and configuration deployments, and build monitoring/CMDB integrations using SNMP and REST APIs.
Establish and refine operational standards, procedures, and policies for ongoing efficiency.

Requirements
At least 5 years of experience in data center (IDC) or infrastructure design and operations.
Experience designing data center network architectures (Spine-Leaf, redundancy, etc.).
Understanding of server/storage hardware and operating systems (Linux, Windows, hypervisors).
Ability to design at the hardware level, with knowledge of NICs, cabling, and switch ASICs.
Experience with IDC space planning, rack layout, power, and space design and operations.
Background in large-scale equipment installation, relocation, and asset management within IDC operations.
Problem-solving skills that address both physical and logical aspects of incidents.
Strong communication skills for effective collaboration with contractors and internal teams.
Experience with equipment implementation proof-of-concept (PoC), vendor evaluation, and performance validation.
Proven ability to create infrastructure standards documents, such as design guides and operational procedures.

Apr 29, 2026
Toss CX
Full-time|On-site|Seoul

Join Our Team!

The Internal Infrastructure Engineer at Toss CX is a vital member of the General Affairs Team, responsible for establishing and implementing optimal internal infrastructure strategies that align with the rapid expansion of our business. This role transcends basic IT support, as you will enhance our network and security systems based on financial security guidelines, ensuring a safe and engaging work environment for all team members.

Your Responsibilities:
Design and stabilize internal IT infrastructure: Create a scalable internal network and server infrastructure that matches Toss CX's business direction and organizational size, ensuring reliable operations.
Manage office network and security infrastructure: Optimize and maintain the overall office network architecture, including L2/L3 switches and access points. Implement and manage cutting-edge security solutions like Zscaler, NAC, and WIPS to uphold a zero-trust work environment.
Enhance IT governance and environment: Lead IT projects to establish an account management system based on Okta and Active Directory, standardizing internal infrastructure and maximizing work efficiency.

Mar 16, 2026
Toss Careers
Full-time|On-site|Seoul

Role Overview

Toss Careers is hiring a Systems Engineer to strengthen our backup infrastructure in Seoul. This position focuses on building, maintaining, and improving backup systems that protect critical data and support business continuity.

What You Will Do
Design, implement, and maintain backup solutions to safeguard company data
Work with teams across the company to troubleshoot and resolve backup-related issues
Monitor and optimize backup system performance
Identify areas for improvement and help refine backup processes over time

Location
This role is based in Seoul.

Apr 16, 2026
daangn
Internship|On-site|Seoul

About the Network Engineer Internship at Daangn

Daangn is looking for a Network Engineer Intern to join the Infrastructure (Network) team in Seoul. This is a 3-month internship designed for those eager to develop technical skills while supporting a platform that connects hyperlocal businesses and users worldwide.

Meet the Network Team

The Network Team builds and maintains secure, high-performance network services for Daangn customers. The group designs and operates a network environment tailored for both local and global needs. Team members automate routine tasks, monitor traffic flow and network status in real time, and respond quickly to issues. Maintaining service quality during traffic spikes and protecting the platform from security threats are central to the team's mission.

What You Will Do
Help design and operate Daangn's service and office network architecture (cloud, wired, wireless)
Assist in designing and operating network security architecture
Contribute to the development and operation of network observability and automation platforms

Who We're Looking For
Currently pursuing or recently completed a degree in Computer Science, Information Communication, Information Security, or a related field
Solid understanding of basic networking concepts, including the OSI model and TCP/IP
Knowledge of IPv4 addressing and subnetting
Basic experience with physical network devices such as switches and routers
Familiarity with network security concepts, including firewalls
Ability to use scripting languages like Python or Bash at a basic level or higher
Motivation to learn and grow proactively

Preferred Qualifications
Hands-on experience with AWS or GCP consoles
Understanding of REST API calls and integrations
Practical experience with networks or infrastructure through personal projects or coursework
Knowledge of security principles such as encryption, authentication, and access control
Interest in network observability or automation
Attention to documentation and willingness to share knowledge within the team

Additional Details
This internship lasts for 3 months.
Daangn gives preference to candidates with disabilities or veterans, in line with the 'Promotion of Employment for the Disabled and Vocational Rehabilitation Act' and the 'Act on the Honorable Treatment and Support of Persons of Distinguished Service.'
Applications close on May 3rd, 11:59 PM. The deadline may change depending on circumstances.
The expected start date is June 1st, though this may be adjusted if needed.

Apr 15, 2026
Toss Securities
Full-time|On-site|Seoul

Join Our Innovative Team

The Machine Learning Engineer (Infra) will be part of the ML Platform Team within the Product Division at Toss Securities. The primary goal of the ML Platform Team is to create an optimal machine learning platform that enables the efficient and stable development and operation of various AI/ML services at Toss Securities. The ML Engineer (Infra) will focus on maximizing the efficiency of large-scale AI infrastructure, finely controlling resource usage, and enhancing infrastructure performance to its peak.

Your Responsibilities

Design and operate high-performance AI computing environments reliably.
Design and operate top-of-the-line GPU clusters (H100, B300 series) connected via InfiniBand and high-performance storage (400Gbps) within a Kubernetes environment.
Beyond merely building infrastructure, optimize networks and storage to extract the full potential of hardware performance.

Develop a comprehensive control system for the entire AI infrastructure.
Create an observability system to integrate and monitor AI resources distributed across internal infrastructure and external cloud.
Implement management features to prevent resource monopolization by specific services and allocate resources precisely based on importance.

Create automation tools for the most efficient resource usage.
Analyze actual usage patterns to develop tools that recommend 'just-right' resources to avoid waste.
Implement features that automatically scale up or down based on real-time model performance or error rates, and reallocate GPUs where necessary.

Establish an environment for identifying and resolving model performance bottlenecks.
Build profiling environments to accurately pinpoint slowdowns during model training or serving.
Support the analysis and improvement of performance degradation causes between hardware and software.

Who We Are Looking For
You have experience building and operating Kubernetes-based ML infrastructure that handles large-scale traffic.
You take responsibility for reliably operating live services beyond simple development.
You have experience persistently analyzing and debugging to resolve root causes when issues arise.
You possess a solid understanding of system resources (GPU/CPU/memory/network/storage) and have experience building monitoring systems for them.
You value the process of solving various problems that arise during service operations and strengthening the system.

Preferred Qualifications
Experience in unified monitoring of resource usage in large-scale clusters.
Experience building systems to systematically control resources through quotas and rate limits.
Experience with open-source platforms like Kubeflow or Kubernetes, including in-depth modifications as needed.
Experience analyzing and optimizing bottlenecks at the kernel level using tools like Nsight Systems/Compute or PyTorch Profiler.
Experience designing tasks to reduce costs or enhance performance tailored to workload characteristics (rightsizing, cost optimization).
Experience leveraging GPU virtualization technologies like MIG and MPS to maximize resource utilization.

Mar 10, 2026
Full-time|On-site|Seoul, South Korea

Join Our Team

We are on the lookout for talented individuals who are eager to contribute to setting the global standard for video understanding AI! At Twelve Labs, we are developing cutting-edge AI models that efficiently process vast amounts of video data, offering specialized search, analysis, summarization, and insight generation capabilities.

Our models are utilized by the world's largest sports leagues, enabling them to quickly and accurately select highlights from extensive game footage and provide a hyper-personalized viewing experience. In South Korea, integrated control centers leverage our technology to efficiently explore CCTV footage, responding swiftly to crisis situations. Major broadcasters and studios worldwide employ our models to create content for billions of viewers.

As a deep tech startup with offices in San Francisco and Seoul, Twelve Labs has been recognized as one of the top 100 AI startups globally by CB Insights for four consecutive years. We have secured over $110 million in funding from renowned VCs and enterprises, including NVIDIA, NEA, Index Ventures, Databricks, and Snowflake. Notably, our AI models, developed in Korea, are the only ones available through Amazon Bedrock. We thrive on creating innovative products with exceptional colleagues and growing alongside our global clientele.

At Twelve Labs, we operate based on core values that include:
A reflective and honest attitude towards oneself and the team
Resilience and humility in the face of failure and feedback
A commitment to continuous learning to enhance the team's capabilities

If you enjoy solving challenging problems and growing through the process, the opportunity is here at Twelve Labs.

About Our Team

Our Infrastructure Team believes that 'data determines the performance of AI models.' We build high-quality data pipelines for the training and evaluation of multimodal AI models end-to-end. We collect, filter, process, and label diverse multimodal data, collaborating with various teams to design training data that can uncover new model capabilities. Additionally, we create evaluation datasets that reflect actual user experiences and develop internal tools to streamline these processes continuously.

Your Role

As an Infrastructure Engineer at Twelve Labs, you will design and build core infrastructure to ensure the stable and scalable operation of our AI SaaS platform. You will work with system architectures across various cloud and on-premise environments, constructing robust infrastructure that supports our video AI foundation models. In our rapidly evolving startup environment, you will optimize performance, security, and flexibility while closely collaborating with multiple internal teams.

Key Responsibilities
Design and operate multi-tenant architecture for global enterprise clients
Develop automation and scalable CI/CD pipelines using Terraform
Build flexible infrastructure encompassing various cloud environments such as AWS, GCP, and Azure, as well as on-premises
Optimize secure and efficient cloud infrastructure through advanced monitoring and security systems
Design scalable architecture to rapidly support new video AI models and services
Collaborate with PMs, engineers, and researchers to bring AI products to fruition

Ideal Candidate
Experience in building and operating infrastructure in cloud environments such as AWS, GCP, or Azure
Proficiency in using IaC tools like Terraform and Ansible for automation and architecture design
Familiarity with Kubernetes and container orchestration

Oct 24, 2025
Realtime Data Engineer

Toss Securities

Full-time|On-site|Seoul

Join Our Team!

The Realtime Data Engineer will be part of the Realtime Data Team within our Data Division. This team operates a distributed messaging and streaming platform, ensuring the stable transmission of large-scale financial transactions. We manage high-volume data pipelines that deliver data with near-zero latency while maintaining integrity. Moreover, we integrate real-time data into OLAP environments, enabling immediate business decision-making and service enhancement.

Your Responsibilities:
Operate and optimize our Kafka cluster to ensure high availability of large-scale data from Toss Securities.
Utilize tools like CDC, Kafka Connect, Flink, and ksqlDB to construct real-time data pipelines.
Manage OLAP systems to efficiently store and query large volumes of incoming real-time data, optimizing query performance.
Enhance the architecture for greater throughput and lower latency, proactively assessing and implementing next-gen technologies for reliable data services.

Ideal Candidate:
Experience managing large-scale data platforms, ensuring infrastructure stability and performance.
Proven experience designing and operating Kafka-based architectures, or a deep understanding of distributed messaging systems.
Intermediate to advanced proficiency in Java (or Kotlin), capable of implementing complex business logic in real-time streaming frameworks (Flink, ksqlDB).
Experience building or operating real-time analytics environments using OLAP systems like ClickHouse, StarRocks, Druid, or Pinot.
Broad experience in data engineering or depth in a specific area, with eagerness to expand your role.
A strong foundation in data engineering skills, quick to learn new tech stacks, and adept at finding optimal solutions in diverse situations.
Excellent communication skills to collaboratively tackle complex problems with the team.

Joining Toss Securities:
Application Submission > Job Interview > Cultural Fit Interview > Reference Check > Compensation Negotiation > Final Offer and Onboarding

Please Note:
Any inaccuracies found in the resume or disciplinary issues in employment history may lead to cancellation of the application.
Candidates who are prohibited from hiring or have disqualifying reasons according to Toss Securities' regulations may have their applications canceled.
Individuals with disabilities or national veterans are given preferential treatment in accordance with relevant laws.

A Note for Future Colleagues:
Processing transaction data generated in the securities domain in real time is of high importance from a business perspective and poses significant technical challenges. The Toss Securities Realtime Data Team is at the forefront of this effort, currently maintaining stable securities services. Toss Securities continues to grow, and we hope that the entire process of maintaining the systems of a growing securities firm will be an enjoyable journey.

Mar 10, 2026
Toss Securities
Full-time|On-site|Seoul

Join Our Team!

Toss Securities is rapidly growing under the mission of "Innovating every investment experience for our customers", with over 7.4 million registered users and 4 million monthly active users. We are currently leading in foreign stock transaction volumes. Our diverse range of investment products, including stocks, bonds, and options, expands our customers' choices. At the core of this growth is our robust and trustworthy data platform. Toss Securities is at a pivotal moment, needing to design a data architecture capable of handling large-scale real-time trading data, user behavior data, and regulatory compliance data.

We are seeking a Head of Data Engineering to design and lead this structural transformation. This role is not just about operations; it is key to defining the data future of Toss Securities and enabling our organization to work data-driven.

Your Responsibilities:
Design and build an on-premise distributed architecture aligned with our mid-to-long-term business strategy, resolving data silos through Hadoop enhancements and transitioning to a Kafka-centered, streaming-first approach.
Construct and operate large-scale batch and streaming pipelines based on Spark/Flink and Kafka, ensuring reliable data processing through high-availability ETL/ELT design and performance optimization.
Establish and manage data standards (layering, naming, permissions), quality management (DQ rules, SLAs, lineage), and regulatory compliance frameworks based on metadata and personal information (PII).
Coordinate data interests across services, risk, accounting, AI, and backend teams, establishing and executing a comprehensive data strategy including integration strategies and ownership definitions.
Design the goals, structure, and processes of the data organization, leading a growing team through coaching and decision-making while addressing technical debt and fostering a trust-based environment.
Oversee the design and operation of ML platforms and infrastructure for LLM/recommendation services, collaborating with service teams to build model deployment, operation, and monitoring standards, as well as automated pipelines.

Ideal Candidate:
10+ years of experience in data engineering or platform architecture.
Experience designing and operating large-scale clusters (Hadoop, Kafka).
Proficiency in real-time streaming and batch data processing architecture design.
Experience in building data governance, quality, and permission management systems.
Leadership experience in engineering organizations (5-50 team members) is preferred.
Excellent coordination and communication skills across diverse organizational stakeholders.

Additional Preferred Qualifications:
Experience with financial data in securities, banking, or fintech.
Experience handling data within regulatory environments (e.g., PIPA, the Financial Transaction Act).
Experience building semantic layers or data meshes.
Experience with real-time transaction or advertising/shopping service data.
Experience designing and operating large-scale model training, serving, and MLOps/LLMOps pipelines (e.g., Kubeflow, Argo, H100/H200 GPU clusters, vLLM/Triton).
Experience with feature stores for real-time recommendations, model optimization and profiling (e.g., BentoML, ONNX, TorchServe), and LLM fine-tuning/RAG operations.

Mar 9, 2026
Apply
daangn logo
Full-time|On-site|SEOUL

Security Engineers at daangn focus on protecting infrastructure and keeping user data safe. This position centers on finding vulnerabilities through penetration testing and setting up strong security protocols across our systems.

Role overview
This role involves hands-on work with infrastructure security in Seoul. The main responsibility is to identify and assess security risks, then act to address those issues. Attention to detail and a methodical approach are important, as your work directly impacts the safety of our platform.

What you will do
- Conduct penetration tests to uncover vulnerabilities in our infrastructure
- Implement and maintain security protocols to meet high standards
- Support ongoing efforts to protect user data and system integrity

Requirements
- Experience with infrastructure security and penetration testing
- Ability to identify, analyze, and address security vulnerabilities
- Commitment to maintaining strong security practices

Apr 29, 2026
Apply
Toss Securities logo
Full-time|On-site|Seoul

Join Our Dynamic Team!

The Data Analytics Engineer (Data Engineer) at Toss Securities is an integral part of the Data Warehouse Team within the Data Division. Your focus will be on the Data Platform and Data Mart, with opportunities to collaborate cross-functionally. The Mart responsibilities include structuring and managing data from the Toss Securities domain to facilitate analysis through data warehouse and aggregation table creation. Our current team of approximately 7 members brings diverse experience ranging from 2 to 14 years, with backgrounds in sectors such as portals, banking, gaming, and startups.

Curious About Our Data Division?
The Data Division at Toss Securities strives to become a world-class securities firm by leveraging data technology, services, and data-driven decision-making. We foster close collaboration among our various data roles, creating an enjoyable working environment. Regular Tech Weekly sessions are held to share expertise, allowing you to engage with and learn from other roles according to your interests.

Your Responsibilities Will Include:
- Designing clear and reliable table structures that can be easily understood and utilized, encompassing architecture design, compliance with standards, data processing logic management, data integrity validation, DQ monitoring, security reviews, and documentation using meta management systems.
- Collaborating with data users to design data marts and establish pipelines for key business performance analysis.
- Laying the groundwork for effective data asset utilization through data cataloging and standards management.
- Proactively addressing essential data processing tasks in a rapidly growing service environment with your colleagues.
- Enhancing system efficiency by refactoring and optimizing existing mart tables through data modeling that considers consistency, reusability, and scalability.
- Designing data marts and constructing pipelines for external/public reporting requirements.

We Are Looking For Someone Who:
- Has a deep understanding of the securities domain or has actively engaged in stock trading.
- Can clearly define key concepts of the securities domain as a DW data modeler and take the lead in designing easy-to-understand data structures.
- Has experience simplifying complex data models or automating repetitive tasks.
- Can propose efficient data processing methods while adhering to data standards, communicating smoothly with various stakeholders.
- Has experience structuring enterprise tables by defining data standards and building data catalogs.
- Can independently handle data warehouse/mart modeling, pipeline construction, and operational tasks.
- Can propose standards from the perspective of clear data structure and efficient utilization, rather than just processing simple requests.
- Is proficient in SQL and writes well-organized queries with readability and efficiency in mind.
- Has experience developing data pipelines based on Hadoop, Airflow, and DBT.
- May need intermediate to advanced PySpark skills, depending on the situation.
- Would benefit from experience with BI tools such as Tableau.

Resume Tips:
- Detail impactful projects you have worked on.
- If you have improved services, quantify the results (omit sensitive external information).
- Elaborate on your work related to data governance.
- Include business analysis or reporting experience.
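The mart-building work this role describes — rolling raw domain tables up into aggregation tables that analysts can query directly — can be sketched in miniature. This is a minimal illustration using SQLite with hypothetical table and column names, not Toss Securities' actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical source table of raw trade events.
cur.execute("CREATE TABLE raw_trades (trade_date TEXT, symbol TEXT, qty INTEGER, price REAL)")
cur.executemany(
    "INSERT INTO raw_trades VALUES (?, ?, ?, ?)",
    [
        ("2026-03-02", "AAPL", 10, 180.0),
        ("2026-03-02", "AAPL", 5, 181.0),
        ("2026-03-02", "TSLA", 2, 200.0),
    ],
)

# A daily aggregation ("mart") table: one row per (date, symbol).
cur.execute("""
    CREATE TABLE mart_daily_volume AS
    SELECT trade_date,
           symbol,
           SUM(qty)         AS total_qty,
           SUM(qty * price) AS notional
    FROM raw_trades
    GROUP BY trade_date, symbol
""")

rows = cur.execute(
    "SELECT symbol, total_qty, notional FROM mart_daily_volume ORDER BY symbol"
).fetchall()
```

In practice the same GROUP BY logic would live in a DBT model or an Airflow-scheduled job running against the warehouse, rather than a local SQLite database.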

Mar 10, 2026
Apply
Toss Securities logo
Full-time|On-site|Seoul

About the Team You'll Join
The Data Analytics Engineer at Toss Securities is part of the Data Warehouse Team within the Data Division. Your responsibilities will focus on Data Platform and Data Mart tasks. While your primary focus will vary, you will also engage in cross-functional projects. The Platform tasks involve maintaining and optimizing ETL/pipeline tools to manage the DW mart tables effectively. You will explore and implement new methods to reduce DW operation time with limited resources. Our goal is to maximize data utilization across the organization using tables managed by the DW team. The current team consists of approximately 7 members with experience ranging from 2 to 14 years, coming from diverse backgrounds including portals, banking, gaming, and startups.

Curious about the Data Division?
The Data Division at Toss Securities aims to become the world's leading securities firm in data handling, contributing through data technology, services, and data-driven decision-making. We foster close collaboration among various data professionals and enjoy our work. Regular Tech Weekly sessions allow us to share expertise, and you can freely engage with different teams to learn from each other.

Your Responsibilities
- Experience and contribute to an efficient DW environment within a rapidly growing agile organization.
- Design data marts and develop and automate DW data workflows based on the Hadoop ecosystem and open-source solutions.
- Identify and implement methods for structuring and automating numerous DW/mart tables.
- Process large volumes of data swiftly and effectively to create and manage various features.
- Establish data quality checks and governance within the data marts.
- Experience in deriving and establishing system requirements for large-scale data processing and analysis is a plus.

Ideal Candidate
- At least 5 years of experience as a Data Engineer is essential.
- A fundamental understanding of RDBMS, the Hadoop ecosystem, and data warehousing.
- Proven experience leading the design, construction, and operation of data marts.
- Ability to install, operate, and troubleshoot Airflow, DBT, and Django, including modifying open-source tools to develop features needed for a securities DW.
- Experience simplifying complex problems or automating repetitive tasks using data models is critical.
- Extensive experience efficiently processing big data with Spark is highly desirable.
- Intermediate proficiency in Python and advanced skills in SQL are required.

Resume Tips
- If you have resolved critical issues while operating platforms or optimized performance and system resource usage, include those experiences.
- Be specific about impactful projects you have worked on.
- If you have fixed bugs or issues in open-source tools, or developed or enhanced features, detail those experiences.
- Highlight the results of improvements made in actual services, quantified where possible (exclude sensitive information as needed).

Join Toss Securities
Application Submission > Job Interview > Cultural Fit Interview > Reference Check > Compensation Negotiation >...
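The data quality checks mentioned in the responsibilities above often boil down to simple assertions that run after each pipeline step. A hedged sketch with a hypothetical table and rule set — real deployments would more likely express these as DBT tests or framework-based checks:

```python
import sqlite3

def run_dq_checks(conn, table, not_null_cols, min_rows=1):
    """Run basic data quality checks on a mart table:
    a minimum row count and a not-null rule per column.
    Returns a list of (check_name, passed) tuples.
    """
    results = []
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    results.append((f"{table}.row_count>={min_rows}", count >= min_rows))
    for col in not_null_cols:
        (nulls,) = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()
        results.append((f"{table}.{col}.not_null", nulls == 0))
    return results

# Hypothetical mart table with one deliberately missing value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mart_users (user_id INTEGER, region TEXT)")
conn.executemany("INSERT INTO mart_users VALUES (?, ?)", [(1, "Seoul"), (2, None)])
checks = run_dq_checks(conn, "mart_users", ["user_id", "region"])
```

A failing tuple would typically block downstream tasks or page the owning team, which is what turns ad-hoc queries into governance.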

Mar 10, 2026
Apply
zoyi logo
Full-time|On-site|Seoul

Join Us to Create the Future of Communication!

At ChannelTalk, we are dedicated to driving sustainable growth for businesses through our all-in-one AI messenger, which focuses on enhancing customer conversations to guide business direction effectively. Embracing the philosophy that "the customer has the answers", we integrate CRM-based support experiences with AI automation to streamline customer service and improve customer experiences, all within a single product. Our rapid growth, with over 20% market share in Japan, sets a solid foundation for our ambitious expansion into the US market as a global SaaS leader. We collaborate with top talent to develop a "future classic" product that could define a new generation, much like Google Search and the iPhone.

Aug 5, 2025
Apply
daangn logo
Full-time|On-site|SEOUL

Welcome to the daangn Team!

At daangn, we strive to create an environment where individuals can grow alongside the company's success. We are here to assist you in making meaningful connections with fantastic colleagues.

Introducing the Data Valuation Team
The daangn team is dedicated to discovering valuable information that connects neighborhoods and resolving inconveniences in local living. To generate this user value, we must provide easy access to reliable information for decision-making. While we already utilize vast amounts of data for our decisions, maximizing the value of our data requires substantial change. The vision of the Data Valuation Team is to "make user-centric decisions through daily data utilization." We take the lead in identifying and solving the challenges of data value realization.

Role of the Data Analytics Engineer
The Data Analytics Engineer ensures that data is utilized reliably and consistently, working across data modeling, engineering, and analysis to deliver value to the business and its users. In daangn's diverse service environment, the Data Analytics Engineer will design and improve the overall flow from data collection to utilization, enabling analysts, engineers, and product teams to leverage data reliably. The role also involves designing data marts, managing quality, operating data governance frameworks, and performing basic analyses or experiments to support data-driven decision-making.

- Discover the Journey of the Data Valuation Team with daangn (Google Data Webinar)
- Learn about daangn's Metric Platform KarrotMetrics
- The 7 Challenges daangn Faced While Implementing DBT and Airflow
- How daangn's Data Engineers Simplified Modeling with DBT (2024 Data Conference)
- Mapping daangn Data: Column Level Lineage Building
- No Need to Always Pull Everything? Building MongoDB CDC at daangn
- Why daangn Implements User Activation Engagement in 3 Key Areas

Feb 2, 2026
Apply
Toss Careers logo
Full-time|On-site|Seoul

Role overview
Toss Careers is seeking a Data Engineer in Seoul to strengthen its advertising initiatives. The position centers on creating data solutions that guide advertising choices and enhance campaign results. Close collaboration with colleagues from different teams is a key part of the role.

What you will do
- Build and maintain data pipelines and systems tailored for advertising analytics
- Use modern tools to process and analyze large volumes of data
- Work with team members to turn business requirements into technical solutions
- Transform raw data into practical insights that shape advertising strategy

Location
This role is located in Seoul.

Apr 27, 2026
