About the job
About the Team You Will Join
- The Data Engineer (Search) at Toss Securities is part of the AI Tribe within the AI Intelligence Silo.
- This Silo is a collaborative team consisting of Data Engineers, Machine Learning Engineers, Server Engineers, Frontend Engineers, Product Owners, and Product Designers, all working towards creating AI-driven information services leveraging securities domain data.
- Our focus is not just on presenting information but on rapidly experimenting with how to process and present data in a way that assists investors effectively.
- Search acts as the primary entry point connecting various securities data and AI services, with the Data Engineer (Search) being responsible for search/indexing functionalities that can be utilized across AI-based data services.
- The role centers on designing and operating the data and indexing layers that underpin our search services. Rather than tuning algorithms or ranking models as at large portals or e-commerce companies, the focus is on reliably designing and managing the data flows and infrastructure behind search indexing.
Key Responsibilities
- Design and manage the indexing pipeline for Toss Securities search services, including stocks, autocomplete, news, and community features.
- Architect and reliably operate real-time/big data pipelines for search indexing.
- Develop a deep understanding of Elasticsearch-based search indexes and improve indexing structure and performance from a data perspective.
- Collaborate on data integrity management and re-indexing strategies to ensure stable data delivery for search.
- Gradually expand your responsibilities into areas beyond search, such as graph search and the ingestion of new data sources.
Who We Are Looking For
- A candidate with over 3 years of experience in Data Engineering.
- Strong programming skills are preferred.
- Experience in designing or operating real-time or batch-based data pipelines is a plus.
- Experience in collecting and processing diverse data sources for service utilization is beneficial.
- Familiarity with big data processing platforms such as Spark, Hadoop, or Impala is an advantage.
- A passion and curiosity for learning new domains and technologies are highly valued.
- Someone who thrives in collaborative environments where feedback and growth are encouraged.
Additional Preferred Experience
- Experience using or managing search service infrastructures like Elasticsearch, Lucene, or Solr is advantageous.
- A genuine interest or experience in search domains such as search engines, recommendations, or ranking systems is a plus.
Resume Tips
- Please detail your experience in designing/developing/operating data pipelines, ETL, streaming, etc.
- Highlight your role in projects and what you learned and improved through those experiences.
- If you have experience with search or Elasticsearch, even on a smaller scale, please describe the problems you solved.
Your Journey to Joining Toss Securities
- Application submission > Job interview > Cultural fit interview > Reference check > Compensation discussion > Final acceptance and onboarding

