About the job
About Us:
Join us in shaping a more promising financial future.
At SoFi, we’re revolutionizing personal finance with innovative, mobile-first technology that empowers our millions of members to achieve their financial goals. As a next-generation financial services provider and national bank, we’re leading the charge in an industry undergoing remarkable transformation. Our commitment to our core values drives us to create meaningful impacts in people's lives every day. Come along on this journey and invest in your future, your career, and the financial landscape.
Team:
SoFi is on the lookout for a skilled and motivated Staff Data Engineer to pioneer exceptional technical solutions for the Data Products team within our SIPS (Spend, Invest, Protect, Save) division, supporting all of SoFi's Financial Services. The SIPS Data Engineering team is dedicated to data engineering and reporting for SoFi's Financial Services offerings. As a technical leader, you will shape the vision and strategy for building essential data models that are used extensively across SoFi for analytics, reporting, and machine learning applications. Our mission is to empower users to make data-driven decisions and effectively evaluate their outcomes by offering high-quality, accessible data.
Role:
We are seeking a passionate, detail-oriented, and experienced Data Engineer who thrives on tackling big data challenges in an agile environment. Your responsibilities will span data design and analysis, data modeling, and the development, deployment, and maintenance of big data pipelines. You will spearhead the creation of some of SoFi's most vital data pipelines and datasets while expanding self-service data capabilities. This position sits at the intersection of data and engineering: it calls for a solid grasp of analytical techniques, the ability to align insights with business objectives, and a proven track record of maintaining high operational standards in ETL and big data pipelines.
Key Responsibilities:
- Design and develop robust data models and pipelines that facilitate data ingestion, processing, storage, and retrieval. Assess and choose suitable technologies, frameworks, and tools to construct scalable solutions.

