Qualifications
Minimum of 2-3 years of experience operating Linux-based systems.
Practical experience with containerized environments (Docker) and Kubernetes, preferably in production settings.
Experience with monitoring/logging stacks (e.g., Prometheus/Grafana, ELK, Zabbix).
Basic experience with CI/CD systems and Git-based workflows (Gitea, ArgoCD).
Operational knowledge of cloud-native application stacks, including Linux, Ubuntu, containerd, Docker, Kubernetes, Prometheus/Grafana, the ELK stack, Zabbix, RabbitMQ, MinIO, and PostgreSQL.
Willingness to participate in on-call rotations, with structured troubleshooting skills.
Strong communication skills and the ability to collaborate across teams and organizational units.
About the job
Manage and operate a Kubernetes-based platform.
Daily management of clusters and nodes, including upgrades, patches, node cordon/drain, and scaling.
Oversee persistent storage solutions (e.g., CSI/Longhorn) with basic capacity planning.
Support reliability and Service Level Objectives (SLOs).
Participate in the establishment and monitoring of Service Level Indicators (SLIs) and SLOs.
Track error budgets, providing feedback on incidents and trends to the team.
Engage in observability and incident management.
Utilize and configure monitoring and logging systems (dashboards, alerts).
Participate in on-call rotation: handle alerts and resolve incidents based on runbooks.
Facilitate automation and manage runbooks.
Support deployment and configuration automation (CI/CD, Git-based processes).
Create and maintain runbooks and operational documentation.
Collaborate across various organizational units.
Work collaboratively with development, operations, and business stakeholders on a daily basis.
Provide suggestions for improving processes and platform reliability.
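The error-budget tracking mentioned above is straightforward to quantify. As a minimal sketch (the 30-day window and 99.9% availability target are illustrative assumptions, not figures from this posting), the remaining budget can be computed like this:

```python
# Illustrative error-budget arithmetic. The 30-day window and the 99.9% SLO
# used below are assumed examples, not values from the job description.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Total allowed downtime, in minutes, for the given SLO over the window."""
    return window_days * 24 * 60 * (1 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows about 43.2 minutes of downtime; 20 minutes
# of incidents in the window would leave roughly half the budget unspent.
```

In practice these numbers would come from the monitoring stack (e.g., Prometheus-recorded SLIs) rather than being entered by hand; the arithmetic itself is what feeds the incident-trend feedback described above.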
About mpsolutions
mpsolutions is an innovative and stable company that focuses on delivering diverse projects based on modern technologies. We value a collaborative work environment and encourage participation in professional events, workshops, and hackathons.
Apr 20, 2026