Design, build, and optimize data ingestion, transformation, and processing pipelines using Apache Spark-based platforms (e.g., AWS EMR, Glue Jobs, Databricks, Snowflake/Snowpark), along with AWS services (Amazon S3, Athena) and Kafka.
Implement and maintain scalable solutions aligned with Data Lakehouse architecture and modern data engineering best practices.
Use SQL and Python to develop robust data workflows, automation scripts, and transformation logic.
Understand data models (conceptual, logical, and physical) and implement them as solutions.
Develop and manage YAML-based configurations for pipeline metadata, orchestration, and schema definitions.
Ensure end-to-end data quality, validation, lineage, security, availability, and performance across data assets.
Collaborate closely with system analysts, data analysts, product teams, and software engineers to understand requirements and deliver dependable data solutions.
Apply data product thinking when designing datasets and interfaces to support self-service analytics and reusable data assets.
Support orchestration workflows on AWS MWAA (Managed Apache Airflow) or similar pipeline management platforms.
Monitor, troubleshoot, and optimize data pipelines in production environments.
Requirements
3–5 years of experience as a Data Engineer or similar role
Strong experience with the AWS data stack: S3, EMR, Glue Data Catalog, Athena, and IAM, plus Kafka and orchestration tools such as MWAA
Strong programming experience with Python, including writing clean, maintainable scripts
Expertise in SQL and performance optimization
Strong understanding of Data Lakehouse, ETL/ELT architectures, and distributed data processing
Practical experience in data modeling and industry‑standard modeling techniques
Proficiency with YAML for configuration and pipeline metadata management
Strong communication and documentation skills
Detail‑oriented with a strong commitment to high-quality, reliable data delivery
Payments industry experience and BIAN exposure are a plus
Fluent spoken and written English is required, as the role operates at the Pan-Baltic level
Tech Stack
Airflow
AWS
ETL
Kafka
Python
Spark
SQL
Benefits
Flexibility. Flexible working hours, hybrid work, and the possibility to work from anywhere in the EU, Iceland, Switzerland, and the UK (up to 90 days per year in total)
International teams. Teams that reach beyond Pan-Baltic borders, where people value challenging work combined with good humor and having fun
Culture. You’ll be joining a team where things move fast, ideas flow freely, and change is part of the everyday. It’s a dynamic environment that keeps you learning, growing, and never bored
More vacation. Additional weeks of vacation are available to all employees who have been in the company for 1 year or more
Volunteer time off. We care about giving back to society, so you will get additional days off for volunteering
Paid leave. We are proud of our employees who participate in military training, so Luminor offers 30 fully paid calendar days for military training every year
Health benefits. A competitive benefits package in addition to your salary, including health insurance in all three Baltic states after your first 3 months, as well as health days for sickness absence with no doctor's note needed
Wellbeing. Access to tools and resources that help you feel good and be productive at work and in life
Professional growth. Internal and external training programs, LinkedIn Learning, workshops, conferences, online training, and more
Special offers for Luminor products & services. Enjoy special pricing for products and services provided by Luminor