Role Overview
Design, develop, and maintain data pipelines and ETL processes using MS SQL Server and Python, ensuring reliable data movement from source systems to the data warehouse and analytical layers.
Build and operate real-time and near-real-time data ingestion pipelines using Apache Kafka — designing topics, building producers and consumers, and managing stream processing workflows.
Develop and optimize stored procedures, views, and data transformation logic for both backend processing and frontend consumption (SSAS, Power BI).
Write and maintain Python-based data services and integrations, leveraging libraries such as sqlalchemy, pyodbc, confluent_kafka, json, and FastAPI to connect, transform, and expose data across systems.
Perform requirements analysis and effort estimation within projects and implement solutions as part of an international, cross-functional team.
Collaborate closely with BI analysts and architects to translate business requirements into robust, scalable data solutions.
Support Business Analysts with ad-hoc analyses on large data volumes, complex data models, and data extraction from operational systems.
Design, code, test, debug, document, and support data flows and transformation procedures consistent with established specifications and business requirements, delivering maximum business value.
Support, enhance, and modify existing applications, pipelines, and codebases.
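To give candidates a feel for the stream-processing work described above, here is a minimal sketch of a Kafka consume-transform-produce step in Python. The topic names, field names, and transform logic are hypothetical, and the confluent_kafka wiring is left as comments so the core transform stays self-contained:

```python
import json

# Hypothetical transform applied to each Kafka message: parse the JSON
# payload, normalise field names, and default any missing source tag.
def transform_event(raw: bytes) -> dict:
    event = json.loads(raw)
    return {
        "event_id": event["id"],
        "amount_eur": round(float(event["amount"]), 2),
        "source": event.get("source", "unknown"),
    }

# In a real pipeline this transform sits inside a poll loop (sketch only;
# broker address and topic names are placeholders, not runnable as-is):
# from confluent_kafka import Consumer, Producer
# consumer = Consumer({"bootstrap.servers": "...", "group.id": "etl"})
# consumer.subscribe(["orders.raw"])
# producer = Producer({"bootstrap.servers": "..."})
# while True:
#     msg = consumer.poll(1.0)
#     if msg is None or msg.error():
#         continue
#     record = transform_event(msg.value())
#     producer.produce("orders.clean", json.dumps(record).encode())
```

Keeping the transform a pure function of the message payload makes it unit-testable without a broker, which is how pipelines like these are typically validated.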
Requirements
Well-rounded knowledge of data warehousing and Business Intelligence concepts, including physical data modeling techniques specific to data warehouse requirements.
Strong experience with MS SQL Server — database implementation, performance tuning, and writing efficient queries on large data volumes.
Solid Python programming skills, with hands-on experience building data pipelines, REST APIs, and integration scripts using libraries such as sqlalchemy, pyodbc, confluent_kafka, json, and FastAPI.
Experience with Apache Kafka — designing and managing topics, building producers and consumers, and working with Kafka Streams or similar stream-processing patterns.
Ability to collaborate effectively with team members and cross-functional stakeholders in an international environment.
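The hands-on pipeline skills above often reduce to parameterised bulk loads into staging tables. The sketch below uses the stdlib sqlite3 module as a stand-in (in production the same pattern runs over pyodbc or sqlalchemy against SQL Server); the table and column names are hypothetical:

```python
import sqlite3

# Bulk-load staged rows with a parameterised executemany. The same
# pattern applies over pyodbc against SQL Server: swap the connection
# object and the "?" placeholders stay identical for both drivers.
def load_stage(conn: sqlite3.Connection, rows: list) -> int:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stage_orders ("
        "order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany(
        "INSERT INTO stage_orders (order_id, amount, region) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM stage_orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
count = load_stage(conn, [(1, 19.99, "EU"), (2, 5.00, "US")])
```

Parameterised statements matter on large data volumes: they let the server reuse the query plan and avoid injection issues that string-built SQL invites.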
Tech Stack
Apache Kafka
ETL
MS SQL Server
Python
SQL
Benefits
Access to local and international training, development, and growth opportunities, including e-learning platforms covering both technical and soft-skills areas;
Work with modern technologies and take product responsibility;
Flexible work schedule;
Hybrid work option;
Medical services package from one of two private providers;
25 vacation days per year;
Substitute days off for public holidays that fall on a weekend;
Meal tickets;
Internal referral program;
Team and networking events organized to promote a passionate, creative, and diverse culture;
Summerfest and Winterfest parties;
Of course, coffee, soft drinks and fresh fruits are on us in the office.