RKube is seeking a skilled Data Engineer with strong expertise in Python and SQL to support its data pipelines, integrations, and analytics initiatives. The ideal candidate will build, optimize, and maintain scalable data solutions while ensuring high reliability and performance.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes
- Develop automation scripts and data workflows using Python
- Write optimized SQL queries for data extraction, transformation, and analysis
- Work with relational and cloud databases to manage structured datasets
- Collaborate with data analysts, data scientists, and engineering teams to deliver high-quality data solutions
- Optimize data storage, processing, and retrieval performance
- Ensure data quality, integrity, and governance across systems
- Troubleshoot data issues, identify root causes, and implement fixes
- Support data migration, modernization, and cloud adoption initiatives as needed
Requirements:
- Strong hands-on experience with Python for data processing and automation
- Expertise in SQL with the ability to write complex, optimized queries
- Experience working with relational databases
- Good understanding of ETL/ELT concepts and data pipeline architecture
- Familiarity with version control (Git) and CI/CD workflows
- Strong problem-solving and analytical skills
- Ability to work independently and in cross-functional teams
- Hands-on experience with PostgreSQL
- Experience with cloud platforms (GCP)
- Knowledge of data warehousing concepts (BigQuery preferred)
- Experience with orchestration tools (Airflow, Prefect, Luigi)
- Familiarity with big data technologies (Spark, Hadoop)
- Understanding of API integrations and data streaming (Kafka, Kinesis)