Own the lifecycle of data pipelines and storage systems across cloud environments, ensuring they are scalable, secure, and high-performing.
Work closely with developers, partners, and leadership to deliver robust data solutions and integrate best practices into daily workflows.
Build software services and APIs using Python frameworks to create smooth, reliable data processing and deployment experiences.
Apply appropriate data modeling methodologies (Kimball, Inmon, Data Vault) to different problems, choosing the approach that best fits each challenge.
Work with streaming data platforms like Kafka or Kinesis, commercial platforms such as Snowflake, Databricks, and Vertica, and open-source tools including Spark and Airflow to enable teams through automation and continuous improvement.
Expected travel varies by project (0-15%).
Requirements
6+ years of experience in data architecture and engineering roles
Solid experience working in cloud environments with strong exposure to AWS, plus familiarity with Azure and GCP
Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis
Experience with commercial and open-source data platforms, including Snowflake, Databricks, Vertica, Apache Spark, and Apache Airflow
Strong programming skills, including structuring Python-based projects and building software services and APIs
Strong understanding of different data modeling methodologies including Kimball dimensional modeling, Inmon enterprise data warehouse, and Data Vault 2.0
Ability to own the complete lifecycle of data pipelines and storage systems
Experience building automated data processing and deployment solutions
Comfortable tackling complex problems across different technologies
Excellent collaborator who works seamlessly across developers, partners, and leadership
Ability to explain data modeling techniques, architectures, and technical concepts to various stakeholders
Self-organized, with the courage to take risks and the belief that teaching others is the best way to learn
Fluent English (mandatory for client and leadership communication)