Standardize Lattice’s data infrastructure and practices by establishing a simple, cohesive, and reliable platform that seamlessly integrates internal and partner data.
Design, develop, and deploy scalable data pipelines that can ingest, process, and serve data across multiple teams for batch and streaming workloads.
Make data effortlessly accessible by abstracting infrastructure complexity behind reusable SDKs, libraries, and tooling.
Shape data governance practices to ensure data access is secure and compliant and data quality stays high.
Partner with Engineering, Product, and business leaders to identify data needs and implement solutions that align with strategic objectives.
Requirements
2+ years of experience in software, data, or analytics engineering with exposure to a centralized "data platform" or large-scale data infrastructure project.
Fluency in SQL and a general-purpose programming language; Python or TypeScript preferred.
Familiarity with a data warehouse/lakehouse platform (e.g., Snowflake, Redshift, BigQuery, Databricks).
Familiarity with AWS services or their equivalents (e.g., IAM, Glue, Lambda, Athena, Redshift, Kinesis, EKS).
Nice to have: hands-on experience with data processing frameworks such as dbt, Spark, Kafka Streams, or Flink.
Willingness to learn and curiosity about how trustworthy, insightful data can unlock value for our users.