Huron helps its clients drive growth, enhance performance, and sustain leadership in the markets they serve. Huron is seeking a Platform Integration Engineer to unify a diverse ecosystem of analytics applications under a coherent, scalable technology platform.
Responsibilities:
- Audit existing analytics applications to understand their underlying technology choices, data flows, and integration points
- Design and implement integration patterns (APIs, event streams, shared data contracts) that allow siloed applications to interoperate without unnecessary rewrites
- Evaluate build vs. consolidate vs. federate tradeoffs for components across the platform, documenting your rationale and driving alignment
- Build and maintain shared platform services — authentication, observability, configuration management, logging — that reduce duplication across applications
- Develop a deep understanding of existing data pipelines owned by distributed teams — including their data sources, transformation logic, and dependencies
- Design and build a unified presentation layer that surfaces data from heterogeneous compute environments — including large-scale distributed systems (e.g., Spark) and traditional relational databases — in a consistent, accessible way
- Partner with domain teams, who have the deepest knowledge of their own data, acting as an integration architect and collaborator rather than a centralized pipeline owner; identify unification opportunities, but defer to domain expertise on data semantics and processing logic
- Deploy and manage platform services on cloud infrastructure (AWS preferred; Azure/GCP experience a plus)
- Contribute to infrastructure-as-code practices using tools such as Terraform, CDK, or Pulumi
- Collaborate with DevOps/SRE to define deployment pipelines and reliability standards for integration services
- Construct evidence-based business cases for cloud feature adoption — translating technical requirements into business value, risk reduction, and compliance language that resonates with IT governance and procurement stakeholders
Requirements:
- 5+ years of software engineering experience, with 3+ years focused on integration, platform, or infrastructure work
- Proficiency in at least one backend language commonly used in data-intensive systems (Python, Go, Java, or similar); proficiency in two or more languages strongly preferred
- Hands-on experience with cloud platforms (AWS strongly preferred) including compute, storage, networking, and managed services
- Experience building and operating data pipelines (ETL/ELT), including schema management and data quality monitoring
- Working experience with AI-assisted development tooling in a professional engineering context
- Strong written communication skills — able to produce clear architecture decision records (ADRs), runbooks, and design documents
- Experience evaluating and rationalizing heterogeneous technology stacks — i.e., inheriting systems you didn't design and making them work together
- Familiarity with analytics or ML platform tooling (e.g., Airflow, dbt, MLflow, Spark, or SaaS equivalents)
- Experience with API gateway management, service mesh, or distributed tracing tooling
- Background working with both structured (SQL) and semi-structured (JSON, Parquet, Avro) data at scale