Tags: Apache, Azure, Python, Spark, SQL, AI, Generative AI, Databricks, Apache Spark, Azure Functions, Service Bus
Role Overview
Develop, test, deploy, monitor, and continuously improve scalable data pipelines and API integrations
Apply generative AI tools in day-to-day engineering work to accelerate development and improve quality
Translate product and business concepts into incremental, high-quality technical deliverables
Contribute through hands-on design sessions and thorough code reviews
Implement data quality monitoring to ensure accurate, reliable production data
Review and recommend architectural patterns aligned with enterprise platforms and best practices
Consistently apply generative AI in day-to-day engineering work, using it to accelerate development, improve code quality, troubleshoot complex systems, and design scalable data solutions
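As a concrete illustration of the data quality monitoring responsibility above, here is a minimal sketch of a batch validator in plain Python; the function and field names are illustrative examples, not part of any framework or platform named in this posting.

```python
from dataclasses import dataclass


@dataclass
class QualityResult:
    check: str
    passed: bool
    detail: str


def run_quality_checks(rows, required_fields, id_field):
    """Run basic completeness and uniqueness checks over a batch of records.

    `rows` is a list of dicts; in a real pipeline these checks would run
    against a DataFrame before data is promoted to production tables.
    """
    results = []

    # Completeness: every required field must be present and non-empty.
    missing = sum(
        1 for r in rows
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    results.append(QualityResult(
        "completeness", missing == 0, f"{missing} rows with missing fields"))

    # Uniqueness: the id field must not repeat within the batch.
    ids = [r.get(id_field) for r in rows]
    dupes = len(ids) - len(set(ids))
    results.append(QualityResult(
        "uniqueness", dupes == 0, f"{dupes} duplicate ids"))

    return results
```

In production the same checks would typically be expressed over Spark DataFrames and wired to telemetry so failures alert before bad data reaches consumers.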
Requirements
5+ years of hands-on experience developing solutions using Microsoft SQL Server
3+ years working with big data technologies such as Databricks, Apache Spark, Python, and Microsoft Azure (ADF, Dataflows, Azure Functions, Service Bus)
Strong understanding of engineering fundamentals including automated testing, code reviews, telemetry, iterative delivery, and DevOps
Experience with polyglot storage architectures including relational, columnar, key-value, and graph systems
Hands-on experience with Delta Lake tables and Parquet data stored in ADLS
Experience building distributed, componentized applications using event-driven patterns
Ability to communicate effectively with both technical and non-technical, globally distributed audiences
Solid foundation in software architecture, design patterns, and engineering best practices
Experience working with healthcare datasets; familiarity with HL7 or EDI is a plus
High attention to detail and a strong sense of ownership
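The event-driven requirement above usually implies idempotent consumers, since message brokers such as Service Bus deliver at-least-once and a handler may see the same message twice. A minimal sketch of that pattern in plain Python follows; the names (`handle_event`, the event dict shape, the in-memory `processed` set) are illustrative assumptions, not an API from any product listed here.

```python
def handle_event(event, processed, apply_change):
    """Process an event at most once per message id (idempotent consumer).

    `processed` stands in for a durable dedupe store; `apply_change` is the
    side effect (e.g. a database write) that must not run twice.
    Returns True when the event was applied, False for a duplicate delivery.
    """
    msg_id = event["id"]
    if msg_id in processed:
        return False              # duplicate delivery: skip side effects
    apply_change(event["body"])
    processed.add(msg_id)         # record only after a successful apply
    return True
```

In a real system the dedupe store would be durable (a database table or the broker's own duplicate-detection feature) rather than an in-process set.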