responsible for delivering, managing, and operating scalable, trusted data products and platforms that enable analytics, AI/ML, and Generative AI use cases
responsible for leading the curation of datasets and data pipelines created by business departments, data scientists, and other technology teams
responsible for using modern tools, techniques, and architectures to automate the most common, repetitive, and tedious data preparation and integration tasks, minimizing manual, error-prone processes and improving productivity
develop and improve standards and procedures for quality development, testing, and production support
will act as an innovation catalyst, rapidly prototyping new approaches (e.g., automation, metadata-driven pipelines, and AI-enabled data experiences) and turning the best ideas into production-grade capabilities
designs and develops durable, flexible, and scalable data pipelines, data load processes, and frameworks that automate the ingestion, processing, and delivery of structured and unstructured data in both batch and real-time streaming modes
develop reusable data products and curated datasets aligned to enterprise domains
implement modern ELT and distributed data processing patterns
conduct performance tuning of ETL processes for large data volumes; develop and oversee monitoring systems to ensure that data loads complete on schedule and that data is accurate
performs the data analysis required to troubleshoot and assist in the resolution of data issues
identifies ways to improve data reliability, efficiency and quality
creates and maintains technical design documentation
assists with requirements gathering
enable AI/ML and GenAI: Deliver governed training/inference datasets and feature foundations; partner with ML/AI engineers on data access patterns that support ML pipelines and production ML deployments
identify opportunities to simplify architectures, automate manual processes, improve developer experience, and evaluate new tools/techniques through controlled prototypes
participates in planning, applies design patterns, and performs code reviews
follows standards, processes, and methodologies across each phase of data architecture development (e.g., data manipulation and database generation processes)
mentor junior engineers, provide guidance, raise the bar on best practices, and lead technical initiatives across teams
helps resolve issues regarding the implementation of data architecture components
applies DevOps principles to data pipelines to improve cost efficiency, communication, integration, reuse, and automation
responsible for production support, including analyzing root causes and developing fixes to restore ETL and data operational readiness, planning and coordinating maintenance, conducting audits, and validating jobs and data
position requires on-call and off-hours support.
Requirements
Bachelor's or Master's degree in Engineering, Computer Science, Information Technology, or an equivalent field
6+ years of experience in Data Warehouse design and data modeling patterns (relational and dimensional)
6+ years of experience developing with ETL tools such as Talend or Azure Data Factory (ADF)
Must have strong analytical skills for effective problem solving
Ability to work independently, handle multiple tasks simultaneously, and adapt quickly to change while working with a variety of people and work styles
Must be able to articulate technical concepts clearly and concisely to non-technical audiences
Hands-on experience with at least one major cloud (AWS/Azure/GCP) and one warehouse/lakehouse technology (e.g., Snowflake, BigQuery, Redshift, Databricks/Lakehouse)
Strong proficiency in Python and/or Java/Scala; ability to build maintainable services and libraries
Experience with GitHub Copilot and Databricks Assistant a plus
Experience building or operating streaming pipelines using Kafka, Kinesis, or Pub/Sub
Experience with Spark (or equivalent) and a workflow orchestrator (e.g., Airflow) plus familiarity with CI/CD and automated testing
Experience partnering with data science/ML teams, supplying training-ready datasets/features, and designing data products that support ML in production
Strong ability to design, build, and manage data pipelines and data structures encompassing data transformation, data models, schemas, metadata, and workload management
Strong experience with database programming languages for relational databases, including SQL, PL/SQL, and T-SQL
Strong experience with at least one of the following tools: ADF or Talend
Strong experience with relational SQL databases (Oracle, MSSQL, MySQL) and NoSQL databases such as Couchbase
Strong experience with Data Management architectures such as Data Warehouse and Data Lake, and with supporting processes such as Data Integration, Governance, and Metadata Management
Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies
Strong experience writing and optimizing advanced SQL queries in a business environment with large-scale, complex datasets
Strong knowledge of industry best practices for data warehousing and data lakes
Strong hands-on experience with scripting languages such as Python, Scala, and Java
Working knowledge of relational and dimensional data modeling patterns
Working knowledge of the essential elements of data architecture, platforms and products
Working knowledge of how to build and launch new data models
Addresses stakeholder concerns using business data modeling, including data entities, attributes, and their relationships.