Pyramid Consulting, Inc. is a leading utility-industry company seeking a talented Data Engineer IV for a 12+ month contract opportunity. The role involves providing technical direction, leading the design and deployment of data components, and collaborating with various stakeholders to deliver technical solutions.
Responsibilities:
- Provides technical direction, guides the team on key technical aspects, and is responsible for product tech delivery
- Leads the design, build, test, and deployment of components, where applicable in collaboration with lead developers (Data Engineer, Software Engineer, Data Scientist, Technical Test Lead)
- Understands requirements and use cases to outline technical scope and leads delivery of the technical solution
- Confirms the required developers and skill sets specific to the product
- Provides leadership, direction, peer review, and accountability to developers on the product (key responsibility)
- Works closely with the Product Owner to align on delivery goals and timing
- Assists the Product Owner with prioritizing and managing the team backlog
- Collaborates with Data and Solution Architects on key technical decisions, including the architecture and design needed to deliver the requirements and functionality
- Performs hands-on development and peer review for certain components/tech stacks on the product
- Stands up development instances and migration paths (with required security, access, and roles)
- Develops components and related processes (e.g., data pipelines and associated ETL processes, workflows)
- Leads implementation of an integrated data quality framework
- Ensures optimal framework design and load-testing scope to optimize performance (particularly for big data)
- Supports data scientists with testing and validation of models
- Performs impact analysis and identifies risks from design changes
- Builds new data pipelines, identifies existing data gaps, and provides automated solutions to deliver analytical capabilities and enriched data to applications
- Ensures test-driven development
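To illustrate the kind of work the responsibilities above describe, here is a minimal data-pipeline sketch in Python with a simple data-quality gate. The feed, column names, and quality rule are illustrative assumptions, not part of the job description.

```python
import csv
import io

# Hypothetical meter-reading feed; the schema is an assumption for illustration.
RAW_CSV = """meter_id,reading_kwh,timestamp
M-001,12.5,2024-01-01T00:00:00
M-002,,2024-01-01T00:00:00
M-003,9.75,2024-01-01T01:00:00
"""

def extract(raw):
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Drop rows failing a basic quality check and cast numeric fields."""
    clean = []
    for row in rows:
        if not row["reading_kwh"]:  # simple data-quality gate: reject empty readings
            continue
        clean.append({
            "meter_id": row["meter_id"],
            "reading_kwh": float(row["reading_kwh"]),
            "timestamp": row["timestamp"],
        })
    return clean

def load(rows, target):
    """Append validated rows to an in-memory stand-in for a warehouse table."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW_CSV)), warehouse)
```

In practice each stage would sit behind a workflow orchestrator and write to a real store, but the extract/transform/load separation and the quality gate are the pattern the role calls for.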
Requirements:
- Power Systems engineering experience
- DER Dispatch experience
- Automated fault analysis systems experience
- Key skills: Python, AWS, data pipelines
- Experience leading teams to deliver complex products
- Strong technical skills and communication skills
- Strong skills with business stakeholder interactions
- Strong solutioning and architecture skills
- Experience building real time data ingestion streams (event driven)
- Ability to ensure data security and permissions solutions, including data encryption, user access controls, and logging
- Experience with native AWS technologies for data and analytics, such as Athena, S3, Lambda, Glue, EMR, Kinesis, EC2, SNS, SQS, and CloudWatch
- Tools and Languages such as Django, Python, Java, Scala, Pandas
- Infrastructure as Code technology such as Terraform
- Experience with databases such as Redshift, DocumentDB, DynamoDB, and MongoDB
- Experience transitioning on-premises big data platforms to cloud-based platforms such as AWS
- Hadoop platform (Hive; HBase; Druid)
- Spark
- PySpark
- SQL
- Workflow Automation
- DevOps pipeline (CI/CD); Bitbucket; Concourse
- API frameworks
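The requirements above call for event-driven, real-time ingestion. A minimal sketch of that pattern follows; in production the stream would be AWS Kinesis consumed via boto3, but here a stdlib queue stands in so the example runs anywhere, and the event fields are illustrative assumptions.

```python
import json
import queue

# Stand-in for a Kinesis stream: producers put serialized records,
# the consumer drains and enriches them.
stream = queue.Queue()

def publish(stream, event):
    """Producer side: serialize an event onto the stream."""
    stream.put(json.dumps(event))

def consume(stream, sink):
    """Consumer side: drain the stream, enrich each event, write to a sink."""
    processed = 0
    while True:
        try:
            record = stream.get_nowait()
        except queue.Empty:
            break
        event = json.loads(record)
        event["ingested"] = True  # illustrative enrichment step
        sink.append(event)
        processed += 1
    return processed

# Hypothetical DER telemetry events (field names are assumptions).
publish(stream, {"device": "der-42", "kw": 3.2})
publish(stream, {"device": "der-17", "kw": 1.1})

sink = []
count = consume(stream, sink)
```

The same producer/consumer shape carries over to Kinesis or SQS: the consumer loop becomes a Lambda or a long-running poller, and the sink becomes S3, Redshift, or a downstream application.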