Paradigm Nat'l is a top AWS partner in the Midwest specializing in AWS Migration and cloud-native architectures. They are seeking a Data Engineer to expand and optimize data architecture and pipelines, focusing on designing and building solutions leveraging AWS services.
Responsibilities:
- Process unstructured data into a form suitable for analysis
- Support the business with ad hoc data analysis and build reliable data pipelines
- Implement best practices and IT operations for mission-critical, tight-SLA data pipelines using Airflow
- Migrate the query engine from Dremio to Redshift
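The first responsibility above — turning unstructured data into a form suitable for analysis — can be pictured with a minimal, stdlib-only Python sketch. The log format, field names, and sample rows below are invented purely for illustration:

```python
import re
import csv
import io

# Hypothetical semi-structured log lines (format invented for illustration).
RAW_LOGS = """\
2024-05-01 12:00:03 user=alice action=login status=ok
2024-05-01 12:00:09 user=bob action=upload status=error
2024-05-01 12:01:17 user=alice action=logout status=ok
"""

LINE_RE = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) (?P<rest>.*)$"
)

def parse_line(line):
    """Turn one raw log line into a flat dict suitable for tabular analysis."""
    m = LINE_RE.match(line)
    if m is None:
        return None  # skip malformed rows rather than failing the whole pipeline
    record = {"date": m.group("date"), "time": m.group("time")}
    for pair in m.group("rest").split():
        key, _, value = pair.partition("=")
        record[key] = value
    return record

def to_csv(lines):
    """Serialize parsed records as CSV -- the 'form suitable for analysis'."""
    records = [r for r in (parse_line(l) for l in lines) if r is not None]
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["date", "time", "user", "action", "status"]
    )
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(to_csv(RAW_LOGS.splitlines()))
```

In a production setting this parsing step would typically run as a task inside an Airflow DAG rather than as a standalone script.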
Requirements:
- Bachelor's Degree strongly preferred
- 3 to 5 years of real-world Data Engineering experience
- Programming experience, ideally in Python and other data engineering languages like Scala
- Programming knowledge to clean structured and semi-structured datasets
- Experience processing large amounts of structured and unstructured data
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets
- Background in Linux
- Experience building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and other cloud 'big data' technologies like Databricks, Snowflake, Dremio, and Qubole
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets
- Experience creating a platform on which complex data pipelines are built using orchestration tools like Airflow and Astronomer
- Experience with real-time sync between OLTP and OLAP systems using AWS technologies, e.g., syncing AWS Aurora to AWS Redshift
- Cloud provider certifications preferred
- Streaming data experience is a plus
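The OLTP→OLAP sync requirement is, at its core, a change-data-capture (CDC) replay: changes from the transactional store are streamed and applied to the analytical copy. The toy Python sketch below shows only the replay idea; all event shapes and data are invented, and a real Aurora-to-Redshift sync would use a managed AWS service (such as AWS DMS) rather than hand-rolled code:

```python
# Toy change-data-capture (CDC) replay: keep an OLAP copy of a table in step
# with an OLTP change stream. Event shapes and data are invented for
# illustration; a real Aurora -> Redshift sync would use a managed service.

def apply_event(olap_table: dict, event: dict) -> None:
    """Apply one CDC event: {'op': 'insert'|'update'|'delete', 'id': ..., 'row': ...}."""
    op, key = event["op"], event["id"]
    if op in ("insert", "update"):
        olap_table[key] = event["row"]   # upsert the latest row image
    elif op == "delete":
        olap_table.pop(key, None)        # tolerate deletes of unseen keys

def replay(olap_table: dict, stream) -> dict:
    """Fold an ordered stream of CDC events into the OLAP copy."""
    for event in stream:
        apply_event(olap_table, event)
    return olap_table

# Example: an empty OLAP table catching up on three OLTP changes.
events = [
    {"op": "insert", "id": 1, "row": {"name": "alice", "plan": "free"}},
    {"op": "update", "id": 1, "row": {"name": "alice", "plan": "pro"}},
    {"op": "insert", "id": 2, "row": {"name": "bob", "plan": "free"}},
]
print(replay({}, events))
```

Ordering matters here: events must be applied in commit order per key, which is why real pipelines lean on the database's replication log rather than ad hoc polling.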