Gandiv Insights LLC is seeking a Lead Data Engineer with retail experience. The role involves building data ingestion pipelines, managing data architecture, and working with various BI tools to extract meaningful insights.
Responsibilities:
- Write SQL for processing raw data, Kafka ingestion, ADF pipelines, data validation, and QA
- Work with APIs to collect and ingest data
- Experience with advanced SQL and Python
- Strong database knowledge; SQL and NoSQL preferred
- 12+ years of experience building data ingestion pipelines (Extract, Transform, Load workloads) and data warehouse or database architecture
- Experience writing design documentation and source-to-target mapping documentation, and managing Confluence pages
- Experience converting business functionality into technical Jira stories
- Experience analyzing large datasets to identify trends, patterns, and outliers and extract meaningful insights
- Strong experience with data modeling, design patterns, and building highly scalable business intelligence solutions and distributed applications
- Knowledge of cloud platforms such as Azure, AWS, or equivalent
- Experience storing, joining, filtering, and analyzing data using SQL, Spark, Hive, etc.
- Experience with continuous integration frameworks and building regression-testable data code using GitHub, Jenkins, and related tools
- Experience with programming/scripting languages such as Scala, Java, Python, or R (any combination)
- Analytical approach to problem-solving with an ability to work at an abstract level and gain consensus; excellent interpersonal, leadership, and communication skills
- Data-oriented mindset; motivated, independent, efficient, and able to handle several projects and work under pressure with a solid sense of priorities
- Ability to work in a fast-paced (startup-like) agile development environment
- Friendly, articulate, and interested in working in a fun, small-team environment
Requirements:
- 12+ years of software development and deployment experience
- Hands-on experience with SQL, Databricks, ADF, DataStage (or another ETL tool), SSAS cubes, Cognos, Tableau, ThoughtSpot, and other BI tools