CHRISTUS Health is a leading healthcare organization dedicated to delivering high-quality care through innovative technology. The Data and Analytics Engineer II will develop and maintain data pipelines, ensuring efficient data integration and analysis to support business intelligence initiatives.
Responsibilities:
- Analyze and understand data sources, participate in requirements gathering, and provide insights on data technology and modeling best practices
- Design and develop scalable data pipelines to support analytics and reporting needs, ensuring performance and reliability
- Collaborate with cross-functional teams to identify issues and implement cost-effective technical solutions that do not compromise system performance
- Develop and maintain code following industry best practices and organizational standards
- Evaluate proposed system solutions for compatibility, cost, and operational impact, and provide recommendations accordingly
- Integrate various software components and subsystems into existing environments, assessing impact and coordinating with relevant teams
- Participate in the development of standards, processes, and documentation for data collection, reporting, and system maintenance
- Research, design, and implement applications, databases, and interfaces using modern technology platforms
- Fix issues identified during testing cycles and continuously improve the quality of deliverables
- Document each development phase thoroughly for future reference and operational support
- Utilize critical thinking and programming principles to develop innovative solutions for complex data challenges
- Leverage expertise in enterprise application integration, data warehousing, and big data technologies to build efficient data processing systems
- Work with big data querying tools such as Hive, Impala, and Spark SQL, and build stream-processing systems using NiFi or Spark Streaming
- Optimize data integration and consumption processes through effective SQL programming and query tuning for large datasets
- Stay informed of the latest trends and advancements in data analytics, big data processing, and healthcare IT to enhance organizational capabilities
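To illustrate the query-tuning work described above, here is a minimal, self-contained sketch of the inspect-the-plan, add-an-index, re-check workflow. It uses SQLite from the Python standard library purely for portability; at the data volumes this role targets, the same idea would apply in an MPP engine such as Spark SQL or Impala. The table and column names (`encounters`, `patient_id`, `cost`) are hypothetical and chosen only for the example.

```python
import sqlite3

# Illustrative only: a toy healthcare-style table with 1,000 rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE encounters (id INTEGER PRIMARY KEY, patient_id INTEGER, cost REAL)"
)
conn.executemany(
    "INSERT INTO encounters (patient_id, cost) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT SUM(cost) FROM encounters WHERE patient_id = ?"

# Before indexing: the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

# Add an index on the filter column, then compare the plan.
conn.execute("CREATE INDEX idx_patient ON encounters (patient_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

result = conn.execute(query, (42,)).fetchone()[0]

print(plan_before[0][-1])  # full-scan plan
print(plan_after[0][-1])   # index-search plan
print(result)
```

The principle generalizes: read the engine's plan output, index or partition on the filter and join columns, and verify that the plan actually changed before trusting a benchmark.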
Requirements:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- Minimum of three (3) years of experience with MapReduce and Spark programming
- At least three (3) years of experience developing analytics solutions with large data sets within OLAP and MPP architectures
- Five (5) years of experience in designing, architecting, and developing enterprise-scale platforms based on open-source frameworks
- Proficiency in SQL programming with intermediate knowledge of query performance tuning
- Experience with data mining techniques, relational and non-relational databases
- Experience with data integration using ETL frameworks and tools such as NiFi, Hive, and Spark SQL
- Experience in a healthcare IT environment and familiarity with Microsoft SQL Server, SSIS, and related tools preferred
- Certifications in Hadoop, Java, or related technologies are a plus