rockITdata is a Service-Disabled Veteran-Owned Small Business (SDVOSB) services company that partners with leading commercial healthcare and life sciences organizations on cutting-edge innovations. The company is seeking a talented and experienced Full Stack Data Engineer to join its team, building end-to-end data solutions and contributing to innovative data-driven applications.
Responsibilities:
- Design and implement scalable data ingestion pipelines to efficiently collect and process data from various sources
- Integrate data from different systems and platforms to create unified datasets for analysis and reporting
- Develop and maintain data storage solutions such as data lakes, data warehouses, and NoSQL databases
- Optimize data storage and retrieval mechanisms for performance, scalability, and cost-effectiveness
- Implement data processing workflows for cleaning, transforming, and enriching raw data into usable formats
- Apply data transformation approaches such as ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform)
- Design and implement data models to support analytical and reporting requirements
- Optimize data models for query performance, data integrity, and storage efficiency
- Build software applications and APIs to expose data services and functionality to other systems and applications
- Integrate data engineering workflows with existing software systems and platforms
- Establish monitoring and alerting mechanisms to track the health and performance of data pipelines and systems
- Conduct regular maintenance activities to ensure the reliability, availability, and scalability of data infrastructure
- Document data engineering processes, architectures, and solutions to facilitate knowledge sharing and collaboration
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to understand requirements and deliver solutions
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Proficiency in programming languages such as Python, Java, or Scala for data engineering and software development
- Expert-level skills in data visualization platforms, including options beyond Tableau and Power BI (e.g., Qlik)
- Strong understanding of database concepts, data modeling techniques, and SQL programming
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP for building and deploying data solutions
- Knowledge of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake)
- Familiarity with version control systems (e.g., Git) and software development best practices (e.g., Agile, CI/CD)
- Experience building solutions for commercial clients in the pharma, biotech, CPG, retail, or manufacturing industries
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes
- Knowledge of streaming data processing frameworks (e.g., Apache Flink, Apache Kafka Streams)
- Familiarity with data governance and security practices for protecting sensitive data
- Strong problem-solving skills and the ability to troubleshoot complex data engineering issues
- Excellent communication skills and the ability to collaborate effectively in a team environment