Sammons Financial Group is seeking a Systems Analyst – Data Engineer to design, develop, and implement scalable data and integration solutions that power Life analytics and operations. The ideal candidate will have strong technical expertise in data platforms and will ensure solutions are performant, secure, and aligned with business objectives while supporting emerging analytics and AI use cases.
Responsibilities:
- Design, develop, and implement scalable data ingestion, integration, and processing pipelines across cloud platforms (Azure, AWS, Snowflake, and similar EDW/lakehouse platforms)
- Develop and manage data orchestration workflows using tools such as Azure Data Factory (ADF), Azure Data Lake (ADLS), dbt, and comparable technologies
- Ingest and process large volumes of structured, semi-structured, and unstructured data, including compressed formats (e.g., .tar), and automate extraction, transformation, and loading processes
- Design and implement modern data lakehouse architectures, including Iceberg (or similar table formats), to support scalable and high-performance analytics
- Develop and maintain data models that accurately represent complex relationships within life insurance and policy administration domains
- Integrate enterprise data platforms with internal and external systems (e.g., APIs, Kafka, MuleSoft) to enable real-time and batch data exchange
- Collaborate with product owners, architects, analysts, and developers to translate business, functional, and non-functional requirements into scalable technical solutions
- Establish and enforce data engineering standards, best practices, and governance controls across ingestion, transformation, and storage layers
- Implement data quality validation, reconciliation processes, and error handling to ensure accuracy, consistency, and reliability of data pipelines
- Monitor pipeline performance, reliability, scalability, and cost efficiency; recommend and implement improvements
- Research and recommend emerging tools, patterns, and technologies to improve data ingestion, processing, integration, and enablement of AI-driven use cases
- Build and support backend data systems, APIs, distributed services, and orchestration layers for secure enterprise use
- Support enterprise data integration capabilities across Azure, Snowflake, and similar platforms to unify, govern, and operationalize data for analytics and AI applications
- Support code versioning, CI/CD pipelines, and controlled deployment of data engineering assets
- Troubleshoot pipeline, data, and platform issues across development, testing, and production environments
- Provide operational support, including on-call or on-demand support for critical data processes
Requirements:
- Strong technical expertise in Snowflake or similar EDW/lakehouse platforms
- Strong technical expertise in SQL
- Strong technical expertise in Python
- Strong technical expertise in cloud-based data platforms (Azure, AWS)
- Ability to design, develop, and implement scalable data ingestion, integration, and processing pipelines across cloud platforms
- Ability to develop and manage data orchestration workflows using tools such as Azure Data Factory (ADF), Azure Data Lake (ADLS), dbt, and comparable technologies
- Ability to ingest and process large volumes of structured, semi-structured, and unstructured data, including compressed formats (e.g., .tar), and automate extraction, transformation, and loading processes
- Ability to design and implement modern data lakehouse architectures, including Iceberg (or similar table formats)
- Ability to develop and maintain data models that accurately represent complex relationships within life insurance and policy administration domains
- Ability to integrate enterprise data platforms with internal and external systems (e.g., APIs, Kafka, MuleSoft)
- Ability to collaborate with product owners, architects, analysts, and developers to translate business, functional, and non-functional requirements into scalable technical solutions
- Ability to establish and enforce data engineering standards, best practices, and governance controls across ingestion, transformation, and storage layers
- Ability to implement data quality validation, reconciliation processes, and error handling to ensure accuracy, consistency, and reliability of data pipelines
- Ability to monitor pipeline performance, reliability, scalability, and cost efficiency; recommend and implement improvements
- Ability to research and recommend emerging tools, patterns, and technologies to improve data ingestion, processing, integration, and enablement of AI-driven use cases
- Ability to build and support backend data systems, APIs, distributed services, and orchestration layers for secure enterprise use
- Ability to support enterprise data integration capabilities across Azure, Snowflake, and similar platforms
- Ability to support code versioning, CI/CD pipelines, and controlled deployment of data engineering assets
- Ability to troubleshoot pipeline, data, and platform issues across development, testing, and production environments
- Ability to provide operational support, including on-call or on-demand support for critical data processes
- Criminal background check required
- College degree in computer science, information science, management information systems, or a related field
- Minimum 8 years' IT development experience or equivalent
- Effective verbal and written communication skills and the ability to communicate with business partners and other IT staff
- Problem-solving skills sufficient to research problems and recommend proposed solutions
- Able to work on multiple tasks and meet established deadlines
- Able to effectively direct and coordinate the work of other team members on a project without having HR management responsibility for them
- Knowledge of the computer programming languages required for the system
- Certifications in Snowflake
- Experience with Python or Java for data processing and automation
- Familiarity with emerging technologies supporting AI/ML and advanced analytics
- Experience working in Agile delivery models and cross-functional enterprise environments