Stott and May is hiring Salesforce Data Engineers for a consulting client in the Financial Services industry. This role involves building and maintaining data systems that support AI, analytics, and data-driven decision-making, focusing on creating efficient data pipelines and ensuring data quality across the business.
Responsibilities:
- Design, build, and maintain data pipelines that support efficient collection, ingestion, storage, and processing
- Implement modern data architectures such as data lakes, data warehouses, lakehouses, and data mesh platforms
- Develop streaming data flows for near-real-time and low-latency use cases
- Clean and prepare data to support analytics, reporting, and AI model readiness
- Improve performance and reliability across data systems
- Apply data governance and security best practices to safeguard customer information
- Partner with technical and business teams to understand requirements and deliver effective solutions
- Identify opportunities to streamline operations and reduce cost through smarter data design
- Monitor and resolve issues to maintain dependable, resilient data operations
Requirements:
- Experience building and maintaining data pipelines and scalable ETL/ELT frameworks
- Experience on Salesforce projects handling data migrations and integrations within the platform
- Strong foundation in relational and non-relational data systems
- Strong data modeling skills
- Working knowledge of data lake, data warehouse, and lakehouse patterns
- Hands-on experience with both batch and streaming data pipelines
- Proficiency in SQL, Python, and modern data engineering tools and libraries, such as Pandas
- Ability to design structured, scalable solutions for analytics and AI preparation
- Familiarity with cloud platforms and distributed processing frameworks
- Clear, concise communication skills
- Experience with Databricks, Snowflake, Microsoft Synapse, Fabric, AWS Glue, DMS, or similar data platforms and technologies
- Experience with open data platforms and tools, such as Apache Spark, Airflow, Delta Lake, or Iceberg
- Background supporting data migrations, API integrations, and Machine Learning or AI data requirements
- Understanding of data governance, lineage, and secure data practices
- Exposure to a data product mindset and domain-oriented or data mesh approaches
- Professional English proficiency is required