Snowflake is a company focused on empowering enterprises to achieve their full potential through innovative technology. The company is seeking a Senior Solution Engineer to provide AI/ML technical guidance to customers, collaborate with sales teams, and design solutions on the Snowflake Cloud Data Platform. The role requires a strong understanding of data engineering and the ability to communicate effectively with both technical and business executives.
Responsibilities:
- Apply your multi-cloud data architecture expertise while presenting Snowflake technology and vision to executives and technical contributors at strategic prospects, customers, and partners
- Work hands-on with prospects and customers to demonstrate and communicate the value of Snowflake technology throughout the sales cycle, from demo to proof of concept to design and implementation
- Immerse yourself in the ever-evolving industry, maintaining a deep understanding of competitive and complementary technologies and vendors, and how to position Snowflake in relation to them
- Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake’s products and marketing
Requirements:
- 5–10+ years of data engineering experience in the Enterprise Data space, with a strong preference for candidates with deep, hands-on Snowflake architecture experience
- 5+ years of experience working with AI/ML technologies
- Outstanding presentation skills for both technical and executive audiences, whether impromptu at a whiteboard or delivered through presentations and demos
- Ability to connect a customer's specific business problems to Snowflake's solutions
- Ability to conduct deep discovery of a customer's architecture framework and map it to the Snowflake data architecture
- Broad experience across large-scale database and/or data warehouse technologies, ETL, analytics, and cloud technologies, for example Data Lake, Data Mesh, and Data Fabric architectures
- Hands-on development experience with technologies such as SQL, Python, Pandas, Spark, PySpark, Hadoop, Hive, and other big data technologies
- Deep understanding of data integration services and tools for building ETL and ELT data pipelines, such as Apache NiFi, Matillion, Fivetran, Qlik, or Informatica
- Familiarity with streaming technologies (e.g., Kafka, Flink, Spark Streaming, Kinesis) and real-time or near-real-time use cases (e.g., CDC)
- Experience designing interoperable data lakehouse architectures and working with Iceberg, Delta, and Parquet
- Strong architectural expertise in data engineering, with the confidence to present and demo to business executives and technical audiences and to field impromptu questions effectively
- Bachelor's degree required; a degree in computer science, engineering, mathematics, or a related field, or equivalent experience, preferred