
Hope you are doing well!
My name is Pavan and I work with SPAR Information Systems. I have a great opportunity for you; please find the job details below. If you are interested in applying, please send me your updated resume and the best time for you to discuss this opportunity in detail.
Sr. Data Engineer, Data & Intelligence
Location: Frisco, TX
Job Summary
We are seeking a Senior Data Engineer to design, build, and operate highly scalable batch and streaming data pipelines supporting T-Mobile's Finance and Intelligence platforms. This role requires deep expertise in modern cloud data stacks (Snowflake, Databricks, dbt), strong SQL/Python skills, and a solid understanding of finance data domains including billing, revenue, GL, and OPEX. The ideal candidate owns complex pipelines end-to-end, mentors junior engineers, and helps drive platform standards and best practices.
________________________________________
Key Responsibilities
Data Pipeline Development
Design and build scalable, reliable ELT/ETL pipelines for finance data (billing, revenue, GL, OPEX).
Implement batch and incremental ingestion patterns (full load, CDC, watermark-based).
Build idempotent, rerunnable pipelines with robust error handling, retry logic, and dead-letter queue patterns.
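As a rough illustration of the watermark-based, idempotent ingestion pattern named above, here is a minimal Python sketch; the in-memory source rows and field names are hypothetical stand-ins for a real source table and watermark store:

```python
# Hypothetical in-memory stand-in for a source table; in practice this
# would be a query against the source system filtered by updated_at.
SOURCE_ROWS = [
    {"id": 1, "updated_at": "2024-01-01T00:00:00"},
    {"id": 2, "updated_at": "2024-01-02T00:00:00"},
    {"id": 3, "updated_at": "2024-01-03T00:00:00"},
]

def incremental_load(rows, watermark):
    """Return rows strictly newer than the watermark plus the new watermark.

    Rerunning with the same watermark yields the same result (idempotent),
    and advancing the watermark skips already-loaded rows.
    """
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark
```

Because the load reads only rows newer than the watermark and derives the next watermark from what it actually loaded, retries and backfills are safe to rerun, which is the property the bullet above is asking for.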
Platform & Tooling
Develop and optimize pipelines using Snowflake (Snowpipe, Streams, Tasks, Dynamic Tables, performance tuning).
Build data processing workflows in Databricks (PySpark, Delta Live Tables, Unity Catalog, job clusters).
Create and maintain dbt models, tests, snapshots, macros, and packages with CI integration.
Orchestrate data workflows using Airflow or Azure Data Factory (DAG design, dependencies, scheduling, alerts).
Cloud Infrastructure
Work within Azure (ADLS Gen2, Event Hub, ADF, Azure Functions, Key Vault) and/or AWS (S3, Glue, Lambda, Secrets Manager).
Apply Infrastructure as Code fundamentals (Terraform, Bicep) for pipeline and resource provisioning.
Apply cloud cost awareness including compute sizing, partitioning strategies, and storage optimization.
Languages & Frameworks
Write advanced SQL (CTEs, window functions, query tuning, execution plan analysis).
Develop in Python (pandas, PySpark, requests, pytest, logging).
Read and modify existing Scala/Spark jobs as needed.
Use shell scripting for automation and operational tasks.
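To make the advanced-SQL expectation concrete, here is a small self-contained sketch of a CTE combined with a window function, run against an in-memory SQLite database; the table and column names are invented for illustration, though a warehouse such as Snowflake would accept the same SQL shape:

```python
import sqlite3

# Toy billing table; column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE billing (account TEXT, period TEXT, amount REAL);
INSERT INTO billing VALUES
  ('A', '2024-01', 100), ('A', '2024-02', 120), ('B', '2024-01', 80);
""")

# CTE plus a running-total window function per account.
query = """
WITH monthly AS (
    SELECT account, period, amount FROM billing
)
SELECT account, period, amount,
       SUM(amount) OVER (PARTITION BY account ORDER BY period) AS running_total
FROM monthly
ORDER BY account, period;
"""
rows = conn.execute(query).fetchall()
```

The `PARTITION BY ... ORDER BY` frame is the building block behind cumulative revenue, period-over-period deltas, and similar finance calculations.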
Streaming & Real Time Processing
Build near real time pipelines using Apache Kafka / Azure Event Hub.
Implement Spark Structured Streaming with stateful aggregations, watermarking, and checkpointing.
Support finance use cases such as revenue reconciliation and fraud signal feeds.
Data Quality & Testing
Implement unit and integration testing for pipelines (pytest, dbt tests).
Create data quality checks (row counts, nulls, duplicates, referential integrity).
Use Great Expectations or custom frameworks for validation.
Monitor SLAs for pipeline latency and data freshness with alerting.
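A custom validation along the lines described above can start as simple as the following sketch; in practice Great Expectations or dbt tests would replace it, and the check names and row format here are hypothetical:

```python
def quality_checks(rows, key, required):
    """Run basic row-count, null, and duplicate-key checks on a list of dicts.

    Returns a mapping of check name -> passed (bool), which a pipeline could
    use to fail fast or raise an alert.
    """
    keys = [r.get(key) for r in rows]
    return {
        "nonempty": len(rows) > 0,
        "no_nulls": all(r.get(c) is not None for r in rows for c in required),
        "no_duplicate_keys": len(keys) == len(set(keys)),
    }
```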
Data Modeling Support
Implement architected schemas (star, snowflake, data vault).
Manage Slowly Changing Dimensions (SCD Type 1 & 2) for finance entities.
Define partitioning and clustering strategies for large-scale finance tables.
Support semantic layer definitions (metrics and dimensions).
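For reference, the core of an SCD Type 2 update mentioned above (close the current row, append a new current row) can be sketched in a few lines; the dimension is modeled as a list of dicts purely for illustration, standing in for a warehouse table:

```python
def scd2_upsert(dimension, key, incoming, today):
    """Apply an SCD Type 2 change to a toy dimension table.

    Rows carry `is_current`, `valid_from`, and `valid_to` fields. The current
    row for the key is closed out as of `today`, and the incoming attributes
    are appended as the new current row, preserving history.
    """
    for row in dimension:
        if row[key] == incoming[key] and row["is_current"]:
            row["is_current"] = False
            row["valid_to"] = today
    new_row = dict(incoming)
    new_row.update({"is_current": True, "valid_from": today, "valid_to": None})
    dimension.append(new_row)
    return dimension
```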
DevOps & Engineering Practices
Participate in CI/CD for data pipelines using GitHub Actions or Azure DevOps.
Follow Git branching strategies (trunk-based, feature branches).
Perform code reviews and enforce engineering standards.
Support environment promotion patterns (dev → QA → prod).
Security & Governance
Implement RBAC and row/column-level security in Snowflake and Databricks.
Ensure PII and CPNI handling per T-Mobile TISS 310 policy.
Manage secrets securely (Key Vault, environment variables, no hardcoded credentials).
Implement data lineage and audit instrumentation for compliance.
Collaboration & Communication
Partner with Data Architects to translate design specs into production-ready pipelines.
Work closely with Data Analysts to optimize downstream consumption performance.
Communicate pipeline incidents and data issues clearly to business stakeholders.
Participate in on-call rotation to support production pipelines.
________________________________________
Senior-Level Expectations
Own delivery of complex, multi-source pipelines with minimal direction.
Mentor junior and mid-level data engineers through pairing and code reviews.
Identify and drive technical debt reduction alongside feature delivery.
Contribute to and shape team standards, templates, and reusable components.
Influence tooling, framework, and platform decisions across the team.
________________________________________
Required Qualifications
8+ years of experience in data engineering or platform engineering roles.
Strong experience with Snowflake, Databricks, and dbt in production environments.
Advanced SQL and Python skills.
Experience building finance or regulated data pipelines at scale.
Preferred Qualifications
Telecom industry experience (ARPU, churn, prepaid/postpaid metrics).
Experience with both Azure and AWS cloud platforms.
Prior experience supporting financial reporting and period-end close cycles.
________________________________________
Job title: Data Architect
Location: Frisco, TX or Seattle, WA
Duration: Long term contract
We are looking for someone who can architect a scalable revenue data platform to support enterprise-wide data needs and deliver actionable insights across Client business units. Our current tech stack includes Azure, Snowflake, Databricks, Claude, and GitLab, with AWS also available. Furthermore, knowledge of the finance domain and of platforms such as SAP and Oracle is an added advantage.
The ideal candidate will bring a forward-thinking approach: able to translate vision into architecture and design, develop proofs of concept, incorporate cost considerations, and deliver incremental value aligned to specific business units. This is a critical role that requires strong ownership, a bias for action, and the ability to balance strategic thinking with hands-on collaboration at the engineering level.
Role Overview
We are seeking a highly skilled Data Architect to lead the design and evolution of our Revenue Data Platform (RDMP). This role will be instrumental in architecting a scalable, enterprise-grade data platform that enables unified data access and delivers actionable insights across Client business units.
The ideal candidate will bring a forward-looking mindset, translating strategic vision into practical architecture, while driving innovation, cost efficiency, and measurable business outcomes.
Key Responsibilities
Define and architect a scalable, enterprise-wide Revenue Data Platform to support diverse business needs
Translate business vision into robust data architecture, design patterns, and an implementation roadmap
Develop proofs of concept (POCs) to validate architectural decisions and accelerate adoption
Design solutions leveraging modern data platforms including Azure, Snowflake, Databricks, and AWS
Collaborate closely with engineering teams to ensure hands-on alignment and execution
Incorporate cost optimization strategies into architecture and platform design
Deliver incremental value aligned to specific business units through phased implementation
Ensure data solutions enable high-quality, secure, and governed data access across the enterprise
Required Qualifications:
Proven experience as a Data Architect designing and implementing large-scale data platforms
Strong expertise in cloud data ecosystems (Azure preferred; AWS experience is a plus)
Hands-on experience with Snowflake, Databricks, and modern data engineering practices
Experience working with CI/CD tools such as GitLab
Ability to bridge strategic architecture and hands-on engineering execution
Strong understanding of data lineage, data governance, and data platform scalability
Preferred Qualifications
Experience in Finance/Revenue domains
Familiarity with enterprise platforms such as SAP and Oracle
Exposure to AI/LLM tools (e.g., Claude) and their integration into data platforms
Thanks & Regards,
Pavan Raikhelkar
LEAD TALENT ACQUISITION SPECIALIST
Direct Number:
Fax:
Email:
Website:
(An E-verify Company)
NOTE: We respect your online privacy. This is not an unsolicited mail. Under Bill 1618 Title III, passed by the 105th U.S. Congress, this mail cannot be considered spam as long as we include contact information and a method to be removed from our mailing list. If you are not interested in receiving our e-mails, please reply with "REMOVE" in the subject line. We apologize for any inconvenience caused by this mail.