Data Architect Specialist at Runtalent | JobVerse
Runtalent
Remote
Data Architect Specialist
Brazil
Full Time
3 weeks ago
No Sponsorship
Apply Now
Key skills
Airflow
Amazon Redshift
AWS
Azure
BigQuery
Cloud
Google Cloud Platform
Kafka
Spark
Tableau
Terraform
Data Lake
Analytics
Looker
BI
Power BI
Databricks
Lambda
S3
IAM
Glue
Kinesis
Pub/Sub
Composer
GitOps
CI/CD
Remote Work
Cloud Security
About this role
Role Overview
Design and evolve C&A's enterprise data architecture in a hybrid, multi-cloud environment (on-premises ↔ AWS ↔ Azure ↔ GCP).
Develop data extraction, ingestion, and transformation processes across the layers of the data lake that will be built.
Design integrations between legacy systems and cloud ingestion and processing layers.
Define integration standards between local data centers, on-premises systems, and cloud solutions.
Ensure the architecture is resilient, secure, high-performing, and cost-optimized.
Architect large-scale data pipelines distributed across AWS, Azure, and GCP.
Define and maintain data lifecycles across Bronze, Silver, and Gold layers.
Implement Data Governance, Data Quality policies, and a unified catalog (Lake Formation, Glue Catalog, DataHub, Collibra, or equivalent).
Work with Product, Customer, and Supplier teams to define domains and enterprise models.
Integrate analytics consumption and data science environments across clouds.
Build architectures for real-time and batch consumption.
Guide engineering, analytics, and systems squads in adopting unified architecture patterns for the multi-cloud ecosystem.
Lead proof-of-concept (POC) projects, including comparative evaluations across AWS, GCP, and Azure.
Support development of data extraction and ingestion.
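The Bronze/Silver/Gold lifecycle described above can be sketched in plain Python. This is a hypothetical, simplified illustration with made-up field names (`order_id`, `customer`, `amount`); real pipelines in this role would run on Spark, Glue, or Dataflow over data lake storage rather than in-memory lists:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of the Bronze / Silver / Gold ("medallion") layers.
# Plain Python lists stand in for data lake tables here.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, stamped with ingestion metadata."""
    ts = datetime.now(timezone.utc).isoformat()
    return [{"payload": r, "ingested_at": ts} for r in raw_records]

def to_silver(bronze_records):
    """Silver: parse, validate, and deduplicate on a business key."""
    seen, silver = set(), []
    for rec in bronze_records:
        row = json.loads(rec["payload"])
        if "order_id" not in row or row.get("amount") is None:
            continue  # drop invalid rows
        if row["order_id"] in seen:
            continue  # deduplicate on the business key
        seen.add(row["order_id"])
        silver.append(row)
    return silver

def to_gold(silver_records):
    """Gold: aggregate into a consumption-ready metric per customer."""
    totals = {}
    for row in silver_records:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = [
    '{"order_id": 1, "customer": "acme", "amount": 10.0}',
    '{"order_id": 1, "customer": "acme", "amount": 10.0}',  # duplicate
    '{"order_id": 2, "customer": "acme", "amount": 5.0}',
    '{"order_id": 3, "customer": "beta", "amount": 7.5}',
    '{"customer": "gamma"}',                                # invalid
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'acme': 15.0, 'beta': 7.5}
```

The point of the layering is that each stage has one responsibility: Bronze preserves raw history, Silver enforces quality rules, and Gold serves BI tools such as Power BI or Looker.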
Requirements
Strong experience with hybrid integration: on-premises, AWS, GCP, and Azure.
Hands-on experience with corporate network connectivity.
Experience designing multi-cloud architectures with standardization and interoperability.
Deep knowledge of multi-cloud data pipelines: AWS (Kinesis, Glue, S3, Lake Formation, Redshift, EMR, Lambda); GCP (BigQuery, Dataflow/Beam, Pub/Sub, Composer); Azure (Data Factory, Synapse, ADLS, Databricks, Event Hub).
Infrastructure as Code with Terraform; automated deployment pipelines.
Experience with ingestion patterns: CDC, streaming, batch, API ingestion, file ingestion.
Data modeling: conceptual, logical, physical, canonical models, corporate standards.
Big Data and distributed processing (Spark, EMR, Databricks desirable).
Governance, metadata, quality, catalog, and lineage.
Security and IAM.
Orchestration tools: Step Functions, Airflow, MuleSoft.
Experience with DW/BI tools: Power BI (required); Looker, Tableau, and QuickSight (preferred).
Solid knowledge of messaging systems (Kafka, Pub/Sub, Event Hub, Kinesis).
Experience with CI/CD, Infrastructure as Code (Terraform preferred), GitOps.
Cloud security.
Hands-on profile to support the development of data extraction and ingestion.
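Of the ingestion patterns listed in the requirements (CDC, streaming, batch, API, file), incremental change-data-capture can be sketched with a stored high-watermark. This is a hypothetical illustration with invented table and column names (`updated_at`); production CDC would typically use Kinesis, Pub/Sub, Event Hub, or a log-based tool rather than timestamp polling:

```python
# Hypothetical sketch of watermark-based incremental (CDC-style) ingestion.
# A stored high-watermark lets each run pull only rows changed since the
# previous run, instead of re-reading the full table as a batch load would.

source_table = [
    {"id": 1, "updated_at": "2024-01-01T00:00:00"},
    {"id": 2, "updated_at": "2024-01-02T00:00:00"},
    {"id": 3, "updated_at": "2024-01-03T00:00:00"},
]

def incremental_extract(table, watermark):
    """Return rows changed after `watermark`, plus the new watermark."""
    changed = [row for row in table if row["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in changed), default=watermark)
    return changed, new_watermark

# First run: everything newer than the initial watermark is extracted.
rows, wm = incremental_extract(source_table, "2024-01-01T12:00:00")
print([r["id"] for r in rows])  # [2, 3]

# Second run: nothing changed since the stored watermark, so nothing is pulled.
rows, wm = incremental_extract(source_table, wm)
print([r["id"] for r in rows])  # []
```

Timestamp-based polling is the simplest of the listed patterns; log-based CDC adds deletes and exact ordering, at the cost of operating a message bus such as Kafka or Kinesis.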
Tech Stack
Airflow
Amazon Redshift
AWS
Azure
BigQuery
Cloud
Google Cloud Platform
Kafka
Spark
Tableau
Terraform
Benefits
Remote work