Credit Acceptance is an award-winning company recognized for its workplace culture and success in the used car finance industry. The Senior Software Engineer on the Data Platform team will design and implement the platform's core components while partnering with stakeholders to deliver effective data management solutions.
Responsibilities:
- Design and implement core components of the data platform (e.g., data lake, streaming infrastructure, data-as-a-service (DaaS), catalog), emphasizing scalability, reliability, and observability
- Balance hands-on delivery with architectural foresight, contributing to cross-functional initiatives that strengthen the platform
- Partner with data and engineering stakeholders to understand requirements and deliver effective, efficient solutions for data acquisition, transformation, and integration
- Write unit and integration tests, validating software against acceptance criteria to ensure platform reliability
- Apply and promote team standards for coding, documentation, and testing, ensuring maintainable and high-quality engineering practices
- Conduct impact analysis to identify dependencies and assess potential risks of changes across applications and services
- Develop a strong understanding of platform use cases and business processes to align technical solutions with organizational needs
- Experiment with new tools and approaches, validate assumptions, and recommend solutions that improve the platform’s capabilities
- Participate in design and code reviews, providing constructive feedback and communicating changes effectively
- Document platform components and designs, ensuring projects are maintainable and understandable by others
- Troubleshoot and resolve production issues, proposing effective solutions to restore platform stability
- Contribute to sprint commitments and actively engage in Agile practices, including retrospectives and process improvements
- Engage in continuous learning, deepening knowledge of modern data platform technologies, distributed systems, and engineering best practices
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a closely related field; or equivalent work experience
- Minimum 5 years of software engineering experience, with recent hands-on experience building and maintaining data platforms or distributed systems in cloud environments
- Strong knowledge of software engineering best practices, with practical experience building and operating data platforms, products, or solutions
- Experience building and operating applications on cloud platforms (e.g., AWS, Azure, or GCP), including deploying and supporting containerized services (Docker, Kubernetes, ECS/EKS)
- Familiarity with lakehouse principles (Delta Lake, Iceberg, or Hudi) and best practices for schema evolution, versioning, and performance optimization
- Experience with observability practices (metrics, logs, tracing, alerting) and tools (e.g., Dynatrace, Splunk, CloudWatch) to ensure platform reliability
- Knowledge of data storage technologies relevant to data platforms, including object stores (S3, ADLS, GCS), relational databases, and NoSQL systems
- Awareness of data governance and security practices (e.g., access controls, encryption, compliance considerations), with the ability to design platform components that align with organizational standards
- Solid understanding of distributed systems concepts (scalability, reliability, consistency, partitioning) and their application to data platforms
- Experience working with enterprise-class applications where uptime, reliability, and scalability are essential
- Strong programming skills in one or more languages commonly used for platform engineering (e.g., Python, Java, Scala, Go)
- Demonstrated ability to mentor and coach less experienced engineers, contributing to team growth and technical maturity
- Familiarity with Agile delivery practices and other software development lifecycle methodologies
Preferred:
- Hands-on experience with lakehouse technologies (Delta, Iceberg, Hudi), beyond conceptual familiarity
- Exposure to workflow orchestration frameworks (Airflow, Dagster, Prefect, Databricks Workflows)
- Experience with CI/CD pipelines for automated testing and deployment
- Exposure to observability tooling (Datadog, Prometheus, Grafana, ELK, Dynatrace, Splunk) beyond basics
- Experience debugging performance issues and optimizing systems for cost and scale
- Financial services or FinTech industry experience