General Dynamics Information Technology is a global technology and professional services company that delivers consulting, technology and mission services to every major agency across the U.S. government. As a Data Engineer Senior, you will develop enterprise-grade data platforms and services, helping clients overcome big-data obstacles and advance their missions.
Responsibilities:
- Deliver insights that help our clients turn data into action
- Provide transformative solutions to our clients’ big-data challenges and help advance their missions
- Help ensure today is safe and tomorrow is smarter
- Work with our clients to develop enterprise-grade data platforms, services, and pipelines
- Lead and architect migration of data environments with performance and reliability
- Assess and understand ETL jobs, workflows, BI tools, and reports
- Address technical inquiries concerning customization, integration, enterprise architecture, and general features/functionality of data products
- Craft database and data warehouse solutions in the cloud (preferably AWS; alternatively Azure or GCP)
- Support an Agile software development lifecycle
- Contribute to the growth of our Data Exploitation Practice!
Requirements:
- Clearance Level Must Currently Possess: Secret
- Clearance Level Must Be Able To Obtain: Secret
- 2+ years of related experience
- US Citizenship Required: Yes
- Big Data
- Data Engineering
- Data Modeling
- Data Tools
- Structured Query Language (SQL)
- Ability to hold a position of public trust with the US government
- 2-4 years of experience working with MS SQL Server and SSIS to build ETL pipelines
- 2-4 years industry experience coding commercial software and a passion for solving complex problems
- 2-4 years direct experience in Data Engineering with tools such as:
  - Big data tools: Hadoop, Spark, Kafka, etc.
  - Relational SQL and NoSQL databases, including Postgres and Cassandra
  - Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  - AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
  - Data streaming systems: Storm, Spark Streaming, etc.
  - Search tools: Solr, Lucene, Elasticsearch
  - Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Advanced working SQL knowledge and experience with relational databases, including query authoring and optimization, as well as working familiarity with a variety of databases
- Experience with message queuing, stream processing, and highly scalable “big data” data stores
- Experience manipulating, processing, and extracting value from large, disconnected datasets
- Experience manipulating structured and unstructured data for analysis
- Experience constructing complex queries to analyze results using databases or in a data processing development environment
- Experience with data modeling tools and processes
- Experience architecting data systems (transactional and warehouses)
- Experience aggregating results and/or compiling information for reporting from multiple datasets
- Experience working in an Agile environment
- Experience supporting project teams of developers and data scientists who build web-based interfaces, dashboards, reports, and analytics/machine learning models