FICO is a leading global analytics software company, helping businesses in 100+ countries make better decisions. The Lead Data Engineer will collaborate with data scientists and stakeholders to deliver data-driven solutions, analyze large datasets, and support the analytic software development lifecycle.
Responsibilities:
- Collaborate with data scientists to ensure data is accurately extracted, transformed, and loaded for analysis and decision-making
- Effectively collaborate and partner with various Scores stakeholders to deliver data-driven solutions that support strategic Scores initiatives
- Analyze, interpret, and manipulate large datasets to support analytic research and model development efforts
- Deliver high-quality results for business-critical projects within expected timelines
- Use internal technologies to develop, maintain, and improve tools and processes that help solve challenging business problems in predictive analytics
- Support our existing code base and the overall analytic SDLC
- Demonstrate self-initiative and innovation by writing new code to continuously evaluate and improve the existing code base
- Apply advanced data transformation techniques to optimize the processing of large datasets
- Work closely with data scientists and other data engineers to construct the best methodologies for generating new tools, code, and datasets based on project requirements
Requirements:
- BS degree in Computer Science, Engineering, Information Technology, Management Information Systems (or equivalent work experience)
- Proven programming skills in Python, Java/Groovy, Perl, and/or shell scripting
- Demonstrated expertise utilizing Linux (Red Hat) and Windows operating systems
- Expertise in AWS services (SageMaker, S3, Athena) and in Python-based Jupyter Notebooks
- Proven expertise analyzing large datasets and applying data-cleaning techniques; strong data wrangling skills and experience working with big data technologies
- Strong team-player with great communication skills
- High proficiency with VS Code and notebook interfaces such as Jupyter
- Adept at translating business problems into executable code
- Ability to use statistical software and data manipulation tools to evaluate data quality
- Highly detail-oriented, with the ability to execute a given process with meticulous precision
- Proficient in auditing results with a critical level of accuracy and detail
- Experience working with big data technology (Spark, Hadoop, etc.)
- Familiarity with relational databases (Oracle, MySQL, etc.)
- Familiarity with Eclipse IDE
- Familiar with version control tools like GitHub or Bitbucket
- Prior/current experience working with U.S. Credit Bureau data
- Python programming skills (Highly Preferred)
- AWS Certifications are a plus