
Description
JOB DESCRIPTION:
SAIC is seeking a Data Extract, Transform, Load (ETL) Systems Engineer in Chantilly, VA to focus on Application Programming Interface (API) design, implementation, and maintenance to extract data from various sources, transform it, and visualize it for analysis and use by senior decision makers. This role also involves advising on data infrastructure, data quality, and data security while employing a variety of technical tools and solutions.
The successful applicant will exhibit flexibility in task execution and provide technical expertise. As a SETA advisor, the candidate will be required to demonstrate value-added judgment in advising the government on program activities. The successful candidate will produce recommendations and deliverables in a thorough, practicable, and consistent manner congruent with the organization's objectives. U.S. citizenship is required. An active Top Secret/Sensitive Compartmented Information (TS/SCI) clearance with polygraph is required to start and must be maintained.
Responsibilities:
· Design, develop, and implement ETL processes to extract, transform, and load data from various sources (e.g., databases, flat files, APIs).
· Build, maintain, and optimize data pipelines to ensure efficient and reliable data flow.
· Ensure data quality and integrity throughout the ETL process, addressing data cleansing, validation, and security concerns.
· Design and maintain data warehouse schema and data models, ensuring data consistency and accuracy.
· Provide technical expertise and guidance on data infrastructure, data modeling, and data governance practices.
· Monitor and optimize ETL pipeline performance, addressing bottlenecks and improving execution times.
· Troubleshoot ETL issues, identify root causes, and implement solutions.
· Create and maintain documentation for ETL processes, data mappings, and data models.
· Collaborate with cross-functional teams (e.g., data analysts, business users) to understand data requirements and ensure data quality.
Qualifications
Required Skills and Qualifications:
- Must be a U.S. citizen.
- Active TS/SCI clearance with polygraph.
- Bachelor's degree in a science, technology, engineering, or mathematics (STEM) field and three (3) or more years of relevant experience, or a master's degree in a STEM field and one (1) or more years of relevant experience.
- Demonstrated proficiency in designing and implementing data pipelines that automate the ETL process.
- Experience with Application Programming Interfaces (APIs), including implementing existing APIs as-is or reworking them.
- Experience with Tableau's extract, transform, and load functions.
- A strong background in Python, Java, Scala, and/or SQL.
- Significant knowledge of AWS services.
- Comprehensive understanding of data architecture and best practices.
- Demonstrated ability to write scripts for data ETL.
- Experience with data integration tools and platforms, such as Apache Kafka, Apache Flume, AWS Glue, Azure Data Factory, or Google Cloud Dataflow.
- Proficiency in working with RESTful APIs and web services.
- Proficiency with relational databases (e.g., MS SQL, MySQL, PostgreSQL, Oracle).
- Experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Knowledge of data warehousing concepts and platforms (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Experience documenting project requirements and schedules using agile project management techniques.
- Experience with software configuration management techniques.
- Experience with Retrieval-Augmented Generation (RAG).
Desired Skills and Qualifications:
- AWS Solutions Architect, Data Analytics, or Developer Associate certification.
- Cloudera Data Platform, Azure Data Engineer, or Google Data Engineer certification(s).
- Familiarity with data modeling, data cataloging, and/or data governance.
- Familiarity with the application and use of Artificial Intelligence (AI) and Machine Learning (ML) services.