Senior IT Data Engineer
Washington, DC
Full Time
Experienced
Senior IT Data Engineer (Cloudera / Big Data Platforms)
Location: Washington, DC
Citizenship Requirement: U.S. citizenship required
About The Britton Group
The Britton Group is a premier provider of intelligence and national security solutions, specializing in mission-critical IT services, enterprise digital transformation, artificial intelligence, full stack development, multimedia design, and advanced intelligence support.
With over 25 years of experience delivering innovative, secure, and agile solutions to the federal government, we are a trusted partner to the Intelligence Community.
The Opportunity
This position supports a mission-critical data engineering initiative focused on building, optimizing, and sustaining enterprise data pipelines within a large-scale distributed data environment. The selected candidate will play a key role in enabling data-driven decision-making by ensuring the availability, integrity, and performance of critical data assets.
We are seeking a Senior IT Data Engineer with strong experience in big data platforms, data pipeline development, and distributed processing frameworks. This role is ideal for engineers who thrive in complex data ecosystems, understand end-to-end data lifecycle management, and can build scalable solutions that support enterprise analytics and operational needs.
You will be responsible for designing and maintaining robust data pipelines, integrating data from multiple sources, and ensuring high data quality and reliability across the platform.
Core Experience
Candidates should bring hands-on experience in:
Designing, developing, and maintaining data pipelines within distributed data environments such as Cloudera Data Platform
Building ETL/ELT workflows to ingest, cleanse, transform, and aggregate structured and unstructured data
Working with large-scale data processing frameworks including Hadoop, Spark, Hive, HBase, and Kafka
Developing data solutions using Python, SQL, and Java
Utilizing data integration and ingestion tools such as Apache NiFi
Performing data quality validation, monitoring, and performance tuning across data pipelines
Supporting long-term operations, maintenance, and optimization of enterprise data platforms
Implementing version-controlled, code-based data solutions using Git and DevOps best practices
Collaborating within Agile environments using Scrum or Kanban methodologies
Working in UNIX/Linux environments, including shell scripting and command-line operations
Additional Experience That Adds Value
Experience with data transformation frameworks such as PySpark, pandas, or dbt
Experience implementing CI/CD pipelines for data engineering workflows
Familiarity with data governance, data lifecycle management, and data protection practices
Experience working with real-time or streaming data architectures
Exposure to cloud-based data platforms or hybrid data environments
Experience supporting federal or regulated environments
Demonstrated Expertise
Candidates should be able to clearly demonstrate:
Ability to design scalable, high-performance data pipelines across distributed systems
Strong understanding of data ingestion, transformation, and integration techniques
Experience troubleshooting and optimizing data workflows in production environments
Ability to ensure data accuracy, consistency, and adherence to data quality standards
Strong collaboration skills within Agile development teams
Technical Stack
Strong experience with the following technologies is expected:
Cloudera Data Platform
Apache NiFi
Hadoop Ecosystem (MapReduce, Hive, HBase)
Apache Spark / PySpark
Kafka
Python, SQL, Java
UNIX/Linux (shell scripting)
Git (version control)
CI/CD tools and DevOps workflows
Microsoft SQL Server (preferred)
Education & Experience
Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or a related technical discipline is preferred. Equivalent professional experience will be considered.
Minimum of five years of experience in data engineering, application development, or related roles.
Qualifications
At least 5 years of experience in Python or application/data development
At least 5 years of experience with data ingestion tools such as Apache NiFi
Advanced knowledge of SQL and distributed data processing frameworks
Experience working in Agile environments (Scrum or Kanban)
Experience supporting CI/CD pipelines and data platform operations