Global InfoTek Inc. is seeking a Lead RF Software Engineer to support Cyber Operations Research and Development focused on passive RF emitter identification and network analysis from real-time sensor data streams. The candidate will implement, test, and maintain components of a production software pipeline, contributing to development across multiple functional areas in support of real-world cyber operations.
Responsibilities:
- Implement, test, and maintain assigned pipeline components including stream ingestion, rollup processing, database write, and batch post-processing modules in support of real-world cyber operations
- Develop and maintain browser-based visualization and reporting tools (track plots, waterfall displays, SmartBook report generation) that consume pipeline database output
- Implement and maintain stream simulation infrastructure, including TDMA network mission log replay and stream generation at controllable rates for pipeline testing
- Develop lightweight TNS simulator components: emitter and receiver models capable of following track plots and emitting in accordance with a network description
- Contribute to database integration work on tactical-box-spec hardware, including MySQL schema design, query optimization, and performance benchmarking
- Write comprehensive unit and integration tests for assigned components; implement and maintain CI/CD pipelines using GitLab to ensure functionality on target hardware and in cloud environments
- Identify and report performance bottlenecks in Python pipeline components; assist with porting mature components to Rust or C as directed
- Perform basic Linux system administration on remote servers including package management, user configuration, and environment setup
- Manage source code using GitLab; follow disciplined versioning, branching, and code review practices as established by the Principal Engineer
- Produce clear technical documentation for implemented components including interface specifications, configuration guides, and test procedures
- Participate in periodic technical check-ins with the program technical lead; share findings and flag blockers promptly
Requirements:
- Clearance Level: Public Trust
- US Citizenship: Required
- Job Classification: Full Time
- Location: Remote
- Years of Experience: 7-10 years of relevant experience
- Education Level: BS degree; relevant experience may be considered in place of the education requirement
- Strong proficiency in Python, with demonstrated experience in data processing pipelines, stream ingestion, or ETL development
- Proficiency with Python data science libraries including NumPy, Pandas (or Polars), and scikit-learn
- Experience with relational database development using MySQL, PostgreSQL, or SQLite, including schema design and query optimization
- Experience parsing or generating binary serialization formats (FlatBuffers, Protocol Buffers, or equivalent)
- Ability to develop, test, and debug on remote Linux servers via SSH using command-line tools and a modern IDE
- Solid Linux operating system fundamentals including file system management, process control, and basic security hardening (Ubuntu)
- Proficient in software engineering practices including Git/GitLab version control, unit testing, and CI/CD pipeline usage
- Experience developing browser-based data visualization or reporting tools, or demonstrated ability to learn React/D3-based tooling on the job
- Strong written and oral communication skills; ability to produce clear technical documentation for engineering audiences
- Ability to work independently on assigned components with minimal supervision in a small, distributed team
- Experience with TNS (Target Network System) sensor data formats and NDF ICD specifications
- Familiarity with TDMA network protocols, time-division access architectures, and passive RF signal processing concepts
- Experience with lightweight stream or message queue architectures (ZeroMQ, RabbitMQ, or equivalent)
- Experience with Rust or Go for systems-level or performance-critical development on Linux
- Experience with Polars or DuckDB for high-performance analytical workloads
- Experience with performance profiling and optimization of Python pipelines on resource-constrained x86 hardware
- Experience with LLM-assisted software development tools (e.g., Claude Code, GitHub Copilot, JetBrains AI Assistant, or equivalent); demonstrated ability to use AI tools productively for code generation, refactoring, and test case development while maintaining engineering judgment and code quality standards
- Familiarity with AI/ML libraries (PyTorch, TensorFlow); ability to integrate trained model inference into a pipeline without requiring deep ML expertise
- Experience with Jupyter Notebooks and research enclave environments; ability to read and adapt research prototype code
- Experience with simulation or synthetic data generation for pipeline testing purposes
- Familiarity with distributed data processing frameworks such as Apache Spark or Dask for large-scale workloads