
Data Product Engineer (PyFlink/Docker/Kafka)

S3
Contract
On-site
Charlotte, North Carolina, United States

Job Description

Job Title: Data Product Engineer (PyFlink/Docker/Kafka)

Location: Charlotte, NC 28206 (Hybrid)
Duration: 6–9 month contract

Position Overview

We are seeking an experienced Software Engineer III to support the development of an Enterprise Protective Services (EPS) Data Mart. This role focuses on building scalable data solutions, supporting streaming architectures, and contributing to complex application development within an enterprise environment.

The ideal candidate brings strong experience in Python, PyFlink/Apache Flink, and Docker, along with a solid understanding of data integration and the software development lifecycle (SDLC).

Key Responsibilities

  • Design, develop, and support complex software and data solutions
  • Build and maintain data pipelines and streaming applications using PyFlink/Flink
  • Translate business requirements into user stories and technical solutions
  • Break down large initiatives into smaller components and provide effort estimates
  • Support testing, code migration, and deployment across environments
  • Ensure adherence to coding standards, design principles, and source control practices
  • Collaborate with product owners, business users, and cross-functional teams
  • Participate in code reviews and design walkthroughs
  • Transfer knowledge and provide guidance to less experienced team members
  • Ensure compliance with regulatory and enterprise standards

Required Qualifications

  • Bachelor’s degree in Computer Science or related field
  • 5–10+ years of experience in application development and support
  • Strong experience with Python
  • Experience with PyFlink / Apache Flink
  • Hands-on experience with Docker (Docker Swarm preferred)
  • Strong SQL and database knowledge
  • Experience working within the software development lifecycle (SDLC)
  • Strong analytical, problem-solving, and communication skills
  • Ability to manage multiple priorities and work on concurrent user stories

Desired Qualifications

  • Experience with Kafka or streaming technologies
  • Experience with data integration architectures (ODS, Data Warehouse, Data Mart)
  • Experience with API and application integrations
  • Familiarity with modern source code management tools and processes
  • Experience working in regulated or enterprise environments
  • Experience working with distributed or remote teams

Technical Environment

  • PyFlink / Apache Flink
  • Python
  • Docker (Swarm mode preferred)
  • SQL / Data platforms
  • Kafka (optional)
  • Data warehousing and integration tools

What We’re Looking For

  • Strong data engineering / software development background
  • Candidates comfortable working with streaming and data integration technologies
  • Engineers who can design and build solutions end-to-end
  • Strong communicators who can collaborate across technical and business teams