Hadoop Data Engineer

S3
18 days ago
Contract
On-site
Charlotte, North Carolina, United States
Data Engineering

Job Description

STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!

This is a Contract Opportunity with our company that MUST be worked on a W2 basis only. This position is not C2C eligible. Visa sponsorship is available! The details are below.

“Beware of scams. S3 never asks for money during its onboarding process.”

Job Title: Hadoop Data Engineer 
Contract Length: 12+ months
Work Arrangement: Hybrid (some on-site work)
Location: Charlotte, NC 

We are seeking a skilled Hadoop Data Engineer to design, build, operate, and optimize large-scale distributed data platforms. This role focuses on supporting and enhancing HPE MapR–based Hadoop ecosystems, ensuring high availability, performance, security, and reliability for enterprise data workloads.

Key Responsibilities

  • Consult on complex initiatives with broad impact and large-scale planning for Software Engineering
  • Review and analyze complex, multi-faceted, larger-scale or long-term Software Engineering challenges requiring in-depth evaluation of multiple factors, including intangible or unprecedented ones
  • Contribute to the resolution of complex and multi-faceted situations requiring solid understanding of function, policies, procedures, and compliance requirements
  • Strategically collaborate and consult with client personnel

Required Qualifications

  • 5+ years of Software Engineering experience, or equivalent
  • Strong hands-on experience with Hadoop distributions, specifically HPE MapR
  • Deep understanding of distributed systems, data storage, and cluster computing concepts
  • Proficiency in Linux/Unix system administration
  • Experience with at least one programming or scripting language: Python, Java, Scala, or Bash
  • Working knowledge of Spark and batch/stream processing paradigms
  • Experience troubleshooting performance and stability issues in large-scale environments

Experience

  • Hadoop: 2–4 years