Independently and collaboratively design, develop, and maintain high-performance, cloud-native data solutions and ETL pipelines in complex enterprise environments. Analyze system specifications, business requirements, and data models to build scalable and secure platforms for analytics, reporting, and real-time processing.
Requires a Master’s degree in Information Technology, Computer Science, Software Engineering, or a related IT field, plus six (6) years of related professional experience. Experience must include:
- Six (6) years of experience in: designing and developing data pipelines using Spark (PySpark/Scala), Hadoop, Hive, and Kafka; scripting and programming with Python, Shell scripting, and Java.
- Five (5) years of experience with: data warehouse and data lake architectures, including data modeling, ETL, and data governance; DevOps tools, including Terraform, Jenkins, and CI/CD pipelines; SQL and NoSQL databases, including PostgreSQL, Oracle, SQL Server, MongoDB, and DynamoDB.
- Four (4) years of experience in monitoring and observability using Splunk, CloudWatch, or similar tools.
- Three (3) years of experience with data visualization and reporting tools such as Power BI and Microsoft Fabric.
- Two (2) years of experience with Agile methodologies, Scrum processes, and collaboration in cross-functional teams.
- One (1) year of experience with AWS Cloud technologies, including Glue, Lambda, Step Functions, Athena, CloudFormation, S3, RDS, and DynamoDB.
40 hours/week, 9:00 a.m. to 5:00 p.m. Salary range: $190,000 to $195,000 per year.
To apply: Send resume and cover letter to us.careers@exlservice.com. You must cite the job title and code EXL69 in your response. This notice is subject to ExlService.com, LLC's employee referral program. EEO/Minorities/Females/Vets/Disabilities.