Description
Own high-impact systems that power analytics and AI at scale, and deliver elite performance, security, and cost efficiency.
As a Principal Software Engineer at JPMorganChase within Data & Analytics Services in the Corporate Technology team, you will drive the architecture, delivery, and operations of a next-generation, cloud-native distributed data platform. You will own outcomes end to end, partnering with product, data, and infrastructure leaders to deliver reliable, secure, and scalable data services that power analytics, AI/ML, and mission-critical applications. You will set the technical strategy, lead multiple engineering teams, and establish platform standards across compute, storage, streaming, governance, and observability.
Job responsibilities
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 10+ years of applied experience
- Demonstrable ownership of cloud-native distributed systems or data platforms at scale as a hands-on individual contributor.
- Experience with cloud platforms (AWS/Azure/GCP): Kubernetes (EKS/AKS/GKE), serverless, VPC/networking, IAM, and cost optimization.
- Experience with storage and lakehouse technologies: object storage (S3/ADLS/GCS), table formats (Delta/Iceberg/Hudi), columnar formats (Parquet/ORC).
- Experience with data processing/streaming: Spark/Flink/Beam; Kafka/Kinesis/Event Hubs; change data capture (CDC) and schema management.
- Experience with query/compute engines: Trino/Presto, Snowflake, Databricks, BigQuery; profiling and tuning at TB–PB scale.
- Strong foundation in distributed systems: consensus, partitioning, replication, consistency models, scheduling, and failure modes.
- Security and governance experience: encryption, secrets, identity, policy enforcement, DLP, audit logging.
- DevOps/SRE proficiency: IaC (Terraform/CloudFormation/Bicep), CI/CD, GitOps, blue/green and canary releases, autoscaling and resilience engineering.
- Excellent system design and communication skills; ability to influence roadmaps and standards across teams without formal authority.
Preferred qualifications, capabilities, and skills
- Experience in regulated or mission-critical environments with strict RTO/RPO targets and audit evidence requirements.
- Hands-on with data governance stacks (e.g., Glue/Purview/Data Catalog, OpenLineage), data quality frameworks, and policy engines.
- Familiarity with ML/AI data patterns: feature stores, model training/inference data pipelines, low-latency serving.
- Multi-region active-active designs, DR automation, chaos engineering, and capacity modeling.
- FinOps practices for large-scale data workloads.