
Data Engineer, Senior

PG&E
United States, California, Oakland
Dec 03, 2025

Requisition ID# 169117

Job Category: Information Technology

Job Level: Individual Contributor

Business Unit: Information Technology

Work Type: Hybrid

Job Location: Oakland

Department Overview

Information Systems Technology Services is a unified organization of departments that collaborate to deliver high-quality technology solutions.

Position Summary

The Data Analytics and Insights team is seeking an experienced and talented Senior Data Engineer to join our growing team of analytics experts. As a key member of the team, you will play an essential role in designing, developing, and maintaining data pipelines and analytic products, including data applications, reports, and dashboards. We are looking for a proactive, detail-oriented, and motivated individual who thrives in a fast-paced environment and can help us scale our analytic product development to meet our clients' evolving needs. The data engineer will collaborate with a cross-functional team of solution architects, data pipeline engineers, data analysts, and data scientists on mission-critical initiatives and will ensure optimal delivery of analytic products.

You will have a unique opportunity to be at the forefront of the utility industry and gain a comprehensive view of the nation's most advanced smart grid. It is the perfect role for someone who would like to continue to build upon their professional experience and help advance PG&E's sustainability goals.

This position is hybrid, working from your remote office and Oakland, CA, based on business needs.

PG&E is providing the salary range that can reasonably be expected for this position at the time of the job posting. This salary range is specific to the locality of the job. The actual salary paid to an individual will be based on multiple factors, including, but not limited to, internal equity, specific skills, education, licenses or certifications, experience, market value, and geographic location. The decision will be made on a case-by-case basis related to these factors. This job is also eligible to participate in PG&E's discretionary incentive compensation programs.

Bay Area: $122,000 - $173,800

Job Responsibilities

  • Work closely with Subject Matter Experts (SMEs) to design and develop data models, data pipelines, and front-end applications.
  • Implement data transformations to derive new datasets or create ontology objects necessary for business applications.
  • Design and optimize data workflows within Palantir Foundry, including ontology modeling, pipeline orchestration, and data lineage tracking.
  • Monitor and debug critical issues such as data staleness or data quality.
  • Perform impact analysis for ontology or schema changes.
  • Improve the performance of data pipelines (latency, resource usage).
  • Implement data visualizations using Foundry tools (Workshop, Quiver, and Contour).
  • Maintain applications as usage grows and requirements change.
  • Be available for 24x7 operational support.

Qualifications

Minimum:

  • Bachelor's degree in Computer Science, a job-related discipline, or equivalent experience
  • 5 years of experience with data engineering/ETL ecosystems such as Palantir Foundry, Informatica, SAP BODS, OBIEE, or related tools

Desired:

  • Knowledge of commercial visualization tools such as Tableau or Power BI.
  • Databases - familiarity with common relational database models and proprietary implementations such as SAP and Salesforce.
  • Strong foundation in data modeling, schema design, and data quality best practices, with hands-on experience working in the cloud.
  • Experience working with cloud technologies (AWS, Azure, or GCP) for data storage, compute, and integration.
  • Git - knowledge of version control and collaboration workflows and best practices.
  • Agile - familiarity with agile and iterative working methodologies and rapid user-feedback gathering, including tools like JIRA.
  • Excellent communication skills and ability to work cross-functionally with technical and non-technical partners.
  • Knowledge of Snowflake for data warehousing, including query optimization and cost management.
  • Familiarity with ETL tools like Informatica (IDMC) for integration workflows.
  • UX design - knowledge of best practices and applications.
  • Data literacy - data analysis and statistical basics to ensure correctness in data aggregation and visualization.