Name: Candidate's Name
Title: GCP Data Engineer
Location: Philadelphia, PA
Email: EMAIL AVAILABLE
Mobile: PHONE NUMBER AVAILABLE

Professional Summary:
- 3 years of hands-on expertise in Data Engineering, encompassing the complete spectrum of data pipeline design, development, and implementation, coupled with proficiency across the entire Software Development Life Cycle.
- Designed and implemented scalable data warehouses on GCP, optimized for analytical queries and reporting.
- Automated infrastructure provisioning, configuration, and management on GCP using Terraform scripts.
- Collaborated with Data Science teams to productionize machine learning models on GCP.
- Established robust data integration pipelines for seamless connectivity between various e-commerce platforms on GCP.
- Implemented real-time data processing solutions on GCP to capture and analyze e-commerce transactions promptly.
- Integrated Databricks with Google Cloud Platform (GCP), leveraging the Databricks Unified Analytics Platform for scalable data processing and machine learning workflows on GCP infrastructure.
- Designed and implemented data engineering pipelines using Databricks on GCP, leveraging Apache Spark for distributed data processing and transformation tasks and ensuring efficient ETL (Extract, Transform, Load) operations.
- Developed interactive data analysis and visualization notebooks in Databricks on GCP, using Databricks Notebooks with Apache Spark SQL, Python, and Scala for exploratory data analysis and ad hoc querying.
- Led the migration of on-premises ETL processes to GCP using cloud-native tools (BigQuery, Cloud Dataproc, Google Cloud Storage, Composer, APIs).
- Proficient in GCP technologies such as Dataflow, Pub/Sub, BigQuery, and GCS buckets.
- Created Tableau visualizations for effective data analysis and reporting within the GCP environment.
- Developed and maintained centralized repositories using Informatica PowerCenter on GCP.
- Transferred streaming data from various sources into HDFS and NoSQL databases using Apache Flume on GCP, with cluster coordination through ZooKeeper.
- Used PySpark and Spark SQL extensively for data cleansing and for generating DataFrames and RDDs within GCP environments (a minimal sketch of this kind of task follows this summary).
- Adept at cloud migrations and building analytical platforms on GCP, with knowledge of Kubernetes service deployments.
- Loaded processed data into AWS Redshift tables for business reporting and created views in AWS Athena for secure, streamlined data-analysis access.
- Skilled in designing and querying complex SQL databases, ensuring data integrity and optimizing query performance.
- Proficient in writing clean, efficient Python code for backend development and automation; experienced in scripting and developing Python applications.
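As one illustration of the PySpark/Spark SQL data-cleansing work noted above, here is a minimal, hedged sketch; the bucket path and the columns order_id, amount, and order_ts are hypothetical placeholders, not details from this resume.

```python
# Minimal PySpark data-cleansing sketch. The input path and the columns
# order_id, amount, and order_ts are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleansing-sketch").getOrCreate()

# Read a raw CSV export from Cloud Storage into a DataFrame.
raw = spark.read.option("header", True).csv("gs://example-bucket/transactions.csv")

clean = (
    raw.dropDuplicates(["order_id"])                      # remove duplicate orders
       .filter(F.col("amount").isNotNull())               # drop rows missing amount
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Expose the cleansed DataFrame to Spark SQL, as the summary describes.
clean.createOrReplaceTempView("transactions_clean")
spark.sql("SELECT COUNT(*) AS row_count FROM transactions_clean").show()
```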
Technical Skills:
Big Data Tools: GCP Cloud Dataproc, GCP Cloud Dataflow, BigQuery, Apache Spark, Apache Beam, Pub/Sub, Apache Airflow, HDFS, YARN, Data Fusion
Cloud Technologies: GCP Cloud Storage, Cloud Bigtable, Cloud Spanner, Cloud SQL
ETL Tools: BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Dataprep
Modeling and Architecture Tools: Star-Schema and Snowflake-Schema modeling, FACT and dimension tables, Google Cloud Composer
Databases: Oracle, MySQL
Operating Systems: Microsoft Windows, Linux
Reporting Tools: Looker, Data Studio, Tableau, Power BI, Google Sheets
Methodologies: Agile, System Development Life Cycle (SDLC)
Python and R Libraries: PySpark, NumPy, Pandas, Matplotlib, Seaborn, Scikit-learn
Programming Languages: SQL, Python (Jupyter Notebook, PyCharm IDE)

Professional Experience:

MARCELLUS INFOTECH PVT LTD, Bangalore, India (July 2020 - Nov 2022)
Client: Chubb Insurance
Role: Data Engineer
Responsibilities:
- Designed, developed, and maintained scalable data pipelines using Python and SQL for efficient processing of large datasets.
- Deployed and managed data infrastructure on Google Cloud Platform (GCP), including services such as Cloud Storage, BigQuery, and Cloud Dataflow.
- Developed and optimized SQL queries for extract, transform, load (ETL) processes, ensuring high performance and data integrity (a hedged ETL sketch follows this role).
- Automated data ingestion and ETL processes, integrating data from various sources into centralized data warehouses.
- Implemented data validation and quality checks to ensure the accuracy, reliability, and consistency of data.
- Collaborated with cross-functional teams to understand data requirements, troubleshoot issues, and document data workflows and architecture.
- Built and maintained end-to-end data pipelines using GCP tools and Python for large-scale data processing.
- Automated ETL processes for seamless data ingestion, processing, and integration from various cloud-based sources.
- Managed and optimized data warehouses on GCP, ensuring data integrity, performance, and scalability.
- Implemented security best practices, including encryption and access control, to ensure data protection and compliance.
- Monitored and optimized the performance of cloud resources, data pipelines, and queries to enhance efficiency and reduce costs.
Environment: GCP, BigQuery, Python, PySpark, SQL, Pub/Sub, Dataproc, Cloud Storage, Apache Airflow, ETL, Google Dataflow
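This role cites SQL-based ETL into BigQuery; the following hedged sketch shows what such a load step could look like using the google-cloud-bigquery client. The project, dataset, and table names (example-project, staging.orders, warehouse.orders_daily) are invented for illustration.

```python
# Hedged sketch of a SQL-driven ETL step in BigQuery. Project, dataset,
# and table names are illustrative assumptions, not resume details.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Aggregate staged rows into a reporting table, with a basic quality filter.
sql = """
CREATE OR REPLACE TABLE warehouse.orders_daily AS
SELECT
  DATE(order_ts) AS order_date,
  COUNT(*)       AS order_count,
  SUM(amount)    AS total_amount
FROM staging.orders
WHERE amount IS NOT NULL
GROUP BY order_date
"""

job = client.query(sql)  # submit the query job to BigQuery
job.result()             # block until the ETL step finishes
print(f"ETL job {job.job_id} completed.")
```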
MARCELLUS INFOTECH PVT LTD, Bangalore, India (Jan 2020 - June 2020)
Role: Data Engineer
Responsibilities:
- Assisted in designing, building, and maintaining data pipelines that transform raw data into usable formats.
- Supported data ingestion from various sources and ensured data integrity through validation processes.
- Performed data wrangling and preprocessing to prepare datasets for analysis and visualization (a small wrangling sketch follows this role).
- Collaborated with cross-functional teams to gather data requirements and provide technical solutions.
- Used SQL to query databases and generate reports for business decision-making.
- Conducted exploratory data analysis to identify trends, patterns, and insights.
- Automated repetitive tasks and optimized processes for improved efficiency.
- Assisted with maintaining data documentation and ensuring compliance with company policies.
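The wrangling and automation duties above could look something like this minimal pandas sketch; the file names and the region/revenue columns are hypothetical placeholders, not details from the resume.

```python
# Minimal pandas wrangling/automation sketch. File names and the
# region/revenue columns are illustrative assumptions only.
import pandas as pd

def build_daily_report(path: str) -> pd.DataFrame:
    """Load a raw export, clean it, and aggregate revenue by region."""
    df = pd.read_csv(path)
    df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce")
    df = df.dropna(subset=["region", "revenue"])  # drop incomplete rows
    return df.groupby("region", as_index=False)["revenue"].sum()

if __name__ == "__main__":
    report = build_daily_report("daily_export.csv")
    report.to_csv("daily_report.csv", index=False)  # automated report output
```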
Educational Details:
1. Master's in Information Technology, Wilmington University, July 2024
2. Bachelor's in Mechanical Engineering, VTU, May 2020
