
Data Engineer Processing Resume Leander,...

Candidate Information
Title: Data Engineer Processing
Target Location: US-TX-Leander

Candidate's Name
Email: EMAIL AVAILABLE  Phone: PHONE NUMBER AVAILABLE  LinkedIn: https://LINKEDIN LINK AVAILABLE

Professional Summary:
Experienced Data Engineer with 7+ years of expertise in designing, implementing, and optimizing data solutions. Proficient in Python, SQL, Snowflake, Azure, Databricks, CI/CD pipelines, AWS Glue, S3, Athena, Airflow, ETL, NoSQL databases, and DataStage. Adept at building scalable data pipelines, managing databases, and deploying data infrastructure in cloud environments. Strong problem-solving skills and a proven track record of improving data processing efficiency.

Skills:
- Programming Languages: Python, SQL, Shell Scripting, NoSQL (MongoDB)
- Cloud Platforms: AWS (S3, Glue, Athena), Snowflake, Azure, Databricks
- Data Engineering Tools: Apache Airflow, CI/CD pipelines
- Databases: Snowflake, SQL Server, Oracle, Teradata, MySQL, Couchbase, MongoDB, Cassandra (PL/SQL, RDBMS)
- Version Control: GitHub, GitLab, Jenkins, BitBucket
- ETL Processes: AWS Glue, Python ETL scripts, DataStage
- Scheduling: Autosys, One Automation
- Monitoring & Logging: Splunk

Professional Experience

Data Engineer, Tabner Inc (Client: TIAA)  Dec 2022 - Present
- Developed custom Python scripts to automate data cleaning and transformation tasks, reducing manual effort by 50%.
- Created reusable Python modules for common ETL tasks, standardizing data processing across projects.
- Integrated PySpark with AWS Glue for scalable, serverless ETL processing.
- Designed and optimized Snowflake schemas for efficient data storage and retrieval.
- Utilized Snowflake's built-in functions for advanced analytics and data transformations.
- Optimized data storage and retrieval using Amazon S3, reducing data access time by 25%.
- Managed and administered the Snowflake data warehouse, enabling advanced analytics capabilities.
- Developed robust ETL pipelines using Python and SQL to streamline data integration from various sources into Teradata and Oracle databases.
- Conducted performance tuning and optimization for large-scale data warehouses in Teradata and Oracle environments.
- Used One Automation to automate complex data workflows, significantly reducing manual intervention and improving efficiency.
- Created interactive dashboards and reports in MicroStrategy to provide actionable insights for business stakeholders.
- Managed code repositories, performed code reviews, and collaborated with cross-functional teams on GitHub to ensure code quality and consistency.
- Deployed and maintained data infrastructure on AWS, using services such as S3, Redshift, and Lambda for scalable data processing and storage.
- Developed complex SQL queries for data extraction and analysis from SQL Server, Teradata, and Oracle databases.
- Implemented ETL processes on Azure, enhancing data integration and workflow efficiency.
- Managed version control and code collaboration using GitLab, ensuring streamlined development processes.
- Automated data processing workflows using Autosys and One Automation, improving efficiency by 30%.
- Used Splunk for monitoring and logging data pipeline activities, ensuring data integrity and system performance.

Data Engineer, Tabner Inc (Client: Charter Communications)  May 2018 - Dec 2022
- Established a primary data lake in AWS S3 for storing raw data from diverse sources.
- Automated ETL processes with AWS Glue for data cataloging and transformation.
- Developed Airflow jobs to schedule PySpark scripts.
- Implemented cloud-based data warehousing solutions (Redshift and Snowflake) for complex queries and analysis.
- Engineered scalable data processing workflows using PySpark, ensuring efficient transformations and aggregations.
- Developed, implemented, and managed robust data transformation processes using ETL frameworks, SQL, and Python: extracting data from legacy systems (BHN, CHTR, TWC), transforming it to the required formats, and loading it into Snowflake.
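The legacy-to-Snowflake transformation step described above might look, in a heavily simplified stdlib-only sketch, like the following. The field names and mappings are invented for illustration; the real BHN/CHTR/TWC layouts and the actual Snowflake loading code (e.g. staging files and COPY INTO) are not shown.

```python
# Hypothetical sketch: normalize records from legacy source systems onto one
# target schema before loading into the warehouse. All field names are
# illustrative assumptions, not taken from the resume.
from datetime import date

# Invented per-source field mappings; real legacy layouts would differ.
FIELD_MAPS = {
    "BHN": {"acct_no": "account_id", "cust_nm": "customer_name", "svc_dt": "service_date"},
    "TWC": {"AccountNumber": "account_id", "CustomerName": "customer_name", "ServiceDate": "service_date"},
}

def transform(record: dict, source: str) -> dict:
    """Map a raw legacy record onto the common target schema."""
    mapping = FIELD_MAPS[source]
    out = {target: record[src] for src, target in mapping.items()}
    out["source_system"] = source              # lineage column for the warehouse
    out["load_date"] = date.today().isoformat()
    return out

rows = [
    transform({"acct_no": "A1", "cust_nm": "Pat", "svc_dt": "2021-01-05"}, "BHN"),
    transform({"AccountNumber": "B2", "CustomerName": "Sam", "ServiceDate": "2021-02-10"}, "TWC"),
]
# Both rows now share one schema and could be staged (e.g. as CSV/Parquet on S3)
# and loaded into Snowflake with COPY INTO.
```

The design point is the usual one for multi-source consolidation: isolate each source's quirks in a declarative mapping so the downstream load path stays uniform.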
- Utilized AWS Glue to automate and manage ETL workflows, ensuring efficient data processing and integration.
- Applied version control and collaborative development practices, ensuring high code quality and traceability.
- Analyzed large datasets to identify trends and insights using statistical analysis tools.
- Implemented data governance frameworks to ensure data quality and compliance with industry standards.
- Actively participated in Scrum and Agile development methodologies to deliver the project in iterative phases.
- Collaborated with cross-functional teams, including data engineers, analysts, and business stakeholders, to ensure the project met all requirements and delivered maximum value.
- Conducted regular sprint reviews, retrospectives, and planning sessions to maintain project momentum and address emerging challenges promptly.

Graduate Research Assistant, South Dakota State University  Aug 2016 - Mar 2018
- Configured reports from different data sources, including flat files, CSV, Excel, MySQL Server, and Oracle databases.
- Created custom calculations in Tableau, including string manipulation, basic arithmetic calculations, and custom aggregations.
- Processed large datasets for data association and provided insights into meaningful trends.
- Developed statistical models for predicting products for commercialization using machine learning algorithms.

Data Analyst, ATOM SOLUTIONS  Jan 2015 - Jun 2016
- Created a web application to narrow the job search for job seekers.
- Scheduled jobs to automate database activities such as backups and disk-space monitoring.
- Applied text mining techniques to infer information from unstructured data.
- Used clustering algorithms to categorize customers into groups.
- Performed exploratory data analysis and feature engineering to fit regression models.
- Implemented forecasting models to predict car sales for future seasons.

Education:
Masters in Data Science, South Dakota State University  Aug 2016 - Dec 2018
Bachelors in Computer Science, GITAM University  May 2012 - May 2016
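The custom Python data-cleaning scripts mentioned under the TIAA role are not included in the resume; a minimal stdlib-only sketch of that kind of routine (column names, date format, and dedup rule are all illustrative assumptions) might be:

```python
# Hypothetical data-cleaning sketch: trim string fields, normalize dates to
# ISO format, and drop duplicate rows. Nothing here is taken from the resume.
from datetime import datetime

def clean_rows(rows):
    """Trim strings, convert MM/DD/YYYY dates to ISO, drop duplicate rows."""
    seen, cleaned = set(), []
    for row in rows:
        # Strip stray whitespace from every string field.
        row = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        if "order_date" in row:  # assumed date column
            row["order_date"] = (datetime
                                 .strptime(row["order_date"], "%m/%d/%Y")
                                 .date().isoformat())
        key = tuple(sorted(row.items()))
        if key not in seen:      # deduplicate on full-row contents
            seen.add(key)
            cleaned.append(row)
    return cleaned

raw = [{"id": " 1 ", "order_date": "01/05/2021"},
       {"id": "1", "order_date": "01/05/2021"},   # duplicate after trimming
       {"id": "2", "order_date": "02/10/2021"}]
print(clean_rows(raw))
```

In practice a script like this would be one stage of a larger pipeline, reading from and writing to files or staging tables rather than in-memory lists.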
