Data Engineer Big Resume Memphis, TN

Candidate Information
Name: Available on registration
Title: Data Engineer Big
Target Location: US-TN-Memphis

Candidate's Name
DATA ENGINEER
PHONE NUMBER AVAILABLE | EMAIL AVAILABLE | LinkedIn | TN

SUMMARY
Data Engineer with 4 years of experience using big data technologies to extract insights from large, complex datasets. Expertise in building scalable data pipelines, optimizing workflows, and communicating technical findings clearly to stakeholders. Passionate about driving data-based solutions that inform business strategy and process optimization. Thrives in fast-paced environments.

SKILLS
- Programming Languages: SQL, Python, R, Java, Unix, PySpark, Scala
- Big Data Ecosystems: Hadoop, MapReduce, Hive, Spark
- Cloud Platforms: AWS (EMR, EC2, S3, Glue, Redshift), Azure (ADF, Databricks, Synapse), Google Cloud
- ETL/DWH Tools: Tableau Prep, Alteryx, Informatica PowerCenter, DataStage, SSIS, Ab Initio
- Visualization Tools: Tableau, Power BI, TIBCO Spotfire, Excel
- Database Systems: MS SQL Server, Teradata, Oracle, PostgreSQL, MongoDB
- Methodologies: Agile, Scrum, SDLC
- Version Control: Git, GitHub, SVN
- Soft Skills: Critical Thinking, Problem-Solving, Creativity, Collaboration and Communication

EDUCATION
Master of Science in Information Systems, University of Memphis

EXPERIENCE
McKesson, TN - Data Engineer, Jan 2024 - Current
- Extracted diverse datasets from sources including relational databases, APIs, JSON, and CSV files, ensuring comprehensive data collection for business analysis.
- Engineered robust ETL (Extract, Transform, Load) pipelines to ingest raw data into Azure Data Lake Storage (ADLS), using Azure services such as Azure Data Factory for seamless integration.
- Applied complex transformations in SQL and Python within Azure Databricks, aggregating and cleansing data to meet business requirements and improve data quality.
- Created pipelines in ADF using Linked Services, Datasets, and Pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, in both directions including write-back scenarios.
- Developed Kafka producers and consumers to ingest and integrate data from various sources and systems, including real-time event streams and log data.
- Used Snowflake hands-on for data loading, manipulation, and transformation, ensuring data quality and consistency.
- Developed PySpark DataFrames in Azure Databricks to read data from Data Lake or Blob Storage and used the Spark SQL context for transformation (a minimal sketch follows this section).
- Streamlined CI/CD processes with the Ops team using Git and Kubernetes, achieving a 40% reduction in deployment time.
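The ADLS-to-Spark-SQL workflow described above can be illustrated with a minimal PySpark sketch. The storage account (examplelake), container names, and column names are hypothetical, not taken from the resume; in a Databricks notebook the spark session is already provided, so the builder line matters only for standalone runs.

```python
# Minimal sketch: read raw data from ADLS Gen2 into a DataFrame,
# register it as a temp view, and transform it with Spark SQL.
# Storage account, containers, and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-transform").getOrCreate()

# Read raw CSV landed in the lake (abfss:// is the ADLS Gen2 scheme).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/"))

# Expose the DataFrame to Spark SQL, then aggregate and cleanse it.
raw.createOrReplaceTempView("sales_raw")
curated = spark.sql("""
    SELECT customer_id,
           CAST(order_date AS DATE)    AS order_date,
           SUM(CAST(amount AS DOUBLE)) AS total_amount
    FROM sales_raw
    WHERE amount IS NOT NULL
    GROUP BY customer_id, CAST(order_date AS DATE)
""")

# Write the curated result back to the lake as Parquet.
curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/")
```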
Byte Soft Solution, India - Data Engineer, May 2019 - Jun 2022
- Migrated data to the cloud using AWS and Azure, ensuring 99.9% data accessibility and reliability.
- Extensive experience with SQL Server Management Studio and SQL Server Business Intelligence tools such as SSRS and SSIS.
- Performed data archival by transferring data across platforms, validating data in transit, and archiving data files for various DBMSs using dynamic SSIS packages.
- Used SSMS to manage SQL Server databases, including creating, modifying, and deleting database objects such as tables, views, stored procedures, and indexes.
- Developed an end-to-end data analytics framework using Amazon Redshift, Glue, and Lambda, enabling the business to obtain KPIs faster at reduced cost.
- Implemented data cataloging and metadata management with AWS Glue and Lake Formation, cutting data duplication by 80%.
- Scaled Snowflake environments to handle large data volumes, adapting solutions to evolving business needs.
- Designed and built a security framework using Lambda and DynamoDB to grant granular access to objects in AWS S3 (a sketch of this pattern appears after the certification below).
- Transformed and moved massive volumes of data into and out of AWS databases and data stores, such as Amazon S3 and DynamoDB, using AWS EMR.

CERTIFICATION
Microsoft Certified: Azure Data Engineer Associate - Credential ID: 70A82008B6670675
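The S3 access-control bullet above describes a Lambda-plus-DynamoDB entitlement check. Below is a minimal sketch of one way such a framework can work; the table name, bucket name, and event fields are hypothetical, since the resume does not specify the actual design.

```python
# Minimal sketch: a Lambda handler that looks up a caller's entitlement
# in DynamoDB and, if the requested key falls under an allowed prefix,
# returns a short-lived presigned URL for the S3 object.
# Table, bucket, and event-field names are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

TABLE_NAME = "s3-access-grants"   # hypothetical entitlement table
BUCKET = "example-data-bucket"    # hypothetical bucket

def handler(event, context):
    user_id = event["user_id"]
    key = event["key"]

    # Fetch the caller's entitlement record (partition key: user_id).
    table = dynamodb.Table(TABLE_NAME)
    item = table.get_item(Key={"user_id": user_id}).get("Item")
    if not item:
        return {"statusCode": 403, "body": "no entitlement record"}

    # Grant access only when the key is under an allowed prefix.
    if not any(key.startswith(p) for p in item.get("allowed_prefixes", [])):
        return {"statusCode": 403, "body": "key not permitted"}

    # Issue a presigned URL valid for five minutes.
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=300)
    return {"statusCode": 200, "body": url}
```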
