Data Engineer PL/SQL Resume - McKinney, TX
Candidate's Name
PHONE NUMBER AVAILABLE; e-mail: EMAIL AVAILABLE

EDUCATION
Southern Arkansas University
M.S., Computer and Information Science, GPA 3.
Street Address
TECHNICAL SKILLS
Web Technologies: HTML, CSS, JavaScript, Bootstrap
Programming: Python, R
Tools: TOAD
IDE: Eclipse
Databases: MySQL, Oracle SQL and PL/SQL
SAP Platform: Native HANA, ABAP, BO, BW, Business Application Studio (BAS) and BTP Cockpit, CAP DB Modelling, Fiori, SDI, SAP Cloud Platform, SAP HANA XSA, and Cloud Foundry
ETL: DataStage
Cloud Technologies: Azure, AWS RDS, EC2, S3 Buckets, EMR
Operating Systems: Linux (Red Hat, CentOS 7), Windows
Big Data Ecosystem: Hadoop, HDFS, Hive, Sqoop, HBase, Oozie, Kafka, Spark, Airflow
Java Technologies: Basics of Core Java

PROFESSIONAL EXPERIENCE

Data Engineer - Intern, Aug 2023 - Nov 2023
Responsibilities:
- Mined and transformed complex healthcare data (RDBMS: Oracle, MySQL; SFTP) using Python and shell scripting, enabling data-driven decision making.
- Worked on Sqoop jobs for ingesting data from MySQL to Amazon S3.
- Created Hive external tables for querying the data.
- Used Spark DataFrame APIs to ingest Oracle data into S3 and store it in Redshift; wrote a script to move RDBMS data to Redshift.
- Developed automated data pipelines using Sqoop, Spark SQL, and AWS services (S3, EMR) to efficiently ingest, transform, and store data in the desired formats (CSV, JSON).
- Optimized Hive and Spark performance; identified errors using logs.
- Scaled EMR instances up automatically based on data volume; applied transformation rules on top of DataFrames.
- Ran and scheduled the Spark scripts in EMR; processed Hive, CSV, JSON, and Oracle data at the same time (POC).
- Validated and debugged the scripts between source and destination; validated the source and final output data.
- Collaborated with the reporting team to create insightful dashboards for data visualization, supporting informed business strategies.
- Demonstrated understanding of data-quality principles through data validation and debugging.

Description:
Zocdoc is in the health care domain. The project gets RDBMS data (Oracle/MySQL) and SFTP data and stores it in AWS S3: import and export data from Oracle and MySQL through Sqoop into S3, apply transformation rules on top of the different datasets, and store the output in the desired format (CSV to JSON). Tasks are scheduled in Oozie and scale up automatically based on data volume; the final data lands in Redshift and S3 in the desired format, and dashboards are built in coordination with the reporting team. (A hedged Spark sketch of the read-transform-write step appears below, after the next project.)

Data Engineer - Intern, Jan 2023 - July 2023
Description:
Snack Pack Delivery
The dataset used is connected to car sales. Choose the Name and Weight columns as the first two data columns, then order the dataset by Weight from highest to lowest and print only the first five rows. Accomplish this with Spark in three distinct methods (see the sketch following this list); Google Colab or HDP Shell-in-a-Box can be used for coding.
1) Worked on the RDD data structure.
2) A DataFrame format that does not require any SQL queries.
3) SQL queries over DataFrame data structures, reading and uploading the CSV and TXT files into a Spark Structured Streaming program in Google Colab.
4) Designed data structures and relationships specifically for cloud-based storage solutions.
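A minimal PySpark sketch of the exercise above, assuming a local file cars.csv whose header starts with Name and Weight columns; the file name, schema, and local Spark session are assumptions, not part of the original project:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("SnackPackDelivery").getOrCreate()

    # Method 1: RDD - parse the CSV by hand, keep (Name, Weight), sort descending.
    lines = spark.sparkContext.textFile("cars.csv")
    header = lines.first()
    pairs = (lines.filter(lambda l: l != header)
                  .map(lambda l: l.split(","))
                  .map(lambda f: (f[0], float(f[1])))  # assumes Name, Weight come first
                  .sortBy(lambda nw: nw[1], ascending=False))
    print(pairs.take(5))

    # Method 2: DataFrame API only, no SQL strings.
    df = spark.read.csv("cars.csv", header=True, inferSchema=True)
    df.select("Name", "Weight").orderBy(col("Weight").desc()).show(5)

    # Method 3: Spark SQL over a temporary view of the same DataFrame.
    df.createOrReplaceTempView("cars")
    spark.sql("SELECT Name, Weight FROM cars ORDER BY Weight DESC LIMIT 5").show()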
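And for the healthcare pipeline in the first internship above, a hedged sketch of the Spark step that reads an RDBMS table over JDBC, applies a transformation rule, and lands JSON in S3 for Redshift to pick up. The JDBC URL, credentials, table, columns, and bucket are all hypothetical, and the Oracle JDBC driver is assumed to be on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, to_date

    spark = SparkSession.builder.appName("RdbmsToS3").getOrCreate()

    # Read an Oracle table over JDBC (connection details are hypothetical).
    appointments = (spark.read.format("jdbc")
        .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB")
        .option("dbtable", "HEALTH.APPOINTMENTS")
        .option("user", "etl_user")
        .option("password", "...")
        .option("driver", "oracle.jdbc.OracleDriver")
        .load())

    # Illustrative transformation rules on top of the DataFrame: normalize a
    # date column and drop rows missing a patient id.
    cleaned = (appointments
        .withColumn("APPT_DATE", to_date(col("APPT_DATE")))
        .filter(col("PATIENT_ID").isNotNull()))

    # Land the result in S3 as JSON; Redshift can then ingest it with COPY.
    cleaned.write.mode("overwrite").json("s3://example-bucket/appointments/json/")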
Software Engineer - Accenture, Jun 2021 - Aug 2022
Roles Performed:
- Analyzed the data from different dimensions and brought it into the HANA database.
- Hands-on experience in BAS.
- Worked on cloud DB modelling.
- Applied HANA modeling skills to design complex models (Calculated Views) in the model.cds file, optimizing data access and performance within the SAP HANA platform.
- Worked on table functions.
- Experienced in creating ABAP views.
- Experienced in uploading CSV files in BTP Cockpit and creating new tables.
- Prepared the Functional Specification, Unit Test Plan document, and Operation Guide.
- Contributed to the development of the OSND Carrier Portal replacement utilizing the SAP Cloud Platform (SCP app), drawing on an understanding of cloud-based solutions and application development.
- Collaborated with the reporting team to create insightful dashboards for data visualization, supporting informed business strategies.
- Involved in build and deployment; pushed the code to Git.
- Knowledge of SDI.

Description:
OSND Carrier Portal
This is a carrier portal that allows carriers to submit OS&Ds (Overages, Shortages, and Damages) and OS&D claims, and to inquire about load status at DCs (5 days out). The current portal is home-grown, built with 20-year-old technology, and unsupported; this is a major risk for the implementation of the S/4 project, and if not replaced it hinders the introduction of positive changes, e.g. the simplification of the shipment structure. The project intends to utilize an application based on the SAP Cloud Platform (SCP app) to replace the existing Carrier Portal.

A carrier can carry out the following functions:
- Inquire about the status of loads in the upcoming days.
- Submit Overages, Shortages, and Damages.
- Upload supporting documents on the portal.
- Submit accessorial requests.
- Create OS&D requests (by the carriers, warehouse supervisor, and transportation team) using an external-facing application.
- View a report of all the OS&D requests raised in the last few days.
Software Engineer - Technosoft, October 2019 - Mar 2021
Roles Performed:
- Involved in requirement gathering, analysis, design, development, and UAT in an Agile methodology.
- Regular interaction with clients to understand requirements, provide clarifications, and run UAT for offshore-driven projects.
- Analyzed the data from different dimensions and brought it into the HANA database.
- Created complex models (Attribute, Analytical, and Calculated Views).
- Prepared the Functional Specification, Unit Test Plan document, and Operation Guide.
- Followed Agile methodology.
- Broke the functional design down into a highly detailed technical design.
- Involved in system integration testing with downstream systems.
- Involved in designing, developing, and maintaining Webi reports.
- Involved in various meetings with business analysts and developers.

Description:
TMS - Transport Management System
It is an easy-to-use database giving us flexibility for future reporting, consolidating information on where we are sourcing from and where we are shipping from with regard to weight, volume, values, carriers, and everything that matters to a logistics organization. The key areas in logistics are:
- To improve consolidation
- To have better carrier selection
- To have a central track & trace system
- To have freight data down to the material/customer/order level
- To have more accurate auditing of freight invoices
- To have full freight-spend visibility in SAP

Software Engineer - APEX_IT, June 2017 - September 2019
Roles Performed:
- Involved in requirement gathering, analysis, design, development, and UAT in an Agile methodology.
- Regular interaction with clients to understand requirements, provide clarifications, and run UAT for offshore-driven projects.
- Created table functions, designed tables, built HANA views (Attribute, Analytical & Calc Views), and tested with BI reporting tools.
- Understood complex Greenplum procedures and functions and realized them as HANA models and SQL scripting.
- Analyzed the data from different dimensions and brought it into the HANA database.
- Heavily involved in SQL scripting; provided test scripts to the team.
- Prepared the Functional Specification, Unit Test Plan document, and Operation Guide.
- Prepared test cases based on the detailed design specifications and executed them.
- Involved in system integration testing with downstream systems.
- Involved in UNIX shell scripting.

Description:
EDW - HANA Migration
VMware currently uses a Greenplum-based EDW and reporting landscape with different subject areas. Everything is now being brought onto the HANA platform, with reports built on top, and the data needs to be validated in each and every business layer (a hedged validation sketch follows below).

EDW - Information Models
Base tables from the existing Greenplum EDW will be realized as virtual data models (base information views/models) in HANA EDW, incorporating the Greenplum routine logic through HANA procedures and scripted calc views.
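Since the migration calls for validating every business layer, here is a minimal Python sketch of a layer-by-layer row-count check between Greenplum and HANA, using psycopg2 (Greenplum speaks the PostgreSQL wire protocol) and SAP's hdbcli driver. Every host, credential, schema, and table name here is hypothetical:

    from hdbcli import dbapi   # SAP HANA Python client
    import psycopg2            # Greenplum is wire-compatible with PostgreSQL

    TABLES = ["SALES_ORDERS", "SHIPMENTS"]  # hypothetical subject-area tables

    gp = psycopg2.connect(host="gp-host", dbname="edw", user="etl", password="...")
    hana = dbapi.connect(address="hana-host", port=30015, user="ETL", password="...")

    def count_rows(conn, query):
        # Both drivers follow the Python DB-API, so one helper serves both.
        cur = conn.cursor()
        cur.execute(query)
        n = cur.fetchone()[0]
        cur.close()
        return n

    for t in TABLES:
        gp_n = count_rows(gp, "SELECT COUNT(*) FROM edw." + t.lower())
        hana_n = count_rows(hana, 'SELECT COUNT(*) FROM "EDW"."' + t + '"')
        print(t, "greenplum:", gp_n, "hana:", hana_n,
              "OK" if gp_n == hana_n else "MISMATCH")

    gp.close()
    hana.close()

The same harness extends naturally to column-level aggregates or checksums per layer when plain counts are not discriminating enough.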
