Candidate Information
Title: Azure Data Supply Chain
Target Location: US-TX-Irving
Naveen Peddolla : EMAIL AVAILABLE : PHONE NUMBER AVAILABLE

Profile Summary:
- Close to 10 years of experience in data warehousing, with exposure to cloud architecture design, modelling, development, testing, maintenance, and customer support across multiple domains including supply chain, insurance, banking, and retail.
- 2 years in a data engineering analyst role covering data collection, cleaning, exploration, statistical analysis, and data visualization, using Azure Data Factory, Azure Databricks, Snowflake, and Tableau.
- 2 years of experience with Snowflake, Azure Cloud, Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure analytical services.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, Spark Streaming, and working with DataFrames in Spark.
- Strong experience writing Python applications using libraries such as Pandas and NumPy.
- 5+ years of experience designing and developing jobs in Big Data technologies (Hadoop and Apache Spark) and as an ETL developer (DataStage) for relational databases (Teradata, Oracle), collecting and storing data and designing data input and data collection mechanisms.
- Strong experience across the Software Development Life Cycle (SDLC), including requirements analysis, design specification, and testing, in both Waterfall and Agile methodologies.

Technical Skills:
Databases: Snowflake, Oracle, MySQL, SQL Server, MongoDB
Programming Languages: Python, PySpark, shell script, Perl script
Cloud Services: ADF, Azure Data Lake Storage (ADLS), Azure Synapse Analytics (SQL Data Warehouse), Azure SQL Database, Azure Cosmos DB (NoSQL), Azure Key Vault, Azure DevOps; Big Data technologies including Hadoop, Apache Spark, and Azure Databricks
Tools: Hive, HBase, Apache Spark, PyCharm, Eclipse, Visual Studio, SQL*Plus, SQL Developer, SQL Server Management Studio, Postman, Sqoop, Kafka, YARN
CI/CD Tools: Terraform, Jenkins
Version Control: SVN, Git, GitHub
Operating Systems: macOS, Windows 10/7/XP/2000/NT/98/95, UNIX, Linux
Visualization/Reporting: Tableau

Professional Experience 1: November 2022 - May 2024
TCS (Apple client)
Role: Sr Data Engineer Analyst
Responsibilities:
- Ingested data into one or more Azure services, such as Azure Data Lake and Azure Storage, and processed the data in Azure Databricks.
- Used Azure Data Lake or Azure Blob Storage as the source and created Azure Data Factory (ADF) pipelines to retrieve the data.
- Utilized ADF activities such as stored procedure, lookup, execute pipeline, data flow, copy data, and Azure Function.
- Processed data in Azure Databricks after it was ingested into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure SQL Data Warehouse).
- Used Azure Databricks to build Spark clusters and set up high-concurrency clusters to speed up the preparation of high-quality data.
- Created and managed data pipeline architecture on the Microsoft Azure cloud with Azure Databricks and Data Factory.
- Wrote Spark code in Python and Spark SQL to speed up testing and data processing, loading data into Spark RDDs and performing in-memory calculations to produce output with lower memory use.
- Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats (a representative sketch follows this section).
- Conducted code reviews for team members to ensure proper test coverage and consistent code standards.
- Worked in an Agile development environment in two-week sprint cycles, dividing and organizing tasks.
- Strong experience migrating other databases to Snowflake.
- Configured and implemented Azure Data Factory triggers and scheduled the pipelines.
- Performed Spark tuning and query optimization.
- Created a Git repository and added the project to GitHub.
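To illustrate the ADF-to-Databricks processing described above, the following is a minimal PySpark sketch: it reads raw files landed in ADLS, applies a transformation expressed in Spark SQL, and writes curated output back to ADLS. The storage account, container, paths, and column names are placeholder assumptions, and ADLS authentication configuration is omitted.

# Minimal PySpark sketch: read raw CSVs from ADLS, aggregate with Spark SQL,
# and write curated Parquet output. Names and paths are placeholders;
# ADLS credentials/auth setup is assumed to be configured on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-curation").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales_daily/"

# Ingest raw CSV files landed by an ADF copy activity
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(raw_path))

# Register a temp view so the transformation can be written in Spark SQL
df.createOrReplaceTempView("sales_raw")
daily = spark.sql("""
    SELECT order_date,
           region,
           SUM(CAST(amount AS DOUBLE)) AS total_amount
    FROM sales_raw
    GROUP BY order_date, region
""")

# Write curated output partitioned by date for downstream consumption
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet(curated_path))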
Professional Experience 2: December 2017 - November 2022
OPTUM
Role: Sr Data Engineer
Responsibilities:
- Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from different sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and Azure Databricks.
- Extensively used Azure Key Vault to configure the connections in linked services.
- Designed and implemented data pipelines using Python and PySpark to process and transform large volumes of structured and unstructured data in Azure Databricks.
- Loaded data through different layers from Azure Blob Storage to Snowflake via Azure Databricks.
- Worked on different load processes, such as loading data from an on-premises server to Snowflake tables using SnowSQL and loading data from Azure Blob Storage to Snowflake tables using Snowpipe.
- Loaded data directly from on-premises systems into Snowflake through SnowSQL using PUT and COPY commands (a representative sketch follows this section).
- Automated data loading with Snowpipe for continuous loading from Azure Blob Storage to Snowflake tables as soon as files arrive in Blob Storage.
- Studied the on-premises transformation logic and redesigned the equivalent functionality in Snowflake.
- Built a framework for a tracking system to identify data flow across layers such as acquisition, integration, and extraction.
- Extensively used GitHub for code migration to maintain versions, and used RCA for approvals.
- Designed and developed jobs according to requirements specified in Rally, and created TWS job streams/jobs to run at scheduled times.
- Drastically reduced the extraction time of an existing 4 TB data extraction process by using a TPT script.
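The on-premises-to-Snowflake load described above used the SnowSQL CLI with PUT and COPY commands; the sketch below shows an equivalent flow using the Snowflake Python connector instead of the CLI. The connection parameters, stage name, file path, and table name are placeholder assumptions, not details from the original work.

# Minimal sketch of an on-prem file load into Snowflake via PUT and COPY INTO,
# using the Snowflake Python connector rather than the SnowSQL CLI.
# All identifiers and credentials below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Ensure an internal stage exists for the upload
    cur.execute("CREATE STAGE IF NOT EXISTS claims_stage")
    # Upload the local extract file to the internal stage
    # (AUTO_COMPRESS appends a .gz suffix to the staged file)
    cur.execute(
        "PUT file:///data/extracts/claims_2024_01.csv @claims_stage AUTO_COMPRESS=TRUE"
    )
    # Copy the staged file into the target table
    cur.execute("""
        COPY INTO STAGING.CLAIMS_RAW
        FROM @claims_stage/claims_2024_01.csv.gz
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
finally:
    conn.close()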
Professional Experience 3: Sept 2015 - March 2017
HSBC India
Role: Big Data Developer
Responsibilities:
- Worked on requirements gathering, business analysis, and the technical design of Hadoop and Big Data systems.
- Participated in the implementation of Sqoop, which facilitates data loading from various RDBMS sources to Hadoop systems and vice versa.
- Created HBase tables to store various data formats coming from different applications.
- Created Python scripts to extract data from web server output files and load it into HDFS.
- Worked extensively with Avro and Parquet files, parsing semi-structured JSON data and converting it to Parquet using DataFrames in PySpark.
- Integrated Hadoop into conventional ETL to speed up the extraction, transformation, and loading of large volumes of semi-structured and unstructured data.
- Loaded unstructured data into the Hadoop Distributed File System (HDFS).
- Built Hive tables with efficient buckets and dynamic and static partitioning; also constructed external Hive tables for staging.
- Loaded data into Hive tables, wrote MapReduce queries, and created a custom BI product for manager teams that use HiveQL for query analytics.
- Worked as BCP coordinator at the project level.

Professional Experience 4: May 2014 - Sept 2015
Symphony Teleca Private Limited
Role: ETL Developer
Responsibilities:
- Involved in the complete Software Development Life Cycle (SDLC) by analyzing business requirements and understanding the functional workflow of information from source systems to destination systems.
- Responsible for analyzing requirements and designing generic, standard ETL processes to load data from different source systems.
- Unit/component tested the developed objects and prepared test case documents for mappings; mapping rules were defined based on business requirements, and ETL logic was developed to load the data into Teradata tables.
- Participated in daily status meetings and interacted with the business team through emails and calls to follow up on modules and resolve data/code issues.
- Used a star schema to load the data into dimension and fact tables.
- Performed unit testing according to the mapping rules and source-to-target documents for transformations.
- Prepared design documents for new features and enhancements to the system.

Education Qualifications:
- Master of Science in Information Technology, University of Ballarat, Sydney, Australia (2011).
- B.Tech (Computer Science and Engineering), JNTU Hyderabad, Telangana, India (2007).

Achievements:
- Certified as an Azure Data Engineer Associate in cloud technology (2021).
- Completed Medicare & Retirement Level-3 certification for the insurance domain (2021).
