Data Engineer Resume - Holly Springs, NC
Name: Candidate's Name
Role: Azure Data Engineer | Data Engineer | ETL Developer
Mobile: PHONE NUMBER AVAILABLE | Email: EMAIL AVAILABLE
Current Location: Holly Springs, North Carolina
LinkedIn: https://LINKEDIN LINK AVAILABLE

PROFESSIONAL SUMMARY:
- 10+ years of experience in BI development and application maintenance using Microsoft technologies.
- Experience with Azure SQL Server, Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Synapse, SSIS, Power BI, and Microsoft SQL Server.
- Experience developing Spark applications using Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (a short PySpark sketch follows this section).
- Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and Azure Data Lake Analytics; ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process it in Azure Databricks.
- Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, executors and tasks, deployment modes, the execution hierarchy, and fault tolerance.
- Experience migrating an on-premises DevOps platform to an Azure CI/CD process using ARM templates and Azure DevOps services such as Repos, Test Plans, Pipelines, Web Apps, and Application Insights.
- Strong experience with data migration, cloud migration, and ETL processes.
- Implement DevOps and automation practices to automate data pipelines, monitor data processes, and manage deployments using Azure DevOps, Azure Monitor, and Azure Automation.
- Design and implement data models and data warehouses using SSAS.
- Experience with Power BI (Power Pivot/Power View), designing dashboards that present the information required by different departments and upper-level management.
- Experience developing formatted, complex, reusable formula reports with advanced features such as conditional formatting, built-in/custom functions, and multiple-grouping reports in Power BI and Tableau.
- Created SSIS packages for transferring data from various data sources such as DB2, Oracle, Excel, .txt, and CSV files.
- Hands-on experience with Logic Apps and orchestration tools such as Autosys and Oozie.
- Experience with the Azure Databricks medallion architecture and Delta Live Tables.
- Hands-on experience designing and implementing relational and non-relational databases.
- Good knowledge of and hands-on experience with Azure Event Hubs, Azure Databricks, PySpark, and Python.
- Hands-on experience with Microsoft SQL Server and Oracle databases, T-SQL, and PL/SQL.
- Good knowledge of and experience with big data technologies such as Kafka, Hadoop, and Hive.
- Good knowledge of AWS and Snowflake.
- Hands-on experience creating stored procedures, functions, triggers, indexes, views, and CTEs.
- Expertise in monitoring and analyzing issues related to jobs implemented with ADF pipelines.
- Experience in extraction, transformation, and loading of data directly from heterogeneous source systems such as flat files, CSV, XLS, REST APIs, relational databases, and Salesforce.
- Experience with DevOps integration for CI/CD in an Agile environment.
- Good understanding of scripting languages such as PowerShell.
- Hands-on experience writing SQL, Spark, and KQL code and integrating it with enterprise CI/CD processes.
- Worked with the GitHub version control system.
- Good communication, analytical, and problem-solving skills; a good team player.
- Proficient in technical writing and presentations.
- Strong interpersonal and communication skills combined with self-motivation, initiative, and the ability to think outside the box.
- Hands-on experience with design and documentation, project cutover, and sanity checks.
- Experience preparing project scope, deliverable timelines, and project execution plans.
- Collaborated closely with cross-functional teams to gather integration requirements and design robust, scalable integration solutions with performance and high availability in mind.

KEY ACHIEVEMENTS:
- Received a Star Performer award for outstanding performance.

EDUCATION DETAILS:
- Bachelor of Computer Science, Andhra University, India, 2005.
- Master of Computer Applications, Adikavi Nannaya University, India, 2009.

CERTIFICATIONS:
- Microsoft Certified: Azure Data Engineer Associate
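A minimal PySpark sketch of the kind of Databricks Spark SQL ETL described in the summary above: it reads two hypothetical source extracts in different file formats, registers them as temporary views, aggregates per-customer usage with Spark SQL, and writes a curated Delta output. All paths, view names, and column names are illustrative placeholders, and the Delta write assumes a Databricks (or delta-spark) environment.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("customer-usage-etl").getOrCreate()

    # Ingest two hypothetical source extracts in different file formats.
    orders = spark.read.option("header", True).csv("/mnt/raw/orders/*.csv")
    events = spark.read.json("/mnt/raw/events/*.json")

    # Register temporary views so the transformation can be written in Spark SQL.
    orders.createOrReplaceTempView("orders")
    events.createOrReplaceTempView("events")

    # Aggregate per-customer usage; join keys and column names are illustrative.
    usage = spark.sql("""
        SELECT o.customer_id,
               COUNT(DISTINCT o.order_id) AS order_count,
               COUNT(e.event_id)          AS event_count
        FROM orders o
        LEFT JOIN events e ON e.customer_id = o.customer_id
        GROUP BY o.customer_id
    """)

    # Persist the curated result as Delta for downstream reporting.
    usage.write.format("delta").mode("overwrite").save("/mnt/curated/customer_usage")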
SKILL MATRIX:
Database: SQL Server, Oracle, MongoDB
Cloud Technologies: Azure Data Factory, Azure Data Warehouse, Azure SQL Server, Azure Data Lake, Azure Databricks, Azure Synapse, Logic Apps
Reporting Tools: SSRS, Crystal Reports, Power BI, Tableau
Data Integration Tools: Azure Databricks, Azure Data Factory
Code Repositories: GitHub
Data Sources: Flat files, ADLS, Azure Blob Storage
Scrum Boards: Azure DevOps, Synapse Kanban board
Domains Worked: Insurance, E-Commerce
Programming Languages: C#.NET, Python
Scheduling Tools: ADF scheduled triggers

PROFESSIONAL EXPERIENCE:

Client: CDW Corporation, Chicago, USA                         May 2021 to present
Role: Azure Data Engineer
CDW is a provider of technology products and services for the e-commerce business. CDW's full-stack engineering services team focuses on digital transformation, from code to cloud and from data centers to databases.
Responsibilities:
- As an Azure Data Engineer, responsible for using Azure Data Factory (ADF) extensively to ingest data from different source systems, both relational and unstructured, to meet business functional requirements.
- Designed and developed batch-processing and real-time processing solutions using ADF, Databricks clusters, and Stream Analytics.
- Created numerous pipelines in Azure Data Factory V2 to pull data from disparate source systems using activities such as Move & Transform, Copy, Filter, ForEach, and Databricks; maintained and supported optimal pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks.
- Implemented a data warehousing solution using Azure Synapse Analytics.
- Automated jobs using the different trigger types in ADF: event, schedule, and tumbling window.
- Created and provisioned Databricks clusters, notebooks, jobs, and autoscaling; performed data flow transformations using the Data Flow activity.
- Implemented Azure self-hosted integration runtimes in ADF.
- Performed ongoing monitoring, automation, and refinement of data engineering solutions.
- Improved performance of streaming workloads by optimizing cluster run time and compute usage.
- Implemented change data capture (CDC) mechanisms and real-time data replication processes to ensure data consistency and integrity.
- Loaded real-time streaming data using Azure Event Hubs (a hedged streaming sketch follows this section).
- Scheduled and automated business processes and workflows using Azure Logic Apps; created linked services to connect external resources to ADF.
- Worked with complex SQL queries, views, stored procedures, triggers, functions, MERGE statements, and indexes in large databases across various servers.
- Migrated on-premises data to Azure Data Lake Storage (ADLS) using Azure Data Factory and Azure Synapse pipelines.

Environment: Azure Data Factory (ADF V2), Azure SQL Database, Azure Data Lake Storage (ADLS Gen2), Blob Storage, SQL Server, Azure Synapse Analytics, Azure Logic Apps, Spark, Spark SQL, Azure Databricks, Python, PySpark, Salesforce.
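A hedged sketch of the Event Hubs streaming ingestion mentioned above, using Spark Structured Streaming against the Kafka-compatible endpoint that Azure Event Hubs exposes. The namespace, hub (topic) name, secret scope, and storage paths are placeholders; dbutils is the Databricks-specific secrets utility, and the Kafka source assumes the Spark-Kafka connector is available on the cluster.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("eventhub-bronze-stream").getOrCreate()

    # Azure Event Hubs exposes a Kafka-compatible endpoint on port 9093;
    # the namespace, topic, and secret scope below are placeholders.
    bootstrap = "mynamespace.servicebus.windows.net:9093"
    topic = "device-telemetry"
    conn_str = dbutils.secrets.get("kv-scope", "eh-connection-string")  # Databricks-only utility

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", bootstrap)
           .option("subscribe", topic)
           .option("kafka.security.protocol", "SASL_SSL")
           .option("kafka.sasl.mechanism", "PLAIN")
           .option("kafka.sasl.jaas.config",
                   'org.apache.kafka.common.security.plain.PlainLoginModule required '
                   f'username="$ConnectionString" password="{conn_str}";')
           .load())

    # Land the raw payload in a bronze Delta table; the checkpoint gives the
    # stream fault tolerance and exactly-once semantics at the sink.
    (raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")
        .writeStream
        .format("delta")
        .option("checkpointLocation", "/mnt/bronze/_checkpoints/device_telemetry")
        .start("/mnt/bronze/device_telemetry"))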
Client: TARDIS Warehouse (Torus Insurance)                    Jun 2014 to Apr 2021
Role: Data Engineer
TARDIS is the centralized data warehouse for all of Torus Insurance; end users connect to its SSAS cube and generate their own reports. TARDIS holds enterprise data from several sources (GENIUS, DUCK CREEK, CARLOS, FIG, and CLAIM CENTER) in the TARDIS warehouse. It is critical to Torus Insurance as the single version of truth for every line of business the company operates globally, including Property, Casualty, Liability, and Specialty. TARDIS refreshes every 15 minutes, so near-latest data is always available to end users.

Responsibilities:
- Involved in migrating on-premises data to Azure Data Lake using Azure Data Factory (a hedged sketch follows this section).
- Designed and implemented migration strategies for traditional systems on Azure (lift-and-shift/Azure Migrate).
- Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back in the reverse direction.
- Created and provisioned Databricks clusters, PySpark notebooks, jobs, and autoscaling; performed data flow transformations using the Data Flow activity.
- Used control flow tasks such as For Loop Container, Foreach Loop Container, Sequence Container, Execute SQL Task, and Data Flow Task.
- Worked on data extraction, transformation, and loading, and developed SSIS packages.
- Created SSIS packages using transformations such as Derived Column, Lookup, Conditional Split, Merge Join, and Multicast while loading data into the DWH.
- Worked frequently with the Data Flow Task and Execute SQL Task.
- Developed and configured Power BI reports and dashboards from multiple data sources using data blending.
- Explored data in a variety of ways across multiple visualizations using Power BI.
- Created configurations and deployed packages to target servers.
- Created SQL Server Agent jobs to run SSIS-based ETLs on daily schedules.
- Implemented transformations in data flows to clean data.
- Created reports from the DWH using SSRS and Power BI.
- Built multiple transformations and loaded data from multiple sources into Power BI dashboards.
- Created tabular, matrix, and chart report types in SSRS per end-user requirements.
- Created report subscriptions to deliver scheduled reports by email.
- Performed Power BI Desktop data modeling to clean, transform, and mash up data from multiple sources.
- Designed and implemented data models and data warehouses using SSAS.
- Created cubes and implemented MDX calculations in SSAS.
- Provided report-level security for end users.
- Hands-on experience with Unix/Linux servers and commands.
- Created PySpark and Python notebooks.

Environment: SSIS, SSAS, SSRS, DB2, Azure Data Lake, Azure Data Factory, Azure Databricks, PySpark, SQL Server, Power BI, SQL, T-SQL, PL/SQL
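A minimal sketch, under stated assumptions, of one way to bulk-copy an on-premises SQL Server table to ADLS with PySpark, in the spirit of the migration work listed above (the resume itself describes using ADF pipelines for this). The server, database, table, credentials, partition column, and storage account are hypothetical, and the JDBC read assumes the Microsoft SQL Server JDBC driver is installed on the cluster.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("onprem-to-adls").getOrCreate()

    # Pull a table from the on-premises warehouse over JDBC; all connection
    # details below are placeholders, not real infrastructure.
    policies = (spark.read.format("jdbc")
                .option("url", "jdbc:sqlserver://onprem-sql01;databaseName=TARDIS")
                .option("dbtable", "dbo.PolicyFacts")              # hypothetical table
                .option("user", "etl_user")
                .option("password", dbutils.secrets.get("kv-scope", "sql-pwd"))
                .load())

    # Write to ADLS Gen2 as Parquet, partitioned so downstream cube and report
    # refreshes can prune by line of business (a hypothetical column).
    (policies.write
        .mode("overwrite")
        .partitionBy("line_of_business")
        .parquet("abfss://curated@mystorageacct.dfs.core.windows.net/tardis/policy_facts"))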
