
Azure Data Engineer Resume Lawrenceville...

Candidate Information
Title Azure Data Engineer
Target Location US-GA-Lawrenceville
Candidate's Name
Sr. Cloud Data Engineer
Contact: PHONE NUMBER AVAILABLE
Email: EMAIL AVAILABLE
LinkedIn: LINKEDIN LINK AVAILABLE

PROFESSIONAL SUMMARY:
- Highly skilled, certified Data Engineer with nearly 9 years of experience designing, implementing, and maintaining data solutions on various cloud platforms.
- Proficient in Data Warehousing, SQL Server, Azure Databricks (ADB), Azure Data Factory (ADF V2), ETL processes, data migration, data modeling, data visualization, Azure Data Lake Storage (ADLS), Data Lake Analytics, S3, Redshift, AWS Lambda, EC2 instances, and Power BI.
- Led data engineering initiatives spanning multiple sectors such as Consumer Goods, Banking, Healthcare, and Insurance, implementing solutions to meet industry needs using an agile framework methodology.
- Migrated on-premises data (VDrive (client location)/Oracle/SQL Server/DB2) to Azure Data Lake Store (ADLS) using Azure Data Factory (ADF V2) and Azure Databricks (ADB).
- Managed the setup, fine-tuning, and enhancement of the Databricks platform, facilitating data analysis, machine learning, and data engineering tasks throughout the organization.
- Developed notebooks and implemented logic using PySpark, Spark SQL, and SQL to ingest, transform, and orchestrate data.
- Set up, customized, and managed Databricks clusters and workspaces to handle the organization's data needs.
- Involved in production support activities; monitored and managed cluster performance, resource usage, and platform costs while troubleshooting issues to maintain optimal performance.
- Worked on Azure transformation projects; designed pipelines and implemented ETL and data movement solutions using Azure activities for various front-end applications.
- Designed highly available, cost-effective, and fault-tolerant systems using EC2 instances, Auto Scaling, and Elastic Load Balancing, and managed security groups for EC2 servers with AWS CLI and SDK tools.
- Implemented robust security measures by configuring Azure Multi-Factor Authentication (MFA) and managing secrets with Azure Key Vault, ensuring secure storage and control of sensitive data assets.
- Designed roles and groups using AWS Identity and Access Management (IAM); worked with Aurora, DynamoDB, SQS, and SNS.
- Transformed existing application logic and features into a more efficient setup within Azure Data Lake, SQL Data Warehouse, Data Factory, and SQL Database environments, optimizing performance and scalability.
- Designed and implemented strategies for scaling Kafka clusters horizontally to handle increased data loads.
- Experienced in creating CI/CD pipelines using Azure DevOps, Jenkins, and CodePipeline to deploy pipelines and provide continuous delivery.
- Proficient in migrating both infrastructure and applications from on-premises setups to AWS/Azure, as well as facilitating seamless transitions between clouds, such as moving from AWS to Azure.
- Expertise in Azure development; worked on Azure web applications, App Services, Azure Storage, Azure SQL Database, Azure Virtual Machines, Azure AD, Azure Web Roles, and Worker Roles.
- Experienced with Azure storage services (Blob and File Storage) and with setting up Azure CDN and load balancers.
- Performed production support activities, monitoring data pipelines and system performance in production on a daily basis to ensure seamless operations and high availability.
- Implemented monitoring solutions using Moogsoft and ServiceNow to track system performance metrics and promptly respond to any deviations from expected norms.
- Implemented data security measures such as firewalls, authentication, authorization, access controls, and data encryption to protect sensitive information in accordance with industry regulations.
- Skilled in Docker orchestration tools such as Docker Compose and Kubernetes, enabling efficient deployment, scaling, and management of containerized applications.
- Proficient in Kubernetes, with practical experience in container orchestration, cluster management, and workload deployment across diverse environments.
- Substantial expertise in working with Terraform modules and employing Auto Scaling to deploy cloud instances when launching microservices.
- Effectively addressed user-reported issues by identifying underlying causes and implementing solutions, collaborating with teams for improved system performance.
- Contributed to enhancing incident response protocols and documentation for ongoing refinement.
- Consistently demonstrated proactive initiative in meeting project deadlines while upholding the highest standards of deliverable quality.
- Actively listened to customers, handled concerns quickly, and escalated major issues to the supervisor.
- Good communication skills and exposure to client interaction.

TECHNICAL SKILLS:
Cloud Technologies: Azure, AWS, GitHub, Big Data, Google Cloud Platform
Tools: Azure Portal, Azure Data Studio, Azure Databricks notebooks, SQL Server Management Studio (SSMS), Informatica, EC2, S3, Redshift, Visual Studio
Databases: Azure SQL Database, Azure SQL Data Warehouse, Amazon Relational Database Service (RDS), DynamoDB, MongoDB, MySQL, SQL Server, Oracle, Teradata
Scripting Languages: T-SQL, Python, PySpark, MySQL, SQL, Shell, Bash, C, Scala
Operating Systems: Windows, Linux, macOS, Unix
Version Control Systems: Git, Bitbucket, Azure Repos
Applications: Azure DevOps (for CI/CD), Azure Data Factory Data Test, JIRA, Wrike, Postman
Visualization: Power BI, Tableau, Microsoft Excel, Databricks
Scheduling Tools: Control-M Workload Automation, Moogsoft

CERTIFICATIONS:
- Microsoft Azure Data Engineering Associate (DP-203)
- Microsoft Azure Fundamentals (AZ-900)

WORK EXPERIENCE:

Role: Senior Azure Data Engineer    November 2022 - Present
Company: Deloitte Consulting
Client: Shell
- Handled end-to-end data flow for various Shell Data & Analytics products on the Azure DnA platform for Downstream, including data ingestion and transformation across layers.
- Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and Azure Data Lake Analytics.
- Collaborated and shared workspaces with data engineers and cross-functional teams to improve communication and collaboration on data analytics and engineering projects.
- Led the administration, configuration, and optimization of the Databricks platform, driving data analytics, machine learning, and data engineering initiatives across the organization.
- Developed Databricks notebooks using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming data to uncover insights into customer usage patterns.
- Designed a single pipeline that uses configuration tables from the database to smoothly bring data from various sources into the specified destination (a minimal sketch of this pattern appears below).
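As a brief editorial illustration of the configuration-table-driven ingestion pattern mentioned above, the following is a minimal PySpark sketch; the control table name and its columns (source_path, file_format, target_table, is_active) are hypothetical placeholders, not the actual metadata model used on the engagement.

# Minimal sketch: loop over a control table and ingest each configured source.
# All table, column, and path names here are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied automatically in Databricks

# Each active row of the control table describes one source to ingest.
config_rows = (spark.table("etl_control.ingestion_config")
               .filter("is_active = true")
               .collect())

for cfg in config_rows:
    # Read the source in the format named by the config row (e.g. csv, parquet, json).
    source_df = (spark.read
                 .format(cfg["file_format"])
                 .option("header", "true")
                 .load(cfg["source_path"]))

    # Land the data in the destination table named by the config row.
    (source_df.write
     .format("delta")
     .mode("append")
     .saveAsTable(cfg["target_table"]))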
- Collaborated with the team to implement cost-optimization strategies, such as identifying unused resources, analyzing how often and when resources were used, and disabling them based on product requirements, achieving a 75% reduction in expenses while enhancing operational efficiency.
- Implemented performance-tuning strategies by transitioning from DTU-based to vCore-based pricing models in Azure, optimizing resource allocation for improved scalability and efficiency.
- Worked on data ingestion using ADF and implemented complex transformation logic using Azure Databricks while ensuring performance optimization of Azure SQL queries.
- Led performance-tuning initiatives within Azure data solutions, optimizing query execution, data processing, and resource utilization to ensure fast data retrieval and meet performance targets.
- Implemented Moogsoft, an AIOps tool licensed within the Shell environment, and integrated it into our products, enabling automatic incident and alert generation via email, reducing manual intervention and enhancing operational efficiency.
- Implemented robust security measures by creating and managing secrets in Azure Key Vault, ensuring secure storage and access control for sensitive data assets.
- Led a team of data engineers in the development of data pipelines using Azure Data Factory, ensuring efficient and reliable data movement.
- Collaborated with cross-functional teams to gather requirements and design data solutions that meet business objectives.
- Engaged closely with the UAT team to address and resolve issues prior to production deployment, ensuring smooth transitions and optimal system functionality.

Client: Nestle
- Contributed to collaborative brainstorming sessions aimed at defining key performance indicators (KPIs) to measure project success.
- Actively participated in data discovery and analysis, leveraging analytical tools and techniques to extract meaningful insights from various data sources.
- Designed a single pipeline that uses configuration tables from the database to bring data from various sources into the specified destination.
- Applied dimensional modeling principles to design and implement star schemas, identifying dimensions, facts, and hierarchies to enhance data analysis and reporting efficiency.
- Analyzed data load flow and frequency, then designed and scheduled pipelines accordingly to align with project requirements.
- Implemented comprehensive logging mechanisms within the pipeline architecture to capture key events, errors, and performance metrics, facilitating real-time monitoring, troubleshooting, and optimization (see the sketch below); used logging frameworks and tools such as Azure Monitor and Azure Log Analytics to ensure visibility and traceability across the entire pipeline workflow.
- Developed robust logging functionality for stored procedures to capture detailed information about execution, errors, and performance metrics.
- Implemented Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate the movement of data artifacts between lower and higher environments; configured pipeline stages to ensure seamless integration, testing, and deployment, enhancing reliability and efficiency of the deployment process.
- Integrated version control systems such as Git to maintain consistency and traceability throughout the deployment lifecycle.
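The step-level logging described above can be illustrated with a small, hypothetical Python sketch; the step name, the logged fields, and the decorator approach are assumptions for illustration (on the actual projects the records were surfaced through Azure Monitor and Log Analytics rather than standard output).

# Minimal sketch: record start, outcome, duration, and errors for each pipeline step.
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("pipeline")

def logged_step(step_name):
    """Wrap a pipeline step so its duration, outcome, and errors are captured."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            logger.info("step=%s status=started", step_name)
            try:
                result = func(*args, **kwargs)
                logger.info("step=%s status=succeeded duration_s=%.1f",
                            step_name, time.time() - start)
                return result
            except Exception:
                logger.exception("step=%s status=failed duration_s=%.1f",
                                 step_name, time.time() - start)
                raise
        return wrapper
    return decorator

@logged_step("load_sales_fact")  # hypothetical step name
def load_sales_fact():
    pass                         # transformation logic would run here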
- Designed and implemented transformation logic using stored procedures to efficiently process and manipulate data, leveraging the power and flexibility of SQL to develop complex transformation algorithms.
- Collaborated with stakeholders to define business requirements and translate them into optimized stored procedure logic, enhancing data quality and reporting.

Role: Senior Data Engineer    March 2021 - November 2022
Company: Cognizant Technology Solutions
Client: Network Rail
- Implemented end-to-end BI solutions using Azure services.
- Performed lift-and-shift of data from source systems to targeted ADLS staging tables using ADB notebooks.
- Optimized data pipelines and ADB notebooks for performance, scalability, and cost effectiveness.
- Applied business logic based on functional specifications to finance, supply chain, and marketing data in Databricks.
- Implemented Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate the movement of data between lower and higher environments.
- Resolved incidents raised by business users, performing root cause analysis and technical optimization for high-, medium-, and low-priority tickets.
- Responsible for creating test cases, ensuring they cover various scenarios including data retrieval, insertion, update, and deletion operations on Oracle source objects.
- Prepared test approach documents and closure reports and conducted walkthrough sessions with senior stakeholders at the conclusion of each sprint.
- Performed path validation in ADLS to ensure data was moved correctly from on-premises sources into the landed and processed layers.
- Created notebooks in ADB (Azure Databricks) to validate data between the source, landed, and processed layers using automated test scripts.
- Collaborated with developers and stakeholders to review test cases and ensure alignment with project requirements and objectives.
- Executed test cases manually to verify the behavior of Oracle source objects and documented test results accurately.
- Reported defects and issues identified during testing, providing detailed descriptions and steps to reproduce to facilitate resolution by development teams.

Role: Associate Consultant    October 2016 - March 2021
Company: Capgemini
Client: Unilever
- Involved in data migration activities from on-premises systems to Azure Dev, QA, UAT, and Production environments.
- Understood the workflows and designed pipelines to move data from the client VDrive location to Azure Data Lake Storage.
- Developed a specialized pipeline to handle Slowly Changing Dimension (SCD) Type 2 data, ensuring accurate tracking of historical changes and preserving data integrity; the pipeline enables effective management of evolving data over time for comprehensive analysis and reporting (a minimal sketch of the pattern appears below).
- Created an archival storage solution utilizing Azure Data Factory to efficiently remove outdated records from databases, enhancing database performance and optimizing storage resources.
- Designed pipelines to generate Parquet files and refresh external tables within Azure Synapse Analytics daily.
- Implemented logging for each pipeline run and integrated email notifications to alert on pipeline failures, enhancing monitoring and troubleshooting capabilities.
- Implemented time-based triggers in Azure Data Factory to orchestrate pipeline execution according to the Coordinated Universal Time (UTC) time zone.
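The Slowly Changing Dimension (SCD) Type 2 handling mentioned above can be illustrated with a minimal Delta Lake sketch; the dimension table, business key, tracked attribute, and flag/date columns (dim_customer, customer_id, address, is_current, effective_date, end_date) are hypothetical placeholders, and for brevity the sketch appends every incoming record rather than only changed or new keys.

# Minimal sketch: expire changed current rows, then append the new row versions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates_df = spark.table("staging.customer_updates")        # incoming records
dim = DeltaTable.forName(spark, "warehouse.dim_customer")   # SCD2 dimension

# Step 1: expire the currently active rows whose tracked attribute changed.
(dim.alias("t")
 .merge(updates_df.alias("s"),
        "t.customer_id = s.customer_id AND t.is_current = true")
 .whenMatchedUpdate(condition="t.address <> s.address",
                    set={"is_current": "false", "end_date": "current_date()"})
 .execute())

# Step 2: append the incoming records as the new current versions.
(updates_df
 .withColumn("effective_date", F.current_date())
 .withColumn("end_date", F.lit(None).cast("date"))
 .withColumn("is_current", F.lit(True))
 .write.format("delta").mode("append")
 .saveAsTable("warehouse.dim_customer"))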
- Performed unit testing across the entire data flow, validating the functionality and integrity of ETL (Extract, Transform, Load) processes, data transformations, and data validations; employed testing frameworks and methodologies such as PyTest and JUnit to systematically validate the behavior of individual components and ensure data quality and consistency throughout the flow.
- Identified and implemented industry-leading best practices, tools, and services to optimize data engineering workflows, leveraging technologies such as Azure Data Factory and Azure Synapse Analytics for efficient and scalable data solutions.

Client: Bank of America
- Extensive experience utilizing Amazon S3 as a reliable and scalable cloud storage solution for managing and accessing data objects for the Customer Business Solutions project.
- Worked on AWS cloud services including EC2, IAM, S3, RDS, ELB, ECS, EBS, Lambda, Route 53, Auto Scaling groups, CloudWatch, SageMaker, and CloudFront, installing, configuring, and troubleshooting various Amazon machine images for server migration from physical infrastructure into the cloud.
- Migrated existing AWS infrastructure to a serverless architecture (AWS Lambda) and deployed it via Terraform across environments, including AWS Elastic Beanstalk for application deployments; worked on AWS Lambda with Amazon Kinesis and integrated DynamoDB with Lambda for value storage.
- Automated the deployment of various AWS resources (VPCs, ELBs, security groups, SQS queues, S3 buckets) using Terraform, and orchestrated infrastructure resources, including VMware and Docker containers, with Terraform.
- Created S3 buckets and managed bucket policies, using S3 for storage, backup, and archiving in AWS; worked with AWS Lambda, which runs code in response to events.
- Automated AWS components such as EC2 instances, security groups, ELB, RDS, and IAM through AWS CloudFormation templates.
- Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for our QA and UAT environments.
- Configured and managed AWS Simple Notification Service (SNS) and Simple Queue Service (SQS).
- Worked on AWS Lambda, running Python code in response to events such as changes in S3 and DynamoDB and HTTP requests via API Gateway, on highly available infrastructure (a minimal handler sketch appears below).
- Leveraged AWS Redshift's integration with AWS Glue and AWS Athena to automate data ingestion and transformation tasks, resulting in faster and more accurate data processing.
- Developed and implemented ETL workflows using AWS Redshift and AWS Glue to move and transform data between different data stores, resulting in a more streamlined and efficient data architecture.
- Built an AWS server for deployment and data processing, installing Python and TestRail as part of the server setup.
- Implemented Argo CD to establish GitOps practices, ensuring consistent and automated application deployment across multiple environments.
- Collaborated with development and operations teams to automate CI/CD pipelines using Golang scripts, reducing deployment time.

EDUCATION:
Bachelor of Computer Science Engineering    September 2012 - May 2016
MLR Institute of Technology, JNTU, Hyderabad
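As an editorial illustration of the AWS Lambda event handling described in the Bank of America engagement above, the following is a minimal, hypothetical Python handler that reacts to S3 object-created events and records each object in DynamoDB; the table name and attribute names are assumptions, not the project's actual schema.

# Minimal sketch: Lambda handler for S3 ObjectCreated events writing to DynamoDB.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ingested_objects")   # hypothetical table name

def lambda_handler(event, context):
    """Record each newly created S3 object in a DynamoDB tracking table."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        table.put_item(Item={
            "object_key": key,       # assumed partition key
            "bucket": bucket,
            "size_bytes": size,
        })
    return {"status": "ok", "processed": len(records)}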
