
Azure Data Engineer Resume Chicago, IL
Candidate's Name
Sr. Data Engineer / Azure Data Engineer
Chicago, IL | PHONE NUMBER AVAILABLE | EMAIL AVAILABLE | LinkedIn: LINKEDIN LINK AVAILABLE

PROFESSIONAL SUMMARY:
- Certified professional with over 10 years of experience across the data spectrum (data engineering, data science, data wrangling, and data modeling), tackling challenging problems and helping businesses make data-driven decisions.
- Excellent problem-solving and analytical skills, with the ability to translate complex business rules into technical specifications within the Microsoft Azure serverless framework.
- Skilled communicator adept at collaborating with cross-functional teams to identify key business requirements and deliver customized BI solutions tailored to organizational needs.
- Proven track record of implementing data-driven strategies that drive revenue growth and improve competitive advantage.
- Experience with Azure Data Factory and Data Catalog to ingest and maintain data sources, covering integration, testing, release, deployment, and infrastructure management.
- Experience designing, configuring, and deploying Microsoft Azure for a multitude of applications using the Azure stack (Compute, Web, Blobs, Resource Groups, Azure SQL, Cloud Services), with a focus on auto-scaling.
- Strong knowledge of implementing architectures using Azure data platform capabilities such as Azure Data Lake Storage (Gen 2), Azure Data Factory, and Azure SQL Server.
- Good understanding of data warehouses comprising hundreds of tables and handling data with millions of records.
- Experience with Spark architecture, setting up Azure Databricks workspaces for data analytics and managing clusters in Databricks.
- Strong proficiency in programming languages such as SQL, R, Python, and PySpark, with expertise in distributed SQL and database technologies.
- Extensive experience with relational database management systems, including normalization, PL/SQL, stored procedures, constraints, querying, joins, keys, indexes, data import/export, triggers, and cursors.
- Proficiency in creating and optimizing data pipelines on Azure using services such as ADF and Azure Data Lake Storage Gen2.
- Strong experience with ETL processes, data transformations, and master data management principles within an Azure serverless context.
- Knowledge of AutoSys job types such as command jobs, box jobs, file watcher jobs, and job dependencies.
- Experience with Jupyter notebooks using Python libraries (pandas, NumPy, scikit-learn) and PySpark.
- Experience with version control systems (GitLab/GitHub) and CI/CD strategies within Microsoft Azure.
- Automated data extraction and transformation processes using Python and Bash, leading to a 20% reduction in manual effort.
- Created interactive dashboards with data insight tools such as Tableau, Power BI, and Looker Studio to ensure data consistency, validation, and accuracy.
- Extensive experience with Agile development and Scrum management.
- Excellent communication skills and ability to work independently.
- Good experience with 5G, C-Band, and wireless technologies.

TECHNICAL SKILLS:
Data Sources/Databases: HDFS, SQL Server, Oracle, PostgreSQL, flat files, Hive, Impala, NoSQL
Programming Languages: Python (NumPy, pandas), R, SQL, PL/SQL, Hive, PySpark, Unix/Linux shell scripting
Cloud Platforms: Microsoft Azure (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen 2, Azure Databricks, Azure SQL Database, Azure Blob Storage, Azure Event Hub, Azure Functions, Delta Lake), AWS, Snowflake
Repositories: GitHub, GitLab
IDEs and Notebooks: Spyder, PyCharm, Jupyter Notebook, Azure Databricks Notebook
Software & Tools: Jenkins, Looker Studio, Tableau, Power BI, GitLab, Oracle, Postgres, Jira, Informatica
Big Data Stack: Spark, Hadoop, Kafka, Hive, Impala, Sqoop, MapReduce, AutoSys

PROFESSIONAL EXPERIENCE:

Verizon -- Tampa, Florida                                                Aug 2021 - Current
Azure Data Engineer
Responsibilities:
- Designing and implementing a scalable data pipeline using Azure Functions, Data Factory, and Data Flows to ingest, transform, and load data from various sources, such as Event Hub and on-prem data warehouses, into an Azure Data Lake using Azure services.
- Implementing a robust data lake architecture, ensuring data quality and facilitating efficient data analysis, leading to an 80% increase in data processing throughput.
- Designing and implementing scalable ETL/ELT pipelines to ingest, transform, and load large datasets into Delta tables for analytical workloads.
- Utilizing Azure Functions and Azure Synapse Analytics to create serverless data processing solutions for real-time data ingestion.
- Using Azure Key Vault to encrypt keys and store secrets, securing key management to protect data.
- Developing data pipelines using PL/SQL, Spark, PySpark, and Python in notebooks to automate data movement and transformation while ensuring data integrity and analysis readiness.
- Managing data governance using Unity Catalog for data lineage, data audit, data discoverability, and data access control.
- Integrating Azure DevOps for continuous integration and deployment (CI/CD), streamlining data pipeline updates and maintenance for a reliable and agile data flow.
- Developing dashboards using Tableau, Power BI, and Looker Studio for stakeholders (business owners and product owners) of different departments.
Environment: Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, Serverless SQL Pool, Delta Lakes, Delta Tables, Unity Catalog, Event Hub, ETL/ELT, Python, SQL, PL/SQL, Azure DevOps, Azure Key Vault, Bash shell scripting, Oracle, Postgres, data analytics, data engineering, data warehousing, Apache Spark, Spark SQL, PySpark, Power BI, Tableau, GitLab, PuTTY, JIRA
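The incremental-ingestion work described above typically follows a high-water-mark pattern: each pipeline run loads only rows modified since the last successful run. A minimal sketch in plain Python, with all table, field, and function names hypothetical (not taken from an actual Verizon pipeline):

```python
from datetime import datetime, timezone

def incremental_load(source_rows, watermark):
    """Return rows newer than the last watermark, plus the new watermark.

    source_rows: iterable of dicts carrying a 'modified_at' datetime.
    watermark:   datetime of the last successful load (high-water mark).
    Mirrors the watermark pattern an ADF copy activity often implements.
    """
    new_rows = [r for r in source_rows if r["modified_at"] > watermark]
    # Advance the watermark to the newest row seen; keep it unchanged if
    # nothing new arrived, so the next run picks up from the same point.
    new_watermark = max((r["modified_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

# Hypothetical snapshot of a source table.
rows = [
    {"id": 1, "modified_at": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_at": datetime(2023, 1, 5, tzinfo=timezone.utc)},
    {"id": 3, "modified_at": datetime(2023, 1, 9, tzinfo=timezone.utc)},
]
loaded, wm = incremental_load(rows, datetime(2023, 1, 2, tzinfo=timezone.utc))
# Only ids 2 and 3 are newer than the Jan 2 watermark.
```

In a real ADF deployment the watermark would be persisted (for example in a control table) between runs rather than held in memory.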
Mississippi Department of Child Protection Services -- Jackson, MS          May 2019 - July 2021
Sr. Data Engineer
Responsibilities:
- Designed and implemented robust data pipelines using Azure Data Factory (ADF) and Data Flows to automate data movement and transformation between on-premises sources (MS SQL Server) and Azure Data Lake Storage (ADLS), improving data accessibility and reducing manual intervention by 80%.
- Built a scalable data lake architecture on ADLS incorporating raw, enriched, and curated data layers for efficient data management and analysis, providing a centralized and organized data repository.
- Leveraged serverless Azure Functions and Azure Synapse Analytics to create cost-effective data processing solutions for real-time data ingestion and complex data transformations, ensuring accurate data availability for analytics.
- Performed ETL/ELT using PL/SQL, Spark, PySpark, and Python, extracting, transforming, and loading data into Delta lakes for business insights using Azure Databricks.
- Integrated streaming data feeds from Event Hub with Azure Stream Analytics to generate real-time insights, providing valuable information for critical decision-making based on current data flows.
- Deployed ETL pipelines using Azure DevOps for continuous integration and continuous delivery (CI/CD), ensuring a streamlined and reliable data delivery process.
Environment: Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, Event Hub, Delta Lakes, Delta Tables, ETL/ELT, Python, T-SQL, PL/SQL, Azure DevOps, Bash shell scripting, Oracle, Postgres, data analytics, SQL Server, data warehousing, Apache Spark, Spark SQL, PySpark, Power BI, Tableau, GitLab, PuTTY, JIRA
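The raw/enriched/curated layering mentioned above is the common "medallion" pattern: raw rows land untouched, the enriched layer cleans and types them, and the curated layer aggregates to the grain analysts query. A plain-Python sketch; the field names and cleaning rules are illustrative, not from the actual project:

```python
def enrich(raw_records):
    """Enriched layer: drop malformed rows and normalize types."""
    enriched = []
    for rec in raw_records:
        try:
            enriched.append({
                "case_id": int(rec["case_id"]),
                "county": rec["county"].strip().title(),
                "open_days": int(rec["open_days"]),
            })
        except (KeyError, ValueError, AttributeError):
            continue  # quarantine malformed raw rows instead of failing the load
    return enriched

def curate(enriched_records):
    """Curated layer: aggregate per county for reporting."""
    by_county = {}
    for rec in enriched_records:
        agg = by_county.setdefault(rec["county"], {"cases": 0, "open_days": 0})
        agg["cases"] += 1
        agg["open_days"] += rec["open_days"]
    return by_county

raw = [
    {"case_id": "101", "county": " cook ", "open_days": "12"},
    {"case_id": "102", "county": "Cook", "open_days": "30"},
    {"case_id": "bad", "county": "Lake", "open_days": "5"},  # rejected: non-numeric id
]
curated = curate(enrich(raw))
# curated -> {"Cook": {"cases": 2, "open_days": 42}}
```

In ADLS each function's output would be written to its own storage layer (e.g. Delta tables) rather than returned in memory.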
State Street -- Quincy, MA                                              June 2016 - April 2019
Data Engineer
Responsibilities:
- Developed Python scripts to transform raw data into intelligent data as specified by business users.
- Worked closely with the data modelers to model the new incoming data sets.
- Designed and implemented complex ETL processes using Informatica PowerCenter to integrate data from multiple source systems into a centralized data warehouse.
- Implemented partitioning and parallel processing strategies in Informatica PowerCenter to enhance ETL job performance and scalability.
- Involved in the end-to-end process of Hadoop jobs using technologies such as Sqoop, Hive, MapReduce, Spark, and shell scripts (for scheduling a few jobs).
- Expertise in designing and deploying Hadoop clusters and various big data analytic tools, including Hive, Oozie, AutoSys, Sqoop, Spark, and Cloudera.
- Monitored job execution and performance using AutoSys job status reports and alerts.
- Involved in creating tables, loading data, and writing queries.
- Assisted in upgrading, configuring, and maintaining various Hadoop infrastructure components such as Hive, Spark, and HBase.
- Explored Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, and Spark on YARN.
- Designed and developed ETL processes for data extraction and aggregation using Python, and created external tables with partitions.
- Developed Spark code using Python and Spark SQL/Streaming for faster testing and processing of data.
- Worked on tuning Hive to improve performance and solve performance-related issues, with a good understanding of joins, grouping, and aggregation.
- Formed a data pipeline using CI/CD with Jenkins to store data in HDFS.
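The join/group/aggregation work tuned in Hive above has the same SQL shape in any engine. A minimal sketch using SQLite (chosen only so the query is self-contained and runnable; the table and column names are hypothetical):

```python
import sqlite3

# A join + GROUP BY aggregation of the kind tuned in Hive, reproduced in
# SQLite so the SQL shape is easy to inspect.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades(account_id INTEGER, amount REAL);
    CREATE TABLE accounts(account_id INTEGER, region TEXT);
    INSERT INTO trades VALUES (1, 100.0), (1, 50.0), (2, 75.0);
    INSERT INTO accounts VALUES (1, 'NA'), (2, 'EMEA');
""")
rows = conn.execute("""
    SELECT a.region, COUNT(*) AS n_trades, SUM(t.amount) AS total
    FROM trades t
    JOIN accounts a ON a.account_id = t.account_id
    GROUP BY a.region
    ORDER BY a.region
""").fetchall()
# rows -> [('EMEA', 1, 75.0), ('NA', 2, 150.0)]
```

In Hive, tuning this query would involve choices SQLite does not expose, such as map-side joins, partition pruning, and bucketing on the join key.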
Environment: Hive, Oozie, AutoSys, Sqoop, Spark, MapReduce, ETL, Python, SQL, Unix shell scripting, data analytics, data engineering, data warehousing, Apache Spark, Spark SQL, PL/SQL, Informatica, PySpark, Power BI, Tableau, GitLab, PuTTY, JIRA

OSI Consulting -- Hyderabad, India                                      Aug 2012 - Jul 2014
SQL BI Developer
Responsibilities:
- Involved in gathering business requirements from users and translating them into technical specifications and design documents for solution development.
- Worked as a developer and acquired excellent skills in creating complex stored procedures, functions, and triggers for automating tasks, as well as views and complex SQL queries for applications.
- Cleansed and transferred data from various OLTP data sources, such as Oracle, MS Access, MS Excel, flat files, and CSV files, into SQL Server 2008 using SSIS packages.
- Created ETL packages with different data sources (SQL Server, flat files, Excel source files, XML files) and loaded the data into destination tables, performing various transformations using SSIS packages.
- Migrated data from heterogeneous data sources and legacy systems (Access, Excel) to centralized SQL Server databases using SQL Server Integration Services (SSIS) to overcome transformation constraints.
- Created and scheduled SSIS packages to pull data from SQL Server and export it to Excel spreadsheets, and vice versa.
- Generated drill-down reports, parameterized reports, linked reports, sub-reports, matrix dynamics and filters, and charts using SSRS (SQL Server Reporting Services).
- Performed unit testing to verify functionality after developing various deliverables.
- Assisted in change/release management and deployed various packages in different lower environments.
Environment: MS SQL Server Management Studio, ETL, Integration Services, Reporting Services, Analysis Services, Crystal Reports, data modeling, OLAP, data warehousing, T-SQL, Windows 2000, data analysis

EDUCATION:
Bachelor's in Computer Science Engineering, Osmania University, 2012
Master's in Computer Science Engineering, Fairfield University, 2016
