Narender Devanapalli
Data Engineer
Elkhorn, WI
Phone: PHONE NUMBER AVAILABLE
Email: EMAIL AVAILABLE
LinkedIn: LINKEDIN LINK AVAILABLE

PROFESSIONAL SUMMARY
- 10+ years of experience in data warehousing, Azure with Snowflake, AWS services, and scalable data ingestion pipelines. Skilled in Azure Data Factory architecture, enabling seamless integration between on-premises systems and the Azure cloud using Python, PySpark, and Microsoft Azure cloud services.
- Hands-on experience with Azure Cloud and its components, including Azure Data Factory, Azure Data Lake Gen2, Azure Blob Storage, Azure Databricks, Azure Synapse Analytics, Logic Apps, Function Apps, and Azure Key Vault.
- Extensive experience in IT data analytics projects, including hands-on migration of on-premises ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Cloud Dataproc, Google Cloud Storage, and Composer.
- Developed and maintained scalable data pipelines using Apache Airflow in Google Cloud Platform (GCP) for ETL processes, leveraging various Airflow operators for optimized workflows (see the sketch following the EDUCATION section).
- Proficient in GCP services such as Dataproc, Google Cloud Storage (GCS), Cloud Functions, and BigQuery for efficient data processing and analytics.
- Experienced in orchestrating data transfer and integration between GCP and Microsoft Azure using Azure Data Factory, ensuring seamless cross-platform data flow.
- Expertise in creating high-performance Power BI reports on Azure Analysis Services, enhancing data visualization and business intelligence capabilities.
- Skilled in configuring and managing GCP services, including Dataproc, GCS, and BigQuery, through the Google Cloud Shell SDK for streamlined cloud operations.
- Keen on keeping up with the newer technologies that Google Cloud Platform (GCP) adds.
- Able to work in both GCP and Azure clouds in parallel.
- Hands-on experience in the Healthcare, Financial Services, and Retail domains.
- Proficient in managing and configuring Azure Blob Storage, File Storage, Queue Storage, and Table Storage.
- Skilled in developing robust data lake ingestion pipelines, performing data extraction, transformation, and loading (ETL) to ensure data quality and availability.
- Implemented data ingestion pipelines using Azure Synapse Analytics to efficiently extract, transform, and load (ETL) large volumes of structured and unstructured data into the data warehouse.
- Collaborated with data scientists and analysts to deploy machine learning models within Azure Synapse Analytics, enabling predictive analytics and automated decision-making based on historical and real-time data.
- Designed, built, and deployed a multitude of applications utilizing much of the AWS stack (including EC2, Route 53, S3, RDS, HSM, DynamoDB, SQS, IAM, and EMR), focusing on high availability, fault tolerance, and auto-scaling.
- Extensive experience with Amazon Web Services (AWS) cloud services such as EC2, VPC, S3, IAM, EBS, RDS, ELB, Route 53, OpsWorks, DynamoDB, Auto Scaling, CloudFront, CloudTrail, CloudWatch, CloudFormation, Elastic Beanstalk, SNS, SQS, SES, SWF, and Direct Connect.
- Hands-on experience with real-time data processing solutions using Azure Synapse Analytics, leveraging its capabilities to handle streaming data and perform near real-time analytics on high-velocity data streams.
- Proficient in using Databricks notebooks for data exploration with PySpark/Scala, scripting in Python/SQL, and deploying APIs for the analytics team.
- Hands-on experience with Azure Function Apps as API services to communicate with various databases.
- Automated dataflows using Logic Apps and Power Automate (Flow), connecting different Azure services and Function Apps for customizations.
- Designed and implemented scalable, automated data integration workflows using Azure Logic Apps, enabling seamless data transfer and synchronization between various systems, applications, and data sources.
- Experience in setting up and managing the ELK (Elasticsearch, Logstash, and Kibana) stack to collect, search, and analyze log files across servers, monitor logs, and create geo-mapping visualizations in Kibana integrated with AWS CloudWatch and Lambda.
- Proficient in Hadoop ecosystem technologies such as HDFS, MapReduce, YARN, Sqoop, Cassandra, Pig, Kafka, ZooKeeper, and Hive.
- Expertise in large-scale data processing, machine learning, and real-time analytics using Apache Spark.
- Experience in using Apache Sqoop to import and export data to and from HDFS and Hive.
- Strong expertise in loading unstructured and semi-structured data from different sources into Hadoop clusters using Flume.
- Built complex data workflows using Apache Oozie for efficient data processing and workflow automation.
- Expertise in AWS Lambda and API Gateway, submitting data via API Gateway endpoints that invoke Lambda functions.
- Managed web application configuration and deployment to AWS cloud servers through Chef.
- Created instances in AWS and worked on migrations from on-premises data centers to AWS.
- Developed AWS CloudFormation templates and set up Auto Scaling for EC2 instances.
- Championed cloud provisioning tools such as Terraform and CloudFormation.
- Responsible for distributed applications across hybrid AWS and physical data centers.
- Wrote AWS Lambda functions in Python that invoke Python scripts to perform various transformations and analytics on large data sets in EMR clusters.
- Implemented CI/CD pipelines using Azure DevOps to streamline data engineering processes and ensure efficient, reliable delivery of data solutions.

EDUCATION
- Bachelor's in Computer Science, Sathyabama University, Chennai.
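To make the Airflow-on-GCP pattern from the summary concrete, below is a minimal illustrative sketch of a GCS-to-BigQuery load-and-transform DAG. It assumes the apache-airflow-providers-google package is installed; the project, bucket, dataset, and table names are hypothetical placeholders, not details from any actual engagement.

# Minimal illustrative Airflow DAG (assumes apache-airflow-providers-google is installed).
# All project/bucket/dataset names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="gcs_to_bigquery_daily_load",  # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Stage raw CSV files from a GCS landing bucket into a BigQuery staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_to_staging",
        bucket="example-landing-bucket",
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="example_project.staging.sales_raw",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # Transform the staged data into a curated table with a BigQuery SQL job.
    transform = BigQueryInsertJobOperator(
        task_id="transform_to_curated",
        configuration={
            "query": {
                "query": """
                    SELECT order_id, customer_id, SUM(amount) AS total_amount
                    FROM `example_project.staging.sales_raw`
                    GROUP BY order_id, customer_id
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform

The same structure extends to Dataproc or Dataflow steps by swapping in the corresponding provider operators.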
TECHNICAL SKILLS
Azure Services: Azure Data Factory, Azure Databricks, Logic Apps, Function Apps, Snowflake, Azure DevOps
Big Data Technologies: MapReduce, Hive, Tez, Python, PySpark, Scala, Kafka, Spark, Oozie, Sqoop, ZooKeeper, Cassandra, Flume, Pig, Apache Spark Streaming
Hadoop Distributions: Cloudera, Hortonworks
Languages: SQL, PL/SQL, Python, HiveQL, Scala, U-SQL, NoSQL
Web Technologies: HTML, CSS, JavaScript, XML, JSP, RESTful, SOAP
Operating Systems: Windows (XP/7/8/10), UNIX, Linux, Ubuntu, CentOS
Build Automation Tools: Ant, Maven, PowerShell scripts
Version Control: Git, GitHub
IDE & Build/Design Tools: Eclipse, Visual Studio
Databases: MS SQL Server 2016/2014/2012, Azure SQL DB, Azure Synapse, MS Excel, MS Access, Oracle 11g/12c, Cosmos DB, MongoDB, K-12, Milvus Vector DB

WORK EXPERIENCE

Sr. Azure Databricks Engineer                                                                 Feb 2023 - Present
Client: Walmart, Bentonville, AR
Responsibilities:
- Designed and implemented scalable data ingestion pipelines using Azure Data Factory, ingesting data from various sources such as SQL databases, CSV files, and REST APIs.
- Utilized PolyBase to establish seamless integration between heterogeneous data sources, enabling efficient querying and analysis across platforms such as SQL Server and Azure SQL Database.
- Developed data processing workflows using Azure Databricks, leveraging Spark for distributed data processing and transformation tasks.
- Built data pipelines in Airflow on GCP for ETL-related jobs using different Airflow operators.
- Experience with GCP Dataproc, GCS, Cloud Functions, and BigQuery.
- Experience in moving data between GCP and Azure using Azure Data Factory.
- Leveraged Azure Logic Apps for orchestrating complex workflows, integrating various data services and triggering actions based on events.
- Utilized Azure Function Apps for serverless computing in data engineering tasks, enabling the execution of discrete functions without managing infrastructure.
- Proficient in leveraging Azure Machine Learning services to design, develop, and deploy machine learning models, demonstrating a strong understanding of the end-to-end machine learning lifecycle.
- Managed and optimized OLTP systems to ensure real-time, high-speed processing of transactional data, enhancing the efficiency and responsiveness of critical business operations.
- Led the end-to-end migration of data and analytics workloads from AWS Redshift to Databricks, ensuring minimal downtime and data integrity.
- Proficient in writing and optimizing T-SQL queries for Microsoft SQL Server, demonstrating the ability to retrieve, manipulate, and analyze data efficiently.
- Proficient in designing, developing, and maintaining reports using SQL Server Reporting Services (SSRS), creating visually appealing and insightful reports for business stakeholders.
- Experienced with popular RDBMS platforms such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server, enabling effective database management and development.
- Managed and optimized resource allocation for big data processing using Apache YARN on Azure HDInsight.
- Extensive experience in utilizing PySpark for ETL (extract, transform, load) operations, enabling efficient data processing, cleaning, and transformation in big data environments.
- Proficient in designing, implementing, and maintaining RDBMS solutions, including data modeling, schema design, and database optimization, ensuring efficient data storage and retrieval.
- Led and participated in end-to-end implementations of SAP HANA solutions, including installation, configuration, and customization based on organizational requirements.
- Proficient in Azure Data Lake Storage Gen2 (ADLS Gen2), designing and implementing scalable data solutions, optimizing performance, and ensuring data integrity for efficient data processing and analysis.
- Monitored and improved CI/CD pipeline performance, identifying and addressing bottlenecks and reducing build and deployment failures.
- Proficient in implementing and managing Delta Lake on Azure, leveraging Azure Databricks, Azure Synapse Analytics, and other Azure services to create reliable and scalable data lakes.
- Proficient in designing and implementing data warehousing solutions using Snowflake on Azure, ensuring scalable and performant analytics.
- Strong proficiency in JavaScript for front-end and back-end web development, including modern frameworks and libraries.
- Proficient in designing, configuring, and managing data flows using Apache NiFi, ensuring seamless data ingestion, transformation, and routing.
- Experienced in Java programming with a strong grasp of object-oriented principles, data structures, and algorithms, enabling the development of efficient and robust software applications.
- Proficient in implementing and managing Azure Event Hubs for real-time event streaming and data ingestion in cloud-based solutions, ensuring scalability and reliability.
- Created PowerShell scripts for managing Azure resources, orchestrating data workflows, and performing administrative tasks, improving operational efficiency.
- Designed and implemented robust ETL (extract, transform, load) processes using SSIS, facilitating seamless data integration across diverse sources and destinations.
- Built data pipelines using Azure services such as Data Factory to load data from the legacy SQL Server into Azure databases, using Data Factory, API Gateway services, SSIS packages, Talend jobs, and Python code.
- Proficient in PL/SQL, with a strong track record of leveraging its capabilities to develop and optimize database-driven applications, ensuring efficient data management and retrieval.
- Built data pipelines in Airflow on GCP for ETL-related jobs using both older and newer Airflow operators.
- Leveraged cloud and GPU computing technologies such as AWS and GCP for automated machine learning and analytics pipelines.
- Experience with GCP Dataproc, Dataflow, Pub/Sub, GCS, Cloud Functions, BigQuery, Stackdriver, Cloud Logging, IAM, and Data Studio for reporting.
- Developed ELT processes in GCP from Ab Initio file extracts and Google Sheets, using Dataprep, Dataproc (PySpark), and BigQuery for compute.
- Working knowledge of Kubernetes on GCP, creating new monitoring techniques using the Stackdriver log router and designing reports in Data Studio.

Environment: Azure Databricks, Data Factory, Snowflake, Logic Apps, Function App, MS SQL, Oracle, HDFS, MapReduce, YARN, Spark, Hive, SQL, K-12, Python, Scala, PySpark, Power BI, PowerShell, and Kafka.

Azure Snowflake Data Engineer                                                                 July 2021 - Jan 2023
Client: Citi Bank, Irving, TX
Responsibilities:
- Implemented end-to-end data pipelines using Azure Data Factory to extract, transform, and load (ETL) data from diverse sources into Snowflake (a minimal sketch of this Databricks-to-Snowflake load pattern appears after the Certifications section).
- Designed and implemented data processing workflows using Azure Databricks, leveraging Spark for large-scale data transformations.
- Proficient in Java for designing, coding, testing, and debugging applications, contributing to the development of reliable and scalable solutions and enhancing software development capabilities.
- Leveraged Matillion's capabilities to seamlessly integrate data with popular cloud platforms such as Azure, ensuring data consistency and reliability.
- Designed and implemented data models in SAP HANA to optimize performance and enable efficient data retrieval.
- Proficient in utilizing Spark SQL for querying and analyzing large-scale structured data within Apache Spark, enabling seamless integration of SQL queries with Spark's distributed computing capabilities for efficient data processing and analytics.
- Integrated Azure Machine Learning with other Azure services, such as Azure Databricks, Azure Synapse Analytics, and Azure Data Factory, to create comprehensive data pipelines and enable seamless integration of machine learning solutions into broader data workflows.
- Demonstrated expertise in translating complex data sets into clear and meaningful visualizations, utilizing SSRS to present key performance indicators, trends, and actionable insights for informed decision-making.
- Applied TDD methodologies to create high-quality, maintainable code by writing unit tests before implementing new features.
- Proficient in healthcare data exchange standards, including X12, HL7, and FHIR, for streamlined interoperability and data integration in healthcare systems.
- Proven ability to ensure data integrity, consistency, and security within RDBMS through user access control, backup and recovery strategies, and performance tuning.
- Implemented security measures for DLT networks, including encryption, key management, and access controls, ensuring the integrity and confidentiality of distributed ledger data.
- Designed and implemented data streaming pipelines using Confluent Kafka for real-time event processing.
- Skilled in SnowSQL, developing and maintaining data workflows to ensure data integrity and accessibility for informed decision-making.
- Implemented product navigation, search functionality, and interactive features within Unity Catalog, enhancing user engagement and satisfaction.
- Demonstrated expertise in designing and maintaining Snowflake data warehouses, implementing data security best practices, and collaborating with cross-functional teams to ensure seamless data integration, storage, and retrieval for organizational needs.
- Integrated Snowflake with Power BI and Azure Analysis Services to create interactive dashboards and reports, enabling self-service analytics for business users.
- Skilled in data modeling, DAX calculations, and data transformation within the Power BI ecosystem.
- Quantified the impact of the Snowflake implementation on healthcare outcomes, such as improved patient care, reduced costs, and enhanced data-driven decision-making.
- Collaborated with cross-functional teams, including data scientists, data analysts, and business stakeholders, to understand data requirements and deliver scalable and reliable data solutions.

Environment: Azure Databricks, Data Factory, Logic Apps, Snowflake, Function App, MS SQL, Oracle, HDFS, MapReduce, YARN, Spark, Hive, SQL, Python, Scala, PySpark, Tableau, shell scripting, Kafka.

Big Data Developer                                                                 July 2019 - Jun 2021
Client: T-Mobile, Plano, TX
Responsibilities:
- Imported data from MySQL to HDFS on a regular basis using Sqoop for efficient data loading.
- Performed aggregations on large volumes of data using Apache Spark and Scala, and stored the results in the Hive data warehouse for further analysis.
- Worked extensively with data lakes and big data ecosystems, including Hadoop, Spark, Hortonworks, and Cloudera.
- Loaded and transformed structured, semi-structured, and unstructured data sets efficiently.
- Developed Hive queries to analyze data and meet specific business requirements.
- Leveraged HBase integration with Hive to build HBase tables in the Analytics Zone.
- Utilized Kafka and Spark Streaming to process streaming data for specific use cases (see the sketch at the end of this section).
- Developed data pipelines using Flume and Sqoop to ingest customer behavioral data into HDFS for analysis.
- Utilized various big data analytic tools, such as Hive and MapReduce, to analyze data on Hadoop clusters.
- Implemented a data pipeline using Kafka, Spark, and Hive for ingestion, transformation, and analysis of data.
- Migrated data from RDBMS (Oracle) to Hadoop using Sqoop for efficient data processing.
- Developed custom scripts and tools using Oracle's PL/SQL language to automate data validation, cleansing, and transformation processes.
- Implemented CI/CD pipelines for building and deploying projects in the Hadoop environment.
- Utilized JIRA for issue and project workflow management.
- Utilized PySpark and Spark SQL for faster testing and processing of data in Spark.
- Configured and customized Hadoop services using Ambari, ensuring optimal resource utilization and performance in data engineering processes.
- Used Spark Streaming to process streaming data in batches for efficient batch processing.
- Leveraged ZooKeeper to coordinate, synchronize, and serialize servers within clusters.
- Utilized the Oozie workflow engine for job scheduling in Hadoop.
- Managed Agile boards, sprints, and backlogs within JIRA for improved project visibility and coordination.
- Utilized PySpark with Spark SQL for data analysis and processing.
- Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB, and deployed AWS Lambda code from Amazon S3 buckets; created a Lambda deployment function and configured it to receive events from the S3 bucket.
- Collaborated with cross-functional teams to define data requirements and build custom data solutions tailored to business needs.
- Integrated machine learning models into data pipelines for predictive analytics and business intelligence using GCP AI Platform.
- Conducted performance tuning and troubleshooting of data workflows to ensure high availability and reliability of ETL processes.
- Provided technical leadership and mentoring to junior data engineers, fostering a culture of continuous learning and improvement.
- Implemented robust data governance and security measures, ensuring compliance with industry standards and regulations.

Environment: Sqoop, MySQL, HDFS, Apache Spark, Scala, Hive, Hadoop, Cloudera, Kafka, MapReduce, ZooKeeper, Oozie, data pipelines, RDBMS, Python, PySpark, Ambari, JIRA.
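As a concrete illustration of the Kafka and Spark Streaming work described above, here is a minimal PySpark Structured Streaming sketch of the ingest-parse-persist pattern. The broker address, topic name, event schema, and output paths are hypothetical placeholders, and the original pipelines may well have used the older DStream API or different sinks.

# Illustrative PySpark Structured Streaming job: Kafka -> parsed events -> Hive-readable Parquet.
# Broker, topic, schema, and output paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder
    .appName("customer-events-stream")  # hypothetical app name
    .enableHiveSupport()
    .getOrCreate()
)

# Assumed JSON layout of each Kafka message, for the sake of the example.
event_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "customer-behavior")           # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json_value")
    .select(from_json(col("json_value"), event_schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Parquet location that a Hive external table can point at.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/analytics/customer_events")           # placeholder path
    .option("checkpointLocation", "/checkpoints/customer_events")
    .outputMode("append")
    .start()
)

query.awaitTermination()

Running this requires the spark-sql-kafka connector package matching the Spark version on the cluster classpath.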
AWS Data Engineer                                                                 Jan 2017 - Jun 2019
Client: INFOR, Saint Paul, MN
Responsibilities:
- Designed and implemented end-to-end data pipelines using AWS services for efficient data ingestion, transformation, and loading (ETL) into the Snowflake data warehouse.
- Utilized AWS EMR and Redshift for large-scale data processing, transforming and moving data into and out of AWS S3.
- Developed and maintained ETL processes with AWS Glue and Databricks, migrating data from various sources into AWS Redshift.
- Implemented serverless computing with AWS Lambda and Databricks, executing real-time Tableau refreshes and other automated processes.
- Utilized AWS SNS, SQS, and Kinesis for efficient messaging and data streaming, enabling event-driven communication and message queuing.
- Designed and orchestrated workflows with AWS Step Functions and Databricks, automating intricate multi-stage data workflows.
- Implemented data movement with Kafka, Spark Streaming, and Databricks for efficient real-time data ingestion and transformation.
- Integrated and monitored ML workflows with Apache Airflow and Databricks, ensuring smooth task execution on Amazon SageMaker.
- Leveraged Hadoop ecosystem tools, including Hadoop, MapReduce, Hive, Pig, and Spark, for big data processing and analysis.
- Managed workflows with Oozie and Databricks, orchestrating effective coordination and scheduling in big data projects.
- Utilized Sqoop for data import/export between Hadoop and RDBMS, importing normalized data from staging areas to HDFS and performing analysis using Hive Query Language (HQL).
- Ensured version control with Git/GitHub, maintaining version control of the codebase and configurations.
- Automated deployment with Jenkins and Terraform, facilitating the automated deployment of applications and data pipelines.
- Worked with various databases, including SQL Server, Snowflake, and Teradata, for efficient data storage and retrieval.
- Performed data modeling with Python, SQL, and Erwin, implementing dimensional and relational data modeling with a focus on star and snowflake schemas.
- Implemented and optimized Apache Spark applications, creating Spark applications extensively using Spark DataFrames, the Spark SQL API, and the Spark Scala API for batch processing of jobs.
- Collaborated with business users on Tableau dashboards, facilitating actionable insights based on Hive tables.
- Enhanced performance using optimization techniques, optimizing complex data models in PL/SQL and improving query performance by 30% in high-volume environments.
- Developed predictive analytics reports with Python and Tableau, visualizing model performance and prediction results.

Environment: AWS S3, AWS Redshift, HDFS, Amazon RDS, Apache Airflow, Tableau, AWS CloudFormation, AWS Glue, Apache Cassandra, Terraform.

Data Warehouse Developer                                                                 May 2014 - Mar 2016
Client: OASYS Cybernetics, Chennai, India
Responsibilities:
- Designed and implemented scalable and efficient data processing pipelines using technologies such as Apache Hadoop and Apache Spark.
- Conducted in-depth data analysis to extract valuable insights and support data-driven decision-making processes.
- Developed and maintained large-scale distributed databases, optimizing performance and ensuring data integrity.
- Implemented data warehousing solutions for efficient storage, retrieval, and analysis of structured and unstructured data.
- Ensured seamless data flow between SQL Server databases, Cosmos DB, and the Hadoop/Spark ecosystem for comprehensive analytics.
- Proficient in programming languages such as Java, Python, and Scala for developing robust data applications.
- Created and optimized scripts for data extraction, transformation, and loading (ETL) processes.
- Extensive experience with big data technologies, including Apache Hadoop ecosystem components (HDFS, MapReduce) and Apache Spark for large-scale data processing.
- Implemented security measures within Cloudera Manager to control access, ensure data integrity, and comply with regulatory requirements.
- Implemented automated alerts within Ambari for proactive cluster management.
- Implemented and managed Apache ZooKeeper for distributed coordination and synchronization in the Hadoop and Spark ecosystem.
- Managed Agile boards, sprints, and backlogs within JIRA for improved project visibility and coordination.
- Utilized shell scripting to automate system tasks, manage file manipulations, and orchestrate data processes in the Hadoop and Spark clusters.
- Utilized tools such as Apache Hive and Apache Pig for data transformation and analysis.
- Implemented data ingestion pipelines to efficiently handle structured and unstructured data from diverse sources into GCP.
- Optimized BigQuery SQL queries to enhance performance and reduce cost, ensuring efficient data retrieval and processing.
- Designed and deployed automated data validation and quality checks within data pipelines to maintain data integrity and accuracy.
- Developed and managed real-time data processing frameworks using Apache Kafka integrated with GCP services.
- Automated infrastructure provisioning and management using Terraform for consistent and reproducible GCP environments.
- Ensured the availability and reliability of real-time data streams for immediate business insights.
- Utilized version control systems (e.g., Git) for managing the codebase and ensuring collaboration efficiency.
- Maintained comprehensive documentation for developed data solutions, ensuring knowledge transfer and team continuity.

Environment: SQL Server, Cosmos DB, Informatica, SSIS, Sqoop, MySQL, HDFS, Apache Spark, Scala, Hive, Hadoop, Cloudera, HBase, Kafka, MapReduce, ZooKeeper, Oozie, data pipelines, RDBMS, Python, PySpark, shell scripting, Ambari, ETL, JIRA.

ETL Developer                                                                 Nov 2012 - Apr 2014
Ananya Coreint Tech, Bangalore, India
Responsibilities:
- Designed, developed, and maintained end-to-end ETL processes, ensuring seamless data extraction, transformation, and loading from source to target systems.
- Orchestrated data workflows to support business intelligence, analytics, and reporting requirements.
- Designed and developed efficient ETL pipelines using Google Cloud Dataflow, enabling seamless data transformation and integration across multiple sources.
- Leveraged Google Cloud Pub/Sub for real-time data ingestion and messaging within ETL workflows, ensuring low-latency data processing.
- Utilized Google Cloud Composer to orchestrate complex ETL workflows, ensuring reliable and scalable data pipeline management.
- Implemented data extraction and loading processes using Google Cloud Storage (GCS) for efficient data staging and archival.
- Developed and optimized BigQuery jobs for large-scale data analysis, leveraging partitioning and clustering for enhanced performance.
- Integrated Google Cloud Functions for serverless ETL operations, reducing operational overhead and improving scalability.
- Implemented indexing, partitioning, and caching strategies to enhance ETL process performance.
- Established and implemented data quality checks within ETL workflows to identify and address anomalies or discrepancies in the data.
- Built Apache Airflow with AWS to analyze multi-stage machine learning processes with Amazon SageMaker tasks.
- Developed streaming pipelines using Apache Spark with Python.
- Designed and developed a security framework to provide fine-grained access to objects in AWS S3 using AWS Lambda.
- Used AWS EMR to move large data sets (big data) into other platforms such as AWS data stores, Amazon S3, and Amazon DynamoDB.
- Developed AWS Lambdas using Python and Step Functions to orchestrate data pipelines.
- Collaborated with data stewards and business users to define data quality rules and metrics.
- Integrated ETL processes with diverse source systems, including databases, APIs, flat files, and cloud-based platforms.

Environment: Informatica PowerCenter 10.5, SQL Developer, MS SQL Server, Flat Files, XML files 89 10g, DB2, SQL, PL/SQL, Unix/Linux, PuTTY, FileZilla.

Certifications:
- Microsoft Azure Data Engineer Associate: DP-203
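To ground the Databricks-to-Snowflake loading pattern referenced in the Walmart and Citi roles above, the following is a minimal PySpark sketch of reading curated data from ADLS Gen2 and writing it to Snowflake via the Snowflake Spark connector. The storage account, paths, table, and connection details are hypothetical placeholders, and in practice credentials would come from Azure Key Vault or a Databricks secret scope rather than literals.

# Illustrative Databricks (PySpark) job: read curated data from ADLS Gen2 and load it into Snowflake.
# Assumes the Snowflake Spark connector is installed on the cluster; all names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("adls-to-snowflake-load").getOrCreate()

# Read one day's curated Parquet data from a Data Lake path (placeholder URI).
orders = spark.read.parquet(
    "abfss://curated@examplestorage.dfs.core.windows.net/orders/2023-01-01/"
)

# Light transformation: keep completed orders and standardize a column name.
completed = (
    orders.filter(col("status") == "COMPLETED")
    .withColumnRenamed("order_total", "ORDER_TOTAL")
)

# Snowflake connection options (placeholders; secrets would normally come from
# Azure Key Vault or a Databricks secret scope rather than literals).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "<from-key-vault>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "LOAD_WH",
}

# Overwrite the target table with the day's completed orders.
# "snowflake" is the Databricks shorthand for the net.snowflake.spark.snowflake connector.
(
    completed.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_COMPLETED")
    .mode("overwrite")
    .save()
)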
