Candidate Information
Title: Senior Analyst Data Engineering
Target Location: US-MO-Kansas City
Candidate's Name
Senior Data Engineer
Email: EMAIL AVAILABLE
Contact: PHONE NUMBER AVAILABLE
LinkedIn: https://LINKEDIN LINK AVAILABLE

Professional Summary:
Versatile professional with over ten years of experience in a variety of roles, currently serving as Data Engineering - Senior Analyst. Skilled in developing PySpark and Scala jobs for data loading, transformation, and aggregation. Proficient in Databricks across various data sources and in ETL of big data using Apache Spark, building and streamlining data pipelines with ADF and Control-M jobs. Articulate communicator with strong analytical, problem-solving, decision-making, and interpersonal skills.

CORE COMPETENCIES:
- Good knowledge and hands-on experience in Azure.
- Experience in Spark, Spark SQL, Spark Streaming, Kafka, and Hadoop (HDFS).
- Good knowledge of data warehouse concepts, Data Lake, and Delta Lake.
- Strong experience in SQL scripting.
- Experience in Python and Scala (basic).
- Experience with Databricks on Azure.
- Experience with MPP databases such as Azure SQL DW and Synapse.
- Worked extensively with Unity Catalog, creating tables for external sources.
- Good knowledge of Airflow.
- Hands-on experience in EL, ELT, and ETL.
- Experience with SCD Types (1, 2, 3, 4, 6) and CDC.
- Experience with databases such as SQL Server 2012 and Oracle.
- Profound technical knowledge and proficiency in relevant big data technologies and Java/J2EE.
- Hands-on deployment experience with CI/CD (ADO).
- Experience fixing bugs at the daily sprint level and handling day-to-day performance issues.
- Hands-on experience handling structured, semi-structured, and unstructured data.
- Good understanding of Agile and Waterfall project methodologies.
- Experience with the complete system development life cycle and implementation.
- Good knowledge of ETL, big data, and BI testing.
- Ability to learn new technologies and processes rapidly and apply them in projects.
- Communicates effectively with the team; a good team player with a great attitude toward work.

TECHNICAL SKILLS:
Root Cause Analysis, Requirements Comprehension, Data Migration, Code Refactoring, Code Development, SOAP Web Services Configuration, Project Management, Data Loading, Code Maintenance, Implementation, Azure Data Lake, Hadoop, Scala, ADF, Hive, R, Pipelining, Kafka, Python, DevOps, REST Web Services, SAP HANA, Databricks, Java, Apache Spark, Domain Knowledge, Software Development Lifecycle (SDLC), Data Analytics

Certifications:
- Microsoft Certified Data Engineering on Microsoft Azure (DP-203)
- Databricks Certified Associate Developer for Apache Spark 3.0
- Cloudera Certified CCA-175 Spark and Hadoop Developer

Work Experience:

Data Engineering - Senior Analyst
Accenture (Zoetis Inc), Hyderabad
Project: Sales Maturity ScoreCard
Domain: Pharmaceutical
Duration: Apr 2022 - Jan 2024
- Understood and implemented business requirements provided by the Business Analyst.
- Created Databricks notebooks to extract raw data from ADLS (Data Lake) with PySpark and write it into unmanaged Delta tables (Business Layer).
- Applied business transformations from the Business Layer to the Aggregation Layer using PySpark on Azure Databricks.
- Stored final data as Parquet tables and copied it to Azure SQL.
- Scheduled and monitored workflow jobs in ADF (data ingestion and data processing using Python).
- Worked with different file formats, namely Avro, JSON, Parquet, and ORC, and various compression formats, including Snappy.
- Wrote reporting tables in Databricks Unity Catalog for external use.
- Performed unit testing and pushed updated code to the Git repo.
- Fixed bugs, investigated issues at the daily sprint level, and handled day-to-day performance issues.
- Attended daily Scrum meetings and gave day-to-day status updates on each use case.
Environment: Azure ADLS Gen2, Databricks, Azure SQL, ADF, PySpark, SQL, Azure DevOps, Git, Databricks Unity Catalog.

Senior Software Engineer
ValueLabs LLC, Hyderabad
- Engaged with business teams to understand input data sources and formats.
- Created standardized mapping documents for reuse based on the BRD and the data dictionary.
- Extracted data from different sources, including Touchpoint, SharePoint, and other CSV files, for data ingestion (Landing to Raw).
- Wrote transformation logic and scripts for data processing (Raw to Transformed).
- Wrote aggregations and functions for rollup tables and the reporting tables behind the reports (Transformed to Governed).
- Created unit test scripts to verify data quality in the Raw and Transformed layers.
- Moved final governed data into SQL databases.
- Used Azure Databricks with PySpark and interactive querying to load, transform, and process data, developing numerous aggregated tables, dimensions, and facts for reporting.
- Deployed Databricks notebooks and ADF pipelines to QA and Prod.
- Took part in status meetings with the team, updated status on the ADO board, and reported to the Sr. Manager.
Environment: Azure ADLS Gen2, Databricks, Azure SQL, ADF, PySpark, SQL, Azure DevOps, Git.

Senior Software Engineer
ValueLabs LLC, Hyderabad
Project: InfoLink
Domain: Security
Duration: Aug 2019 - Apr 2020
- Liaised with business teams, namely GNS and BDW, to interpret input data sources, their business context, and their formats.
- Helped develop the technical design documents.
- Developed data pipelines for ingesting and transforming data from different input sources into HDFS.
- Created jobs responsible for data cleaning and data munging using Scala and Spark functions.
- Performed source-to-target data validation per the mapping.
- Helped develop an automation testing framework using Python.
- Imported data from various sources such as HDFS.

Project: Distributor Portal
Domain: Pharmaceutical
Duration: Apr 2020 - Mar 2022
- Created daily reports by running aggregate queries on top of the log data for the GNS team per the requirement, and used multiple visualisation libraries for visuals.
- Handled issues and provided fixes for SIT, UAT, and Production.
- Used Scala and Spark libraries to load, transform, and process data and store it back to HDFS, and used Spark for interactive queries, streaming data processing, and integrations.

Senior Software Engineer
ValueLabs LLC, Hyderabad
Project: CornerStone - eSales
Domain: Healthcare
Duration: Jul 2017 - Aug 2019
- Designed and developed data integration workflows on big data technologies and platforms: Hadoop, Spark, Scala, Python, MapReduce, Hive, Sqoop, HBase.
- Performed Hive and Spark tuning with partitioning and bucketing concepts in Parquet files within executor/driver memory.
- Developed Hive queries and Sqooped data from MySQL to the Hadoop staging area.
- Designed, developed, and enhanced PySpark scripts for several data implementation pipelines.
- Developed Open Enrollment Analytics Insights using Hive-HBase integration.
- Prepared the data-flow tech design for mobile app claims data based on date ranges.
- Developed dataflows and processes using Spark SQL and Spark DataFrames.
- Handled data on different platforms (HDFS, S3, Hive) in multiple file formats: CSV, Text, Parquet, and ORC.
- Created complex Hive queries for reporting insights.
- Worked on performance tuning of Hive queries.
- Developed multi-level sub-workflow Oozie coordinators for loading claims into Hive tables.
- Worked with the Oozie workflow engine to schedule time-based jobs performing multiple actions.
- Worked in Agile methodology and actively participated in standup calls, planning, and work reporting.
- Involved in requirement gathering and prepared the design documents.
- Attended calls with the product managers regarding bi-weekly product releases, daily Scrum calls, and sprint planning.
- Prepared technical documentation for all newly developed or enhanced stories and for technical learnings acquired while working on the tasks.
- Coordinated offshore-onsite for all development and QA work.
Environment: Spark, Hadoop HDFS, Hive, HBase, Oozie, SQL, Sqoop, Hive-HBase integration.

Software Engineer
Atachi Systems Pvt Ltd, Hyderabad
Project: NGIMES
Domain: MES
Duration: Jul 2016 - Jul 2017
Technologies: SAP-UI5, SAP HANA, SAP-XS, J2EE technologies (REST services, JavaScript, jQuery, HTML)
- Served as a team member to understand business requirements.
- Designed and developed web pages and wired up the backend using SAP-XS and Java REST services per user-story descriptions and rules.
- Managed responsive designs with SAPUI5 and OpenUI5.
- Generated reports and designed PDF templates using external APIs in Java and integrated them into the system.
- Took part in code reviews, the root cause analysis process, and code refactoring for better application performance.
- Engaged with product owners for requirement understanding and analysis.
- Played a significant role in upgrading applications to the latest standard framework versions available.
- Designed the layout and structure of reports using the selected reporting APIs and JARs, including tables, charts, headers, footers, and other elements.
- Populated report templates with the retrieved data by binding the data to the appropriate fields or placeholders within the report layout.

Software Engineer
BhawanITON Pvt Ltd, Hyderabad & Oman
Project: ITON-DMS
Domain: EDMS/Government
Duration: Dec 2013 - May 2016
Technologies: Java/J2EE, Spring Boot, REST web services, JPA, Servlets, JSP, HTML5, CSS, jQuery, Ajax, JavaScript, SVN, ZOHO, Microsoft SQL Server 2011
- Communicated with clients about functional alterations.
- Understood requirements and design specifications.
- Evaluated requirements, developed them successfully, and released enhancement versions.
- Involved in production support tasks, addressing a number of production concerns and debugging with Firebug and web developer tools on Chrome, Firefox, and Internet Explorer.
- Took part in code reviews.
- Specified the structure of every component, service, and module per requirements.
- Used jQuery, a cross-browser JavaScript library, to dynamically update page content on the client side.
- Enhanced application performance and response time through code refactoring.
- Coordinated with business analysts to interpret their business models and objectives.
- Added new functionality, developed modules, and integrated them when needed by the business.
- Engaged with product owners for requirement understanding and analysis.
- Played a significant role in upgrading applications to the latest standard framework versions available.

Education:
Jawaharlal Nehru Technological University, Hyderabad, India
Bachelor of Technology - Computer Science and Engineering, 2009-2013
