Candidate Information
Title: Data Engineer / Data Analyst
Target Location: US-VA-Chantilly

Swarnamayee P
Data Engineer
EMAIL AVAILABLE | PHONE NUMBER AVAILABLE | https://LINKEDIN LINK AVAILABLE

SUMMARY
- Around 7 years of technical expertise across the complete software development life cycle (SDLC), with hands-on experience in the AWS and Azure cloud environments. Proficient as a Data Analyst in ETL, SQL coding, data modeling, data analysis, development, testing, monitoring, and reporting.
- Leveraged Great Expectations to define and enforce data quality checks, validating incoming data against defined standards; created custom validation suites to enforce data quality standards and expectations across all stages of the data pipelines.
- Strong knowledge and experience in data visualization, data analysis, data lineage, data pipelines, data quality, reporting and monitoring, data reconciliation, data transformation rules, and data flow diagrams, including data replication, data integration, and data orchestration tools.
- Worked on various operating systems, including Windows, Linux, and UNIX.
- Experience with AWS cloud services such as EC2, S3, Glue, and Athena for data transformation and querying.
- Experience with Azure cloud services such as Blob Storage, Data Lake, Data Factory, Databricks, HDInsight, and Synapse Analytics.
- Strong working experience with SQL (DML, DDL, DCL), UNIX shell scripting, implementing various test cases (functional testing, UAT, system testing, unit testing, regression testing), and debugging.
- Hands-on experience in Python programming for data processing and for handling data integration between on-prem and cloud databases or data warehouses.
- Experience in data modeling using dimensional data modeling and star schema, and in creating fact and dimension tables.
- Experience writing Python programs using Pandas and NumPy for verification and validation of raw data in an HDFS environment.
- Experience converting legacy reports to Tableau dashboards connected to SQL Server; built and published interactive reports and dashboards using Tableau Desktop and Tableau Server.
- Experience creating Power BI dashboards using Power View, Power Query, Power Pivot, and Power Maps.
- Wrote Python and Spark scripts to build ETL pipelines that automate data ingestion and update the relevant databases and tables; worked closely with the DB team on table changes while making the related ETL changes in parallel.
- Experience developing business intelligence assets using tools such as Tableau.
- Expertise in gathering business/functional user requirements with the help of UML (Unified Modeling Language), creating activity flow diagrams, sequence diagrams, use cases, workflows, and data flow diagrams.
- Extensive experience collaborating with senior quality assurance analysts to develop a Data Quality (DQ) framework and implement quality assurance (QA) standards and procedures; determined testing measures for monitoring the business functions of enhanced systems.
- Good understanding of Spark architecture with Databricks.
- Set up Microsoft Azure with Databricks and Databricks workspaces for business analytics, and managed Databricks clusters.
- Attended daily sync-up calls between onsite and offshore teams to discuss ongoing features/work items, issues, and blockers, and to share ideas for improving the performance, readability, and experience of the data presented to end users.
- Experience with Agile development methodologies and issue tracking systems (JIRA).
- Configured version control tools such as Git and Bitbucket for integrating code for application development.

Technical Expertise
Operating Systems: Windows, UNIX, Linux
Languages: SQL, Shell Scripting, Python, R, C
Databases & Tools: Snowflake, SQL, Teradata, Azure SQL, ER modeling, star schema modeling, SQL Profiler, Management Studio, MS SQL Server, AWS EC2, AWS RDS, MySQL Workbench, Cloudera
Big Data Technologies: Hadoop, MapReduce, HDFS, Hive, HBase, Apache Spark, Scala, Apache Airflow
Cloud Computing: AWS, Azure
Data Analysis: Python, Pandas, NumPy, Great Expectations, Power BI
ETL Tools: SSIS, ADF
Visualization/Reporting Tools: Power BI, Tableau, Excel, matplotlib, ggplot2, SSIS, SSAS, SSRS, seaborn
Source Control: GitHub, Bitbucket
Tools & Utilities: Visual Studio Code, Notepad++, PyCharm, Jupyter, Databricks, SAP HANA, SAP BW, Git, Bitbucket, JIRA, Confluence, Agile, Excalidraw, MS Visio, ServiceNow, Sublime, Eclipse

Work Experience

Client: Guardian Life, PA    Aug 2023 - Jan 2024
Role: Data Quality/Data Engineer - AWS
Project Outline:
As a Data Quality/Data Engineer on the CRP (Customer Reporting Platform) project at Guardian Life Insurance, the platform itself was largely complete, but the Data Quality framework existed only in a proof-of-concept state. I built it from scratch, initially using Great Expectations (GX) as the central tool of the DQ framework through the first release, and later using Collibra. The strategic goals included defining and implementing DQ rules tailored to the requirements of the business data and performing in-depth data profiling to identify anomalies, validate distributions, check that values fall within desired thresholds, and ensure the accuracy and integrity of the datasets. GX provided automated validation checks, efficient data quality monitoring, and real-time alerts, and enhanced transparency and traceability by giving a clear, documented foundation for data quality rules and processes within the Collibra governance platform. The role also included implementing data pipelines.
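Illustrative only: a minimal sketch of the kind of Great Expectations validation described above, using the classic from_pandas() interface (GX 0.x style; newer releases expose a different Context/Validator API). The bucket, table, and column names are hypothetical placeholders, not the project's actual datasets.

# Sketch only: validate an incoming batch with a few Great Expectations checks.
# Column names, thresholds, and the S3 path are hypothetical.
import great_expectations as ge
import pandas as pd

# Hypothetical raw extract landed by the ingestion job.
raw_df = pd.read_parquet("s3://example-bucket/raw/policies/2024-01-15.parquet")

batch = ge.from_pandas(raw_df)

# Expectations mirroring the kinds of DQ rules described above.
batch.expect_column_values_to_not_be_null("policy_id")
batch.expect_column_values_to_be_unique("policy_id")
batch.expect_column_values_to_be_between("premium_amount", min_value=0, max_value=1_000_000)
batch.expect_column_values_to_match_regex("zip_code", r"^\d{5}(-\d{4})?$")

results = batch.validate()
print(results)            # full validation report
if not results.success:   # overall pass/fail flag drives the failure pipeline
    raise ValueError("Data quality checks failed for the incoming batch")
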
Roles & Responsibilities:
- Built Python jobs that read data from mainframe systems in fixed-width format and land it on HDFS as raw data.
- Built Pig jobs that read the fixed-width files, re-format the data per the defined schema, and store it in HDFS.
- Built Python jobs to run data quality checks using the Great Expectations Python package; depending on the results of the DQ checks, the respective success or failure pipelines are triggered.
- Implemented dynamic expectation generation using Python scripts in a local IDE (PyCharm), engineered custom validation functions within GX for complex business logic and domain-specific rules, and established automated remediation workflows triggered by GX failures, enabling proactive resolution and reducing manual intervention in data validation.
- Experience designing and creating RDBMS tables, views, user-defined data types, indexes, stored procedures, cursors, triggers, and transactions.
- Created a data warehouse and integrated data from different systems into it, working with multiple teams.
- Implemented data security and privacy measures in SAP HANA, ensuring compliance with HIPAA, GDPR, and PHI requirements to protect sensitive patient information.
- Built configurable, metadata-driven, customized frameworks. Developed ETL pipelines using Databricks to build SILVER/GOLD layer tables with PySpark.
- Implemented policies on Databricks to restrict table and DBFS access to user groups. Integrated a Git repository into Databricks clusters for versioning.
- Enabled Git on Databricks for versioning and used widgets for setting script parameters. Implemented sub-routines using remote Databricks job execution.
- Developed automated AWS Glue jobs to ingest data from SFTP vendor systems, and implemented AWS Glue Crawlers and catalog tables to automatically manage schema changes in incoming files.
- Built AWS Lambda jobs to segregate files into sub-directories based on schema. Set up IAM roles on S3 buckets so that Lambda has only the specific access it needs, and configured Lambda to trigger automatically as soon as a file lands in the source S3 bucket.
- Built ETL jobs to load fixed-width files from the mainframe server into Redshift daily using batch jobs.
- Updated bucket policies with IAM-created roles to restrict access to specific users. Configured AWS IAM groups and users for improved login authentication.
- Developed Lambda functions with assigned IAM roles to run Python scripts from various triggers (SQS, EventBridge, SNS). Well-versed in AWS products such as EC2, S3, EBS, IAM, CloudWatch, CloudTrail, VPC, and Route 53.
- Worked on AWS CLI Auto Scaling and CloudWatch monitoring creation and updates; allotted permissions, policies, and roles to users and groups using AWS IAM.
- Developed a process to monitor and validate CDM ID sync between Salesforce and CDM.
- Designed and developed an actionable address data quality trends dashboard.
- Built Python code to extract data from AWS S3 and load it into SQL Server for a business team without cloud access.
- Designed and developed ETL processes in AWS Glue to migrate data from external sources such as S3 (Parquet/text files) into AWS Redshift.
- Used the AWS Glue catalog with Crawlers to pick up data from S3 and ran SQL queries over it with AWS Athena.
- Created a Lambda function to run AWS Glue jobs based on S3 events (a sketch of this pattern follows below).
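Hedged sketch of the S3-triggered pattern in the bullet above: a Lambda handler that starts a Glue job when a new object lands in the source bucket (the bucket notification itself is configured separately). The Glue job name and argument keys are hypothetical placeholders, not the project's actual names.

# Sketch only: AWS Lambda handler that kicks off a Glue job for each S3 object
# referenced in the triggering event. Job name and argument keys are hypothetical.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    started = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hand the newly landed object to the Glue job as a job argument.
        response = glue.start_job_run(
            JobName="crp-ingest-fixed-width",            # hypothetical job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        started.append(response["JobRunId"])
    return {"started_job_runs": started}
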
- Created monitors, alarms, notifications, and logs for Lambda functions and Glue jobs using CloudWatch.
- Built ELT jobs to load fixed-width files from the mainframe server into Redshift daily using batch jobs.
- Proficient in SQL and in programming languages such as Python and Scala for data manipulation and transformation tasks.
- Migrated historical data to S3 and developed a reliable mechanism for processing incremental updates; large datasets were migrated from the Hadoop cluster to AWS S3 using the DistCp command.
- Built POCs to validate batch jobs with dependencies using the AWS Batch service; the jobs are triggered through dependencies, the script input is supplied via a JSON file, and the initial triggering was done using Batch operators in Airflow.
- Retrieved data from the Hadoop cluster by developing a pipeline that uses SQL to pull data from the data warehouse, and used ETL for data transformation.
- Performed root cause analysis of customer data quality issues and categorized them for proper remediation.
- Wrote PySpark and Spark SQL transformations in Azure Databricks to perform complex transformations for business rule implementation.
- Exported data into an RDBMS for the BI team to perform visualization and generate reports; the BI team used AWS QuickSight for data analytics.
- Extracted JSON data using the SUBSTR command and positional parameters through SnowSQL scripts, and implemented multiple load parameters to handle edge cases during load failures.
- Built SnowSQL scripts with complex ETL logic involving multiple joins and materialized views.
- Built stage objects to load files from the local file system (Excel files from the business) into the Snowflake internal stage.
- Implemented DDL curated data store logic using Spark concepts.
- Used the Snowsight UI to review query history and debug failed load processes; saved error records to separate tables using the COPY INTO command on top of the SnowSQL query.
- Performed bulk loads of JSON objects into Snowflake tables using SnowSQL and a custom file format (see the sketch at the end of this section).
- Built data quality rules and reports to improve overall data quality (ambiguous, partially verified, and unverified addresses), and suggested process improvements to improve address data quality.
- Conducted daily code reviews and provided low-level architecture designs for the pipelines.
- Interacted with SLT to gather additional requirements and provided demos to cross-functional teams.
- Used JIRA for project tracking and participated in daily scrum meetings.
Environment: AWS, Python, Databricks, PyCharm, Git/Bitbucket, Great Expectations, Collibra, S3, HDFS, PySpark, SQL, QuickSight, Pandas, MySQL Workbench, ETL, Avro, Snowflake, JIRA, DDL, Spark, Redshift, Hive, SnowSQL, Snowsight UI
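Hedged sketch of the Snowflake bulk-load pattern referenced in the section above (custom JSON file format, named internal stage, and COPY INTO with error tolerance), driven from Python through the Snowflake connector. Connection parameters, stage, file format, and table names are hypothetical, and the SQL is illustrative rather than the actual production script.

# Sketch only: bulk-load JSON files into a Snowflake table via an internal stage.
# All identifiers and credentials below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="********",
    warehouse="LOAD_WH",
    database="CRP_DB",
    schema="STAGE",
)
cur = conn.cursor()

# Custom file format and named internal stage for the incoming JSON objects.
cur.execute("CREATE FILE FORMAT IF NOT EXISTS ff_json TYPE = JSON STRIP_OUTER_ARRAY = TRUE")
cur.execute("CREATE STAGE IF NOT EXISTS stg_policies FILE_FORMAT = ff_json")

# Upload local files (e.g., extracts received from the business) to the stage.
cur.execute("PUT file:///data/policies/*.json @stg_policies AUTO_COMPRESS=TRUE")

# Load staged files into a table with a single VARIANT column, e.g.
#   CREATE TABLE RAW_POLICIES (payload VARIANT);
# ON_ERROR='CONTINUE' skips bad records so they can be reviewed afterwards.
cur.execute("""
    COPY INTO RAW_POLICIES
    FROM @stg_policies
    FILE_FORMAT = (FORMAT_NAME = 'ff_json')
    ON_ERROR = 'CONTINUE'
""")
print(cur.fetchall())   # per-file load results returned by COPY INTO

cur.close()
conn.close()
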
Client: Intellilink Technologies, NJ    Jan 2023 - Jul 2023
Role: Data Engineer
Responsibilities:
- Queried structured data using Spark SQL to enable rigorous analysis.
- Used regular expressions (regex) in Python and SQL to extract and derive vital metrics for data analytics.
- Good understanding of Hadoop architecture and its components, such as HDFS, Application Master, Node Manager, NameNode, DataNode, and MapReduce concepts.
- Enhanced the vendor's ETL frameworks with a more dynamic, parameterized design that extracts data from source tables using ETL config tables.
- Designed ETL jobs to move, integrate, and transform data from various sources into a single target database.
- Performed data transformation and wrangling using Python scripts.
- Created UNIX scripts to execute jobs automatically and on a schedule.
- Created data validation scripts used to verify the correctness of ETL logic and transformations between source systems and dimension tables.
- Designed both managed and external tables in Hive to optimize performance.
- Experience with different file formats: Avro, Parquet, JSON, XML, etc.
- Prepared data using SQL for Tableau consumption and published data sources to Tableau Server.
- Designed conceptual, logical, and physical data models using the ER/Studio data modeling tool.
- Conducted EDA (exploratory data analysis) on data using the Python packages NumPy and Pandas.
- Used the Python libraries matplotlib and seaborn to discover data patterns and display graphs and visualizations in Jupyter Notebook.
- Wrote multiple MapReduce jobs using Pig and Hive for data extraction, transformation, and aggregation from multiple file formats, including Avro, Parquet, XML, JSON, and CSV, as well as compressed formats such as gzip and Snappy.
- Used Hive to analyze data ingested into HBase via the Hive-HBase integration and computed various metrics for dashboard reporting.
- Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slots configuration.
- Developed data pipeline programs with Spark APIs, data aggregations with Hive, and data formatting for visualization.
- Developed a fully automated continuous integration system using Git, Jenkins, MySQL, and custom tools developed in Python and Bash.
- Performed migration of large datasets to Databricks: created and administered clusters, loaded data, configured data pipelines, and loaded data from ADLS Gen2 to Databricks using ADF pipelines.
- Created Databricks notebooks to streamline and curate data for various business use cases, and mounted blob storage on Databricks.
- Documented, tabulated, and reviewed the valuation methods in effect using SQL, and worked with client/server tools such as SQL Server Management Studio to administer SQL Server.
- Ingested data in mini-batches and performed RDD transformations; worked with Integration Runtimes, Azure Key Vault, and Triggers, and migrated Data Factory pipelines to higher environments using ARM templates.
- Engaged in 3-week sprints under the Scrum methodology in a fast-paced environment and participated in daily stand-up meetings.
Environment: Python, SQL, Tableau, Databricks, RDD, ARM Templates, ADF pipelines, Avro, Parquet, JSON, UNIX, ETL, Regex, Azure cloud, EDA, Pig, Hive, HBase, MapReduce, Git, Jenkins, Bash, MySQL, Spark APIs, ADLS Gen2
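A small illustrative sketch combining two of the items above: querying structured data with Spark SQL and pulling a metric out of a text column with a regular expression. The schema, paths, and regex pattern are hypothetical, not taken from the client's data.

# Sketch only: Spark SQL over structured data plus regex-based metric extraction.
# Table layout, path, and pattern are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import regexp_extract, col

spark = SparkSession.builder.appName("metrics-extraction-sketch").getOrCreate()

orders = spark.read.parquet("/data/curated/orders")   # hypothetical dataset
orders.createOrReplaceTempView("orders")

# Aggregate with Spark SQL.
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Regex extraction: pull a value like "latency=123ms" out of a log-style column.
enriched = orders.withColumn(
    "latency_ms",
    regexp_extract(col("status_message"), r"latency=(\d+)ms", 1).cast("int"),
)

daily.show(5)
enriched.select("order_id", "latency_ms").show(5)
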
Client: Wolters Kluwer, India    Jun 2019 - Dec 2021
Role: Data Engineer
Project Overview:
Performed data analytics and exploratory data analysis, scheduled reports and wrote complex queries in Power BI, was involved in the full SDLC process, and performed SQL operations and ETL tasks.
Roles and Responsibilities:
- Full Data Pipeline (FDPL): built configurable, metadata-driven, customized frameworks, and developed ETL pipelines using Databricks to build SILVER/GOLD layer tables with PySpark.
- Experience sizing clusters for development and integrating Git with Azure DevOps. Ingested data from various data sources using Data Factory and Azure web apps.
- Migrated the Databricks ETL jobs to Azure Synapse Spark pools.
- Built Synapse Spark ETL notebooks to standardize the data coming from vendor-specific SFTP accounts and write the resultant data into Azure Blob Storage.
- Used an Azure Synapse dedicated SQL database to build a data model with fact and dimension tables for KPIs using 24-hour delta records.
- Integrated marketing data from the sales team into the data warehouse using Python and Snowflake; the marketing data is then loaded into dimensional tables on Snowflake.
- Built Synapse Dataflows for standard KPI model building by reading data from shared data sources (internal data teams/pods) and loading it into SQL Pools (MPP); the data is then pulled into Power BI for dashboarding.
- Migrated legacy workflows from on-premises batch jobs (XML, CSV, XLS files) to the Data Lake using Synapse pipelines.
- Data integration to apply business rules and make data available to different consumers using Databricks Spark; built a data delivery framework for data-driven caching, ad hoc data access, and vendor and API integration.
- Migrated the Teradata warehouse to Snowflake using AzCopy and Snowflake external storage integration.
- Migrated the raw data from Teradata to Azure Blob using Teradata Parallel Transporter (TPT) and used Snowflake external storage integration to load data from Blob into the Snowflake internal stage.
- Built Snowflake stored procedures with ETL scripts that load data into dimensional tables, and built views on top of stage tables to expose data to business teams.
- Developed file formats to load custom CSV files with multiple separators and escape characters.
- Created stage tables on top of the external data stored on Azure Blob for ad hoc querying and data profiling, and loaded data from stage to Snowflake target tables using the COPY INTO command.
- Built a data-driven caching layer for data delivery of the sales dashboard using Power BI and Snowflake, and built a standardized, automated vendor integration model (recognized as a standard template by the engineering org).
- Migrated two data marts from Teradata into Snowflake using SnowSQL and external storage integration; used the COPY INTO command for ingesting large files from Azure Blob to the Snowflake stage and automated the ingestion with SnowSQL commands and shell scripts.
- Conducted daily code reviews, provided low-level architecture designs for the Azure pipelines, and interacted with SLT to gather additional requirements (stretch goals) and provide demos to cross-functional teams.
- Developed workflows using Databricks Delta Live Tables and used MERGE SQL to perform change data capture for SCD Type 2 tables (see the sketch at the end of this section); used OPTIMIZE with Z-ORDER for data compaction and the VACUUM command for dataset lifecycle maintenance.
- Created a mount point on Databricks to connect to blob storage, retrieve data, and perform data analysis using PySpark on Databricks clusters.
- Enabled Git on Databricks for versioning and used widgets for setting script parameters; implemented sub-routines using remote Databricks job execution in case of master job failure.
- Used the Python SDK to remotely connect to Databricks and Azure Data Factory to run jobs/pipelines.
- Optimized PySpark jobs on Databricks by tuning memory fraction/storage fraction limits and other custom Spark configurations.
- Implemented a streaming pipeline on clickstream data by connecting Databricks with Azure Event Hubs and Azure Stream Analytics.
- Used Azure Databricks/Apache Spark clusters for ETL and the unification of data.
- Ingested data from SAP HANA into Azure using the ADF Copy Data functionality and built solutions for advanced analytics and reporting.
- Migrated Apache Hive tables/models from Hadoop to Databricks on Azure and implemented access policies to restrict user access at the table and schema level for all Databricks tables; currently working with Snowpark to convert PySpark jobs into equivalent Snowflake Snowpark jobs.
- Built shell scripts that load data from SFTP and land it in HDFS as raw data, and built Python jobs to run data quality checks using dbt and the Great Expectations Python package.
- Built Python jobs to read data from mainframe systems in fixed-width format and land it on HDFS.
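A minimal sketch of the fixed-width parsing described in the previous bullet, shown here with pandas; the column offsets and names are hypothetical, and in the actual jobs the layout was driven from schema metadata rather than hard-coded.

# Sketch only: parse a fixed-width mainframe extract and write it out as Parquet
# (e.g., onto HDFS). Field offsets and names are hypothetical placeholders.
import pandas as pd

# (start, end) byte offsets for each field in the fixed-width layout.
colspecs = [(0, 10), (10, 40), (40, 48), (48, 60)]
names = ["policy_id", "holder_name", "issue_date", "premium_amount"]

df = pd.read_fwf("policies_20240115.txt", colspecs=colspecs, names=names, dtype=str)

# Light typing/cleanup before landing the file as raw data.
df["premium_amount"] = pd.to_numeric(df["premium_amount"], errors="coerce")
df["issue_date"] = pd.to_datetime(df["issue_date"], format="%Y%m%d", errors="coerce")

df.to_parquet("hdfs:///raw/policies/policies_20240115.parquet", index=False)
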
- Built Pig jobs that read the fixed-width files, re-format the data per the defined schema, and store it in HDFS.
- Installed and configured Hive, wrote Hive UDFs, and used Piggybank, a repository of UDFs for Pig Latin.
- Experience building robust and scalable data ingestion pipelines that integrate with various data sources such as streaming platforms, databases, and distributed file systems.
- Designed and developed data pipelines that transform and cleanse data from multiple sources before loading it into Druid.
- Identified and optimized SQL queries with bad execution plans.
- Developed SQL packages, procedures, functions, views, etc., for project enhancements.
- Wrote Python scripts to perform data cleansing, data validation, and data transformation on incoming data from various source systems.
- Responsible for health checks, emergency bug fixes, and data corrections.
- Coordinated with both onsite and offshore teams.
- Developed complex calculated measures, calculated columns, and row-level security using DAX in Power BI Desktop, and scheduled reports in Power BI Service.
- Updated data in tables per user requirements using temp tables, CTEs, and joins.
- Developed packages, stored procedures, functions, and UNIX shell scripts.
- Worked with the DBA team to perform data refreshes and QA environment validation.
- Responsible for various transformations such as sort, join, aggregate, and filter to retrieve the required datasets; designed and built ETL pipelines to automate ingestion of structured and unstructured data using Azure Data Factory (ADF).
- Created new pipelines and modified existing pipelines to orchestrate data movement from on-prem sources to Azure Data Lake.
- Created/modified Linked Services and Triggers and scheduled the pipeline runs.
- Utilized ADF for orchestrating ETL workflows and Azure Databricks for scalable data processing.
- Implemented data governance practices to ensure data quality, privacy, and compliance.
- Experience in data ingestion from various sources into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processing the data in Azure Databricks; implemented Copy activities and custom ADF pipeline activities.
- Used Python for SQL/CRUD operations in the database and for file extraction, transformation, and generation.
- Built reusable data ingestion and data transformation frameworks using Python.
Environment: Python, SQL, Power BI, UNIX, Azure Services, Databricks, dbt, Shell Scripting, HDFS, SFTP, SAP HANA, SnowSQL, Snowflake, Teradata, Git
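As referenced in the change-data-capture bullet above, a hedged sketch of an SCD Type 2 upsert with Delta MERGE on Databricks. The table and column names (dim_customer, attr_hash, is_current, and so on) are hypothetical, and the expiry/insert logic is simplified relative to a production implementation.

# Sketch only: simplified SCD Type 2 with Delta Lake MERGE. dim_customer is
# assumed to be an existing Delta table; all names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/landing/customers_delta")   # today's changes
updates.createOrReplaceTempView("updates")

# Step 1: expire the current rows whose tracked attributes changed.
spark.sql("""
    MERGE INTO dim_customer AS t
    USING updates AS s
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHEN MATCHED AND t.attr_hash <> s.attr_hash THEN
      UPDATE SET t.is_current = false, t.end_date = current_date()
""")

# Step 2: insert a new current version for changed or brand-new customers
# (customers that still have a current row were unchanged and are skipped).
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.name, s.address, s.attr_hash,
           current_date() AS start_date, NULL AS end_date, true AS is_current
    FROM updates s
    LEFT JOIN dim_customer t
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHERE t.customer_id IS NULL
""")
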
Client: IBM India Pvt Ltd., India    Jun 2016 - May 2019
Role: Data Analytics Engineer - Azure
Project Overview:
The project at IBM for Suncor Data Integration and Analytics involved designing and implementing data solutions that seamlessly integrated SAP HANA, SAP BW, and various other data sources: architecting a robust data flow, identifying patterns, processing data efficiently, reducing data latency, and enhancing overall system performance. Core duties included using Python for data engineering tasks, implementing Azure cloud services, integrating the necessary Big Data tools, and using the BI tool Power BI to keep track of all the insights. Majorly worked on Azure Cloud and Power BI to create applications that provide insight into real-time data through reports, dashboards, applications, and various visualizations.
Roles and Responsibilities:
- Leveraged Python for data engineering tasks, developing custom scripts and automation tools for data quality, cleansing, transformation, and enrichment.
- Implemented Azure cloud services, including Azure Data Factory and Databricks, providing a scalable and flexible environment for hosting and managing the data integration and analytics platform.
- Studied the database schema and the relationships between fact and dimension tables to build a solid understanding of the project and its workflow.
- Created various database objects such as tables, views, and stored procedures based on requirements.
- Managed and optimized databases, including Azure SQL DB, ensuring data consistency and integrity.
- Developed dynamic reports and dashboards using Power BI, facilitating data-driven decision making and real-time analytics.
- Created pivot tables and charts in Excel to display and report logistic rules, and implemented complex metrics and transformed them with Excel formulas to allow better data slicing and dicing.
- Used Python and other programming languages to create and improve an ML pipeline.
- Created SSIS packages with error handling, including complex packages using transformations and tasks such as Sequence Container, For Loop and Foreach Loop Container, Send Mail, Conditional Split, Merge, Lookup, Derived Column, Row Count, Union All, Data Conversion, File System, and DB source and destination.
- Designed and created data extracts supporting SSRS and Power BI visualization and reporting applications.
- Designed ETL packages for different data sources (SQL Server, flat files) and loaded the data into targets using various SSIS transformations.
- Created shared dimension tables, measures, hierarchies, levels, cubes, and aggregations on MS OLAP/OLTP/Analysis Services (SSAS) in Tabular Model.
- Maintained strong collaboration with IBM teams and Suncor stakeholders, providing regular updates and aligning technical solutions with the organization's strategic goals.
- Used DAX functions to create measures and calculated fields for data analysis, used Power Query to transform data, and installed the data gateway and configured scheduled data refreshes for the reports.
- Used Power BI to make reports more interactive and analytical; published and maintained workspaces and scheduled data refreshes for the apps and workbooks.
- Developed Power BI reports using various types of charts, KPIs, and slicers, and configured them.
- Built Synapse Dataflows for standard KPI model building by reading data from shared data sources (internal data teams) and loading it into SQL Pools (MPP); the data is then pulled into Power BI for dashboarding.
- Primarily involved in data migration using SQL, SQL Azure, Azure Storage, ADF, SSIS, and PowerShell.
- Experience in all phases of the Software Development Life Cycle (SDLC) using Waterfall and Agile/Scrum, and of the Software Testing Life Cycle (STLC).
- Retrieved data from the Hadoop cluster by developing a pipeline using Hive (HQL) and SQL to pull data from an Oracle database, and used ETL for data transformation.
- Proficient in SQL and programming languages such as Python for data manipulation and transformation tasks.
- Used Data Factory, Databricks, SQL DB, and SQL Data Warehouse to implement both ETL and ELT architectures in Azure.
- Utilized Python/PySpark in Databricks notebooks when creating ETL pipelines in ADF, and used the combination of ADF, Spark SQL, and Azure Data Lake Analytics to ETL data from various source systems to Azure data storage services.
- Worked with Azure Blob, ADLS Gen-1, and other data storage options; experience using ADF to bulk import data from CSV, XML, and flat files; used Azure DevOps tools to fully automate the CI/CD pipelines.
- Involved in cluster maintenance, adding and removing cluster nodes, and cluster monitoring, and wrote Python scripts to parse CSV files and load the data into the database.
- Composed Python scripts to parse JSON records and load the data into the database, and Python routines to log into websites and retrieve data for selected options.
- Designed Git branching strategies and merging aligned with release frequency by implementing a Git workflow on Bitbucket.
- Used Tableau as a front-end BI tool and MS SQL Server as a back-end database to design and develop dashboards, workbooks, applications, and complex aggregate calculations.
- Developed a Python UDF for handling nested JSON data from the source system and flattening it to line-item level records (a sketch of this pattern follows after this section); these flattened records are further transformed using Spark transformations for daily aggregations and reporting.
- Working knowledge of Python programming, including packages such as NumPy, Matplotlib, and Pandas.
- Used Databricks extensively for accessing, processing, transforming, and analyzing large amounts of data.
- Created Python Databricks notebooks to handle large amounts of data, run transformations, and load the data to different targets.
- Embedded Power BI reports in a SharePoint portal page and managed access to reports and data for individual users using roles.
- Worked with application developers, system engineers, and database administrators on planned and unplanned maintenance activities, knowledge transfer, and training for business logic and application changes.
- Implemented breakpoints, set precedence constraints, and used checkpoints for re-running failed packages.
- Conducted thorough unit testing and integration testing to check data consistency before deploying to stage/prod.
- Worked with ServiceNow to maintain and update the status of all sub-tasks for user stories, along with defect tracking.
Environment: Power BI, Python, DAX, Azure ADF, Azure Storage, SQL, ETL, ELT, Databricks, SSIS, SSRS, SSAS, reports and dashboards, Excel, SDLC, STLC, ServiceNow, Pandas, Matplotlib, Tableau, Spark, Snowflake, Azure SQL DB, PySpark, Azure Synapse, Azure SQL Pools, Git
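A hedged sketch of the nested-JSON flattening described in the Python UDF bullet above, shown here with pandas.json_normalize rather than the original Spark UDF; the record structure is a hypothetical stand-in for the source system's payload.

# Sketch only: flatten nested JSON order records to line-item level rows before
# downstream aggregation. The record layout is a hypothetical placeholder.
import json
import pandas as pd

raw = """
[
  {"order_id": "A1", "customer": {"id": 7, "region": "NA"},
   "items": [{"sku": "X", "qty": 2, "price": 9.5},
             {"sku": "Y", "qty": 1, "price": 20.0}]},
  {"order_id": "A2", "customer": {"id": 9, "region": "EU"},
   "items": [{"sku": "Z", "qty": 3, "price": 5.0}]}
]
"""
records = json.loads(raw)

# One row per line item, carrying order- and customer-level fields along.
line_items = pd.json_normalize(
    records,
    record_path="items",
    meta=["order_id", ["customer", "id"], ["customer", "region"]],
)

# Example aggregation after flattening: revenue per region.
line_items["revenue"] = line_items["qty"] * line_items["price"]
print(line_items.groupby("customer.region")["revenue"].sum())
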

Education
Master of Science in Data Science - Kent State University, Kent, OH, USA - May 2022
Bachelor's in Computer Science and Engineering -