Career Objective
Highly experienced Data Analyst with 4+ years of expertise in data mining, statistical analysis, and data visualization to drive strategic business decisions. Proficient in using tools such as SQL, Python, R, Tableau, and Power BI to analyze large datasets and develop actionable insights.

Professional Summary
- 4+ years of progressive experience in business analysis, reporting and dashboarding for enterprise-wide applications using Power BI and Tableau.
- Experience across the full dashboard development lifecycle, from gathering requirements and designing solutions through developing code and reports to testing, release and deployment.
- Experience loading data from SQL Server and Azure SQL databases into Power BI to generate reports.
- Familiar with Power BI Dataflows, Power Apps, data modeling, Power Automate and Row-Level Security.
- Experience developing SSIS packages to extract data from heterogeneous sources such as flat files, Oracle, SAP and SQL Server.
- Wide experience creating OLAP cubes, identifying dimensions, attributes and hierarchies, and calculating measures and dimension members in SQL Server Analysis Services (SSAS).
- Knowledge of Azure Data Explorer and Kusto Query Language (KQL).
- Strong analytical and problem-solving skills and good knowledge of data warehouse and BI infrastructure.
- Strong knowledge of dimensional Star and Snowflake schemas.
- Strong working knowledge of Microsoft Office Excel.
- Solid working experience with SQL, including MySQL and MS SQL Server; strongly skilled in writing stored procedures, triggers and complex queries containing multiple joins, subqueries and window functions to create reports and perform analysis.
- Created dashboards in Tableau for reporting and data visualization, guiding business decision-making for multiple stakeholders.
- Developed Python scripts covering the full data science project lifecycle, including data acquisition, data cleaning, data exploration and data modeling, using libraries such as Pandas and scikit-learn.
- Hands-on experience performing statistical analysis, causal inference and statistical modeling in R and Python, interpreting and analyzing results of hypothesis tests, A/B tests and multivariate tests, and providing recommendations based on data.
- Experience building and interpreting machine learning models including linear regression, logistic regression, random forest, XGBoost, k-means, KNN and neural networks.
- Experience working with NoSQL databases and big data tools including Hadoop, Hive and Spark.
- Built ETL pipelines to extract, transform and load data into analytical databases, and scheduled and automated pipelines using Apache Airflow.
- Solid experience working with cloud platforms such as AWS and Google Cloud.
- Experience with shell scripting on Linux and with version-control tools such as Git.
- Detail-oriented self-starter with strong communication skills, presenting analysis results to both technical and non-technical audiences and collaborating within cross-functional teams.
- Designed, developed and tracked Key Performance Indicators (KPIs) and created dashboards to monitor them.
- Strong time management skills for managing work plans, tight deadlines and multiple projects.
- Experience translating business requirements into technical requirements and enabling decision-making by retrieving and aggregating data from multiple sources.

Shiva Chaitanya Goud Gadila
Data Analyst
PHONE NUMBER AVAILABLE | EMAIL AVAILABLE

Education
Masters, Kent State University, USA

Technical Skills
Languages: Python, R, SQL, Bash
Packages: NumPy, Pandas, Scikit-learn, TensorFlow, Matplotlib, Seaborn, Plotly, NLTK
Cloud: AWS (EC2, S3, RDS, Redshift, EMR), Google Cloud (BigQuery, Kubernetes)
Databases: MySQL, PostgreSQL, MS SQL Server, MongoDB, Hive, Presto, AWS RDS, AWS Redshift, AWS Redis, BigQuery
Tools: Tableau, Hadoop, Hive, Apache Airflow, Apache Spark, Flask, Apache Kafka, Jupyter Notebook, Excel, Jira, Git, Docker, Kubernetes, Power BI
Operating Systems: Windows 7 & 10, Windows Server 2003
Environment: Power BI, Tableau 7, 8.1, 9.3, 2019.4
ETL Tool: SQL Server Integration Services (SSIS)

Professional Experience

Client: Fifth Third Bancorp, Cincinnati, Ohio, USA    Jan 2024 - Present
Role: Sr. Data Analyst
Description: Fifth Third Bancorp is a diversified financial services company specializing in small business banking, retail banking and investments. Developed data analysis frameworks and visual dashboards to enhance user experience, optimize retention strategies and provide actionable insights for business decision-making.
Responsibilities:
- Provided data analysis focused on improving user experience and optimizing user retention.
- Wrote complex ad-hoc MySQL queries involving correlated subqueries, window functions and common table expressions to track user activity metrics such as retention rate and daily active users (a minimal sketch follows this entry); optimized existing queries to run faster on large datasets.
- Designed and maintained MySQL databases and created pipelines using user-defined functions and stored procedures for daily reporting tasks.
- Developed dashboards in Tableau and presented data to track KPIs and product performance, using data from sources such as MS Excel, AWS S3, AWS RDS, AWS Redshift, JSON and XML.
- Spotted and analyzed trends in user activity data, identified underperforming user segments and reported insights to stakeholders.
- Designed metrics for A/B testing, created dashboards in Tableau to monitor test processes, analyzed and interpreted test results and gave recommendations to stakeholders.
- Performed statistical analysis such as hypothesis testing, causal inference and Bayesian analysis.
- Coordinated between the business, product managers and data engineers to ensure data quality.
- Built ETL pipelines to retrieve data from NoSQL databases and load aggregated data into the analytical platform.
- Managed data storage and processing using big data tools such as Hadoop, HDFS, Hive and Spark.
- Developed Python scripts to automate data validation and data cleaning, such as deduplication and consistency checks, using Pandas and Apache Airflow.
- Analyzed large-scale user log data and generated features for classification models using SparkSQL.
- Implemented scalable machine learning models (Random Forest) using SparkML and Python to predict customer churn.
- Applied advanced classification models such as XGBoost, SVM and neural networks using Python packages such as scikit-learn.
- Defined metrics to estimate the impact of new features and gave recommendations for business decisions based on data analysis.
- Created dynamic dashboards and reports using Tableau, Power BI, Looker and Qlik to communicate insights visually.
- Built data visualizations (charts, graphs, heatmaps) using Python libraries (e.g., Matplotlib, Seaborn) and R packages (ggplot2, Plotly).
- Ensured data privacy, integrity and security, adhering to regulations such as GDPR and HIPAA.
- Implemented and followed data governance standards for managing sensitive or confidential data.
- Participated in data project planning, gathering business requirements and translating them into technology requirements.
Environment: Tableau, KPIs, MS Excel, AWS S3, AWS RDS, AWS Redshift, JSON, XML, ETL pipelines, Python scripts, Pandas, Apache Airflow, SparkSQL, Hadoop, HDFS, Hive, Spark.
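The following is a minimal, illustrative sketch of the kind of retention/DAU query described above, run from Python with pandas and SQLAlchemy. The table and column names (user_events, event_ts, user_id), the connection string and the 7-day window are hypothetical placeholders, not the production schema.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical MySQL connection; replace with real credentials and host.
    engine = create_engine("mysql+pymysql://analyst:password@db-host/analytics")

    # CTEs collapse raw events to one row per user per day, then aggregate to DAU;
    # the window function adds a rolling 7-day average of daily active users.
    DAU_QUERY = """
    WITH daily_activity AS (
        SELECT user_id, DATE(event_ts) AS activity_date
        FROM user_events
        GROUP BY user_id, DATE(event_ts)
    ),
    dau AS (
        SELECT activity_date, COUNT(DISTINCT user_id) AS daily_active_users
        FROM daily_activity
        GROUP BY activity_date
    )
    SELECT activity_date,
           daily_active_users,
           AVG(daily_active_users) OVER (
               ORDER BY activity_date
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS rolling_7d_avg_dau
    FROM dau
    ORDER BY activity_date;
    """

    dau_trend = pd.read_sql(DAU_QUERY, engine)
    print(dau_trend.tail())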
Client: Bon Secours Mercy Health, Cincinnati, Ohio, USA    Apr 2023 - Dec 2023
Role: Sr. Data Analyst
Description: Bon Secours Mercy Health operates a health care system intended to transform health care delivery and services. Led the development of data models and dashboards to transform business requirements into actionable insights, enhancing decision-making and performance tracking across various teams.
Responsibilities:
- Understood business requirements and developed data models accordingly, taking available resources into account.
- Created DAX queries to generate computed columns in Power BI.
- Created row-level security in Power BI and integrated it with the Power BI service portal.
- Created reports using dynamic column selection, conditional formatting, sorting and grouping.
- Extensively used SSIS transformations such as Lookup, Derived Column, Data Conversion, Aggregate, Conditional Split, SQL Task, Script Task and Send Mail Task.
- Worked with the full range of transformations available in the Power BI Query Editor.
- Published Power BI reports to the required organizations and made Power BI dashboards available in web clients, mobile apps and Power Apps.
- Used Power BI and Power Pivot to develop data analysis prototypes, and used Power View and Power Map to visualize reports.
- Used SQL and other query languages to extract data from relational databases (e.g., MySQL, PostgreSQL, SQL Server).
- Retrieved and aggregated data from various sources (APIs, cloud databases such as AWS, Google BigQuery and Snowflake) for analysis.
- Implemented data pipelines using Python and ETL tools (e.g., Alteryx, Apache NiFi).
- Translated data into actionable insights that helped business teams make informed decisions.
- Monitored and reported key performance indicators (KPIs) using BI tools to track business performance and forecast trends.
- Applied predictive modeling techniques such as linear regression, decision trees and clustering algorithms using Python (scikit-learn) and R for forecasting and trend analysis (a minimal sketch follows this entry).
- Worked with machine learning models for predictive analytics, customer segmentation and optimization tasks.
- Managed data in cloud data warehouses such as AWS Redshift, Google BigQuery and Snowflake for efficient data storage and processing.
- Used SQL for querying and managing data within data lakes and data warehouses.
- Used Dataflows and Power Automate features to streamline work on downstream dashboards.
- Created new stored procedures and optimized existing queries and stored procedures.
Environment: DAX queries, Power BI, AWS Redshift, Google BigQuery, Kusto Query Language (KQL), Dataflows, dashboards, SQL, Lookup, SSIS.
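A minimal sketch of the scikit-learn predictive modeling mentioned above: a shallow decision-tree classifier trained on a tabular extract. The file name, feature columns and target column are hypothetical placeholders for illustration, not the actual health-system data.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score

    # Hypothetical extract; in practice the data came from SQL / cloud warehouses.
    df = pd.read_csv("utilization_extract.csv")
    features = ["visits_last_90d", "avg_wait_minutes", "age", "num_open_claims"]
    X, y = df[features], df["missed_appointment"]  # hypothetical binary target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )

    # A shallow tree keeps the model interpretable for business stakeholders.
    model = DecisionTreeClassifier(max_depth=4, random_state=42)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Hold-out ROC AUC: {auc:.3f}")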
Client: Citi Bank, Chennai, India    Nov 2021 - Dec 2022
Role: Data Analyst
Description: Citi Bank is a global financial institution providing banking and investment services to individuals and businesses. Identified key performance indicators (KPIs) and created data visualizations to track business performance.
Responsibilities:
- Created reporting dashboards and performed data mining and analysis to understand customer purchase behavior.
- Performed data cleaning, transformation and normalization using Python (pandas, NumPy) and R to prepare datasets for analysis.
- Handled missing or inconsistent data by applying appropriate techniques (e.g., interpolation, data imputation).
- Created real-time dashboards in Tableau to visualize and monitor key metrics and A/B test processes using both external and internal data.
- Automated data extraction, transformation and loading (ETL) processes using Python, SQL and tools such as Apache Airflow.
- Collaborated with the marketing team to analyze campaign data, including segmentation and cohort analysis.
- Designed MySQL table schemas and implemented stored procedures to extract and store customer purchase and session data.
- Streamlined reporting processes through automation scripts, reducing manual work and improving efficiency.
- Queried data from MySQL databases and validated and detected inconsistent data using the Python packages Pandas and NumPy.
- Actively involved in designing A/B tests, defining metrics to validate new user interface features, calculating sample sizes and checking statistical assumptions for tests (a minimal sketch follows this entry).
- Performed statistical analysis such as hypothesis testing, regression analysis, confidence interval and p-value calculation using R to find insights for increasing click-through rate and sales, and built web applications for ad-hoc interactive dashboards.
- Performed exploratory data analysis to identify trends using Tableau and Python (Matplotlib, Seaborn, Plotly Dash).
- Wrote scripts to load data into Hadoop HDFS from various sources including AWS S3, AWS RDS, web APIs and the NoSQL database MongoDB.
- Deployed the big data tools Spark and Hive to analyze large datasets of up to 2 TB stored in Hadoop HDFS, including filtering and aggregation with SparkSQL on Spark DataFrames.
- Developed Python scripts for predictive-model data preprocessing, including missing-value imputation, label encoding and feature engineering.
- Implemented machine learning models, including decision trees and logistic regression, in Python to predict revenue from returning customers and help the marketing team choose appropriate promotion strategies.
- Communicated key findings to multiple stakeholders to facilitate data-driven decisions using MS PowerPoint, Tableau and Jupyter Notebook.
Environment: Tableau, MS PowerPoint, Jupyter Notebook, Python, Hadoop, HDFS, AWS S3, AWS RDS, Web API, NoSQL, MongoDB.
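A minimal sketch of the A/B-test workflow described above, shown in Python (statsmodels) for consistency with the other examples rather than in R: a required-sample-size calculation followed by a two-proportion z-test. The baseline rate, expected lift and observed counts are hypothetical numbers used purely for illustration.

    from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest
    from statsmodels.stats.power import NormalIndPower

    # Sample size needed to detect a lift from a 4.0% to a 4.6% click-through rate
    # at alpha = 0.05 with 80% power (hypothetical baseline and lift).
    effect = proportion_effectsize(0.046, 0.040)
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"Required sample size per variant: {n_per_group:.0f}")

    # Two-proportion z-test on (hypothetical) observed results.
    clicks = [1150, 1020]          # conversions in treatment, control
    impressions = [25000, 25000]   # users exposed in each variant
    z_stat, p_value = proportions_ztest(clicks, impressions)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")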
Client: CaratLane, Chennai, India    Mar 2020 - Oct 2021
Role: Programmer Analyst
Description: CaratLane is an Indian physical and online jewellery retailer. Regularly used data analysis tools such as SQL, Python, R and SAS for deep data analysis, and utilized data visualization tools (e.g., Tableau, Power BI) to create user-friendly dashboards.
Responsibilities:
- Imported data from SQL Server and Azure SQL databases into Power BI to generate reports.
- Designed and worked on extraction, transformation and loading (ETL) processes, pulling large volumes of data from various data sources using SSIS (a minimal Python analogue follows this entry).
- Worked with Azure Data Explorer and used Kusto Query Language (KQL) for querying.
- Created DAX queries to generate computed columns in Power BI.
- Created row-level security in Power BI and integrated it with the Power BI service portal.
- Developed Quote Win Ratio, profit analysis, and sales/service backlog reports and dashboards by product and customer.
- Evaluated, designed, developed and deployed additional technologies and automation for managed services on Azure.
- Designed SSIS packages to transfer data from flat files, Access databases and Excel documents to the staging area in SQL Server 2012.
- Created different chart objects such as Bar Charts, Line Charts, Text Tables, Tree Maps and Scatter Plots.
- Coordinated with source system owners, monitored day-to-day ETL progress, and designed and maintained the data warehouse target schema (Star Schema).
- Developed and maintained various cubes using complex MDX queries in SSAS.
- Used Sets, Bins, Parameters, Aggregate Functions and Filters.
- Used Tableau Desktop to analyze and obtain insights into large datasets.
- Prepared dashboards using calculations and parameters in Tableau.
- Wrote stored procedures and used them in packages.
- Created reports using time intelligence calculations and functions.
Environment: SQL Server DB, Azure SQL DB, Power BI, ETL, SSIS, DAX queries, Bar Chart, Line Chart, Text Tables, Tree Maps, Scatter Plot, Tableau Desktop.
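The flat-file-to-staging loads above were built in SSIS; as a language-consistent illustration only, the sketch below shows the same extract-transform-load pattern in Python with pandas and SQLAlchemy. The file name, connection string, columns and staging table are hypothetical placeholders, not the actual SSIS packages.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical SQL Server connection via the pyodbc driver; replace with a real DSN.
    engine = create_engine(
        "mssql+pyodbc://analyst:password@sql-server/staging?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # Extract: read a flat file, parsing dates up front (hypothetical file and columns).
    orders = pd.read_csv("daily_orders.csv", parse_dates=["order_date"])

    # Transform: basic cleanup mirroring typical staging-area rules.
    orders = orders.drop_duplicates(subset=["order_id"])
    orders["order_amount"] = orders["order_amount"].fillna(0)

    # Load: append the cleaned rows into the staging table for downstream reporting.
    orders.to_sql("stg_daily_orders", engine, if_exists="append", index=False)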