Candidate's Name
EMAIL AVAILABLE | PHONE NUMBER AVAILABLE

SUMMARY:
- Data Analyst with 6 years of experience across the Banking, Insurance, and Healthcare domains, with a strong record of transforming complex data into strategic insights that drive business growth and efficiency.
- Solid understanding of business requirements gathering, data warehousing, data modeling, evaluating data sources, translating requirements into specifications, and application design.
- Expertise in gathering business requirements, performing in-depth analysis, and creating insightful reports and dashboards using tools such as Power BI, Tableau, and Excel.
- Experienced in data mapping, data warehousing (OLTP, OLAP), data mining, data governance, and data management services with quality assurance.
- Strong experience creating reports such as drill-through, drill-down, sub-reports, linked reports, Tablix, matrix, and ad-hoc reports to user requirements using SQL Server Reporting Services (SSRS).
- Well versed in Guidewire PolicyCenter, BillingCenter, ClaimCenter, and ContactCenter data analysis, mapping, conversion, validation, and reconciliation processes.
- Expertise in generating reports using SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), Crystal Reports, and Excel spreadsheets.
- Performed detailed data analysis and delivered insights by querying and manipulating large datasets using SQL, Python, and R, driving data-driven decision-making for cross-functional teams.
- Designed and implemented scalable data pipelines using AWS services such as S3, Redshift, and Lambda for efficient data storage and retrieval.
- Proficient in creating and maintaining standard document templates in Guidewire using XML.
- Experience creating Power BI dashboards using Power View, Power Query, Power Pivot, and Power Maps.
- Proficient in preparing ETL mappings (Source-Stage, Stage-Integration, ISD), requirements gathering, data reporting, data visualization, and advanced business dashboards, and presenting results to clients.
- Extensive experience in Object Oriented Analysis and Design (OOAD) techniques with UML, using flow charts, use cases, class diagrams, sequence diagrams, activity diagrams, and state transition diagrams.
- Well versed in designing Star and Snowflake database schemas for relational and dimensional data modeling.
- Expertise in using DAX functions to create measures and calculated columns, define relationships, and perform time-series analysis in Power BI.
- Collaborated with IT teams to migrate legacy systems to cloud platforms (AWS, Azure), improving data accessibility and scalability for analytics operations.
- Developed data processing scripts using Python (Pandas, NumPy) and SQL, automating data workflows and enhancing operational efficiency within the analytics team (an illustrative sketch follows this summary).
- Demonstrated expert-level proficiency in Azure Data Factory, Azure SQL, and Azure Data Lake, driving successful data integration.
- Expertise in developing and designing ETL methodology for data transformation and processing in a corporate-wide ETL solution using Informatica PowerCenter.
- Strong working experience in the analysis, design, development, implementation, and testing of data warehouses, including data conversion, data extraction, data transformation, and data loading (ETL).
- Strong SQL query skills and Spark experience, including designing and verifying databases with Entity-Relationship Diagrams (ERD) and data profiling using queries, dashboards, and macros.
- Extensively worked on data extraction from SAP ECC and legacy systems into a target SAP BW system using SAP BODS.
- Wrote PL/SQL statements and stored procedures in Oracle for extracting and writing data.
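To illustrate the Python data processing work referenced above, a minimal, hypothetical sketch of a Pandas/NumPy cleaning script follows. The file name, column names, and cleaning rules are invented for illustration and are not taken from any client engagement.

# Hypothetical sketch: automate routine cleaning of a daily CSV extract.
import numpy as np
import pandas as pd

def clean_daily_extract(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["txn_date"])           # invented input layout
    df = df.drop_duplicates(subset=["txn_id"])                 # de-duplicate on a key column
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df["amount"] = df["amount"].fillna(df["amount"].median())  # impute missing amounts
    df["channel"] = df["channel"].str.strip().str.upper()      # normalize a categorical field
    df["log_amount"] = np.log1p(df["amount"])                  # derived feature for reporting
    return df

if __name__ == "__main__":
    cleaned = clean_daily_extract("daily_transactions.csv")
    cleaned.to_csv("daily_transactions_clean.csv", index=False)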
Technical Skills:
Reporting Tools: Power BI, Tableau, QlikView, Crystal Reports, SQL Server Reporting Services (SSRS)
ETL Tools: Informatica, SSMS, Composite Studio, Talend
Project Management Tools: MS Project, JIRA, Scrum, Kanban
Testing Tools: HP ALM, Informatica, Splunk, QlikView, Postman, Python, Apache JMeter, SQL, Excel
Operating Systems: MS Windows NT/98/95/2000, UNIX, Linux
Languages: Python, SQL, UML, JavaScript, Gherkin, R, DAX
Databases: Oracle 8i/9i, Microsoft SQL Server, PostgreSQL, RDBMS
Data Analysis: Python, SQL, Pandas, NumPy, Cognos, Apache Spark, Financial Analytics
Data Visualization: Tableau, Power BI
Database Management: SQL Server, MySQL, MongoDB, IBM DB2, Snowflake
Tools: Excel, Jupyter Notebook, RapidMiner, Alteryx
Big Data: Hadoop, Spark, Hive, Kafka, Azure Data Factory
Cloud: AWS (S3, Glue, Redshift), MS Azure, GCP, Apache Spark
Professional Experience:

Client: USAA, San Antonio, TX                                Feb 2022 - Till Date
Role: Data Analyst
Description: The project aimed to upgrade the existing website to an e-service platform providing enhanced features for policyholders and new customers. Responsibilities spanned from data governance and quality to developing machine learning models and automating data processes.
- Developed and maintained interactive dashboards and reports in Power BI, providing stakeholders with real-time insights into key performance metrics, leading to informed decision-making and a 25% reduction in reporting time.
- Led knowledge transfer sessions and expanded adoption of Data Governance and Data Quality practices across the organization, enhancing data consistency and reliability.
- Wrote complex SQL queries to extract, transform, and load (ETL) data from Oracle and other relational databases, ensuring data availability for business processes.
- Utilized Python and libraries such as Pandas and NumPy to automate routine data cleaning, transformation, and reporting tasks, reducing manual effort by 40%.
- Integrated Power BI with Azure and other cloud platforms, enabling seamless data analysis and reporting across multiple environments.
- Optimized DAX queries and Power BI data models to improve report performance, reducing loading times by 25%.
- Designed and developed ETL processes to extract data from Guidewire core systems (PolicyCenter, BillingCenter) and load it into an Enterprise Data Warehouse (EDW).
- Created and published interactive dashboards and reports using Power BI and SSRS, leveraging real-time and relational data sources to visualize key performance indicators (KPIs) and business metrics.
- Developed predictive models using scikit-learn and Python, improving accuracy by 15% in forecasting customer behavior.
- Designed, developed, and deployed ad-hoc and real-time reports using SSRS and Power BI against relational and multidimensional data in SSMS.
- Extensively worked on data extraction from SAP ECC and legacy systems into a target SAP BW system using SAP BODS.
- Worked on the legacy policy conversion process, comparing data, validations, rating, and status between legacy systems and Guidewire PolicyCenter.
- Utilized SQL to extract, manipulate, and analyze large datasets from SQL Server, ensuring data accuracy and integrity for business intelligence reporting and strategic analysis.
- Developed and enhanced data marts using Star Schema design methodologies for comprehensive reporting across business units.
- Automated business intelligence (BI) reporting processes using Python and Matplotlib, significantly improving the efficiency of report generation.
- Collaborated in Agile environments, regularly attending stand-up meetings, reviewing progress with stakeholders, and breaking the project down into manageable sprints.
- Conducted integration, system, regression, and performance testing across multiple applications (e.g., Guidewire, Payment Manager), and performed user acceptance testing (UAT) to validate business processes.
- Developed data integration pipelines using SSIS for moving data into the Data Hub and automated job scheduling with Autosys.
- Translated raw data into actionable insights by creating advanced charts and drill-down reports using Power BI and Google Analytics.
- Designed operational reports with SSRS for business teams, incorporating SQL Server cubes in SSAS to enable multidimensional data analysis.
- Implemented A/B testing methodologies in collaboration with marketing and product teams, using tools such as Google Analytics, Adobe Analytics, Optimizely, and Excel to evaluate the impact of changes on user engagement, conversions, and overall business performance.
- Performed statistical analysis (regression analysis, hypothesis testing, t-tests, chi-square tests) using R and Python to identify key business trends, validate assumptions, and guide business strategy.
- Contributed to the maintenance of a shared Git repository for data analysis scripts, ensuring team members had access to the latest data models, dashboards, and reports.
- Developed time-series forecasting models using ARIMA and Prophet in Python to predict customer behavior trends, sales, and demand, improving forecast accuracy by 20% (an illustrative sketch follows this section).
- Processed and analyzed large datasets using Apache Hadoop and Hive for distributed data storage and querying, handling billions of records to derive meaningful insights for business teams.
- Implemented predictive analytics models using Python and R to identify potential business risks and opportunities, utilizing decision trees and logistic regression to provide actionable business insights.
- Utilized advanced SQL techniques such as window functions, common table expressions (CTEs), and recursive queries to analyze complex datasets, optimizing queries and reducing processing time by 30%.
- Developed and applied prescriptive analytics models by combining predictive analytics with optimization techniques, producing actionable recommendations for marketing campaigns and operational efficiency.
- Effectively communicated complex data findings to non-technical stakeholders through clear and concise data storytelling, ensuring alignment between data insights and business strategy during executive presentations.
- Worked on several small teams performing data analysis and designing a data hub for migrating data from MS Excel to MS SQL Server.
- Worked with Azure platforms to implement scalable data solutions, including Apache Spark for processing large datasets, driving efficiency and scalability in analytics processes.
- Improved query performance by 35% using Hadoop and Hive for distributed processing of large datasets, allowing faster decision-making across multiple departments.
- Automated repetitive extract, transform, and load (ETL) tasks using Python (Pandas, NumPy, SQLAlchemy) and SQL, reducing manual workload and increasing efficiency by 50%.
- Enabled business stakeholders to access and interpret data independently by designing self-service BI solutions using Tableau, Power BI, and Looker, empowering non-technical teams to create custom reports and dashboards.
- Worked with Azure Databricks and Azure Data Factory, optimizing data workflows and processing large datasets for analytical purposes.
- Collaborated with the data engineering team to improve data pipelines and workflows, ensuring seamless integration of data from disparate systems (e.g., Salesforce, CRM, Google Analytics) into the data warehouse using ETL tools such as Talend and Informatica.
- Ensured compliance with data security and privacy regulations (e.g., GDPR, CCPA) by implementing robust data governance policies, data masking, and encryption techniques for sensitive customer data.
- Migrated traditional on-premises reporting solutions to cloud-based platforms such as AWS QuickSight and Google Data Studio, improving the scalability and availability of business intelligence tools for a global audience.
- Produced operational reports in SSRS (drill-down, drill-through, dashboard, and matrix reports); responsible for ETL through SSIS and loading data into the database from different input sources.
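To illustrate the ARIMA forecasting work referenced above, here is a minimal, hypothetical sketch using statsmodels. The series is synthetic and the (p, d, q) order is a placeholder that would normally be tuned (e.g., via AIC), not the actual model from this project.

# Hypothetical sketch: monthly demand forecasting with ARIMA (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a real monthly demand series.
rng = np.random.default_rng(0)
idx = pd.date_range("2019-01-01", periods=36, freq="MS")
series = pd.Series(100 + 2.0 * np.arange(36) + rng.normal(0, 5, 36), index=idx)

model = ARIMA(series, order=(1, 1, 1))  # placeholder order; tune via AIC/BIC in practice
fitted = model.fit()
forecast = fitted.forecast(steps=6)     # forecast six months ahead
print(forecast)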
Client: Infosys, Hyderabad, India                            Mar 2018 - Aug 2021
Role: Data Analyst
Responsibilities:
- Assisted project managers with developing project plans and milestone tracking using MS Project; participated in daily Agile stand-up meetings and shared updates with stakeholders.
- Designed and implemented cloud-based analytics solutions using AWS QuickSight and Google BigQuery, enabling scalable reporting and real-time insights for business users and improving decision-making across global teams.
- Developed Tableau reports and dashboards, utilizing data blending and complex calculations to create interactive, actionable visualizations; optimized KPI scorecards for business profitability improvements.
- Used SQL Server and Oracle SQL Developer for querying and managing large-scale datasets in the data warehouse; engaged in dimensional modeling (Star Schema) for building logical data models.
- Preprocessed and cleaned large datasets using Python, ensuring high-quality input data for machine learning models.
- Performed detailed data sanity checks using RStudio and Pandas.
- Worked on migrating on-premises databases to AWS Redshift, RDS, and S3, and utilized Spark for large-scale data processing.
- Collaborated closely with data engineering teams to design and optimize data pipelines, improving data ingestion and ETL processes for real-time analytics using Apache Airflow and AWS Lambda.
- Created SSIS packages to automate data extraction and transformation from various sources, including SAP and legacy systems, into the enterprise data warehouse.
- Selected and implemented machine learning models, such as decision trees, random forests, and neural networks, based on business problems and data characteristics.
- Supported the Data Governance team by documenting requirements and assisting in transforming raw data into regulatory-compliant data models for the data hub.
- Developed SSRS reports and Tableau dashboards to support real-time data analysis for stock market trading, integrating advanced DAX calculations.
- Automated ETL job scheduling with Autosys, significantly improving reporting efficiency by reducing data processing time by 30%.
- Worked closely with cross-functional teams, including data engineers, QA testers, and business stakeholders, to ensure alignment on project goals; facilitated A/B testing using Google Analytics to assess website traffic and improve user experience.
- Enabled self-service analytics by designing BI solutions with Tableau Prep and Looker, allowing non-technical stakeholders to create custom reports and dashboards and make data-driven decisions independently.
- Collaborated with the Data Hub team to define detailed requirements for retrieving and transforming data, ensuring seamless integration between data sources such as Salesforce CRM and risk management platforms.
- Supported the development of detailed test plans, test scenarios, and test cases to ensure the accuracy of data in ETL pipelines.
- Documented machine learning workflows and processes for knowledge sharing and reproducibility.
- Analyzed the statistical significance of A/B test results using Python (SciPy, statsmodels) and Excel to ensure data-driven decisions when optimizing website content and marketing campaigns (an illustrative sketch follows this section).
- Conducted hypothesis testing and developed statistical models to forecast key performance indicators (KPIs), improving decision-making for marketing and operations teams.
- Designed dimensional models (Star Schema, Snowflake Schema) and created OLAP cubes using SQL Server Analysis Services (SSAS) to support multidimensional data analysis and reporting.
- Utilized Git and GitHub for version control and collaborative development, ensuring proper versioning and management of data analysis scripts and reporting templates.
- Implemented clustering algorithms (K-Means, DBSCAN) for customer segmentation and used principal component analysis (PCA) for dimensionality reduction, identifying key customer groups for targeted marketing strategies.
- Leveraged Apache Spark to run distributed data processing tasks for faster analysis of large datasets, significantly improving performance for data transformations and aggregations.
- Worked on prescriptive analytics by combining predictive models with optimization techniques to suggest the best courses of action for marketing and operations decisions.
- Created automated pipelines for processing and visualizing data, leveraging AWS Lambda and Apache Airflow for efficient job scheduling and execution.
- Trained business users to use Power Query and Tableau Prep for basic data transformations, allowing them to explore and manipulate data on their own.
- Optimized data warehouse performance by implementing indexing and partitioning strategies in SQL Server and Redshift, leading to a 30% reduction in query execution times.
- Participated in data audit processes, ensuring the company's data handling practices adhered to industry regulations and best practices for data privacy and integrity.
- Used Google BigQuery to manage large datasets in the cloud, optimizing queries and implementing data partitioning strategies to reduce cost and improve query efficiency.
- Leveraged window functions to perform advanced aggregations (e.g., running totals, moving averages) across large datasets for trend analysis, improving report accuracy for business stakeholders.
- Contributed to the CI/CD pipeline for data models and reports using Git/GitHub for version control, ensuring smooth integration between development and production environments in the cloud.
- Migrated large datasets to AWS Redshift and optimized queries using Redshift Spectrum, achieving a 40% reduction in query times for business-critical reports.
- Automated data extraction, transformation, and visualization tasks using Python, Pandas, and NumPy, reducing manual data processing effort by 50% and ensuring timely delivery of critical business reports.
- Collaborated with the compliance team to develop and enforce data governance policies, ensuring data accuracy, security, and accessibility across multiple platforms and systems.
- Utilized SSAS and QlikView/Qlik Sense to design and optimize BI solutions, driving data-driven insights that supported critical business initiatives.
- Worked with relational database management systems (RDBMS), wrote SQL queries, performed metadata analysis, and documented data dictionaries and data mapping artifacts.
- Used linear programming and optimization algorithms to solve complex business problems, such as supply chain optimization and resource allocation, resulting in a 15% cost reduction.
- Facilitated cross-functional communication among business teams, data scientists, and engineers, translating technical requirements into business solutions and improving project delivery timelines by 20%.
- Designed efficient data partitioning strategies in Hive to minimize data retrieval time and optimize storage costs, handling data volumes in excess of 1 billion records.
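As an illustration of the A/B test significance analysis mentioned above, here is a minimal, hypothetical sketch using the two-proportion z-test from statsmodels. The conversion counts and sample sizes are invented and do not come from any actual campaign.

# Hypothetical sketch: significance check for an A/B test on conversion rates.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 487]    # invented conversion counts for variants A and B
visitors = [10000, 10000]   # invented sample sizes per variant

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in conversion rates is significant at the 5% level.")
else:
    print("No statistically significant difference detected.")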
Education:
Master's in Computer Science, Sacred Heart University, 2022
Bachelor's in Computer and Information Science, Osmania University, 2018