SRAVANI V
Email Id: - EMAIL AVAILABLE
Mobile: - PHONE NUMBER AVAILABLE
PROFESSIONAL SUMMARY:
- 6+ years of experience in Data Reporting and Analytics across various domains, combined with data migration and project management, creating solutions for business problems through data-driven decisions.
- Experience in the IT industry with core competency in Data Warehousing (DW), Extract-Transform-Load (ETL) development with Informatica, and Business Intelligence.
- Ability to write complex SQL queries; good knowledge of UNIX commands.
- Applied generative AI for anomaly detection, identifying outliers and irregular data patterns.
- Utilized generative AI models for synthetic data generation to enhance predictive modeling.
- Knowledge of performance tuning in Informatica and SQL; capable of XML import and export of Informatica workflows.
- Proficient in data analysis; good knowledge of Terraform.
- Proficient in utilizing Informatica Intelligent Cloud Services (IICS) for seamless data integration, transformation, and management.
- Leveraged generative AI in NLP projects to extract insights from unstructured text data.
- Experienced data analyst with a passion for using mathematics and statistics to develop key performance indicators for production automation in the health, insurance, and finance industries.
- Developed and maintained ETL (Extract, Transform, Load) processes using Attunity Replicate for real-time data integration, ensuring efficient and accurate data movement between heterogeneous systems.
- Experience with data cleaning, data manipulation, database testing, and developing controls using Python programming.
- Performed data visualization, designed dashboards with Tableau and Power BI, and generated complex reports including charts, summaries, and graphs.
- Developed a process for the batch ingestion of CSV files from different sources using Sqoop and Databricks, and generated views on the data sources using Python (a sketch follows the technical summary).
- Experienced in JIRA, with the ability to work in both Kanban and sprint models.
- Worked with multiple project teams of technical professionals through all phases of the SDLC using technologies including Oracle and data warehousing.
- Experience utilizing Tableau visual analytics and generating reports with Tableau visualizations such as bars, lines, pies, scatter plots, heat maps, and bubbles, according to end-user requirements.
- Dedicated and highly ambitious to achieve personal as well as organizational goals; creative, self-motivated, and punctual, with good communication and interpersonal skills.

TECHNICAL SUMMARY:
Database Technologies: MySQL, Microsoft SQL, Oracle, Snowflake, AWS S3, Azure PySpark, data warehouse concepts, Gen AI
ETL Tools: Informatica PowerCenter, Informatica IICS, Pentaho Data Integration, Data Designing
Defect/Change Tracking: JIRA, GitHub, ServiceNow
Programming: Python, R, SQL, Oracle, UNIX, SSMS, CMAP
Analytics Software: Tableau, Power BI, Terraform, SAS, MS Excel
Text Mining: Text Analytics, NLP
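The CSV batch-ingestion and view-generation process mentioned in the summary could look roughly like the PySpark sketch below. It is a minimal illustration under assumed details: the landing path and the column names (order_date, amount) are placeholders, not specifics from the projects described in this resume.

```python
# Minimal sketch of a CSV batch-ingestion step with a queryable view,
# assuming a Spark environment such as Databricks; paths and columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv_batch_ingestion").getOrCreate()

# Read a batch of CSV drops from a landing location (hypothetical path).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/landing/orders/*.csv")
)

# Expose the batch as a temporary view so downstream SQL can query it.
orders.createOrReplaceTempView("orders_batch")

# Example downstream query against the generated view (columns are assumptions).
daily_totals = spark.sql(
    "SELECT order_date, SUM(amount) AS total_amount "
    "FROM orders_batch GROUP BY order_date"
)
daily_totals.show()
```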
SIMMONS BANK, Chicago, IL  Jan 2023 - Present
Sr. Data Analyst
Detail-oriented banking professional with [X] years of experience in retail and commercial banking. Skilled in customer relationship management, loan processing, and financial advisory. Proven ability to identify client needs and provide tailored financial solutions. Strong knowledge of banking regulations and compliance. Excellent communicator committed to delivering exceptional service. Seeking to contribute to Simmons Bank's success through expertise and dedication.
Responsibilities:
- Collaborated with cross-functional teams to translate business requirements into real-time materials purchasing solutions; supported material purchasing activities for multiple product portfolios.
- Developed and implemented generative AI models to enhance data analysis processes, improving efficiency and accuracy in data interpretation.
- Designed and built ETL/ELT tracks, ensuring efficient data extraction, transformation, and loading.
- Worked with large datasets and mapped data from diverse CMAP source locations to SSMS.
- Optimized ETL workflows to enhance performance and minimize data latency, resulting in streamlined data pipelines and improved operational efficiency.
- Leveraged IICS functionality such as data integration, data quality, and data governance to optimize data pipelines.
- Integrated generative AI tools to automate data cleaning and preprocessing, significantly reducing manual effort and ensuring high-quality datasets.
- Used Terraform's declarative approach to maintain and replicate pipelines.
- Applied experience with Analytics & BI tools and services, including SQL, Redshift, and Tableau, to data analysis and reporting tasks.
- Built out the data infrastructure from the ground up using Power BI and SQL to provide real-time insight into product and business KPIs.
- Conducted experiments and evaluations of generative models to optimize their performance and applicability in various data analysis scenarios.
- Designed and implemented data lakes to consolidate large volumes of structured and unstructured data from diverse sources.
- Developed ETL (Extract, Transform, Load) pipelines to streamline data integration and processing within the data lake.
- Imported data into SQL Server Management Studio (SSMS) for further analysis and manipulation.
- Used Terraform to set up an Apache Airflow environment to orchestrate ETL processes (a DAG sketch follows this section).
- Provided engineering and technical support for testing laboratories' data analytics activities.
- Performed predictive modeling and forecasting using Python.
- Participated in staff meetings with business partners to gain a deep understanding of their objectives and challenges.
- Developed reports and dashboards using data visualization tools (Tableau) that clearly convey insights and enable effective decision making.
- Gathered requirements from stakeholders to design effective data analysis and reporting solutions.
- Generated reports and obtained data to develop analytics on key performance and operational metrics.
Environment: Python, Attunity, Tableau, SQL, MS Excel, Redshift, ETL/ELT, Analytics & BI Tools
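The Airflow-based ETL orchestration referenced above could be expressed roughly as the DAG below. This is a minimal sketch only: the DAG id, schedule, and task bodies are assumed placeholders using the Airflow 2.x API, not details from the Simmons Bank project.

```python
# Minimal sketch of an Airflow DAG orchestrating an extract -> transform -> load
# sequence; task logic, names, and schedule are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder extract step: pull raw data from the source system.
    print("extracting source data")


def transform():
    # Placeholder transform step: apply business rules to the extracted data.
    print("transforming data")


def load():
    # Placeholder load step: write results to the reporting database.
    print("loading data into the warehouse")


with DAG(
    dag_id="etl_daily",                 # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # assumes Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in order: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```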
AmTrust Insurance  Jul 2019 - Jul 2022
Sr. Data Analyst
AmTrust Insurance Company, established in 1998, is a leading provider of property and casualty insurance. Specializing in small-business insurance solutions, the company offers products such as workers' compensation, general liability, and commercial auto insurance. Known for financial stability and innovation, AmTrust emphasizes superior customer service and customized solutions. With a global presence, the company is committed to long-term client relationships and regulatory compliance, and continuously invests in technology and employee development to drive growth and operational efficiency.
Responsibilities:
- Analyzed and recommended various business strategies to stakeholders and worked with cross-functional teams to deliver revenue-generating marketing programs for 15M+ card customers.
- Created and maintained ETL jobs in AWS Databricks using PySpark and Spark SQL.
- Designed and implemented data models within Snowflake, including defining tables, relationships, and schemas.
- Transformed complex datasets into valuable insights through deep data analysis and data transformation.
- Utilized Excel functions and Power Query to handle and manipulate large datasets effectively.
- Supported data warehouse projects with logical and physical data modeling in an Oracle environment and assured accurate delivery of DDL to the development teams.
- Documented data requirements and system changes in detailed functional specifications; took ownership of source-to-target mapping and tracked and maintained changes.
- Performed Extraction, Transformation and Loading (ETL) using Informatica PowerCenter.
- Designed and implemented a data profiling and data quality improvement solution to analyze, match, cleanse, and consolidate data before loading it into the data warehouse.
- Extensively used ERwin to design logical and physical data models and relational databases, and to perform forward/reverse engineering.
- Responsible for designing new fact and dimension tables for existing models.
- Implemented stored procedures, functions, views, triggers, and packages in PL/SQL.
- Created relational and NoSQL data models to fit the diverse needs of data consumers.
- Scheduled ETL jobs in Databricks to monitor file drops in AWS S3 storage and run Python scripts that load the processed data back to S3 for other teams to consume.
- Created ETL pipelines to consume data from AWS S3 storage, perform transformations in Databricks using Spark scripts, and load data into the Snowflake data warehouse for downstream systems (see the sketch after this section).
- Built and maintained an automated Spark script in Databricks to deliver a bi-weekly Excel file for the internal letters team to send letters to segmented customers.
Environment: SQL, Python, AWS S3, ETL (Informatica), Databricks, PySpark, Tableau, MS Excel, Snowflake
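The S3-to-Databricks-to-Snowflake pipeline described above might look roughly like the PySpark sketch below. Bucket paths, column names (claim_id, claim_date), the target table, and the Snowflake connection options are illustrative assumptions; the Snowflake Spark connector is assumed to be installed on the cluster.

```python
# Minimal sketch of the S3 -> Databricks (Spark) -> Snowflake flow described above;
# paths, columns, and connection values are placeholders, not project details.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3_to_snowflake").getOrCreate()

# Read raw files dropped into S3 (hypothetical bucket/prefix).
raw = spark.read.option("header", "true").csv("s3://example-bucket/claims/raw/")

# Example transformation: keep valid rows and standardise a date column.
clean = (
    raw.filter(F.col("claim_id").isNotNull())
       .withColumn("claim_date", F.to_date(F.col("claim_date"), "yyyy-MM-dd"))
)

# Write the result to Snowflake via the Snowflake Spark connector
# (all connection options below are placeholders).
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

(clean.write
      .format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "CLAIMS_CLEAN")
      .mode("overwrite")
      .save())
```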
Thrive Global, New York, NY  July 2017 - February 2020
Data Analyst
Expertise in working with large datasets, electronic health records (EHRs), and data analysis tools such as SQL, Python, R, and Tableau. Strong background in identifying trends, patterns, and insights that inform strategic initiatives and policy development. Excellent communication and collaboration skills, with a proven ability to work effectively with cross-functional teams, including clinicians, administrators, and IT professionals. Committed to maintaining the highest standards of data integrity, privacy, and security.
Responsibilities:
- Designed complex mappings and ETL processes to extract data from various sources and load it into target data warehouses using kettle transformations and jobs in Pentaho.
- Responsible for the ongoing operation of data warehouses: extracting data from databases, transforming it according to reporting business needs, and dynamically loading it into targeted marts and data warehouses.
- Identified business, functional, and technical requirements through meetings, interviews, and JAD sessions.
- Defined the ETL mapping specification and designed the ETL process to source data and load it into DWH tables.
- Designed the logical and physical schema for data marts and integrated legacy system data into the data marts.
- Integrated DataStage metadata with Informatica metadata and created ETL mappings and workflows.
- Designed mappings and identified and resolved performance bottlenecks in source-to-target mappings.
- Developed mappings using Source Qualifier, Expression, Filter, Lookup, Update Strategy, Sorter, Joiner, Normalizer, and Router transformations.
- Automated and scheduled jobs on a Linux server to run daily, with email notifications for any failures, overruns, and underruns (see the sketch after this section).
- Familiar with steps such as set environment variables, stream lookup, value mapper, constraints, text file output, table input, and table output.
- Resolved issues with various table loading logics and extracted data according to business needs using SQL.
- Proficient in writing shell scripts for various ETL needs.
Environment: SQL, Excel, Python, Tableau, Informatica (ETL Tool), Data warehouse, Linux
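The daily Linux job scheduling with failure e-mail notifications could be wrapped in a small Python driver along these lines. It is a sketch under assumptions: the job command, SMTP host, and addresses are placeholders, and the script would typically be invoked from cron.

```python
# Minimal sketch of a scheduled ETL job wrapper that emails on failure;
# command, SMTP host, and addresses are illustrative placeholders.
import smtplib
import subprocess
from email.message import EmailMessage


def notify_failure(job_name: str, detail: str) -> None:
    """Send a simple failure notification (SMTP host and addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = f"ETL job failed: {job_name}"
    msg["From"] = "etl-monitor@example.com"
    msg["To"] = "data-team@example.com"
    msg.set_content(detail)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


def run_job(job_name: str, command: list) -> None:
    """Run one ETL step; notify the team and re-raise if it fails."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        notify_failure(job_name, result.stderr)
        raise RuntimeError(f"{job_name} failed with exit code {result.returncode}")


if __name__ == "__main__":
    # Typically invoked daily from cron; the command below is a placeholder script.
    run_job("nightly_load", ["/opt/etl/run_nightly_load.sh"])
```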
Profit Logic, India  Jul 2017 - May 2019
Data Analyst
Profit Logic, acquired by Oracle in 2006, was a leading provider of retail analytics solutions. The company specialized in advanced software for pricing, inventory optimization, and merchandising strategies. Its solutions enabled retailers to make data-driven decisions, improving profitability and customer satisfaction. Profit Logic's analytics tools helped clients analyze sales trends, forecast demand, and manage inventory more effectively. By integrating these insights, retailers could optimize their operations and enhance overall business performance.
Responsibilities:
- Ensured ongoing activities and projects keep all data accurate and available to meet established deadlines.
- Worked with an extract-transform-load (ETL) processing tool for data mining, data warehousing, and data cleaning using SQL; performed cross-validation on a single holdout set to evaluate the model's performance and fine-tuned the model as new data arrived.
- Performed data validation, data cleansing, data integrity, and data quality checks using SQL before delivering data to operations and business teams.
- Identified business, functional, and technical requirements through meetings, interviews, and JAD sessions.
- Defined the ETL mapping specification and designed the ETL process to source data and load it into DWH tables.
- Designed the logical and physical schema for data marts and integrated legacy system data into the data marts.
- Integrated DataStage metadata with Informatica metadata and created ETL mappings and workflows.
- Designed mappings and identified and resolved performance bottlenecks in source-to-target mappings.
- Developed mappings using Source Qualifier, Expression, Filter, Lookup, Update Strategy, Sorter, Joiner, Normalizer, and Router transformations.
- Collected, analyzed, interpreted, and summarized data for statistical and analytical reports; developed reports and presented dashboards in Tableau for healthcare Medicare projects.
- Monitored quality metrics and documented all requirements for business needs.
- Applied mathematics, statistics, modeling, business analysis, and technology to transform high volumes of complex data into advanced analytic solutions.
- Created a dynamic Tableau dashboard to identify the firm's branch locations in India with narrow profit margins and improve strategy models for those locations, improving revenues by 10%.
- Programmed visualizations of consumer behaviors, which increased customer satisfaction by 15%.
Environment: ETL Tool, Data mining, SQL, Tableau, Snowflake, Excel

EDUCATION:
Bachelor of Science in Computer Science & Engineering, 2010-2014
ELLENKI COLLEGE OF ENGINEERING AND TECHNOLOGY, Hyderabad, IN