Candidate Information
Title: Data Analyst - Adobe Analytics
Target Location: US-TX-Dallas
Candidate's Name
Data Analyst
PHONE NUMBER AVAILABLE | EMAIL AVAILABLE | Plano, TX

SUMMARY
- 5+ years of experience in the design, development, deployment, and maintenance of cloud-based data assets and ETL pipelines.
- Passionate about using data to inform strategic initiatives and deliver impactful results.
- Skilled in writing advanced SQL queries in Google BigQuery and AWS Redshift to extract actionable insights from large datasets (a brief sketch follows the Education section).
- Expert in big-data analytics on the Snowflake data warehouse and in data modeling concepts with DBT.
- Experienced in analyzing clickstream data with Adobe Analytics to understand user behavior and optimize digital experiences.
- Proficient in developing reports in SSRS and dashboards using a range of Tableau and Looker visualizations.
- Experienced in using Google Analytics to monitor and improve website performance, supporting data-driven decision-making and measurable business growth.
- Proficient with version control systems such as Git.

SKILLS
Methodology: SDLC, Agile, Waterfall
Programming Languages: Python, SQL, PySpark, R
IDEs: PyCharm, Jupyter Notebook, Databricks Notebook, VS Code
Analytics Tools: Google Analytics, Adobe Analytics, Amplitude, Heap
Big Data Ecosystem: Hadoop, MapReduce, Hive, Apache Spark, Pig, Flink
ETL Tools: AWS Glue, SSIS, DBT
Cloud Services: AWS Redshift, Lambda, Athena, QuickSight, Textract, Google BigQuery
Orchestration: Apache Airflow, AWS Step Functions, IBM DataStage, Google Dataflow
Frameworks: Kafka, Airflow, Snowflake, Docker
Packages: NumPy, Pandas, Matplotlib, SciPy, Scikit-learn, Seaborn, TensorFlow
Reporting Tools: Tableau, Power BI, SSRS
Databases: MongoDB, MySQL, PostgreSQL
Other Tools: Git, MS Office, Atlassian Jira, Confluence, Jenkins, PeopleSoft, ALM, Postman
Other Skills: Data Cleaning, Data Wrangling, Critical Thinking, Communication, Presentation, Problem-Solving, Generative AI, Cross-Functional Collaboration
Operating Systems: Windows, Linux
Infrastructure as Code: CloudFormation, Terraform

EDUCATION
Master's in Business Analytics, The University of Texas at Dallas, Aug 2021 - May 2023
Bachelor's in Computer Science Engineering, University of Pune, India
Certifications: AWS Cloud Practitioner, Azure Fundamentals
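The following is a minimal, hypothetical sketch of the kind of BigQuery analysis described in the summary above; the project, dataset, table, and column names are illustrative placeholders, not details from any actual engagement.

# Sketch: daily page views per channel from a hypothetical clickstream table.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project id

query = """
    SELECT
        DATE(event_timestamp) AS event_date,
        channel,
        COUNT(*) AS page_views
    FROM `example-project.analytics.clickstream_events`
    GROUP BY event_date, channel
    ORDER BY event_date
"""

# Run the query and print one row per (date, channel) pair.
for row in client.query(query).result():
    print(row.event_date, row.channel, row.page_views)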
EXPERIENCE

MetLife, USA - Sr. Data Engineer, Aug 2023 - Present
- Used Agile methodology across the phases of the software development life cycle, with regular code reviews.
- Engineered and deployed an efficient ETL data-processing pipeline with AWS services, resulting in a 20% reduction in data ingestion time and a 30% improvement in data availability.
- Streamlined an OLAP solution for historical data from the data lake, implementing Kimball data modeling methodologies in DBT.
- Managed data batches of up to 100 TB in the Snowflake data lake using data warehousing and ETL/modeling principles.
- Wrote high-level SQL queries to analyze big data, implemented statistical analysis techniques in R, and used Tableau to visualize data with multiple layers of abstraction.
- Built and maintained a scalable data infrastructure on cloud technologies such as AWS, reducing infrastructure costs and increasing data processing capacity by 50%.
- Designed and implemented Bronze, Silver, and Gold data-quality layers using PySpark and Spark SQL in Databricks, resulting in a 15% improvement in overall data quality and 20% fewer data-related errors (see the PySpark sketch below).
- Implemented a Change Data Capture (CDC) mechanism using triggers and event handlers in AWS Lambda and SNS to enable real-time data synchronization and event-driven processing.
- Conducted comprehensive data analysis using SQL, PySpark, MongoDB, and RESTful APIs, driving data-driven decision-making and delivering actionable insights that improved operational efficiency by 10%.
- Used advanced Tableau features to connect and blend semi-structured sensor data from multiple sources and visualize critical metrics, improving operational efficiency and supporting revenue growth.

Vanguard Group - Data & Analytics Intern, May 2022 - Aug 2022
- Uncovered and fixed a bug in the notification system; ingested real-time AWS SNS messages into an AWS SQS messaging queue.
- Optimized CloudWatch log output by streamlining the Python code in AWS Lambda and eliminating redundant functions.
- Delivered a feature enhancement that improved data pipeline health by 5%: analyzed a sample of 700 system fault logs and implemented preventive measures that reduced service interruptions by 18%.
- Trained AWS Textract to detect keywords in input documents and classify financial document types, potentially increasing prediction accuracy by 25%.
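A minimal PySpark sketch of the Bronze/Silver/Gold layering referenced in the MetLife role above; the storage paths, schema fields, and quality rules are hypothetical placeholders, not the production implementation.

# Sketch: medallion (Bronze/Silver/Gold) layers in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw events landed as-is (hypothetical source path).
bronze = spark.read.json("s3://example-bucket/raw/events/")

# Silver: basic quality rules -- drop duplicates and records missing keys.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .filter(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_timestamp"))
)

# Gold: business-level aggregate ready for reporting.
gold = silver.groupBy("event_date").agg(
    F.countDistinct("user_id").alias("daily_users")
)

# Delta format, assuming a Databricks-style environment.
gold.write.mode("overwrite").format("delta").save("s3://example-bucket/gold/daily_users/")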
Magna Infotech, India - Sr. Data Analyst, Dec 2019 - July 2021
- Managed a 1 PB data lake, ensuring data availability and reliability for business intelligence and reporting.
- Boosted cloud product adoption by 20% by effectively communicating the product vision and product-led growth (PLG) strategy to stakeholders, influencing product pricing and positioning.
- Increased feature alignment with business goals by 30% through effective management of the product backlog, A/B testing, prioritization of strategic objectives, and cross-functional team collaboration.
- Analyzed and interpreted Adobe clickstream data to gain insights into user behavior and website performance, producing actionable recommendations that increased user engagement and improved conversion rates by 20%.
- Conducted market research using primary and secondary methods, including BI platforms such as Tableau and Adobe Analytics for trend analysis and Heap for understanding customer empathy and market dynamics.
- Enabled reusability, code modularity, and high availability by introducing appropriate cloud services following comprehensive what-if, gap, and MoSCoW analyses, leading to significant process improvements and new features.
- Implemented A/B testing strategies and optimized email campaigns and landing pages in Adobe Target, yielding a 15% improvement in CTR within three months.
- Leveraged Google BigQuery to perform complex analysis and querying of large datasets, identifying key business insights and improving data processing efficiency by 25%.
- Improved customer satisfaction scores by 10% using Mixpanel's funnel and user-workflow analysis, optimizing customer experiences and resource allocation.
- Identified and implemented key performance metrics that improved business performance tracking by 40%, enabling data-driven decision-making and strategic planning.
- Orchestrated an automated custom ETL workflow in Apache Airflow, reducing manual labor costs by 30%, and designed efficient customized data models in DBT after thorough requirements elicitation (see the Airflow sketch at the end of this section).

Groovy Web, India - Data Engineer, Jan 2018 - Nov 2019
- Worked in an Agile project execution methodology and mentored new team members to maximize team performance.
- Developed complex queries and designed SSIS packages to extract, transform, and load (ETL) data into data warehouses and data marts from heterogeneous sources.
- Queried the Snowflake data warehouse and used SPSS for statistical analysis of panel data.
- Streamlined an automated QA solution in Jenkins and UiPath as part of a process improvement initiative, saving 30 man-hours.
- Conducted comprehensive exploratory data analysis on a financial dataset in Snowflake to uncover hidden patterns and insights; visualized data pipeline health on a self-built interactive dashboard in Visual Basic.
- Maintained multi-cloud ETL pipelines, enabling reusability with IaC tools such as CloudFormation templates and Airflow DAG templates.
- Cut data processing and query times by 15% through database tuning and optimization in MS SQL.
- Designed SSIS packages with Python transformation scripts, implementing schema modeling best practices in SSAS and DBT.
- Developed 5+ Tableau dashboards with compelling stratified visualizations to drive business decisions, visualize KPIs, and validate stakeholder requirements.
- Created and maintained detailed project documentation, including project charters, scope documents, and stakeholder analysis reports; assisted in writing RFPs, BRDs, and FRDs with 100% client satisfaction.
- Optimized pipeline architecture by rewriting ETL job scripts, ensuring deduplication and normalization.
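A minimal Apache Airflow sketch of the kind of ETL orchestration referenced in the Magna Infotech and Groovy Web roles above; the DAG id, schedule, and task bodies are hypothetical placeholders (assumes Airflow 2.4+ for the schedule parameter).

# Sketch: a three-step extract/transform/load DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull source data (placeholder)

def transform():
    ...  # clean, deduplicate, normalize (placeholder)

def load():
    ...  # write to the warehouse (placeholder)

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in sequence.
    t_extract >> t_transform >> t_load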
