Data Engineer Warehouse Resume Cary, NC
Candidate Information
Title: Data Engineer Warehouse
Target Location: US-NC-Cary

VENKATA RAMA RAO GADALA
ETL Informatica IICS Data Engineer
Mail: EMAIL AVAILABLE | Phone: PHONE NUMBER AVAILABLE | LinkedIn: https://LINKEDIN LINK AVAILABLE

Professional Summary:
- Certified professional in Cloud Data and Application Integration R36.
- 10+ years of experience in Information Technology with an emphasis on data warehousing, developing Business Intelligence solutions using the ETL tools Informatica PowerCenter 10.x/9.x/8.x and Informatica Intelligent Cloud Services (IICS).
- Worked as a Senior Informatica Developer at an onshore client location (Philippines) for 4 months, gathering business requirements from the client and coordinating with the offshore team to share them.
- Expertise in data warehouse / data mart, OLTP, and OLAP implementations, covering project scoping, analysis, requirements gathering, data modeling, effort estimation, ETL design, development, system testing, implementation, and production support.
- Strong knowledge of data warehousing dimensional modeling, including the top-down (Bill Inmon) and bottom-up (Ralph Kimball) methodologies for building data marts and enterprise data warehouses.
- Expertise in Informatica PowerCenter 10.x/9.x/8.x, extracting data from Oracle and flat files.
- Experience with IICS concepts covering Data Integration, Mass Ingestion, Monitor, Administrator, deployments, permissions, and schedules.
- Good knowledge of integrating data between applications such as Oracle, Salesforce (SFDC), Snowflake, PostgreSQL, and Redshift using Cloud Data Integration (CDI).
- Good knowledge of connecting different applications using APIs and web service calls.
- Knowledge of Amazon Redshift, loading and extracting data on Redshift clusters with the COPY and UNLOAD commands.
- Knowledge of Snowpipe for loading data from AWS S3 into Snowflake (a sketch of this pattern follows this summary).
- Working knowledge of AWS S3 buckets for creating, moving, and archiving files.
- Expertise in fixing defects and migrating code across environments (DEV, SIT, UAT, PROD) after testing and approval at each stage.
- Worked with various transformations such as Normalizer, Expression, Rank, Filter, Aggregator, Lookup, Joiner, Sequence Generator, Sorter, Update Strategy, Source Qualifier, Transaction Control, and Union.
- Expertise in implementing complex business rules by creating complex mappings/mapplets, workflows/worklets, shortcuts, reusable transformations, and partitioned sessions.
- Experience in finding bottlenecks and performance tuning at the source, target, mapping, and session levels.
- Strong knowledge of dimensional modeling using star and snowflake schemas, and in identifying facts and dimensions.
- Expertise in SQL programming, including creating tables, views, and joins.
- Strong documentation skills (functional, detailed, and mapping specification documents), maintaining clear and comprehensive technical documentation of ETL processes, mappings, and workflows.
- Strong documentation of unit test cases and migrations for future reference and evidence.
- Good experience as an Informatica support team member with strong troubleshooting and analysis skills.
- Knowledge of Azure Data Factory (ADF) for creating and monitoring pipelines and identifying and resolving issues.
- Knowledge of the OBIEE reporting tool, including the RPD's Physical, BMM, and Presentation/Logical layers, and rectifying issues in them.
- Knowledge of the Hadoop ecosystem architecture (HDFS, MapReduce) using tools such as Hive, Pig, and Spark.
- Good knowledge of HTML, JavaScript, and Python for creating scripts.
- Good knowledge of configuration, administration, and monitoring of data warehouse scheduling tools such as Data Warehouse Admin Console (DAC), AutoSys, Control-M, and Tivoli.
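The Snowpipe loading mentioned above is configured inside Snowflake itself. Purely as an illustration, the following minimal sketch creates an external S3 stage and an auto-ingest pipe through the Snowflake Python connector; the account, credentials, bucket path, and object names are hypothetical placeholders, not details from the projects described here.

    import snowflake.connector

    # Hypothetical connection details -- replace with real account parameters.
    conn = snowflake.connector.connect(
        account="xy12345.us-east-1",
        user="etl_user",
        password="********",
        warehouse="LOAD_WH",
        database="HR_DW",
        schema="STAGING",
    )

    stage_ddl = """
    CREATE OR REPLACE STAGE EMP_S3_STAGE
      URL = 's3://example-hr-bucket/employees/'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
    """

    pipe_ddl = """
    CREATE OR REPLACE PIPE EMP_PIPE AUTO_INGEST = TRUE AS
      COPY INTO HR_EMPLOYEES
      FROM @EMP_S3_STAGE
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
    """

    cur = conn.cursor()
    try:
        cur.execute(stage_ddl)   # external stage pointing at the S3 landing area
        cur.execute(pipe_ddl)    # auto-ingest pipe that loads new files as they arrive
    finally:
        cur.close()
        conn.close()

In practice a storage integration would normally replace the embedded AWS keys shown here.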
EXPERIENCE SUMMARY:
1. Worked as a Senior Consultant at Capgemini India Pvt Ltd, Hyderabad, TS, India, from June 2013 to April 2022.
2. Worked as a Senior Associate at Cognizant Technologies Pvt Ltd, Hyderabad, TS, India, from May 2022 to January 2024.
3. Working as a Senior Software Engineer at Kube Clouds Inc, Cary, North Carolina, USA, from January 2024 to present.

Certification:
Certified professional in Cloud Data and Application Integration R36.

Education:
Master of Computer Applications (M.C.A), 2008, Kakatiya University, Telangana, India
Bachelor of Commerce (B.Com), 2005, Kakatiya University, Telangana, India

Technical Skills:
Informatica PowerCenter: Working knowledge of Informatica PowerCenter versions 10.x/9.x/8.x.
Informatica Cloud (IICS): Working knowledge of Informatica Cloud modules including Data Integration, Mass Ingestion, Monitor, and Administrator.
Databases: Working knowledge of Oracle, PostgreSQL, and Snowflake, creating data dictionary objects such as tables, views, and joins; loading data from AWS S3 into Snowflake using Snowpipe; and loading data into Redshift clusters with the COPY and UNLOAD commands (see the Redshift sketch after this section).
Azure Data Factory (ADF): Working knowledge of Azure Data Factory for creating and monitoring pipelines and identifying and resolving issues.
AWS: Working knowledge of AWS S3 buckets for storing and archiving files during data migration between applications, and of loading the Snowflake database from S3 via Snowpipe.
Developer Tools: Worked with developer tools such as TOAD, SQL Developer, and DBeaver to connect to different databases.
Scheduling Tools: Working knowledge of third-party scheduling tools such as Data Warehouse Admin Console (DAC), AutoSys, Control-M, and Tivoli.
Languages: Knowledge of C++ with data structures (stacks, queues, linked lists, arrays), Java (OOP concepts), SQL/PL-SQL, HTML, JavaScript, and Python.
Other Applications: Used JIRA, Confluence, ServiceNow, Jenkins, GitLab, GitHub, PuTTY, and WinSCP during project execution.
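As context for the Redshift COPY/UNLOAD usage listed above, here is a minimal sketch of loading a staged S3 file into a table and unloading query results back to S3. Redshift speaks the PostgreSQL wire protocol, so the psycopg2 driver is used; the cluster endpoint, IAM role ARN, and table/bucket names are illustrative placeholders only.

    import psycopg2  # Redshift accepts PostgreSQL-protocol connections

    # Hypothetical cluster endpoint and credentials.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="********",
    )
    conn.autocommit = True
    cur = conn.cursor()

    # Bulk-load a staged S3 file into a target table.
    cur.execute("""
        COPY staging.hr_employees
        FROM 's3://example-hr-bucket/employees/2024-01-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS CSV
        IGNOREHEADER 1;
    """)

    # Export query results back to S3 for downstream consumers.
    cur.execute("""
        UNLOAD ('SELECT emp_id, dept, hire_date FROM staging.hr_employees')
        TO 's3://example-hr-bucket/exports/employees_'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS CSV
        HEADER
        ALLOWOVERWRITE;
    """)

    cur.close()
    conn.close()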
Professional Experience

Client: Sanofi Pharma & Healthcare, USA    May 2022 to present
ETL Informatica IICS Developer
Sanofi is a multinational pharmaceutical and healthcare company. Sanofi engages in the research and development, manufacturing, and marketing of pharmacological products, principally in the prescription market, but the firm also develops over-the-counter medications. The corporation covers seven major therapeutic areas: cardiovascular, central nervous system, diabetes, internal medicine, oncology, thrombosis, and vaccines (it is the world's largest producer of the last through its subsidiary Sanofi Pasteur). The current architecture of the HR Workday portal integration landscape is obsolete, with outdated on-premise TIBCO as the middleware and an Oracle repository; security is weak and upgrading is unavoidable. The project goal is to redesign the HR integration landscape with a secure, up-to-date architecture that ensures an effective and sustainable data flow between Workday and other downstream/upstream systems.

Responsibilities:
- Working as a Senior Informatica IICS Developer on the project, interacting with the client directly.
- Understand the legacy TIBCO code and connect with the TIBCO team daily when clarifications are needed.
- As the project follows an agile methodology, attend daily Scrum calls with the client, give updates on ongoing tasks, and plan upcoming sprints and their tasks.
- Worked in Jira on stories and defects for ongoing sprints.
- Used ServiceNow to raise service requests for access and software.
- Interact regularly with the customer on requirements; understand functional requirements and develop technical requirements and solution documents.
- Prepare functional specification, detail specification, and mapping specification documents and get them approved by business approvers.
- Work in Informatica Intelligent Cloud Services (IICS) as the primary application, using Data Integration to create mappings, tasks, and taskflows.
- Designed the data warehouse schema and star-schema data models.
- Built a web service call to extract data from the Workday HR portal on an hourly schedule and load it into a PostgreSQL database.
- Extracted data from PostgreSQL, wrote database queries applying most of the business rules, and implemented a mapping flow to generate a file.
- Extracted data from PostgreSQL and loaded it into Salesforce objects using Cloud Data Integration.
- Extracted field data from a PostgreSQL table and, using app connectors and service connectors, updated Salesforce picklist constraints so that data is not rejected when loading from the CDI flow.
- Created Mass Ingestion tasks to move files from AWS S3 to downstream applications and archive the files.
- Extracted data from PostgreSQL using IICS and loaded it into the Snowflake cloud database as the target.
- Implemented data synchronization from Oracle to Snowflake in Informatica Intelligent Cloud Services.
- Ingested data from AWS S3 into Snowflake using Snowpipe.
- Worked on an AWS Redshift cluster, using the IICS Data Loader to load data from the source to the Redshift target.
- Automated on-demand AWS S3 backups, providing an additional layer of data security and reducing manual workload by 50% (see the sketch following this section).
- Migrated code from DEV to SIT for testing, with all documentation approvals from the business and unit test evidence.
- Scheduled jobs in UAT using AutoSys based on job and data dependencies for user acceptance testing.
- Deployed the AutoSys jobs to production, set up the schedule, and monitored the jobs during the hypercare period.

Environment: Informatica Intelligent Cloud Services (IICS), AWS S3, Oracle, PostgreSQL, Snowflake, Salesforce, AutoSys, SQL Developer, DBeaver
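The on-demand S3 backup automation noted above was built around IICS mass ingestion plus scripting; the sketch below shows one plausible shape of such a script using boto3 (copy each object to an archive bucket, then remove it from the landing area). The bucket names and prefix are hypothetical, and the actual project script may have differed.

    import datetime
    import boto3

    s3 = boto3.client("s3")

    SOURCE_BUCKET = "example-hr-landing"    # hypothetical landing bucket
    ARCHIVE_BUCKET = "example-hr-archive"   # hypothetical backup/archive bucket
    PREFIX = "workday/outbound/"

    def backup_and_archive(prefix: str = PREFIX) -> int:
        """Copy every object under the prefix to the archive bucket, then remove it."""
        stamp = datetime.date.today().isoformat()
        moved = 0
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=SOURCE_BUCKET, Prefix=prefix):
            for obj in page.get("Contents", []):
                key = obj["Key"]
                s3.copy_object(
                    Bucket=ARCHIVE_BUCKET,
                    Key=f"{stamp}/{key}",
                    CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
                )
                s3.delete_object(Bucket=SOURCE_BUCKET, Key=key)
                moved += 1
        return moved

    if __name__ == "__main__":
        print(f"Archived {backup_and_archive()} objects")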
Client: Schlumberger, USA    January 2019 to April 2022
ETL Informatica Developer
Schlumberger is the leading oilfield services provider, trusted to deliver superior results and improved E&P performance for oil and gas companies around the world. Through its well-site operations and its research and engineering facilities, it develops products, services, and solutions that optimize customer performance in a secure environment. Reporter is a Business Intelligence tool for the Schlumberger oilfield system: data is extracted from diversified sources and gathered into a single database read by the OBIEE reporting tool, with which Schlumberger end users can build their own queries in several functional domains.

Responsibilities:
- Involved in designing high-level and low-level design documentation based on the requirements provided by the business.
- Created and modified ETL mappings and OBIEE report/RPD changes as part of change requests.
- Created, scheduled, and set dependencies across various jobs in the system using AutoSys.
- Created, scheduled, and monitored Azure ADF pipelines.
- Performed code reviews and mentored new team members.
- Handled additional responsibilities for quality-related documentation and quality assurance.
- Involved in internal design reviews and code reviews.
- Performed troubleshooting and fixed and deployed many bug fixes.
- Worked on extraction, transformation, and loading of data directly from heterogeneous source systems such as Oracle and flat files using the Informatica PowerCenter 9.x tool (this load pattern is sketched after this section).
- Extensively used the Informatica ETL tool to load data from flat files and Oracle.
- Created all the target table DDLs as well as the source table DDLs.
- Gathered requirements and built an understanding of the functional business processes and requirements given by the Business Analyst.
- Provided KT sessions to support team members supporting the project in the production environment.
- Involved in change management activities for environment setup, code migrations, and space procurement for production and non-production servers.
- Scheduled jobs in UAT using AutoSys based on job and data dependencies for user acceptance testing.
- Deployed the AutoSys jobs to production, set up the schedule, and monitored the jobs during the hypercare period.

Environment: Informatica PowerCenter 10.1.0, OBIEE 11g, Oracle 12c, Azure Data Factory (ADF), UNIX, SQL Developer, AutoSys
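The flat-file extraction and loading described above was done inside Informatica PowerCenter; purely to illustrate the same flat-file-to-Oracle staging pattern, here is a minimal Python sketch using the python-oracledb driver. The file name, table, columns, and connection details are hypothetical.

    import csv
    import oracledb

    # Hypothetical connection details for the reporting database.
    conn = oracledb.connect(user="rep_etl", password="********",
                            dsn="dbhost.example.com/REPDB")

    def load_flat_file(path: str = "wellsite_activity.csv") -> None:
        """Stage a delimited flat file into an Oracle table (truncate-and-load)."""
        with open(path, newline="") as fh:
            rows = [(r["site_id"], r["activity"], r["activity_date"])
                    for r in csv.DictReader(fh)]

        cur = conn.cursor()
        cur.execute("TRUNCATE TABLE stg_wellsite_activity")
        cur.executemany(
            "INSERT INTO stg_wellsite_activity (site_id, activity, activity_date) "
            "VALUES (:1, :2, TO_DATE(:3, 'YYYY-MM-DD'))",
            rows,
        )
        conn.commit()
        cur.close()

    load_flat_file()
    conn.close()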
Client: Farmers Insurance, USA    May 2017 to December 2018
ETL Informatica IICS Developer
Farmers is part of the Zurich Insurance Group. In the Farmers Salesforce cloud project we migrated the existing Siebel feeds into the Salesforce environment: around 25 feeds, such as unlicensed and individual agents and the CIS-to-SRM integration, built with Informatica Cloud technology. The work included managing the relationships between the Salesforce account, contact, party, and product code objects and the account, contact, and relationship objects, in line with long-standing Farmers business requirements and their subsequent changes.

Responsibilities:
- Involved in gathering business requirements and translating them into technical requirements.
- Used Informatica Cloud to extract data from flat files.
- Responsible for strictly maintaining naming standards and warehouse standards for future development.
- Developed mappings using Cloud Data Integration and Mass Ingestion components to meet the data integration and data quality requirements.
- Created and monitored taskflows using the Informatica Cloud Monitor to load data into the target PostgreSQL database.
- Extracted data from PostgreSQL and loaded it into Salesforce objects using Cloud Data Integration (this pattern is sketched after this section).
- Performed unit testing and user acceptance testing to proactively identify data discrepancies and inaccuracies.
- Prepared design documents, ETL specifications, and migration documents.
- Maintained a daily tech tracker for updates from the team regarding their objects, issues, and progress.
- Provided KT sessions to support team members supporting the project in the production environment.
- Scheduled jobs in UAT using AutoSys based on job and data dependencies for user acceptance testing.
- Deployed the AutoSys jobs to production, set up the schedule, and monitored the jobs during the hypercare period.

Environment: Informatica Cloud Spring 2017, Salesforce, PostgreSQL, DBeaver, AutoSys
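The PostgreSQL-to-Salesforce loads in this project ran through IICS Cloud Data Integration rather than hand-written code. As a rough sketch of the same movement, the example below reads agent records from PostgreSQL and pushes them to a Salesforce object with the simple_salesforce library; the credentials, query, object, and field mapping are all hypothetical.

    import psycopg2
    from simple_salesforce import Salesforce

    # Hypothetical source and target credentials.
    pg = psycopg2.connect(host="pg.example.com", dbname="siebel_stage",
                          user="etl_user", password="********")
    sf = Salesforce(username="integration@example.com",
                    password="********", security_token="********")

    cur = pg.cursor()
    cur.execute("SELECT agent_id, first_name, last_name, email FROM stage.agents")

    records = [
        {"Agent_Id__c": agent_id, "FirstName": first, "LastName": last, "Email": email}
        for agent_id, first, last, email in cur.fetchall()
    ]

    # Bulk API insert into the Contact object (field names are illustrative only).
    result = sf.bulk.Contact.insert(records)
    print(f"Loaded {sum(1 for r in result if r['success'])} of {len(records)} records")

    cur.close()
    pg.close()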
Client: AIA Philam Life Insurance Company, Philippines    February 2016 to April 2017
ETL Informatica Developer
The Philippine American Life and General Insurance Company (commonly known by its trade name, Philam Life) is an insurance company based in the Philippines. It is currently the largest life insurance company in the Philippines in terms of assets, net worth, investment, and paid-up capital. Philam Life EDW is an enterprise data warehouse project that aims to provide a 360-degree view of customer information. The project builds a data warehouse for AIA Philam accounts, with the data organized according to the requirements of the organization, such as by customer details, policy details, and party details.

Responsibilities:
- Worked as a Senior Informatica Developer at the onshore client location (Philippines) for 4 months, gathering requirements from the client and coordinating with the offshore team to share them.
- Involved in analysis, design, development, testing, and implementation of Informatica transformations and workflows for extracting data from multiple systems.
- Responsible for project coordination between the onshore and offshore teams on design and development activities.
- Used Informatica PowerCenter to extract data from various source systems such as databases and flat files.
- Responsible for maintaining naming standards and warehouse standards for future development.
- Achieved performance improvements by tuning SQL queries and extraction procedures between Oracle and PowerCenter (the pushdown idea is sketched after this section).
- Developed complex PowerCenter mappings using different transformations and components to meet the data integration and data quality requirements.
- Created and monitored workflows/sessions using the Informatica Monitor to load data into the target Oracle database.
- Created and monitored jobs in the Oracle DAC scheduling tool and configured them based on run schedules.
- Performed unit testing, integration testing, and user acceptance testing to proactively identify data discrepancies and inaccuracies.
- Involved in performance tuning at the source, target, mapping, and session levels.
- Prepared design documents, ETL specifications, and migration documents.
- Maintained a daily tech tracker for updates from the team regarding their objects, issues, and progress.
- Involved in providing Informatica technical support to team members as well as the business.
- Provided KT sessions to support team members supporting the project in the production environment.

Environment: Informatica PowerCenter 9.6.x, Oracle 11g, SQL Developer, Data Warehouse Admin Console (DAC)
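Much of the query tuning mentioned above comes down to pushing filters and aggregations into the Oracle source query (the effect of a source-qualifier SQL override in PowerCenter) instead of processing every row in the mapping. A small sketch of that idea, with hypothetical table and column names:

    import oracledb

    # Hypothetical connection to the EDW source schema.
    conn = oracledb.connect(user="edw_etl", password="********",
                            dsn="dbhost.example.com/EDW")
    cur = conn.cursor()

    # Untuned pattern: pull every policy transaction and aggregate inside the mapping.
    #   SELECT policy_id, premium_amt, txn_date FROM policy_txn
    # Tuned pattern: let Oracle filter and aggregate so far fewer rows cross the wire.
    cur.execute("""
        SELECT   policy_id,
                 SUM(premium_amt) AS total_premium,
                 MAX(txn_date)    AS last_txn_date
        FROM     policy_txn
        WHERE    txn_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -1)
        GROUP BY policy_id
    """)

    rows = cur.fetchall()
    print(f"{len(rows)} pre-aggregated policy rows to load")

    cur.close()
    conn.close()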
Client: Barclays Bank, U.K.    August 2013 to January 2016
Informatica Developer
Barclays UK Retail Banking is a business unit within Barclays Global Retail Banking, serving more than 15 million individual and business customers with current and savings accounts, mortgages, insurance, loans, credit cards, and investment products. The project requirement was to load customer telephone information into Teradata warehouse tables. The process involved publishing files, Change Data Capture (CDC), and maintaining history in BIW tables, covering both transactional and non-transactional tables, with multiple intraday executions (the CDC comparison is sketched after this section).

Responsibilities:
- Used Informatica PowerCenter to extract data from various source systems such as databases and flat files.
- Responsible for strictly maintaining naming standards and warehouse standards for future development.
- Developed complex PowerCenter mappings using different transformations and components to meet the data integration and data quality requirements.
- Created and monitored workflows/sessions using the Informatica Workflow Manager/Workflow Monitor to load data into the target Oracle database.
- Performed unit testing, integration testing, and user acceptance testing to proactively identify data discrepancies and inaccuracies.
- Involved in performance tuning at the source, target, mapping, and session levels.
- Prepared design documents, ETL specifications, and migration documents.
- Maintained a daily tech tracker for updates from the team regarding their objects, issues, and progress.
- Prepared the schedule dependency sheet for the Tivoli server, had it deployed to the Tivoli server, validated the logs, and scheduled the jobs for user acceptance testing.
- Deployed the Tivoli jobs to production, set up the schedule, and monitored the jobs during the hypercare period.

Environment: Informatica PowerCenter 9.0, Informatica Data Transformation, Teradata, UNIX, Tivoli
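The Change Data Capture step noted above compares each day's published telephone file with the previous load so that only the deltas are pushed into the BIW history tables. The sketch below shows that comparison logic in isolation; the business key, column layout, and file names are hypothetical.

    import csv

    KEY = "customer_id"  # hypothetical business key of the telephone feed

    def read_snapshot(path: str) -> dict[str, dict[str, str]]:
        """Index a pipe-delimited publish file by its business key."""
        with open(path, newline="") as fh:
            return {row[KEY]: row for row in csv.DictReader(fh, delimiter="|")}

    def capture_changes(previous_path: str, current_path: str):
        """Classify rows as inserts, updates, or deletes against the prior load."""
        prev = read_snapshot(previous_path)
        curr = read_snapshot(current_path)
        inserts = [curr[k] for k in curr.keys() - prev.keys()]
        deletes = [prev[k] for k in prev.keys() - curr.keys()]
        updates = [curr[k] for k in curr.keys() & prev.keys() if curr[k] != prev[k]]
        return inserts, updates, deletes

    if __name__ == "__main__":
        ins, upd, dele = capture_changes("telephone_prev.dat", "telephone_curr.dat")
        print(f"{len(ins)} inserts, {len(upd)} updates, {len(dele)} deletes to apply")

Updates and deletes would then close out the prior rows and inserts would open new ones in the BIW history tables, in the style of an SCD Type 2 load.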
