Title: Data Warehouse Business Intelligence
Target Location: US-TX-Celina
Candidate's Name
Multi-Cloud Architect/Engineer
PHONE NUMBER AVAILABLE

PROFESSIONAL SUMMARY:
Over 16+ years of IT experience in the analysis, design, development, testing, support, and implementation of business intelligence solutions using data warehouse/data mart design, ETL, OLAP, and client/server applications.

CERTIFICATIONS:
SnowPro (Snowflake) - 2021
IBM Information Suite - 2015

PROFESSIONAL EXPERIENCE:

Title: Data Warehouse Architect    Dec 2023 - Feb 2024
BCBS, Alabama
BCBS of Alabama has various application teams that need to standardize, organize, and implement the data model across various environments. As the data model administrator, I was involved in all aspects of modeling.
Roles & Responsibilities:
- Used the Erwin tool for the creation, modification, and deployment of data model and table changes.
- Standardized data elements by transforming business names into technical names.
- Registered and implemented assets in IGC (Information Governance Catalog).
- Produced Erwin check/custom reports to pinpoint elements that deviate from established standards.
- Administered IGC for effective asset management.
- Used Jira to track assigned work, tasks, and stories.
- Created automated scripts to reduce manual intervention and improve work efficiency by 60%.
- Facilitated team coordination by establishing clear communication channels and encouraging collaboration and information sharing.
- Automated tasks using PowerShell and CLI tooling for the AWS, Azure, and OCI clouds.
- Developed Python scripts for deploying pipelines in Azure Data Factory (ADF) that process data using the Cosmos activity (a sketch follows this list).
- Wrote custom Python code using data structures such as lists, dictionaries, sets, tuples, and strings.
- Created Python scripts for data access and analysis to aid process and system monitoring.
- Analyzed cloud utilization, cost, performance monitoring, and tuning reports.
- Integrated multiple clouds with Splunk / Azure Sentinel for monitoring and serviceability.
- Used Azure DevOps CI/CD to deploy applications to Azure IaaS and PaaS infrastructure.
- Database Architect/SME responsible for designing database migration strategies and migrating on-premises databases to Azure, including Azure MySQL, Cassandra, SQL Server, and Oracle.
- AWS Cloud Architect responsible for discovery and analysis of on-premises data center applications and for developing the technical architecture for their migration to AWS.
- Azure Cloud Architect and Engineer for one of the largest Azure migration projects for a major US health insurer.
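The ADF deployment scripts referenced above are not included in the resume; the snippet below is only a minimal, illustrative sketch of how an ADF pipeline run can be triggered and monitored from Python using the Azure SDK (azure-identity and azure-mgmt-datafactory). The subscription, resource group, factory, and pipeline names are hypothetical placeholders, not values from the actual project.

# Illustrative sketch only: trigger an ADF pipeline run and poll its status.
# All resource names below are hypothetical placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"     # placeholder
RESOURCE_GROUP = "rg-dataplatform"        # placeholder
FACTORY_NAME = "adf-dataplatform"         # placeholder
PIPELINE_NAME = "pl_daily_load"           # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline with runtime parameters.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-31"},
)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")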
Title: Senior Data Engineer    Oct 2016 - Aug 2022
USAA, Plano, TX
USAA uses SIEM systems to collect and analyze security data from various sources in real time. Connecting to live stream data for cybersecurity purposes requires integration with the organization's SIEM solution, which involves configuring data connectors or agents to ingest data from sources such as firewalls, intrusion detection/prevention systems, antivirus logs, and other security appliances. By centralizing and correlating this live stream data, security teams can detect and respond to potential security incidents promptly.
Roles & Responsibilities:
- Migrated a huge volume of historical data from Netezza to Snowflake.
- Developed complex SQL in the dbt tool for all data transformations.
- Developed complex DataStage jobs using AWS Glue, AWS S3, Lake Formation, and other cloud services, aligned with business requirements and mapping documents both on premises and in the cloud.
- Transformed data using AWS Glue dynamic frames with PySpark and cataloged the transformed data using crawlers.
- Used SnowSQL and data pipelines to load data into Snowflake.
- Built an end-to-end solution for hosting a web application on AWS with integration to S3 buckets.
- Created Python scripts to migrate data from Netezza to Snowflake for more than 700 tables.
- Performed hash-key validation for every record between source and target (a sketch follows this list).
- Created streams and tasks to run data pipelines and load data into Snowflake.
- Extensively used Unix and SQL commands in day-to-day activities.
- Created file formats and internal stages to load intermediate data before loading into the actual tables.
- Worked on Python scripts for data validation between source and target.
- Developed a common data ingestion framework to load various source data formats to AWS containers using Unix scripting.
- Implemented audit controls for the end-to-end historical load process.
- Worked with rules to restrict data access and separate the users accessing the tables.
- Supported the Charles Schwab transition and retirement of USAA technical components.
- Supported the Victory Capital transition, segregation, support, and retirement.
- Identified and created solutions to improve system performance and availability, facilitating root cause analysis of system issues to minimize impact and future occurrences.
- Worked on dbt, BigInsights, and the Hortonworks Hadoop ecosystem: Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop (1.4.6), Spark (2.3), Python (2.7), Oozie, Apache NiFi, ETL, SQL, shell scripting, cloud services, JSON, and XML.
- Architected, designed, and developed data warehouse applications using Hive, Netezza, and IBM DataStage.
- Designed and developed the architecture for a data lake and reporting solution drawing from various data sources; coordinated with data scientists and supported multiple dashboards in Tableau Desktop 2018.2 and Business Objects.
- Developed ETL applications using dbt, Hive, Spark, Sqoop, and IBM InfoSphere DataStage (11.5), automated with Agile workflows and shell scripts with error handling, and scheduled with the Control-M (version 9) scheduler.
- Worked in Python with packages such as NumPy and pandas and IDEs including PyCharm, Spyder, Anaconda, and Jupyter Notebook.
- Snowflake storage resides on AWS S3 buckets.
- Created AWS Lambda functions and EC2 instances and implemented and administered security groups.
- Developed ETL/dbt procedures to ensure conformity and compliance with ETL standards, avoid redundancy, and translate business requirements.
- Recommended and implemented ETL/dbt changes to improve maintainability.
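The hash-key validation mentioned above is described only at a high level; the sketch below shows one common way it could be implemented, assuming snowflake-connector-python for the Snowflake target and a generic DB-API driver (pyodbc is shown) for the Netezza source. Table, column, and connection details are hypothetical placeholders.

# Illustrative sketch: compare per-row hashes between a source and a target table.
# Connection details, table name, and key/column lists are hypothetical placeholders.
import hashlib
import pyodbc                      # assumed driver for the Netezza source
import snowflake.connector         # target connection

def row_hashes(rows):
    """Map business key -> MD5 of the remaining columns, canonicalized as strings."""
    return {
        str(r[0]): hashlib.md5(
            "|".join("" if c is None else str(c) for c in r[1:]).encode()
        ).hexdigest()
        for r in rows
    }

SQL = "SELECT member_id, first_name, last_name, status FROM member_dim ORDER BY member_id"

src_conn = pyodbc.connect("DSN=NETEZZA_DSN")                 # placeholder DSN
tgt_conn = snowflake.connector.connect(account="xy12345", user="etl_user",
                                       password="***", warehouse="LOAD_WH",
                                       database="EDW", schema="CORE")

src = row_hashes(src_conn.cursor().execute(SQL).fetchall())
tgt_cur = tgt_conn.cursor()
tgt_cur.execute(SQL)
tgt = row_hashes(tgt_cur.fetchall())

missing = src.keys() - tgt.keys()
mismatched = [k for k in src.keys() & tgt.keys() if src[k] != tgt[k]]
print(f"missing in target: {len(missing)}, hash mismatches: {len(mismatched)}")

In practice, for very large tables the hashing would usually be pushed down to SQL on each side rather than computed client-side; this sketch only illustrates the comparison step.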
Title: Lead Senior DataStage Engineer    Jan 2016 - Oct 2016
Cigna Health Care, Bloomfield, CT
Part of the Cyber Privacy Risk Council team, which identifies sensitive information (PHI, PII, PCI) across the organization. The team identifies all applications in Cigna, establishes connections, and performs data profiling using Information Analyzer. Once sensitive elements are identified, data lineage is added to document the data flow.
Roles & Responsibilities:
- Lean Agile SAFe 4 certified.
- Identified sensitive information such as PHI, PII, and PCI for each application using Information Analyzer, based on HIPAA rules.
- Placed controls on databases holding sensitive information to eliminate cyber threats to the data.
- Generated reports from IA such as data classification, summary, and cross/base domain analysis for project requirements.
- Implemented agile methodologies in each phase of the project.
- Imported metadata using IMM into Metadata Asset Manager to build a centralized repository in IMAM; performed data profiling using the Information Analyzer tool.
- Created data rules and custom classes and classifications in IA for project requirements.
- Used IMAM as the centralized metadata repository; handled admin activities such as connection setup in IMAM and publishing so that all IBM applications can use it.
- Extensively used Unix and SQL commands in day-to-day activities.
- Maintained user access and roles.
- Built data lineage to identify the data flow across applications in the Cigna enterprise.
- Restricted access to sensitive data elements identified during data profiling.
- Developed complex VB Script macros for project requirements, including combining multiple Excel workbooks into one (and vice versa) and data format changes (a Python analogue is sketched after this list).
- Planned the DataStage upgrade from 9.1 to 11.5.
- Used the ESP scheduler for executing IA jobs in production.
- Performed IADB maintenance.
- Used Rally for team status updates.
- Used Metadex to import metadata from third-party applications into IMAM.
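The Excel-consolidation macro above was written in VB Script; as a rough, non-authoritative analogue in Python (the scripting language used elsewhere on this resume), the consolidation step could look like the sketch below, assuming pandas and openpyxl are available. File names and sheet layout are hypothetical.

# Illustrative Python analogue of the "combine multiple workbooks into one" macro.
# Paths and sheet assumptions are hypothetical placeholders; requires pandas + openpyxl.
from pathlib import Path
import pandas as pd

source_dir = Path("workbooks_in")          # folder of input .xlsx files
frames = []
for path in sorted(source_dir.glob("*.xlsx")):
    # Read every sheet of every workbook and remember where each row came from.
    for sheet_name, df in pd.read_excel(path, sheet_name=None).items():
        df["source_file"] = path.name
        df["source_sheet"] = sheet_name
        frames.append(df)

# Write the consolidated data to a single workbook.
combined = pd.concat(frames, ignore_index=True)
combined.to_excel("combined.xlsx", index=False)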
Title: Senior DataStage Lead    Jan 2015 - Jan 2016
Axiall Corporation (Chemical Manufacturing)
Axiall Corp. is a chemistry company that applies chemistry to solve common problems, improve everyday life, and drive human progress. The project objective was to collect, organize, and store data from different data sources to provide a single source of truth in an integrated SAP system.
Roles & Responsibilities:
- Built the RTL method to load data from legacy systems to the target SAP system.
- Developed IDoc jobs for vendors, customers, materials, and all other transactional data.
- Made effective use of Information Analyzer, BDR, FastTrack mapping, and Metadata Workbench to create a robust design for RTL.
- Used IBM Discovery to identify new legacy data in the source systems.
- Validated the IDocs in the SAP ECC environment to verify the data.
- Performed data profiling, standardization, matching, and cleansing through the data quality process.
- Used QualityStage to standardize names and addresses and to identify duplicate, matched, and unmatched records in each source system.
- Worked on the BDR (Business Data Repository) to identify mandatory, in-scope fields for SAP.
- Once the BDR was created, mapped all legacy systems to the SAP structure in FastTrack.
- Created SAP IDoc jobs to load data into SAP IDocs.
- Created multiple data stages such as RTL, ALGO, ALG1, PLD, and IDOC_PLD to identify data quality issues in RTL.
- Extensively used Unix and SQL commands in day-to-day activities.

Title: Technical Lead, DataStage    May 2014 - Dec 2014
American Express, Phoenix, AZ (Banking & Credit Cards)
AmEx is a diversified global financial services company headquartered in New York City. The company is best known for its credit card, charge card, and travelers cheque businesses. AmEx cards account for approximately 24% of the total dollar volume of credit card transactions in the US, the highest of any card issuer.
Roles & Responsibilities:
- Worked with all the new features in DataStage 8.5 and migrated all jobs from 7.5 to 8.1/8.01.
- Led a team of 8 members handling all production issue fixes.
- Involved in physical and logical model design with the architects.
- Used COBOL to produce report-by-table and report-by-unit-of-work outputs, providing a quicker review of database changes.
- Developed DataStage jobs using star schema and snowflake schema modeling; conversant with data warehouse concepts such as fact and dimension tables.
- Designed jobs using complex stages such as XML Input, Complex Flat File, and Hashed File.
- Designed several parallel jobs using Sequential File, Dataset, Join, Merge, Lookup, Change Apply, Change Data Capture, Remove Duplicates, Funnel, Filter, Copy, Column Generator, Peek, Modify, Compare, Oracle Enterprise, Surrogate Key, Aggregator, Transformer, and Row Generator stages.
- Performed extraction, transformation, and loading of data using different types of stages and derivations over the links connecting those stages.
- Created new jobs to upsert and insert into the DB2 database (an illustrative upsert sketch follows this list).
- Used the AutoSys scheduler to automate the weekly run of the DW cycle in the production, UAT, and dev environments.
- Implemented PureData methodologies for a big data warehouse.
- Scheduled DataStage jobs in crontab for periodic DataStage job backups.
- Extensively worked with job sequences using Job Activity, Email Notification, Sequencer, and Wait For File activities to control and execute the DataStage parallel jobs.
- Created master job sequencers to control the sequence of jobs using job controls.
- Created test plans for unit testing of the designed jobs.
- Set up file transfer (SFT) to the servers.
- Handled code management.
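The DB2 upsert jobs above were built in DataStage; purely as a hedged illustration of the upsert pattern itself, not the actual job design, the same logic can be expressed as a DB2 MERGE statement driven from Python with the ibm_db driver. The connection string, schema, and column names are hypothetical placeholders.

# Illustrative upsert (MERGE) against DB2, driven from Python via the ibm_db driver.
# Connection string, schema, and column names are hypothetical placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=EDW;HOSTNAME=db2.example.com;PORT=50000;PROTOCOL=TCPIP;UID=etl;PWD=***;", "", ""
)

merge_sql = """
MERGE INTO dw.account_dim AS tgt
USING stg.account_load AS src
  ON tgt.account_id = src.account_id
WHEN MATCHED THEN
  UPDATE SET tgt.status = src.status, tgt.balance = src.balance
WHEN NOT MATCHED THEN
  INSERT (account_id, status, balance)
  VALUES (src.account_id, src.status, src.balance)
"""

ibm_db.exec_immediate(conn, merge_sql)   # apply inserts and updates in one statement
ibm_db.close(conn)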
Title: DataStage Developer    Sep 2011 - Feb 2014
Pitney Bowes, Stamford, CT
Migration Project (DataStage 8.1 to 8.5)
Pitney Bowes is one of the major manufacturers and providers of postal equipment and services. SAP and XML are the two major source systems from which we receive data. Used source plugin / parsing stages to extract data from PB machines; data is processed at each customer location across about 16 countries.
Roles & Responsibilities:
- Actively involved in an IBM IS (ETL - DataStage) upgrade from 8.1 to 8.5.
- Added, deleted, and moved projects and jobs; purged job log files and traced server activity.
- Issued DataStage engine commands from Administrator; started and stopped the DataStage engine and managed logs.
- Performed root cause analysis (RCA) on production support tickets.
- Implemented parallel extender jobs for better performance using stages such as Join, Merge, Sort, Lookup, and Transformer against different sources, including complex flat files and XML files.
- Addressed issues and requests from application teams and answered their queries.
- Executed application jobs as part of the soft launch (pre-production).
- Involved in the analysis, planning, design, development, and implementation phases of projects using IBM WebSphere software (QualityStage v8.0.1, Web Service, Information Analyzer, Profile Stage, WISD of IIS 8.0.1).
- Worked on a pilot project on Metadata Workbench for column-to-column analysis.
- Used PL/SQL to develop stored procedures, functions, and database triggers.
- Drilled through impact analysis and the column derivations and transformations applied to columns.
- Created SSIS packages for processing fact and dimension tables with complex transforms, and configured and deployed SSIS packages to import and export data.
- Tracked project deployments through status reports and project plan updates.
- Guided and supported the application team in implementing DataStage best practices during code development.
- Tuned SQL queries using Quest SQL Optimizer and manually via explain plans.
- Conducted interviews with key business users to collect requirements and business process information.
- Facilitated the QA test process by assigning test cases to the QA test plan across builds, assigning test cases to the QA testers, and establishing contact with the testers.
- Provided production support and documentation and transferred knowledge to end users.

Title: DataStage Developer    Dec 2009 - Apr 2011
Syntel Technologies, Mumbai, India
Description:
Wipro Technologies is one of the major IT companies in India, with various clients across the globe. The aim of the project was to provide technical support across clients whenever a project was going red. Mainly involved in problem solving, performance tuning, and maintaining code to company standards.
Responsibilities:
- Worked across various clients, identifying problems in their projects.
- Extensively provided technical support to resolve issues.
- Modified the existing code base to meet company standards.
- Involved in unit testing, system testing, and bug fixes in dev, QA, and production.
- Prepared functional and technical specification documents.
- Conducted technical interviews for the team.
- Involved in data modeling and physical model creation.
- Actively involved in year-end activities: data purging, backup, and integration (an illustrative purge sketch follows this list).
- Created SQL and Unix scripts for database backups.
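The data purging mentioned above was handled with SQL and Unix scripts; the snippet below is only a rough Python analogue of a batched purge, written against an assumed Oracle-style database with the oracledb driver. The connection details, table name, and retention window are hypothetical placeholders.

# Illustrative batch-purge sketch: a Python analogue of the SQL/Unix purge scripts
# described above. All connection details and names are hypothetical placeholders.
import oracledb

RETENTION_DAYS = 365          # keep one year of history (assumed policy)
BATCH_SIZE = 10000            # delete in small batches to limit undo/lock pressure

conn = oracledb.connect(user="dw_admin", password="***", dsn="dwhost/ORCLPDB1")
cur = conn.cursor()

while True:
    cur.execute(
        "DELETE FROM audit_log "
        "WHERE created_dt < SYSDATE - :days AND ROWNUM <= :batch",
        {"days": RETENTION_DAYS, "batch": BATCH_SIZE},
    )
    conn.commit()
    if cur.rowcount < BATCH_SIZE:   # last partial batch means the purge is done
        break

conn.close()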
Title: Application Developer    May 2006 - Dec 2009
Google India Pvt Ltd, Hyderabad, India
Google Maps Database Management
Description:
Maintains data created by end users across more than 150 countries. The main aim of this project is to develop maps for various countries and use them to find directions from one place to another (navigation). I was on the team from day one and was involved in developing the maps for more than 150 countries.
Responsibilities:
- Provided timely solutions to vendor issues to the client's satisfaction.
- Performed database health checks and tablespace validation.
- Created automation scripts for the database auto-purge mechanism.
- Managed data from heterogeneous systems.
- Performed performance tuning and regular backups of millions of records.
- Involved in command center calls for all production issues.
- Tuned SQL queries when and where required.
- Successfully completed the pilot project.
- Maintained data for more than 100 countries without any performance issues.

EDUCATION:
Bachelor's degree, Mechanical Engineering, 2002 - 2006
Nagarjuna College of Engineering and Technology
