Candidate Information
Title: SQL Server Data Warehouse
Target Location: US-TX-Irving
Candidate's Name
Email: EMAIL AVAILABLE
Phone: PHONE NUMBER AVAILABLE

PROFESSIONAL SUMMARY
9 years of IT experience in the analysis, design, development, testing, and implementation of business application systems.
Experience in Talend administration, installation, and configuration; worked extensively with Talend Big Data to load data into HDFS, S3, Hive, and Redshift.
Experience with Talend administration: creating projects and users, assigning projects to users, and job scheduling.
Experienced in ETL methodology for data migration, data profiling, extraction, transformation, and loading using Talend; designed data conversions from a large variety of source systems including Oracle 10g/9i/8i/7.x, DB2, Netezza, SQL Server, Teradata, and Hive, as well as non-relational sources such as delimited files, positional files, flat files, and XML.
Expertise in creating mappings using Talend DI/Big Data components such as tS3Configuration, tHDFSConfiguration, tRedshiftInput, tFileInputParquet, tFileOutputParquet, tFilterRow, tMap, tJoin, tS3Put, tS3Get, tS3Copy, tFileList, tJava, tAggregateRow, tDie, tLogRow, tUniqRow, and tAmazonEMRManage.
Involved in code migrations across Dev, QA, CERT, and Production environments and provided operational instructions for deployments.
Designed and implemented data integration solutions leveraging Azure services; proficient in developing ETL processes and data pipelines and ensuring data quality and governance in Azure environments.
Experience integrating various data sources such as Oracle SQL, AWS (S3, EMR, Redshift, Aurora), Netezza, SQL Server, and MS Access into staging areas.
Experience in developing stored procedures, functions, and SQL queries using SQL Server (a minimal T-SQL sketch follows the Technical Skills section below).
Experienced in all phases of data warehouse development, from requirements gathering through code development, unit testing, and documentation.
Extensive experience writing UNIX shell scripts and automating ETL processes with shell scripting; used Netezza utilities to load data and execute SQL scripts from UNIX.
Experience migrating data from different applications into a single application.
Responsible for data migration from SQL Server to Redshift databases.
Experienced in batch scripting on Windows and worked extensively with slowly changing dimensions in Talend.
Hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, system integration, and user acceptance testing.

Technical Skills
Operating Systems: Windows 2008/2007, UNIX, Linux
Data Warehousing: Talend DI, Talend Big Data, AWS (S3, EMR, Redshift, Aurora, Lambda, SNS), Informatica PowerCenter 7.x/6.x (Designer, Workflow Manager, Workflow Monitor, Repository Manager)
Databases: Redshift, Aurora, Oracle 12c/11g/10g/9i/8i, MS SQL Server 2012/2008/2005, DB2 v8.1, Netezza
Methodologies: Agile, Waterfall
Languages: SQL, UNIX shell scripting, C++
Scheduling Tools: TAC (Talend Administrator Console), Autosys, Control-M
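As an illustration of the SQL Server stored-procedure work mentioned in the summary, the following is a minimal T-SQL sketch; the schema, table, and column names (staging.customer, dw.dim_customer, and their columns) are hypothetical placeholders, not taken from any project named in this resume.

    -- Minimal T-SQL sketch: load a staging table into a warehouse dimension.
    -- All object names (staging.customer, dw.dim_customer) are hypothetical.
    CREATE PROCEDURE dw.usp_load_dim_customer
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Update rows that already exist in the dimension.
        UPDATE d
        SET    d.customer_name = s.customer_name,
               d.city          = s.city
        FROM   dw.dim_customer  AS d
        JOIN   staging.customer AS s
               ON s.customer_id = d.customer_id;

        -- Insert rows that are new to the dimension.
        INSERT INTO dw.dim_customer (customer_id, customer_name, city)
        SELECT s.customer_id, s.customer_name, s.city
        FROM   staging.customer AS s
        WHERE  NOT EXISTS (SELECT 1 FROM dw.dim_customer AS d
                           WHERE d.customer_id = s.customer_id);
    END;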
PROFESSIONAL EXPERIENCE

Client: Department of Motor Vehicles, CA    April 2022 to April 2024
Role: Sr. ETL/Talend Developer
Responsibilities:
Participated in requirement gathering, business analysis, and user meetings, and translated user inputs into ETL mapping documents.
Designed and customized data models for a data warehouse supporting data from multiple sources in real time.
Involved in building the data ingestion architecture and the source-to-target mappings used to load data into the data warehouse.
Worked with the data mapping team to understand the source-to-target mapping rules.
Prepared both high-level and low-level mapping documents.
Analyzed the requirements, framed the business logic, and implemented it using Talend.
Involved in ETL design and documentation.
Developed Talend jobs from the mapping documents and loaded the data into the warehouse.
Involved in end-to-end testing of Talend jobs.
Analyzed and performed data integration using the Talend open integration suite.
Created a number of Java routines for complex transformations and utilities reused across integrations.
Experience migrating data from legacy systems such as Oracle and Salesforce to cloud data warehouses and Salesforce CRM.
Worked with cloud data storage on Amazon S3.
Worked with Amazon S3 and EC2 components to migrate data from different source systems into Amazon S3 buckets.
Experience working with large data warehouses, mapping and extracting data from legacy systems and Redshift/SQL Server/UDB databases.
Worked on the design, development, and testing of Talend mappings.
Experience with big data tools such as Snowflake and Hadoop.
Developed Talend jobs to move data between Snowflake and other data sources in both directions.
Wrote complex SQL queries to pull data from various sources and integrated them with Talend.
Involved in loading data into Netezza from legacy systems and flat files using UNIX scripts.
Worked on performance tuning of Netezza queries with a proper understanding of joins and distribution (a SQL sketch follows this section).
Created ETL job infrastructure using Talend Open Studio.
Responsible for MDM of customer data using Talend MDM, covering customers, suppliers, products, assets, agencies, stores, address standardization, reference data, and employees; MDM is about creating and managing the golden records of the business.
Developed business rules for cleansing, validating, and standardizing data using Informatica Data Quality.
Worked on Talend components such as tReplace, tMap, tSortRow, tFilterColumns, and tFilterRow.
Used database components such as tMSSqlInput and tOracleOutput.
Worked with various file components such as tFileCopy, tFileCompare, and tFileExist.
Developed ETL framework standards to make it easy to reuse similar logic across the board.
Analyzed requirements, created designs, and delivered documented solutions that adhere to the prescribed Agile development methodology and tools.
Developed mappings to extract data from different sources, such as DB2 and XML files, and load it into the data mart.
Created complex mappings using transformations such as Filter, Router, Lookup, Stored Procedure, Joiner, Update Strategy, Expression, and Aggregator to pipeline data to the data mart.
Involved in designing logical/physical data models and reverse engineering the entire subject area across the schema.
Scheduled and automated ETL processes with Autosys and TAC.
Scheduled the workflows using shell scripts.
Troubleshot databases, Joblets, mappings, sources, and targets to find bottlenecks and improve performance.
Involved rigorously in data cleansing and data validation to handle corrupted data.
Migrated Talend mappings/jobs/Joblets from the development to the test and production environments.
Environment: Talend 8.x, XML files, DB2, Oracle 11g, PostgreSQL, SQL, MS Excel, MS Access, UNIX shell scripts, Talend Administrator Console, Jira, SVN, Quality Center, Agile methodology.
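A minimal SQL sketch of the Netezza distribution tuning mentioned above; the table and column names (dw.dim_customer, dw.fact_vehicle_txn, customer_key) are hypothetical and only illustrate the idea that distributing the fact and dimension tables on the same join key keeps the join co-located.

    -- Hypothetical Netezza DDL: distribute both tables on the join key so that
    -- joins on customer_key are co-located and do not trigger redistribution.
    CREATE TABLE dw.dim_customer
    (
        customer_key   INTEGER NOT NULL,
        customer_name  VARCHAR(200),
        city           VARCHAR(100)
    )
    DISTRIBUTE ON (customer_key);

    CREATE TABLE dw.fact_vehicle_txn
    (
        txn_id        BIGINT NOT NULL,
        customer_key  INTEGER NOT NULL,
        txn_date      DATE,
        amount        NUMERIC(12,2)
    )
    DISTRIBUTE ON (customer_key);

    -- The join below can then run without redistributing either table.
    SELECT d.city, SUM(f.amount) AS total_amount
    FROM   dw.fact_vehicle_txn f
    JOIN   dw.dim_customer     d ON d.customer_key = f.customer_key
    GROUP BY d.city;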
Client: Sirius XM, NY    September 2020 to March 2022
Role: Sr. ETL/Talend Developer
Responsibilities:
Worked with the business analysis team to gather requirements and created functional and ETL specification documents.
Used a Tidal workbook to automate job scheduling.
Deployed jobs to a Nexus repository and was involved in branch creation and in merging feature branches to the master branch.
Analyzed the Business Requirement Documents (BRD) and laid out the steps for data extraction, business logic implementation, and loading into targets.
Collected requirements, performed analysis, and gave estimated ETAs to the business users.
Analyzed the STM (source-to-target mapping) from the data analyst team and validated it before starting the mapping in Talend.
Involved in end-to-end development, implementation, and production support from the dev to the prod environment.
Involved in testing that the ETL process meets the business requirements.
Extracted, transformed, and loaded data from source to staging and from staging to target according to the business requirements.
Used the COPY command to load data from S3 into the Redshift database (see the SQL sketch after this section).
Used the tRedshiftUnload component to unload data from the Redshift database to an S3 bucket.
Ingested huge files from various sources into an S3 bucket and processed the data.
Ability to meet deadlines and handle multiple tasks.
Environment: Talend Platform for Big Data 6.0.1, UNIX, Oracle 10g, TAC (Admin Center)
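A minimal sketch of the Redshift COPY pattern referenced above, assuming pipe-delimited, gzip-compressed input files; the bucket, table, and IAM role ARN are hypothetical placeholders.

    -- Hypothetical Redshift COPY: load pipe-delimited, gzip-compressed files
    -- from an S3 prefix into a staging table. Bucket, table, and role are placeholders.
    COPY stage.subscriber_events
    FROM 's3://example-bucket/incoming/subscriber_events/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-copy-role'
    DELIMITER '|'
    GZIP
    IGNOREHEADER 1
    REGION 'us-east-1';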
Client: Change Healthcare, Lombard, IL    September 2019 to August 2020
Role: Sr. Talend Developer
Responsibilities:
Involved in creating jobs to ingest multiple client data sets using Talend Data Integration (DI) and Talend Big Data Spark job components.
Created Joblets and subjobs for code reusability.
Used the tHDFSConfiguration and tS3Configuration components to access data from S3 and HDFS when running jobs on Amazon EMR.
Used the tAmazonEMRManage component to launch and shut down EMR clusters from Talend, ran the Talend Big Data Spark jobs on the launched cluster, created Parquet files on S3 from delimited, Excel, and gzip files, and loaded the data into S3, Redshift, and Aurora.
Stored and read Parquet and CSV files using tFileOutputParquet, tFileInputParquet, tFileInputDelimited, and tFileOutputDelimited.
Involved in running the jobs on TAC based on Amazon S3 events, using a Lambda function and AWS S3 event notifications.
Performed data manipulation using various Talend components such as tMap, tSchemaComplianceCheck, tFilterRow, tUniqRow, tJavaRow, tJava, tAmazonRedshift, tOracleRow, tOracleInput, tOracleOutput, and many more.
Extracted appropriate features from data sets and handled bad or partial records using Spark SQL.
Imported data from AWS S3 into Spark RDDs and performed transformations and actions on the RDDs.
Implemented a mechanism to start an EMR cluster based on the input file size using a Lambda function and S3 events.
Experience using the Zeppelin notebook and a Parquet viewer for data analytics and data visualization.
Developed jobs to extract data from Redshift using the UNLOAD command and to load data into Redshift using the COPY command via the tRedshiftRow component (a SQL sketch follows this section).
Built Talend Big Data Spark jobs to process large volumes of data, perform the necessary transformations, and roll up huge raw files.
Experience triggering Spark applications on EMR when files arrive from the client at the source location.
Experience using S3 components such as tS3Get, tS3Put, tS3Copy, tS3List, and tS3Delete.
Monitored and supported the Talend jobs scheduled through Talend Admin Center (TAC).
Involved in creating S3 event-based trigger jobs in Talend.
Experienced using the tCacheIn and tCacheOut components to load data into memory, along with other persistence techniques, to improve job performance.
Involved in deploying jobs across all environments (DEV/QA/UAT/PROD) using Talend Repository Manager (TRM) and in resolving production issues.
Environment: Talend Big Data 6.3.1/7.1.1, AWS Redshift, S3, Aurora RDS, Spark, Oracle 12c, pipe-delimited/positional/Parquet files, SQL Workbench, SQL Developer, PuTTY, WinSCP, FileZilla, Zeppelin, JIRA, SVN, UNIX scripting, Agile.
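A minimal sketch of the Redshift UNLOAD pattern referenced above, of the kind that might be issued through a SQL-executing component; the query, bucket, table, and IAM role are hypothetical placeholders.

    -- Hypothetical Redshift UNLOAD: export query results to S3 as gzip-compressed,
    -- pipe-delimited part files. Bucket, table, and role are placeholders.
    UNLOAD ('SELECT claim_id, member_id, paid_amount FROM dw.fact_claims WHERE load_date = CURRENT_DATE')
    TO 's3://example-bucket/exports/fact_claims/part_'
    IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-unload-role'
    DELIMITER '|'
    GZIP
    PARALLEL ON
    ALLOWOVERWRITE;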
Client: HP, Austin, TX    January 2019 to August 2019
Role: Sr. Talend Developer
Responsibilities:
Interacted with the business team to understand business needs and gather requirements.
Designed target tables per the reporting team's requirements and designed the extraction, transformation, and loading (ETL) using Talend.
Designed and developed end-to-end data integration solutions using Talend and Azure Data Factory for seamless data movement and transformation.
Implemented ETL processes to ingest data from various sources into Azure SQL Database and Azure Data Lake, improving data accessibility and analytics capabilities.
Ensured data quality and integrity by implementing Talend Data Quality rules and Azure data quality services within data pipelines.
Collaborated with cross-functional teams to optimize data workflows and enhance performance using Azure Databricks and Azure Synapse Analytics.
Led troubleshooting efforts to identify and resolve issues in Talend jobs and Azure services, minimizing downtime and ensuring data consistency.
Conducted knowledge-sharing sessions and mentored junior developers on Talend best practices and Azure integration techniques.
Created technical design documents for source-to-stage and stage-to-target mappings.
Worked with Talend Studio (development area) and the Admin Console (admin area).
Created Java routines, reusable transformations, and Joblets using Talend as the ETL tool.
Created complex jobs using components such as tMap, tOracle components, tLogCatcher, tStatCatcher, tFlowMeterCatcher, file delimited components, and error-handling components (tWarn, tDie).
Developed simple to complex MapReduce jobs using Hive and Pig to analyze the data.
Assisted in migrating the existing data center into the AWS environment.
Wrote Hive queries for data analysis and to process data for visualization.
Identified performance issues in existing sources, targets, and jobs by analyzing the data flow and evaluating transformations, and tuned them for better performance.
Managed all technical aspects of the ETL job process with other team members.
Worked with parallel connectors for parallel processing to improve job performance with bulk data sources.
Optimized MapReduce jobs to use HDFS efficiently via various compression mechanisms.
Developed mappings to load fact and dimension tables, SCD Type 1 and SCD Type 2 dimensions, and incremental loads (see the SQL sketch after this section).
Created contexts so that values can be passed from parent jobs to child jobs and from child jobs back to parent jobs throughout the process.
Analyzed and performed data integration using the Talend Cloud hybrid integration suite.
Worked on Joblets (reusable code) and Java routines in Talend.
Performed unit testing, created UNIX shell scripts, and provided on-call support.
Scheduled Talend jobs using Job Conductor (the scheduling tool available in TAC).
Retrieved data from Oracle and loaded it into the SQL Server data warehouse.
Created many complex ETL jobs for data exchange to and from the database server and various other systems, including RDBMS, XML, CSV, and flat file structures.
Created and reviewed scripts to create new tables, views, and queries for new application enhancements using TOAD.
Monitored data quality and generated weekly/monthly/yearly statistics reports on production processes (success/failure rates) for causal analysis as part of maintenance, and enhanced the existing production ETL process.
Developed a high-level data dictionary of ETL data mappings and transformations from a series of complex Talend data integration jobs.
Experienced in establishing Snowflake integration with Talend within the Snowflake cloud data warehouse, enabling the Enterprise Data Governance program to save time when harvesting metadata and performing data quality assessments; this was a large effort done in collaboration with Voya's Technology Risk and Security Management (TRSM) team.
Environment: Talend 6.2.1/6.0.1, Talend Open Studio Big Data/DQ/DI, Talend Administrator Console, Oracle 11g, Teradata V14.0, Hive, HANA, PL/SQL, DB2, XML, Java, ERwin 7, UNIX shell scripting, Oracle 10g, TAC (Admin Center).
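A minimal T-SQL sketch of the SCD Type 2 load pattern named above: expire the current dimension row when tracked attributes change, then insert a new current row. The table and column names (stage.product, dw.dim_product, is_current, effective dates) are hypothetical.

    -- Hypothetical SCD Type 2 load. All object and column names are placeholders.

    -- Step 1: close out current rows whose tracked attributes have changed.
    UPDATE d
    SET    d.effective_end_date = GETDATE(),
           d.is_current         = 0
    FROM   dw.dim_product AS d
    JOIN   stage.product  AS s ON s.product_id = d.product_id
    WHERE  d.is_current = 1
      AND (d.product_name <> s.product_name OR d.category <> s.category);

    -- Step 2: insert a new current row for changed and brand-new products.
    INSERT INTO dw.dim_product
           (product_id, product_name, category, effective_start_date, effective_end_date, is_current)
    SELECT s.product_id, s.product_name, s.category, GETDATE(), NULL, 1
    FROM   stage.product AS s
    LEFT JOIN dw.dim_product AS d
           ON d.product_id = s.product_id AND d.is_current = 1
    WHERE  d.product_id IS NULL;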
Client: Realogy, New Jersey    March 2017 to December 2018
Role: ETL/Talend Developer
Responsibilities:
Worked closely with business analysts to review the project's business specifications and gather the ETL requirements.
Created Talend jobs to copy files from one server to another using the Talend FTP components.
Created and managed source-to-target mapping documents for all fact and dimension tables.
Analyzed the source data to assess data quality using Talend Data Quality (a SQL sketch of this kind of check follows this section).
Involved in writing SQL queries and used joins to access data from Oracle and MySQL.
Assisted in migrating the existing data center into the AWS environment.
Prepared ETL mapping documents for every mapping and a data migration document for a smooth transfer of the project from the development to the testing environment and then to production.
Performed SSAS cube analysis using MS Excel and PowerPivot.
Implemented SQL Server Analysis Services (SSAS) OLAP cubes with dimensional data modeling in star and snowflake schemas.
Designed and implemented ETL to load data from heterogeneous sources into SQL Server and Oracle target databases, covering fact tables and slowly changing dimensions (SCD Type 1 and SCD Type 2).
Utilized big data components such as tHDFSInput, tHDFSOutput, tHiveLoad, tHiveInput, tHBaseInput, tHBaseOutput, tSqoopImport, and tSqoopExport.
Used the most common Talend components (tMap, tDie, tConvertType, tFlowMeter, tLogCatcher, tRowGenerator, tSetGlobalVar, tHashInput, tHashOutput, and many more).
Created many complex ETL jobs for data exchange from and to the database server and various other systems, including RDBMS, XML, CSV, and flat file structures.
Experienced in using Talend's debug mode to debug jobs and fix errors.
Responsible for developing, supporting, and maintaining the ETL (extract, transform, and load) processes using the Talend Integration Suite.
Conducted JAD sessions with business users and SMEs to better understand the reporting requirements.
Developed Talend jobs to populate claims data into the data warehouse star schema.
Used the Talend Admin Console Job Conductor to schedule ETL jobs on a daily, weekly, monthly, and yearly basis.
Worked on various Talend components such as tMap, tFilterRow, tAggregateRow, tFileExist, tFileCopy, tFileList, and tDie.
Worked extensively in the Talend Admin Console and scheduled jobs in Job Conductor.
Analyzed and performed data integration using the Talend Cloud hybrid integration suite.
Environment: Talend Enterprise Big Data Edition 5.1, Talend Administrator Console, MS SQL Server 2012/2008, Oracle 11g, Hive, HDFS, Sqoop, TOAD, UNIX Enterprise Platform for Data Integration.
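Talend Data Quality was the profiling tool named above; as a rough hand-rolled SQL equivalent of the kind of checks involved, the following is a minimal sketch against a hypothetical staging table (stage.listings) with hypothetical columns.

    -- Hypothetical profiling checks on a staging table: row count, null rate on the
    -- business key, duplicate keys, and the observed date range. Names are placeholders.
    SELECT COUNT(*)                                              AS total_rows,
           SUM(CASE WHEN listing_id IS NULL THEN 1 ELSE 0 END)   AS null_listing_ids,
           COUNT(listing_id) - COUNT(DISTINCT listing_id)        AS duplicate_listing_ids,
           MIN(listed_date)                                      AS earliest_listed_date,
           MAX(listed_date)                                      AS latest_listed_date
    FROM   stage.listings;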
Client: First Object Inc.    February 2013 to July 2015
Role: ETL/Informatica Developer
Responsibilities:
Assisted with gathering business requirements and worked closely with various application and business teams to develop the data model and ETL procedures for the data warehouse design.
Designed and developed the star schema model for the target database using ERwin data modeling.
Extensively used the Informatica ETL tool to extract data stored in MS SQL 2000, Excel, and flat files and load it into a single data warehouse.
Used various active and passive transformations, such as Aggregator, Expression, Sorter, Router, Joiner, connected/unconnected Lookup, and Update Strategy, for data control, cleansing, and data movement.
Designed and developed Mapplets for faster development, standardization, and reusability.
Implemented Slowly Changing Dimension Type 1 and Type 2 for inserting and updating target tables to maintain history.
Used the Debugger to validate transformations by creating breakpoints to analyze and monitor the data flow.
Tuned the performance of Informatica sessions by increasing block size, data cache size, sequence buffer length, and the target-based commit interval, and tuned mappings by dropping and re-creating indexes.
Worked with the QA team and provided production support by monitoring the daily processes.
Involved in pre- and post-session migration planning to optimize data load performance.
Interfaced with the Portfolio Management and Global Asset Management groups to define reporting requirements and the project plan for intranet applications for Fixed Income and Equities.
Performed unit testing during the mapping phase to ensure proper and efficient implementation of the transformations.
Environment: Informatica PowerCenter 8.x, Informatica Repository Manager, Oracle 10g/9i, DB2, ERwin, TOAD, UNIX (AIX), PL/SQL, SQL Developer.

Education Details:
Bachelor's in Electronics and Communication Engineering, Amrita Sai Institute of Science and Technology (JNTUK), 2009-2013.
Master's in Computer Science, Silicon Valley University, San Jose, California, August 2015 to December 2016.
