SQL Server Data Engineering Resume - Round Rock, TX

Candidate Information
Title: SQL Server Data Engineering
Target Location: US-TX-Round Rock

Candidate's Name
Email: EMAIL AVAILABLE
Contact: PHONE NUMBER AVAILABLE

Professional Summary:
- 12 years of professional IT experience across data engineering platforms including Snowflake, Hadoop, SQL Server, and MySQL.
- Specializes in implementing big data analytics, cloud data engineering, data quality, and data visualization solutions, with proficiency in AWS and Azure cloud services and Big Data/Hadoop applications.
- Skilled in migrating applications from on-premises to Azure and AWS, system design, functional analysis, and managing cloud databases and data warehouses.
- Experienced with databases such as MySQL and MS SQL Server, SQL development, and building ETL applications with tools such as PySpark and Spark SQL.
- Proficient in multiple programming languages including Python and C, software development methodologies such as Agile (Scrum), building reports and dashboards in Tableau, and working with the Hadoop ecosystem.
- Developed Snowpipes for continuous data ingestion from AWS S3 buckets.
- Migrated tables and views from Redshift to Snowflake, performing rigorous unit tests.
- Used AWS EC2 instances to run SnowSQL scripts that update objects in Snowflake, and employed AWS Glue for automated ETL processes.
- Advanced SQL skills including complex joins, Snowflake stored procedures, clones, views, and materialized views.
- Experience with Snowflake concepts such as virtual warehouses, Snowpipe, stages, Data Sharing, Streams, cloning, Time Travel, and Fail-safe.
- Used COPY/INSERT, PUT, and GET commands to load data into Snowflake tables from internal and external stages.
- Experience working with Tasks, Streams, and Snowflake procedures and functions.
- Handled large and complex data sets such as JSON, ORC, Parquet, and CSV files from sources such as AWS S3.
- Extensive work with Snowflake, handling data migration challenges and leveraging Time Travel features.
- Leveraged Python libraries such as Pandas within Lambda functions for data analysis and validation, and used the Boto3 AWS SDK to interact programmatically with AWS services.
- Responsible for Matillion server start/stop activities, version upgrades, and migrating projects across environments.
- Successfully migrated projects from SQL Server to Azure and Snowflake, ensuring seamless transitions.
- Administered Apache Hadoop clusters, including tools such as Hive, Pig, and Sqoop.
- Knowledge of data pipelines using Flume, Sqoop, and Pig for ingesting customer behavioural data into HDFS.
- Used Python for data extraction, transformation, and loading from transaction systems.
- Actively participated in the validation of database tables, columns, and metadata in Snowflake.
- Collaborative approach with business stakeholders, ensuring alignment on project deliveries; used JIRA to track defects and changes and to keep communication within the team effective.

Technical Skills:
Operating Systems: Windows, Unix
Databases: Snowflake, Redshift (cloud data warehouse), Microsoft SQL Server
Virtualization Platforms: VirtualBox
Project Management Tools: JIRA
Monitoring and Reporting Tools: AWS CloudWatch, Tableau
ETL Tools: Matillion, Pentaho Data Integration
Version Control: Git, Atlassian Bitbucket
Scripting Languages: Python
Big Data Technologies: Hadoop, HDFS 2, Hive, Pig, Sqoop, Flume
Other Tools: Data warehousing, RDBMS, ETL, AWS S3, cloud computing, Jenkins, Azure Blob Storage

Professional Experience:

Project: Pulse-RPAPI
Client: Domino's, USA
Duration: April 2024 to present
Role: Snowflake Lead Developer
Responsibilities:
- Unloaded data from Redshift to an AWS S3 bucket.
- Migrated the database (500+ tables and views) from Redshift to Snowflake.
- Performed unit tests between Redshift and Snowflake.
- Used AWS EC2 instances to run the developed SnowSQL scripts for deploying objects and promoting changes into Snowflake.
- Leveraged Python libraries for data validation within Lambda functions; for example, Pandas for data analysis and validation tasks.
- Loaded data into Snowflake tables from the internal stage using SnowSQL.
- Created config, schema, and SQL files responsible for configuration details, creation of TEMP tables, source and target locations, and the type of file transfer.
- Used AWS Glue to automate the ETL process, simplifying the workflow by automatically discovering, cataloging, and transforming data before loading it into Snowflake.
- Developed ETL pipelines in and out of the data warehouse using SnowSQL, writing SQL queries to load data from S3 into Snowflake (see the sketch after this project).
- Used the Boto3 library, the Python AWS SDK, to interact with AWS services programmatically for managing AWS resources and configurations.
- Created a cron job to automate the daily load from Redshift to Snowflake.
- Performed data validation to ensure that the transferred data is free of errors.
Environment: Snowflake, ETL tools, Redshift, SnowSQL, AWS Glue, AWS Lambda, AWS S3, AWS EC2, Python
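The S3-to-Snowflake loading described in this project typically combines an external stage, a bulk COPY INTO, and a Snowpipe for continuous ingestion. The following is a minimal Snowflake SQL sketch of that pattern; the stage, pipe, table, and storage-integration names are illustrative and not taken from the actual project.

    -- Illustrative names; the storage integration and target table are assumed to exist.
    CREATE STAGE IF NOT EXISTS stg_orders_s3
      URL = 's3://example-bucket/orders/'
      STORAGE_INTEGRATION = s3_int
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

    -- One-off bulk load from the external stage.
    COPY INTO analytics.orders
      FROM @stg_orders_s3
      ON_ERROR = 'ABORT_STATEMENT';

    -- Continuous ingestion: the Snowpipe auto-loads new files as they land in S3.
    CREATE PIPE IF NOT EXISTS pipe_orders AUTO_INGEST = TRUE AS
      COPY INTO analytics.orders FROM @stg_orders_s3;

With AUTO_INGEST, the pipe is driven by S3 event notifications (once the bucket is configured to publish them), so no external scheduler is needed for the incremental loads.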
Project: Sample Tracking
Client: Labcorp Healthcare, North Carolina
Duration: Mar 2023 to Mar 2024
Role: Snowflake Lead Developer
Responsibilities:
- Integrated with different source systems such as Workday and Dayforce and migrated large volumes of data into Snowflake.
- Developed complex SQL scripts to load data into Snowflake environments and processed them.
- Created numerous Snowpipes for loading flat files, along with the stage environments to support them.
- Extensive experience using the OFFSET option of Time Travel for testing and validation purposes.
- Used Time Travel to go back 85 days and recover a missing stored procedure in Snowflake (see the sketch after this project).
- Helped team members implement best practices in Snowflake.
- Responsible for start/stop activities and version upgrades on the Matillion server.
- Migrated projects into different Matillion environments.
- Responsible for handling unusual job activity in the Matillion web UI; migrated jobs between environments using Matillion's import/export components.
- Experience consuming inbound shared data via Snowflake Data Sharing to minimize duplicated work across internal teams.
- Actively participated, together with the data modeler, in validating database tables, columns, and their metadata in Snowflake.
- Experience resizing the Snowflake warehouse whenever required for better performance.
- Processed data from different sources into Snowflake.
- Successfully implemented rollouts to the DEV, UAT, and PROD environments in Snowflake.
- Worked closely with the business to make sure they were comfortable with the delivery.
- Used Git as the source code repository and Jenkins to deploy code.
- Used JIRA to track all defects and changes reported to the team.
Environment: Matillion ETL, Amazon Web Services (AWS), Snowflake, Agile project management, Git, JIRA
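Recovering a dropped object 85 days back, as described above, relies on Snowflake Time Travel and requires the retention period (DATA_RETENTION_TIME_IN_DAYS) on the containing schema or database to cover that window. A minimal sketch, assuming hypothetical schema, table, and procedure names:

    -- Query a table as it existed 85 days ago (OFFSET is in seconds).
    SELECT * FROM lab_dw.samples AT (OFFSET => -85*24*60*60);

    -- Clone the schema as of that point in time; the missing stored procedure
    -- can then be scripted out of the clone and re-created, e.g. via
    -- SELECT GET_DDL('PROCEDURE', 'lab_dw_restore.load_samples(VARCHAR)');
    CREATE SCHEMA lab_dw_restore CLONE lab_dw AT (OFFSET => -85*24*60*60);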
Project: AFSVision - Servicing
Client: Truist Commercial Lending Bank, North Carolina
Duration: Mar 2019 to Feb 2023
Role: Snowflake Developer
Responsibilities:
- Evaluated Snowflake design considerations for any change in the application.
- Built the logical and physical data models for Snowflake as per the required changes.
- Moved data from SQL Server to Azure and Snowflake using COPY options.
- Loaded tables from the data warehouse into Azure Data Lake using the Azure Data Factory integration runtime.
- Migrated database tables and views from Teradata to Snowflake.
- Redesigned views in Snowflake to increase performance.
- Developed a data warehouse model in Snowflake for over 100 datasets.
- Performed unit tests between Teradata and Snowflake.
- Validated the data from SQL Server to Snowflake to make sure it was a 100% match.
- Advocated for Snowflake data platform solution architecture, design, development, and deployment.
- Replaced other data platform technologies with Snowflake with no compromises on performance, scalability, or quality.
- Hands-on experience with Microsoft Azure cloud services, storage accounts, and virtual services.
- Developed SSIS workflows to automate loading data into HDFS and processing it with Hive.
- Created and maintained optimal data pipeline architecture in Microsoft Azure using Data Factory and Azure Databricks.
- Developed a pre-processing job using Spark DataFrames to transform JSON documents into flat files.
Environment: Snowflake, SQL Server, Teradata, Microsoft Azure, Python

Project: CNY New Notes Exchange
Client: DBS Bank, Singapore
Duration: Oct 2017 to Feb 2019
Role: Software Developer
Responsibilities:
- Installed, configured, and maintained Apache Hadoop clusters for application development, along with Hadoop tools such as Hive, Pig, ZooKeeper, and Sqoop.
- Implemented partitioning, dynamic partitions, and buckets in Hive.
- Installed and configured Sqoop to import and export data between Hive and relational databases.
- Administered large Hadoop environments: built and supported cluster set-up, performance tuning, and monitoring in an enterprise environment.
- Closely monitored and analyzed MapReduce job executions on the cluster at the task level and optimized Hadoop cluster components to achieve high performance.
- Developed a data pipeline using Flume, Sqoop, and Pig to ingest customer behavioural data into HDFS for analysis.
- Used Python to extract, transform, and load source data from transaction systems, and generated reports, insights, and key conclusions.
- Developed storytelling dashboards in Tableau Desktop and published them to Tableau Server, allowing end users to explore the data on the fly with quick filters for on-demand information.
- Designed and developed data mapping procedures (ETL: data extraction, data analysis, and loading) for integrating data using R programming.
- Created various Pig scripts and executed them through shell scripts.
- Loaded data into HDFS from different data sources such as Oracle and DB2 using Sqoop, and loaded it into Hive tables.
- Designed and developed Pig Latin scripts and Pig command-line transformations for data joins and custom processing of MapReduce outputs.
Environment: Apache Hadoop, Hive, Flume, Pig, ZooKeeper, Sqoop, HDFS, Python, Tableau, shell scripts, Oracle, DB2, R

Project: UNISON Partner Portal
Client: HP (APJ, NA, EMEA, and LAR regions), Austin
Duration: Apr 2015 to Oct 2015
Role: Software Developer
Responsibilities:
- Responsible for all activities related to the development, implementation, administration, and support of ETL processes for a large-scale Snowflake cloud data warehouse.
- Bulk loaded data from an external stage (AWS S3) into Snowflake using the COPY command.
- Loaded data into Snowflake tables from the internal stage and from the local machine.
- Used COPY, LIST, PUT, and GET commands to validate internal and external stage files.
- Used import and export between the internal stage (Snowflake) and the external stage (S3 bucket).
- Wrote complex SnowSQL scripts in the Snowflake cloud data warehouse for business analysis and reporting.
- Responsible for task distribution among the team.
- Performed troubleshooting analysis and resolution of critical issues.
- Involved in data analysis and handled ad-hoc requests by interacting with business analysts and clients, resolving issues as part of production support.
- Used performance techniques such as clustering keys and autoscaling for faster loading and better query execution.
- Performed data validation for source and target using load history and copy history.
- Involved with TDM masking tools; created and maintained documentation for data masking solutions and processes, and worked with application teams to understand the data and identify protected and restricted data.
- Developed Snowflake queries using various joins: self, left, right, and full outer joins.
- Created views, materialized views, and Snowflake procedures.
- Worked on assigning roles to various users, such as development and automation testing.
- Worked on data sharing as a provider to consumers.
Environment: Snowflake cloud data warehouse, SnowSQL, Oracle 11g, AWS S3, Linux, data warehousing

Project: Scheme Management (Insurance)
Client: Pinter, Malaysia
Duration: Apr 2014 to Mar 2015
Role: Software Developer
Responsibilities:
- Experience loading data from Azure Blob Storage.
- Ingested data from the landing zone into the exploratory zone with various transformations implemented through stored procedures.
- Integrated data into the Snowflake raw and prep layers using external tables, Streams, and Tasks (see the sketch after this project).
- Good experience automating the scheduling of tasks for delta loads.
- Made data modelling changes and additions to the existing data model in the analytical zone.
- Performed transformations on the staged data according to the requirements.
- Created users for different accounts with the relevant roles granted to them.
- Worked on the Snowflake code migration process using Snowpipe.
- Experience in rigorous testing and bug fixing.
- Involved in the deployment process and in delivering quality data within the given time.
- Managed data securely by creating views to be used for the final Power BI report.
Environment: Snowflake cloud data warehouse, Azure Blob Storage
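The Streams-and-Tasks delta-load pattern referenced in the Scheme Management project above generally pairs a stream on the raw table with a scheduled task that moves only new rows into the prep layer. A minimal Snowflake SQL sketch follows; the table, warehouse, and column names are illustrative, not the project's actual objects.

    -- Capture changes (inserts/updates/deletes) on the raw table.
    CREATE OR REPLACE STREAM raw_policy_stream ON TABLE raw.policy;

    -- Scheduled task that applies only the delta to the prep layer;
    -- it runs only when the stream actually has new data.
    CREATE OR REPLACE TASK load_prep_policy
      WAREHOUSE = etl_wh
      SCHEDULE = '60 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW_POLICY_STREAM')
    AS
      INSERT INTO prep.policy (policy_id, holder_name, premium)
      SELECT policy_id, holder_name, premium
      FROM raw_policy_stream
      WHERE METADATA$ACTION = 'INSERT';

    -- Tasks are created suspended; resume to start the schedule.
    ALTER TASK load_prep_policy RESUME;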
Project: Securities and Cash Settlement Cycle
Client: Scotiabank, Canada
Duration: Feb 2011 to Mar 2014
Role: Software Engineer
Responsibilities:
- Loaded data from SAP HANA.
- Loaded data from SQL, Amazon, and Azure stages.
- Performed transformations on the staged data according to the requirements.
- Managed data securely by creating views and secure views (see the sketch at the end of this resume).
- Created users for different accounts with the relevant roles granted to them.
- Worked on migration of the database from SQL to Snowflake.
- Worked on pulling data from SQL into Snowflake using Azure Data Factory.
- Experience in rigorous testing and bug fixing.
- Created mappings, mapping tasks, command tasks, notification tasks, and task flows, and loaded the data.
Environment: SnowSQL command line, web interface, SQL, Informatica Cloud

Education:
M.S. (Master of Science in Computers), Liverpool Hope University, U.K.
B.Tech, Jawaharlal Nehru Technological University, India
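Where the UNISON and Securities and Cash Settlement projects above mention secure views and provider-side data sharing, the usual Snowflake pattern is a secure view over the sensitive table plus a share granted to the consumer account, roughly as sketched below. All database, view, share, and account names here are hypothetical.

    -- Secure view hides the underlying definition and any restricted columns.
    CREATE OR REPLACE SECURE VIEW share_db.public.v_settlements AS
      SELECT trade_id, settle_date, amount
      FROM share_db.public.settlements;

    -- As the data provider, expose the secure view through a share.
    CREATE SHARE settlements_share;
    GRANT USAGE ON DATABASE share_db TO SHARE settlements_share;
    GRANT USAGE ON SCHEMA share_db.public TO SHARE settlements_share;
    GRANT SELECT ON VIEW share_db.public.v_settlements TO SHARE settlements_share;

    -- Add the consumer account (placeholder identifier) to the share.
    ALTER SHARE settlements_share ADD ACCOUNTS = partner_account;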
