Senior Data Engineer Resume - Plano, TX
Candidate's Name
Senior Data Engineer
Phone: PHONE NUMBER AVAILABLE | Email: EMAIL AVAILABLE
Certifications: SnowPro Core Certification

SUMMARY:
- 16+ years of experience in ETL/ELT and Data Engineering across the Retail, Healthcare, Insurance, and Banking domains.
- Strong experience in analysis, design, development, testing, and implementation of Business Intelligence solutions using data warehouse/data mart design.
- Excellent understanding of data warehousing concepts: SCD Type 1/Type 2/Type 3, CDC, dimension and fact tables, and star and snowflake schemas (a conceptual SCD Type 2 sketch follows this summary).
- Strong development experience building batch data pipelines using the Hadoop ecosystem, AWS, Snowflake, Python, shell scripting, and other ETL tools.
- Extensively worked on building data pipelines for data processing using Snowflake and DBT Cloud (Data Build Tool).
- Experience creating SCD1/SCD2 data pipelines using macros written in DBT's Jinja templating language.
- Extensively worked on creating data models for processing data in DBT Cloud (Data Build Tool).
- Hands-on experience creating data pipelines from the data lake (S3) to the data warehouse (Amazon Redshift) using AWS Glue.
- Strong experience with cloud technologies such as Snowflake, AWS (EC2, EMR, S3, IAM, Redshift, Redshift Spectrum, Glue (Crawlers, Data Catalog, Python scripting), Athena, Lambda), Salesforce, and Matillion.
- Strong programming experience with shell scripting and Python, including user-defined functions built with Python objects and lambda expressions for reusable components.
- Extensively worked with Python libraries such as pandas for data analysis and data cleaning.
- Good exposure to exploratory data analysis (data profiling), root cause analysis, and impact analysis on large datasets using Python, SQL, Hive, Snowflake, Redshift, Redshift Spectrum, and AWS Athena.
- Worked on the core Hadoop technology stack, including HDFS, Hive, HBase, Impala, and Kafka.
- Experience in Hive optimization techniques such as static and dynamic partitioning and bucketing on different data formats using internal and external tables.
- Hands-on experience creating AWS Glue pipelines to extract data, load it into S3 and Redshift, and analyze it using AWS Athena.
- Extensively worked on Snowflake for data migration and new ELT transformations to move data from source to target systems.
- Used various steps in Pentaho transformations, including Row Normalizer, Row Denormalizer, Database Lookup, Database Join, Calculator, Add Sequence, and Add Constant, with various input and output types for data sources including tables, text files, Excel, and CSV files.
- Integrated Kettle (ETL) with Hadoop and various NoSQL data stores using the Pentaho Big Data Plugin, a Kettle plugin that provides connectors to HDFS, MapReduce, HBase, Cassandra, MongoDB, and CouchDB across Pentaho Data Integration.
- Good understanding of RDBMS concepts, including writing complex queries, performance tuning, and query optimization.
- Working knowledge of various databases, including Oracle, MS SQL Server, DB2, Greenplum, Netezza, Redshift, Athena, Redshift Spectrum, and Snowflake.
- Strong experience working with ETL and ELT applications.
- Extensively used EXPLAIN PLAN for SQL tuning and created proper indexes.
- Strong experience using various ETL tools, including IBM InfoSphere Information Server 11.7/11.5, QualityStage, Pentaho Kettle 5.0.1, Informatica Power Center 9.6.1 and 10.x, Snowflake, and Matillion.
- Worked on the data ingestion process and other components for transforming data during the migration from Oracle to Snowflake using Matillion.
- Knowledge of HIPAA standards and EDI (Electronic Data Interchange) X12 transactions 820, 834, 835, and 837.
- Implementation knowledge of HIPAA code sets and ICD-9 and ICD-10 coding.
- Good working knowledge of claims processing, HIPAA regulations, and 270, 271, 834, 835, and 837 X12 EDI transactions for the healthcare industry.
- Experience in various MMIS subsystems: Claims, Provider, Recipient, Procedure Drug and Diagnosis (PDD), and Explanation of Benefits (EOB).
- Experience in data validation of 837 (Health Care Claims or Encounters), 835 (Health Care Claims Payment/Remittance), 270/271 (Eligibility Request/Response), and 834 (Enrollment/Dis-enrollment to a Health Plan) for data feeds and data mappings.
- Knowledge of the SAP HANA database.
- Developed and implemented a comprehensive data governance framework, including data classification, ownership, and stewardship, resulting in a 30% improvement in data quality and security.
- Worked with cross-functional teams to define and document data governance requirements and objectives, leading to a 25% increase in data governance compliance across the organization.
- Established data governance metrics and reporting mechanisms to track the effectiveness of data governance efforts, leading to a 15% increase in data governance transparency and accountability.
- Collaborated with legal and compliance teams to ensure data governance practices align with regulatory requirements and industry standards, resulting in a 10% reduction in legal and compliance risks.
- Experience creating data governance policies, business glossaries, data dictionaries, reference data, metadata, data lineage, and data quality rules.
- Worked on linking data lineage to data quality and business glossary work within the overall data governance program.
- Collaborated with business analysts, product owners, and data architects to identify data usage patterns, formulate business names, definitions, and data quality rules for data elements, and incorporate data security for role-based access.
- Worked closely with data architects to ensure data quality solutions adhere to established data governance policies and standards as well as enterprise security and privacy requirements.
- Experience building data lineage using IBM InfoSphere.
- Able to handle multiple projects to meet business needs and provide 24/7 production support.
- Team player with a strong sense of ownership.
- Highly experienced in handling offshore and onshore resources.
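The summary above mentions SCD Type 1/Type 2 handling and CDC in several tools (DBT macros, Snowflake, DataStage). As a rough, tool-agnostic illustration of the Type 2 pattern only, here is a minimal Python/pandas sketch; the function signature and column names (is_current, effective_from, effective_to) are hypothetical and not taken from any project described in this resume.

```python
# Minimal SCD Type 2 sketch (illustrative only; table layout is hypothetical).
import pandas as pd

def apply_scd2(dim, incoming, key, attrs, load_date):
    """Expire the current version of changed rows and append new versions."""
    current = dim[dim["is_current"]][[key] + attrs]
    merged = incoming.merge(current, on=key, how="left",
                            suffixes=("", "_cur"), indicator=True)

    is_new = merged["_merge"] == "left_only"
    is_changed = (~is_new) & merged.apply(
        lambda r: any(r[a] != r[f"{a}_cur"] for a in attrs), axis=1
    )
    upsert_keys = merged.loc[is_new | is_changed, key]

    # Close out the currently active versions of changed keys.
    to_close = dim[key].isin(merged.loc[is_changed, key]) & dim["is_current"]
    dim.loc[to_close, "is_current"] = False
    dim.loc[to_close, "effective_to"] = load_date

    # Append fresh versions for new and changed keys.
    inserts = incoming[incoming[key].isin(upsert_keys)].copy()
    inserts["effective_from"] = load_date
    inserts["effective_to"] = None
    inserts["is_current"] = True
    return pd.concat([dim, inserts], ignore_index=True)
```

In DBT or Snowflake the same close-and-insert logic would typically be expressed as a MERGE or a snapshot/macro rather than in pandas; the sketch only shows the shape of the pattern.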
Education:
M.B.A. (I.T.), Sikkim Manipal University, 2007
Bachelor of Computer Applications, Osmania University, 2002

Training and Certification:
SnowPro Core Certification, 2023
Completed Airflow training on Udemy, 2024

TECHNICAL SKILLS:
ETL/ELT Tools: IBM InfoSphere Information Server 11.7, 11.5, 9.x, 7.x; Pentaho Kettle 5.0.1; Informatica Power Center 9.6.1, 10.x; Hive; Impala; Amazon Redshift; Redshift Spectrum; DBT Cloud (Data Build Tool); Snowflake; Matillion
Operating Systems: Windows 8/7/2000/XP/NT, UNIX, MS-DOS, LINUX
Databases: Oracle 10g, IBM DB2, MS SQL Server 2012/2008, Netezza, Greenplum, AWS Redshift, AWS Redshift Spectrum
Programming Skills: PL/SQL, T-SQL, SQL, Shell Scripting, Python
Scheduling Tools: Control M, Automic, Autosys, Windows Scheduler, Airflow
Methodologies: Waterfall, Agile Scrum, Kanban
Big Data Tools: HDFS, Hive, HBase, Kafka
Cloud Technologies: AWS (EC2, S3, EMR, Glue, Athena, Lambda, CloudWatch, SNS, SQS), Salesforce, Snowflake, DBT, Kafka
Domains: Retail, Healthcare, Insurance, Banking
Health Care: HIPAA Standards, HL7, EDI transactions, EHR
Process Improvement: Data Quality, Data Governance, Data Stewardship
WORK EXPERIENCE:

Company: Galax-E-Systems Corporation, June 2020 - Present
Client: USAA, Plano, TX
Role: Senior Data Engineer
Project: MDS C360

Project Description:
The United Services Automobile Association (USAA) is a San Antonio-based Fortune 500 diversified financial services group of companies, including a Texas Department of Insurance-regulated reciprocal inter-insurance exchange and subsidiaries offering banking, investing, and insurance products. The current project, MDS C360, is mainly focused on collections for various products such as Deposits and Credit Cards from members and member families who have exceeded their payment due dates for credit card payments or overdraft amounts on deposit accounts.

Responsibilities:
- Involved in discussions with business users to understand business requirements and translate them into technical documentation.
- Worked with the Data Modeling team to evaluate design considerations for existing and new applications (migration and development).
- Created data pipelines for moving data from various data sources to AWS S3 using Python (Boto3 module) and AWS Glue.
- Created AWS Lambda functions and S3 event notifications to move data in S3 into various folders (Raw, Processed, and Warehouse).
- Extensively used Athena to analyze the data in S3 to find patterns and data anomalies.
- Worked on the data migration from the existing Netezza database to Snowflake by creating Snowflake external stages, Snowpipe, and Snowflake streams.
- Created Snowflake streams on landing tables to implement the CDC process and implemented SCD Type 1/Type 2 in Snowflake (a minimal sketch follows this section).
- Used SnowSQL to load data from local files into Snowflake internal stages during QA activities.
- Extensively used Time Travel to fix data issues, create backups, and clone objects based on business requirements.
- Extensively used the Zero Copy Cloning feature to create datasets for the QA team for data validation.
- Created Snowflake objects such as databases, schemas, tables, views, stages, file formats, Snowpipe, shares, warehouses, and streams.
- Created Snowpipe for continuous data loads of source files coming from other applications.
- Created shares for reusable objects shared with other teams across two Snowflake accounts.
- Used various table types for landing, staging, and warehouse tables depending on the business use case.
- Worked extensively on creating various warehouses for running DDL scripts, data loads, and heavy data processing depending on project requirements.
- Extensively used Python to create user-defined functions and reusable code required by the business.
- Worked on DBT models and Git to move data from the Snowflake source system (data lake) to other databases (GPM and SAM).
- Created various macros for processing incremental data for SCD1/SCD2 scenarios in DBT Cloud (Data Build Tool).
- Built reusable macros for validating data from source to target systems.
- Created models (Src, Stg, Int, and Load) for processing data from raw format to structured format using DBT.
- Created Control M scheduling cycles for DBT models.
- Good experience in onsite and offshore coordination.
- Identified the root cause of data quality issues and created and managed a project plan to remedy them through data transformations and business process improvements.
- Worked closely with data architects to ensure data quality solutions adhere to established data governance policies and standards as well as enterprise security and privacy requirements.
- Worked on a comprehensive data governance framework, including data classification, ownership, and stewardship, resulting in a 30% improvement in data quality and security.
- Collaborated with cross-functional teams to define and document data governance requirements and objectives, leading to a 25% increase in data governance compliance across the organization.
- Led data governance initiatives, such as data privacy and protection, data retention, and data lifecycle management, resulting in a 20% improvement in regulatory compliance.
- Established data governance metrics and reporting mechanisms to track the effectiveness of data governance efforts, leading to a 15% increase in data governance transparency and accountability.
- Worked with legal and compliance teams to ensure data governance practices align with regulatory requirements and industry standards, resulting in a 10% reduction in legal and compliance risks.
- Experience implementing financial tools, data governance, data lineage, and data quality.
- Decomposed and documented business lineage and technical lineage.
- Highlighted data quality issues and gaps in system controls.
- Worked on linking data lineage to data quality and business glossary work within the overall data governance program.
- Developed data validation rules and scorecards to monitor the quality of source data in the customer and asset domains.
- Experience building data lineage using IBM InfoSphere.
- Provided excellent 24/7 production support.
- Coordinated with cross-functional teams to resolve production issues and provide solutions within the defined SLA.

Work Environment: IBM InfoSphere DataStage (versions 11.5, 11.7), Shell Scripting, Control M, ServiceNow, Jira, Netezza, Salesforce, AWS S3, Glue, Athena, Snowflake, Python, DBT.
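The USAA work above describes CDC with Snowflake streams feeding warehouse loads. The snippet below is a minimal sketch, assuming the snowflake-connector-python client and hypothetical object names (LANDING.CUSTOMERS, its stream, WAREHOUSE.DIM_CUSTOMER); it illustrates the stream-consume-and-merge idea and is not the project's actual code.

```python
# Sketch: consume a Snowflake stream to merge CDC changes into a warehouse table.
# Connection parameters and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="LOAD_WH", database="EDW", schema="LANDING",
)
cur = conn.cursor()
try:
    # A stream on the landing table tracks rows changed since the last consume.
    cur.execute("CREATE STREAM IF NOT EXISTS CUSTOMERS_STREAM ON TABLE LANDING.CUSTOMERS")

    # Consuming the stream inside DML advances its offset (SCD Type 1 style merge shown).
    cur.execute("""
        MERGE INTO WAREHOUSE.DIM_CUSTOMER tgt
        USING (SELECT * FROM CUSTOMERS_STREAM WHERE METADATA$ACTION = 'INSERT') src
           ON tgt.CUSTOMER_ID = src.CUSTOMER_ID
        WHEN MATCHED THEN UPDATE SET tgt.NAME = src.NAME, tgt.SEGMENT = src.SEGMENT
        WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, NAME, SEGMENT)
                              VALUES (src.CUSTOMER_ID, src.NAME, src.SEGMENT)
    """)
finally:
    cur.close()
    conn.close()
```

A Type 2 variant would close the existing dimension row and insert a new version instead of updating in place, as outlined in the earlier pandas sketch.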
Company: Fideliscare/Centene Corp, Buffalo, NY, June 2018 - May 2020
Role: Sr Application Software Engineer
Project: CCM, NIA Magellan

Project Description:
Fideliscare is an insurance company funded through the federal government's Medicare and Medicaid plans. Fideliscare has about 1.7M active members enrolled in various Medicare and Medicaid plans across New York State. NIA Magellan is a third-party vendor that provides services related to pre-authorizations for all active members. Fideliscare provides all active members to NIA Magellan in EDI format. This information is used by the vendor to provide pre-authorizations and post-authorizations for various medical procedures during hospitalization. The vendor returns approval, denial, or authorization-under-review decisions based on rules set by Fideliscare. The results are sent back as files, which Fideliscare processes to update various source systems.

Responsibilities:
- Involved in discussions with vendors to understand their core business and the value added by this project.
- Analyzed the source systems and created specification documents based on business needs.
- Designed source-to-target mappings using a star schema and implemented logic for Slowly Changing Dimensions.
- Created mappings to convert source file data into messages on Kafka topics using the Kafka Connector in DataStage (a rough Python equivalent is sketched after this section).
- Involved in peer testing and integration testing, and coordinated performance testing.
- Fixed bugs identified during production runs.
- Performed code reviews to enforce standard practices per company requirements.
- Performed performance tuning of ETL (Extract, Transform, and Load) jobs and databases for better response times; analyzed data models for cross-verification and performed data profiling for cross-domain integration.
- Handled pre-implementation and post-implementation support activities.
- Excellent skills in running backlog grooming, sprint planning, and retrospective activities.
- Highly skilled in onsite and offshore coordination and running daily scrums.
- Single point of contact for onshore and offshore communications.
- Exceptional skills in providing 24/7 production support activities.
- Highly skilled in cross-functional team coordination to resolve production issues and provide solutions within the defined SLA.
- Able to lead multiple teams and work as an individual contributor with minimal guidance, depending on business needs.
- Processed data based on HIPAA standards and EDI (Electronic Data Interchange) X12 transactions 820, 834, 835, and 837.
- Implemented HIPAA code sets and ICD-9 and ICD-10 coding.
- Extensively worked on claims processing, HIPAA regulations, and 270, 271, 834, 835, and 837 X12 EDI transactions for the healthcare industry.
- Worked on various MMIS subsystems: Claims, Provider, Recipient, Procedure Drug and Diagnosis (PDD), and Explanation of Benefits (EOB).
- Experience in data validation of 837 (Health Care Claims or Encounters), 835 (Health Care Claims Payment/Remittance), 270/271 (Eligibility Request/Response), and 834 (Enrollment/Dis-enrollment to a Health Plan) for data feeds and data mappings.
- Worked with key stakeholders to improve process efficiencies and the overall satisfaction of internal customers.
- Worked autonomously, applying judgment and decision-making when monitoring workflow and handling new data requests, changes, and deletions.
- Managed business priorities while driving process improvements and data quality enhancements.
- Worked alongside the data protection officer to understand the regulatory environment and assess future issues and risks regarding data capture and management.
- Performed data integrity checks, reported anomalies, and worked with development teams on resolution.
- Ensured business rules were adhered to when data was created, updated, deleted, retired, and archived by performing data quality control activities.
- Uploaded membership files to MDM, created related reports, and worked with divisions to review the data.
- Implemented automated processes and checks to identify missing data elements and correct the data per compliance requirements.
- Collaborated with the Enterprise Risk function to understand the enterprise risk framework (risk principles, risk standards, etc.) and to ensure that consolidated risk data from the investment boutiques is consistent with that framework.

Work Environment: IBM InfoSphere DataStage (version 11.7), Unix Scripting, Flat Files, Automic, ServiceNow, Jira, Oracle, Greenplum, SQL Server.
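The mappings above publish source-file records to Kafka topics via the DataStage Kafka Connector. Purely to illustrate the file-to-topic flow, here is a rough Python equivalent; the topic name, file layout, and the kafka-python library are assumptions, not the project's implementation.

```python
# Illustrative only: push delimited member records from a file to a Kafka topic.
# Topic and file names are hypothetical; the project used the DataStage Kafka Connector.
import json
from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

with open("members_extract.txt") as fh:
    for line in fh:
        member_id, plan_code, effective_date = line.rstrip("\n").split("|")
        producer.send(
            "member-preauth-feed",
            {"member_id": member_id, "plan": plan_code, "effective": effective_date},
        )

producer.flush()  # ensure all buffered messages reach the brokers before exiting
```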
Company: CIIT, Nov 2017 - May 2018
Client: Ally Financial Inc, Charlotte, NC
Role: Software Developer
Project: Auto Advantage

Project Description:
The Auto Advantage project supports auto loans for retail customers. It has many interfaces with source systems such as customer agreements, loans, collections, and recovery. The main purpose of this project is to migrate the existing mainframe source systems to a Java-based application known as ALFA. InfoSphere Information Server plays a vital role in processing the various source system files to and from ALFA. The project is currently in the SIT phase.

Responsibilities:
- Involved in understanding the specification documents and the overall business process.
- Supported defect fixing and modified the existing ETL code for new change requests from business users.

Work Environment: InfoSphere Information Server 11.5, Oracle 12c, UNIX, SQL Developer.

Company: Deloitte, India, April 2013 - Sep 2017
Role: Sr Consultant
Project: Research Trust

Project Description:
The Converge HEALTH Research Trust is a repository for integrating a variety of medical research data from sources such as clinical trials, biobanks, tumor registries, pathology reports, and LIMS systems. It also includes packaged ETL procedures to help transform the raw source data into an integrated set of production-ready data. When processing is complete, the data is ready for additional processing (such as data aggregation and de-identification of PHI) and for loading into an appropriate data mart by a Converge HEALTH, third-party, or custom tool. In Research Trust, subject privacy is protected by standardized subject de-identification packages. Research Trust is the Converge HEALTH repository for clinical data. By integrating information about the medical services that a healthcare organization provides to its patients, along with a variety of associated financial and organizational information, the Research Trust allows healthcare organizations to improve quality of care and reduce costs.

Responsibilities:
- Involved in understanding business requirements, analyzing the source systems, and creating specification documents based on business needs.
- Performed data analysis using Python pandas to identify data anomalies and cleanse data.
- Created Python UDFs to implement business logic as reusable components.
- Worked on AWS Glue to move data from various sources to S3 and AWS Redshift by creating crawlers and Data Catalogs.
- Created AWS Lambda functions and S3 event notifications to move data in S3 into various folders.
- Used Python Boto3 libraries to move data from Hadoop to S3.
- Analyzed the data in S3 using AWS Redshift Spectrum tables and Athena to provide data insights to the business and reporting teams (a small illustrative sketch follows this section).
- Designed and implemented ETL solutions for heterogeneous sources using various ETL tools such as Pentaho Kettle, InfoSphere Information Server, Informatica, and AWS Glue.
- Strong experience writing SQL and PL/SQL queries in various databases such as Oracle, SQL Server, AWS Redshift, Athena, and Redshift Spectrum.
- Worked on analytical solutions for gaining insights into large data sets by ingesting and transforming the data in a big data environment using technologies such as Hive and Impala.
- Developed Hive queries per business and reporting requirements to load data from source files into Hive tables.
- Worked on Hive optimization techniques such as static and dynamic partitioning and bucketing on different data formats using internal and external tables.
- Implemented incremental loading in Pentaho Kettle, InfoSphere Information Server, Informatica, Hadoop Hive, Impala, and Amazon Redshift.
- Created various SQL objects such as tables, views, sequences, synonyms, table partitions, and sub-partitions, and wrote complex queries.
- Reviewed explain plans and fine-tuned SQL queries, introducing the required indexes as part of performance tuning and query optimization.
- Wrote advanced PL/SQL and T-SQL code, including stored procedures, functions, cursors, triggers, and materialized/indexed views.
- Created and modified several UNIX shell scripts according to the changing needs of the project and client requirements.
- Worked on scheduling tools such as Control M.
- Worked on multiple projects simultaneously.
- Worked on end-to-end implementation of various projects through the complete SDLC using different ETL tools.
- Extensively worked on MDM to identify the master record based on rules defined in the business rule specification.

Work Environment: Pentaho Kettle 5.0.1, InfoSphere Information Server 9.1, Informatica 9.6.1, 10.x, Hadoop Hive, Impala, AWS S3, IAM, EC2, EMR, Amazon Redshift, Glue, Athena, Python, MS SQL Server Management Studio 2012, UNIX shell scripting, Windows, SQL Developer, Oracle 10g.
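The Research Trust work above lands files in S3 and queries them through Athena over tables registered by Glue crawlers. The following is a minimal boto3 sketch of that flow; the bucket, database, table, and output location are hypothetical placeholders, not the project's actual resources.

```python
# Sketch: stage a file in S3 and run an ad-hoc Athena query over the crawled table.
# Bucket, database, table, and output location are hypothetical.
import boto3

s3 = boto3.client("s3")
athena = boto3.client("athena", region_name="us-east-1")

# Land the extract in the raw zone of the data lake.
s3.upload_file("daily_labs.csv", "research-trust-raw", "labs/dt=2024-01-15/daily_labs.csv")

# Query the table that a Glue crawler registered in the Data Catalog.
resp = athena.start_query_execution(
    QueryString="SELECT subject_id, COUNT(*) AS results FROM labs GROUP BY subject_id",
    QueryExecutionContext={"Database": "research_trust"},
    ResultConfiguration={"OutputLocation": "s3://research-trust-athena-results/"},
)
print("Athena query started:", resp["QueryExecutionId"])
```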
Company: Sapient Corporation, Nov 2011 - Feb 2013
Role: Sr Associate Platform 1
Project: IGHS - Tesco

Responsibilities:
- Analyzed, designed, developed, implemented, and maintained parallel jobs using IBM InfoSphere Information Server.
- Worked on SCDs to populate Type I and Type II slowly changing dimension tables from several operational source files.
- Experienced in PX file stages, including the Complex Flat File stage, Dataset stage, and Sequential File stage.
- Implemented shared containers for multiple jobs and local containers within the same job as per requirements.
- Adept knowledge and experience in mapping source-to-target data using IBM InfoSphere Information Server 8.x.
- Implemented multi-node declarations using configuration files (APT_CONFIG_FILE) for performance enhancement.
- Experienced in developing parallel jobs using various development/debug stages (Peek, Head & Tail, Row Generator, Column Generator, Sample) and processing stages (Aggregator, Change Capture, Change Apply, Filter, Sort & Merge, Funnel, Remove Duplicates).
- Debugged, tested, and fixed the transformation logic applied in the parallel jobs.
- Created UNIX shell scripts for database connectivity and for executing queries during parallel job execution.
- Used the DataStage Director to schedule and run jobs, test and debug their components, and monitor performance statistics.
- Implemented pipeline and partitioning parallelism techniques and ensured load balancing of data.
- Deployed different partitioning methods such as Hash by column, Round Robin, Entire, Modulus, and Range for bulk data loading and performance boosts (a conceptual sketch follows this section).
- Repartitioned job flows based on the best available InfoSphere Information Server PX resources.
- Worked on designing and developing QualityStage jobs.
- Worked with the Data Warehouse team in developing the dimensional model.
- Worked on the complete SDLC, from extraction and transformation to loading of data using DataStage.
- Responsible for tuning ETL procedures and star schema and snowflake schema designs to optimize load and query performance.
- Worked on multiple projects simultaneously and supported multiple applications after hours.
- Updated existing documentation and created new documentation for ETL mappings and migration.
- Deployed code across environments.
- Proficient in MS SQL and in writing complex SQL queries.
- Worked on UNIX shell scripts.
- Worked with UNIX teams and DBAs, and provided DBAs with queries to execute.
- Worked with business users to gather requirements.

Work Environment: DataStage 9.1, MS SQL Server Management Studio 2008, UNIX, Windows.
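The parallel-job work above relies on DataStage partitioning methods (Hash, Round Robin, Entire, Modulus, Range). Purely as a conceptual analogy of why key-based partitioning matters for stages like Aggregator or Remove Duplicates, here is a small Python sketch of hash versus round-robin row distribution; it is not DataStage code and the row layout is invented.

```python
# Conceptual sketch of two DataStage-style partitioning methods (not DataStage code).
from itertools import cycle

def hash_partition(rows, key, n_parts):
    """Rows with the same key land in the same partition (needed for joins/aggregations)."""
    parts = [[] for _ in range(n_parts)]
    for row in rows:
        parts[hash(row[key]) % n_parts].append(row)
    return parts

def round_robin_partition(rows, n_parts):
    """Even spread with no key affinity (useful for balanced bulk loads)."""
    parts = [[] for _ in range(n_parts)]
    targets = cycle(range(n_parts))
    for row in rows:
        parts[next(targets)].append(row)
    return parts

rows = [{"cust_id": i % 3, "amount": i * 10} for i in range(9)]
print(hash_partition(rows, "cust_id", 4))
print(round_robin_partition(rows, 4))
```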
Company: ITC InfoTech India Pvt Ltd, Oct 2010 - Nov 2011
Role: Associate IT Consultant
Project: Danske Bank

Responsibilities:
- Extensively used InfoSphere Information Server for extracting, transforming, and loading data from sources including Oracle, DB2, and flat files.
- Collaborated with the EDW team on high-level design documents for the extract, transform, validate, and load (ETL) process, including data dictionaries, metadata descriptions, file layouts, and flow diagrams.
- Collaborated with the EDW team on low-level design documents for mapping files from source to target and implementing business logic.
- Generated surrogate keys for the dimension and fact tables for indexing and faster access of data in the data warehouse.
- Tuned transformations and jobs for performance enhancement.
- Extracted data from flat files, transformed it according to requirements, and loaded it into target tables using stages such as Sequential File, Lookup, Aggregator, Transformer, Join, Remove Duplicates, Change Capture Data, Sort, Column Generator, Funnel, and Oracle Enterprise.
- Created batches (DS job controls) and sequences to control sets of jobs.
- Collaborated in design testing using HP Quality Center.
- Extensively worked on job sequences to control the execution of the job flow using various activities and triggers (conditional and unconditional), such as Job Activity, Wait For File, Email Notification, Sequencer, Exception Handler, and Execute Command.
- Developed DataStage jobs to populate data into staging and the data mart.
- Executed jobs through sequencers for better performance and easier maintenance.
- Performed unit testing for developed jobs to ensure they met the requirements.
- Modified UNIX shell scripts to automate file manipulation and data loading procedures.
- Responsible for daily verification that all scripts, downloads, and file copies were executed as planned, troubleshooting any steps that failed and providing both immediate and long-term problem resolution.
- Provided technical assistance and support to IT analysts and the business community.

Environment: DataStage 9.1, PL/SQL, Windows XP, UNIX, SQL Server.

Company: Target Corporation India Pvt Ltd, July 2007 - Aug 2010
Role: ETL Developer
Project: TGT100

Responsibilities:
- Worked on InfoSphere Information Server Designer, Manager, Administrator, and Director.
- Worked with business analysts and DBAs on requirements gathering, analysis, testing, metrics, and project coordination.
- Extracted data from different data sources such as Oracle and flat files.
- Performed data profiling using InfoSphere Information Analyzer.
- Created and maintained sequencer and batch jobs.
- Created ETL job flow designs.
- Used ETL to load data into the Oracle warehouse.
- Created standard/reusable jobs in InfoSphere Information Server using active and passive stages such as Sort, Lookup, Filter, Join, Transformer, Aggregator, Change Capture Data, Sequential File, and Datasets.
- Involved in the development of job sequencing using the Sequencer.
- Used the Remove Duplicates stage to remove duplicates in the data.
- Used the Designer and Director to schedule and monitor jobs and to collect performance statistics.
- Extensively worked with database objects including tables, views, indexes, schemas, PL/SQL packages, stored procedures, functions, and triggers.
- Created local and shared containers to facilitate ease of use and reuse of jobs.
- Implemented the underlying logic for Slowly Changing Dimensions.
- Worked with developers to troubleshoot and resolve issues in job logic as well as performance.
- Documented ETL validations based on design specifications for unit, system, and functional testing; prepared test data for testing, error handling, and analysis.
- Involved in migrating code from Development to QA.
- Used PL/SQL procedures to fill gaps by implementing the business needs for all the interfaces.

Environment: DataStage 7.5.2, PL/SQL (stored procedures, triggers), Windows XP, UNIX.

Company: Dell India Pvt Ltd, Sep 2004 - July 2006
Role: Senior Technical Support Engineer

Responsibilities:
- Worked in the EMEA voice process, assisting customers from the United Kingdom, the Middle East, and Asian countries.
- Troubleshot each part of desktop computers, including opening the system box and removing and installing parts.
- Identified hardware problems and shipped the required parts to fix the issue.
- Troubleshot software issues such as virus removal, email settings, inability to log into email accounts, and fixing blue screen errors.
