Candidate Information
Title: Business Intelligence SQL Server
Target Location: US-FL-Tampa
Candidate's Name
Senior Data Engineer
Email Id: EMAIL AVAILABLE
Phone No: PHONE NUMBER AVAILABLE

Professional Summary:
Qualified IT professional with 10 years of experience in data analysis, data warehouse concepts, and the Hadoop ecosystem, with strong technical expertise in SQL, Python scripting, Hadoop technologies, and AWS cloud services.
Uses business intelligence tools such as Business Objects and data visualization tools such as Tableau and Power BI; worked with several Azure services, such as Data Lake, to store and analyze data.
Experience designing business intelligence solutions using Microsoft SQL Server 2008 and 2012.
Experience across the layers of the Hadoop framework - storage (HDFS), analysis (Pig and Hive), and engineering (jobs and workflows) - extending functionality by writing custom UDFs; strong experience with Hadoop distributions such as Hortonworks and Cloudera.
Hands-on experience with Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and Hadoop MapReduce programming.
Extensive knowledge of data architecture, including pipeline design, data ingestion, Hadoop/Spark architecture, data modeling, data mining, machine learning, and advanced data processing.
Excellent programming skills in PL/SQL, SQL, and Oracle; extensive experience developing data warehouse applications using Hadoop, Informatica, Oracle, Teradata, and MS SQL Server on UNIX and Windows platforms, creating complex mappings with various transformations and developing extraction, transformation, and loading (ETL) strategies in Informatica.
Experience designing error and exception handling techniques for detecting, recording, and reporting errors.
Extensive experience writing T-SQL stored procedures, triggers, functions, tables, views, indexes, and relational database models.
Experience writing unit tests and smoke tests using the ScalaTest framework.
Helped individual teams set up their repositories in Bitbucket, maintain their code, and configure jobs that use the CI/CD environment.
Expert in designing ETL data flows, creating mappings/workflows to extract data from SQL Server, and performing data migration and transformation from Oracle/Access/Excel sheets using SQL Server SSIS.
Experienced with distributed computing architectures such as AWS products (e.g., S3, EC2, Redshift, and EMR) and Hadoop, and effective use of MapReduce, SQL, and Cassandra to solve big data problems.
Designed and developed a security framework providing fine-grained access to objects in AWS S3 using AWS Lambda and DynamoDB; solid programming knowledge of Python and shell scripting.
Implemented automated data workflows and ETL processes using AWS Lambda functions and AWS Step Functions, streamlining operations and reducing manual effort.
Extracted, transformed, and loaded data from different formats such as JSON and databases, exposing it for ad-hoc/interactive queries using Spark SQL; good working experience with relational databases such as MySQL and Oracle.
Demonstrated expertise in leveraging Spring JMS, Spring Security, Spring Data, and Spring Integration to develop robust, scalable enterprise applications, streamlining application development and improving overall system performance.
Developed a framework for converting existing PowerCenter mappings to Spark (Python/Spark) jobs; domain knowledge of finance, logistics, and health insurance.
Experience with Palantir Foundry and data warehouses (SQL Azure and Confidential Redshift/RDS).
Familiar with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling, data mining, machine learning, and advanced data processing.
Experience with Looker, a cloud-based data analytics platform that provides business intelligence for data exploration and visualization over sources such as Google BigQuery, Amazon Redshift, and more.
Experience in Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analysis Services, Azure Cosmos DB (NoSQL), and Databricks; experience developing, supporting, and maintaining ETL (extract, transform, load) processes using Informatica.
Collaborate with cross-functional teams to integrate Elastic Path solutions into existing systems and workflows, ensuring seamless functionality and performance.
Provide technical guidance and mentorship to junior team members, fostering skill development and knowledge sharing in Elastic Path implementations.
An outstanding team player and technically strong individual who collaborates with business users, project managers, team leads, architects, and peers to keep projects running well.
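
The summary above mentions loading JSON data and exposing it for ad-hoc/interactive queries through Spark SQL. A minimal PySpark sketch of that pattern, under assumed details, is shown below; the input path, view name, and columns (order_date, amount) are hypothetical placeholders rather than specifics from this resume.

    # Minimal sketch: ingest JSON and expose it to ad-hoc Spark SQL queries.
    # Path, view name, and column names are assumptions for illustration only.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("json_adhoc_queries").getOrCreate()

    # Read semi-structured JSON into a DataFrame (hypothetical location).
    orders = spark.read.json("/data/raw/orders/")

    # Register a temporary view so analysts can query it with Spark SQL.
    orders.createOrReplaceTempView("orders")

    daily_totals = spark.sql("""
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """)
    daily_totals.show()
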
Technical Skills:
Operating Systems: Windows, Unix/Linux
Scripting Languages: Cassandra, Python, Scala, Ruby on Rails, Bash, PowerShell
ETL Tools: SQL Server Integration Services, Informatica, AWS, GCP, Snowflake
Cloud Platforms: AWS (S3, EC2, EMR), Azure Data Factory, Azure Data Lake, GCP Cloud Storage, BigQuery, Composer, Cloud Dataproc, Cloud SQL, Cloud Functions, Cloud Pub/Sub
Big Data Ecosystems: Hadoop, MapReduce, HDFS, HBase, ZooKeeper, Hive, Sqoop, Cassandra, Oozie, Storm, Flume
Big Data Technologies: Spark, Azure Storage, Azure Database, Azure Data Factory, Azure Analysis Services
Methodologies: Agile, Waterfall
Databases: IBM DB2, Netezza, Oracle, SQL Server, Teradata, Cassandra, Snowflake, Sybase

Professional Experience:

Client: FannieMae, June 2022 - Present
Role: ETL Developer
Led the installation, configuration, and deployment of product software on new edge nodes that connect to the Kafka cluster for data acquisition.
Designed the high-level ETL architecture for overall data transfer from the source server to the Enterprise Services Warehouse.
Created and optimized complex SQL queries, stored procedures, functions, and triggers for data extraction, transformation, and loading (ETL) processes.
Conducted extensive research and experimentation with different generative architectures to optimize model performance and reduce training time by 20%.
Implemented robust data validation processes to ensure data accuracy and consistency, reducing data discrepancies by 30%.
Conducted regular security audits and implemented best practices for data protection, access control, and encryption.
Collaborated with development teams to refactor and migrate legacy applications to cloud-native architectures.
Provided technical guidance and training to junior engineers on AWS best practices and cloud integration strategies.
Managed data loading processes for large datasets into data warehouses and databases, ensuring timely and accurate data availability.
Managed Credential Stream within Epic, ensuring accurate and secure user credentialing and role assignments for healthcare professionals.
Implemented and maintained security protocols and compliance measures for Credential Stream, adhering to HIPAA and other regulatory standards.
Identified and resolved data discrepancies and errors in ETL processes, ensuring accurate and reliable data for business analysis.
Designed, implemented, and managed robust ETL processes using tools such as SSIS (SQL Server Integration Services) and Informatica to ensure seamless data integration and transformation.
Successfully integrated Epic data with external systems, maintaining data integrity and adherence to healthcare data standards and regulatory requirements.
Developed automated error detection scripts to monitor data pipelines and alert relevant teams, reducing downtime and manual error-checking efforts.
Conducted root cause analysis to identify the source of data errors and implemented corrective measures to prevent recurrence.
Participated in Epic system upgrades and maintenance, ensuring ETL processes remained compatible and optimized for new features and versions.
Developed automated validation scripts using Python and SQL to check data integrity before loading into data warehouses, significantly reducing manual validation efforts.
Integrated Epic data with external systems using ETL processes, ensuring seamless data flow and consistency across platforms.
Ensured compliance with industry-standard EDI formats such as HL7, X12, and EDIFACT, implementing robust validation and translation mechanisms to maintain data integrity and accuracy.
Developed and implemented error-handling protocols for EDI transactions, quickly identifying and resolving data exchange issues to minimize disruptions and maintain data flow continuity.
Developed custom reports and dashboards using Epic's Reporting Workbench, providing critical insights for clinical and administrative stakeholders.
Designed and optimized complex SQL queries in BigQuery to support large-scale data analysis and reporting, leveraging its analytical capabilities.
Developed and implemented ETL workflows using GCP tools such as Dataflow, Dataproc, and Cloud Composer, ensuring efficient data extraction, transformation, and loading.
Designed and implemented ETL processes to extract data from multiple sources, transform it according to business requirements, and load it into data warehouses.
Developed detailed data mapping specifications to translate business requirements into technical solutions, ensuring accurate data transformation and integration.
Conducted data mapping sessions with business stakeholders to identify and define data elements, relationships, and transformation rules.
Managed data integration and migration projects, moving data from on-premises databases and other cloud platforms to BigQuery while preserving data integrity and minimizing downtime.
Managed user accounts, permissions, and group policies in Active Directory, ensuring secure and efficient access control across the organization.
Integrated Active Directory with other IT systems, streamlining authentication and authorization processes.
Monitored and optimized usage of BigQuery and other GCP services, employing cost-saving strategies such as partitioning, clustering, and cost management tools to ensure efficient resource allocation.
Ensured data security and compliance with industry standards and regulations, implementing GCP security features such as IAM, encryption, and VPC Service Controls.
Automated ETL workflows and data pipelines using Cloud Composer (Apache Airflow), enhancing the efficiency and reliability of data processes.
Developed real-time data processing pipelines using Google Dataflow and Pub/Sub, enabling timely insights and actions based on streaming data.
Implemented data mapping solutions using ETL tools and custom scripts, ensuring seamless data integration across different systems.
Developed complex SQL queries to retrieve and manipulate data for analysis and reporting, improving data accessibility and usability.
Designed and maintained relational database schemas to support business requirements and ensure efficient data storage and retrieval.
Developed a new data schema for the data consumption store used by machine learning and AI models, reducing processing time using SQL, Hadoop, and cloud services.
Able to properly understand business requirements and develop data models accordingly while taking care of the resources.
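
The section above mentions automated pre-load validation scripts written in Python and SQL. The following is a simplified, illustrative sketch of that kind of check; the sqlite3 connection, staging_claims table, and column names are hypothetical stand-ins, since in practice the same queries would run against the actual warehouse engine.

    # Hedged sketch of a pre-load validation gate (Python + SQL).
    # Table, columns, and database file are assumptions for illustration.
    import sqlite3

    VALIDATION_QUERIES = {
        "null_member_ids": "SELECT COUNT(*) FROM staging_claims WHERE member_id IS NULL",
        "duplicate_claims": """
            SELECT COUNT(*) FROM (
                SELECT claim_id FROM staging_claims GROUP BY claim_id HAVING COUNT(*) > 1
            )
        """,
        "future_service_dates": "SELECT COUNT(*) FROM staging_claims WHERE service_date > DATE('now')",
    }

    def validate(conn: sqlite3.Connection) -> dict:
        """Run each validation query and return the offending row counts."""
        failures = {}
        for name, sql in VALIDATION_QUERIES.items():
            count = conn.execute(sql).fetchone()[0]
            if count > 0:
                failures[name] = count
        return failures

    if __name__ == "__main__":
        conn = sqlite3.connect("staging.db")  # stand-in for the real staging database
        issues = validate(conn)
        if issues:
            raise SystemExit(f"Validation failed, blocking load: {issues}")
        print("All validation checks passed; data is safe to load.")
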
Client: Cardinal Health, Jan 2021 - June 2022
Role: ETL Developer
Worked with business partners, end users, and IT personnel to gather requirements for the decommissioning project.
Designed, developed, and managed EDI processes to facilitate efficient and accurate data exchange between healthcare systems, ensuring seamless interoperability and timely data transmission.
Automated routine maintenance tasks using AWS Lambda and Step Functions, improving operational efficiency.
Developed and managed EDI processes for data exchange between healthcare systems, ensuring timely and accurate data transmission.
Ensured compliance with EDI standards such as HL7, X12, and others, facilitating smooth interoperability between systems.
Designed and implemented IAM solutions to manage user identities, roles, and permissions across various systems and applications.
Developed and enforced access control policies, ensuring secure and compliant access to sensitive data and systems.
Developed and managed ETL processes to extract, transform, and load data from Epic Clarity into data warehouses, ensuring seamless integration and high data accuracy for healthcare analytics.
Utilized ETL tools such as SSIS (SQL Server Integration Services), Informatica, and Talend to automate data extraction, transformation, and loading from Epic systems, improving data processing efficiency.
Designed and maintained custom reports and dashboards using Epic's Reporting Workbench and SQL Server Reporting Services (SSRS), providing actionable insights for clinical and administrative decision-making.
Created and optimized complex SQL queries, stored procedures, and functions to support ETL workflows and enhance data transformation and reporting capabilities in healthcare environments.
Implemented robust data quality checks, validation rules, and error-handling mechanisms within ETL processes to ensure the accuracy, consistency, and reliability of healthcare data.
Conducted performance tuning of SQL queries and ETL processes, optimizing resource usage and reducing data load times, thereby enhancing the overall performance of data integration systems.
Collaborated with cross-functional teams to design and implement hybrid cloud solutions.
Ensured compliance with industry standards and regulatory requirements through continuous monitoring and auditing.
Extracted business logic and identified entities and measures/dimensions from existing data using the business requirements document and input from business users.
Collaborated with data engineers and analysts to define validation criteria and develop efficient validation workflows, improving overall data reliability.
Utilized data profiling tools to identify and address data anomalies and inconsistencies, enhancing data quality across the organization.
Maintained detailed documentation of validation processes and criteria, ensuring transparency and reproducibility of validation efforts.
Ensured data integrity during the loading process by performing validation checks and error handling, maintaining high data quality standards.
Collaborated with database administrators to optimize data loading processes and ensure optimal performance of data warehouses.
Documented data loading procedures and best practices, providing a reference for team members and ensuring consistency.
Used Business Objects to create reports based on SQL queries; generated executive dashboard reports with the latest company financial data by business unit and by product.
Performed data analysis and mapping, database normalization, performance tuning, query optimization, data extraction, transfer, and loading (ETL), and cleanup.
Skilled in writing complex SQL queries, stored procedures, triggers, and user-defined functions to support data manipulation and business logic implementation within DB2 environments.
Implemented robust error-handling mechanisms for EDI transactions, promptly identifying and resolving data exchange issues.
Implemented Active Directory security policies and compliance measures, ensuring adherence to organizational and regulatory standards.
Automated user provisioning and deprovisioning processes, enhancing efficiency and reducing the risk of unauthorized access.
Maintained an error log and tracking system to document and analyze data errors, facilitating continuous improvement in data processes.
Provided training and support to team members on error resolution techniques and best practices, enhancing the team's ability to handle data issues effectively.
Utilized data profiling and validation tools to proactively identify potential data errors and inconsistencies, reducing the occurrence of data issues.
Integrated ETL processes with data validation and error-handling mechanisms to ensure data integrity and consistency.
Managed the Credential Stream within Epic, ensuring accurate and secure creation, maintenance, and deactivation of user credentials and roles.
Collaborated with data architects and analysts to design ETL processes that meet business requirements and support analytics initiatives.
Implemented and maintained role-based access control (RBAC) policies, ensuring users have appropriate access to sensitive healthcare data based on their roles and responsibilities.
Ensured Credential Stream processes adhered to HIPAA and other regulatory requirements, maintaining the security and privacy of patient and organizational data.
Documented ETL processes and procedures to ensure reproducibility and provide a reference for team members, enhancing team collaboration and efficiency.
Maintained data mapping documentation to provide a clear reference for data integration processes and facilitate knowledge transfer.
Collaborated with data engineers and analysts to refine data mappings and ensure alignment with business requirements and data standards.
Utilized data mapping tools to streamline the mapping process and improve efficiency, reducing manual effort and errors.
Conducted data profiling and analysis using SQL to identify trends, patterns, and anomalies, providing valuable insights for decision-making.
Collaborated with application developers to design and implement database schemas that support application functionality and performance.
Ensured data integrity and consistency by implementing constraints, validation rules, and referential integrity in SQL databases.
Collaborated with team members and stakeholders in the design and development of the data environment, preparing associated documentation for specifications, requirements, and testing.
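
The section above describes error-handling mechanisms inside ETL processes together with an error log and tracking system. The sketch below illustrates one way such a wrapper could look; the sqlite3 connection, etl_error_log table, and step functions are hypothetical placeholders, not the tooling actually used on this engagement.

    # Illustrative sketch: wrap an ETL step with error handling and an error-log table.
    # Table names and the connection are assumptions for illustration only.
    import sqlite3
    import traceback
    from datetime import datetime, timezone

    def log_error(conn, step: str, exc: Exception) -> None:
        """Record a failed ETL step so errors can be tracked and analyzed later."""
        conn.execute(
            "CREATE TABLE IF NOT EXISTS etl_error_log "
            "(logged_at TEXT, step TEXT, error TEXT, detail TEXT)"
        )
        conn.execute(
            "INSERT INTO etl_error_log VALUES (?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), step, str(exc), traceback.format_exc()),
        )
        conn.commit()

    def run_step(conn, step_name: str, step_fn) -> bool:
        """Run one ETL step; on failure, log it and signal the caller."""
        try:
            step_fn(conn)
            return True
        except Exception as exc:  # broad catch so every failure lands in the log
            log_error(conn, step_name, exc)
            return False
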
Client: Advent Health, Orlando, June 2018 - Dec 2020
Role: Data Engineer
Involved in migrating the on-premises Hadoop system to GCP (Google Cloud Platform).
Experience developing and executing data migration scripts and ETL processes to transfer and transform data between DB2 and other databases or data sources.
Proficient in designing, developing, and managing IBM DB2 databases, ensuring optimal data structure and performance for enterprise applications.
Developed and automated workflows for credentialing processes using scripting and ETL tools, improving efficiency and reducing manual errors in user access management.
Utilized data profiling tools to assess data quality and identify discrepancies, improving data reliability for analytical purposes.
Conducted regular audits of user access and credentials, generating detailed reports to identify and address any discrepancies or security issues promptly.
Collaborated with IT, HR, and compliance teams to align credentialing processes with organizational policies and procedures, ensuring seamless integration and operation across departments.
Developed and optimized complex SQL queries in BigQuery to support large-scale data analysis and reporting.
Designed and implemented ETL processes using GCP tools such as Dataflow, Dataproc, and Cloud Composer, ensuring efficient data integration.
Designed scalable and secure cloud architectures on GCP, leveraging services such as BigQuery, Cloud Storage, and IAM to meet business requirements.
Conducted regular data audits and quality assessments to maintain high standards of data integrity, leading to a 25% decrease in data discrepancies.
Collaborated with cross-functional teams to establish and enforce data validation protocols, enhancing overall data governance.
Maintained comprehensive data mapping documentation to provide clear guidelines for data integration processes, facilitating knowledge transfer.
Worked closely with data architects and developers to refine data mappings, ensuring alignment with business objectives and technical requirements.
Monitored and troubleshot data loading jobs to promptly identify and resolve issues, ensuring continuous data flow and availability.
Performed data cleansing and transformation during the loading process to maintain high data quality, enhancing the reliability of analytical outputs.
Collaborated with database administrators and engineers to ensure optimal configuration and performance of data loading operations.
Developed and implemented data validation frameworks using SQL and ETL tools to ensure data integrity, accuracy, and consistency across various data pipelines, reducing data errors by 30%.
Designed automated validation scripts to verify data quality during the ETL process, significantly minimizing manual intervention and improving efficiency.
Utilized advanced SQL queries to perform comprehensive data validation checks, including range, format, and consistency validations, ensuring high standards of data quality.
Collaborated with data engineers and analysts to define and enforce data validation rules and protocols, enhancing overall data governance and reliability.
Worked on Power BI dashboards using stacked bars, bar graphs, scatter plots, waterfall charts, and geographical maps using the Show Me functionality.
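
The section above mentions complex analytical queries run in BigQuery from supporting code. A hedged sketch of running such a query from Python with the google-cloud-bigquery client is shown below; the project, dataset, table, and columns are hypothetical, and the snippet assumes the google-cloud-bigquery package and application-default credentials are available.

    # Hedged sketch: run a BigQuery reporting query from Python.
    # Project, dataset, table, and columns are assumptions for illustration.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")

    sql = """
        SELECT department, COUNT(*) AS encounter_count
        FROM `example-project.analytics.encounters`
        WHERE encounter_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        GROUP BY department
        ORDER BY encounter_count DESC
    """

    # Execute the query and print one summary line per department.
    for row in client.query(sql).result():
        print(f"{row.department}: {row.encounter_count}")
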
Client: Lumen Technologies, LA, August 2016 - May 2018
Role: ETL Data Engineer
Worked on building data pipelines and end-to-end ETL processes for ingesting data into GCP.
Ran Dataflow jobs using Apache Beam in Python to perform heavy historical data loads.
Designed and automated the pipeline to transfer data to stakeholders, providing centralized KPIs and reports for CVS.
Built multiple programs with Python and Apache Beam and executed them in Cloud Dataflow to run data validation between raw source files and BigQuery tables, loading historical data from multiple sources into BigQuery tables.
Used the Whistle language along with pandas in Python and surfaced the data in Bigtable for stakeholders' ad-hoc querying.
Monitored BigQuery, Dataproc, and Cloud Dataflow jobs via Stackdriver across all environments.
Analyzed various types of raw files such as JSON, CSV, and XML with Python using pandas, NumPy, etc.
Created a Pub/Sub topic and configured it in the codebase.
Developed guidelines for the Airflow cluster and DAGs; performed performance tuning of DAGs and task implementations.
Leveraged Google Cloud Platform services to process and manage data from streaming and file-based sources.
Used the Cloud Shell SDK in GCP to configure the Dataproc, Cloud Storage, and BigQuery services.
Used SparkContext, Spark SQL, Spark MLlib, DataFrames, pair RDDs, and YARN.
Used Spark Streaming APIs to perform transformations and actions on the fly.
Stayed updated with the latest advancements and best practices in the Spring ecosystem, proactively identifying opportunities to leverage new features and enhancements to improve development efficiency and application performance.
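
The section above describes batch Apache Beam pipelines written in Python and executed on Cloud Dataflow. A minimal sketch of such a pipeline is given below; the bucket paths, field order, and filter rule are hypothetical, and with DataflowRunner options set on PipelineOptions the same code could be submitted to Cloud Dataflow.

    # Minimal Apache Beam (Python SDK) sketch of a batch load and filter step.
    # Paths and field positions are assumptions for illustration only.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_row(line: str) -> dict:
        """Split a CSV line into a simple record (column order is an assumption)."""
        fields = line.split(",")
        return {"id": fields[0], "status": fields[1], "amount": float(fields[2])}

    def run():
        # Add DataflowRunner/project/region options here to run on Cloud Dataflow.
        options = PipelineOptions()
        with beam.Pipeline(options=options) as p:
            (
                p
                | "ReadRaw" >> beam.io.ReadFromText("gs://example-bucket/raw/events.csv", skip_header_lines=1)
                | "Parse" >> beam.Map(parse_row)
                | "KeepCompleted" >> beam.Filter(lambda r: r["status"] == "completed")
                | "Format" >> beam.Map(lambda r: f"{r['id']},{r['amount']}")
                | "Write" >> beam.io.WriteToText("gs://example-bucket/processed/events")
            )

    if __name__ == "__main__":
        run()
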
Client: Costco, NY, May 2014 - July 2016
Role: ETL/Data Engineer
Hands-on experience building data pipelines in Python, PySpark, Hive SQL, and Presto.
Monitored data engines to define data requirements and data acquisition from both relational and non-relational databases, including Cassandra and HDFS.
Created ETL pipelines using Spark and Hive to ingest data from multiple sources.
Carried out data transformation and cleansing using SQL queries, Python, and PySpark.
Expert knowledge of Hive SQL, Presto SQL, and Spark SQL for ETL jobs, using the right technology to get the job done.
Implemented and managed ETL solutions and automated operational processes; responsible for ETL and data validation using SQL Server Integration Services.
Led successful migrations of on-premises data infrastructure to the AWS cloud, leveraging AEP expertise to ensure seamless transitions, cost savings, and improved scalability.
Worked on building dashboards in Tableau with ODBC connections to different sources such as BigQuery and the Presto SQL engine.
Developed stored procedures in MS SQL to fetch data from different servers using FTP and processed these files to update the tables.
Designed and developed various SSIS packages (ETL) to extract and transform data, and was involved in scheduling SSIS packages.
Used Power BI Power Pivot to develop data analysis prototypes, and used Power View and Power Map to visualize reports.
Involved in creating various business intelligence/analytical dashboards in Power BI with global filters, calculated columns, and measures using complex DAX calculations and transformations via Query Editor and the Power Query Formula Language (M).
Performed cost-benefit analysis of different ETL packages (SSIS) to determine the optimal process.
Created ETL metadata reports using SSRS, including execution times for the SSIS packages and failure reports with error descriptions.
Created OLAP applications with OLAP services in SQL Server and built cubes with many dimensions using both star and snowflake schemas; extracted and transformed data from OLTP databases to the database designed for OLAP services (involved in the creation of all objects for that database) during off-peak hours.
Sustained the BigQuery, PySpark, and Hive code by fixing bugs and providing the enhancements required by the business users.
Performed end-to-end delivery of PySpark ETL pipelines on Databricks to carry out the orchestrated data transformations.
Generated metadata and created Talend ETL jobs and mappings to load the data warehouse and data lake.
Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
Developed data validation tools and utility functions in PySpark.
Experience creating a Python topology script to generate the CloudFormation template for creating the EMR cluster in AWS.
Automated investigation reports, metrics, and audit data on investigators' manual actions using Tableau and SQL.
Developed a Tableau report that tracks the dashboards published to Tableau Server, helping identify potential future clients in the organization.
Involved in creating Oozie workflow and coordinator jobs to kick off jobs on schedule and on data availability.
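
The section above mentions data validation tools and utility functions written in PySpark. The sketch below shows what such utilities might look like; the dataset path, column names, and rules are hypothetical placeholders rather than the actual utilities delivered.

    # Hedged sketch of simple PySpark data-validation utilities.
    # Path, columns, and keys are assumptions for illustration only.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def null_counts(df: DataFrame, columns: list) -> dict:
        """Return the number of NULLs per column, for quick data-quality checks."""
        row = df.select([F.sum(F.col(c).isNull().cast("int")).alias(c) for c in columns]).collect()[0]
        return row.asDict()

    def duplicate_count(df: DataFrame, key_columns: list) -> int:
        """Count key groups that occur more than once in the DataFrame."""
        return df.groupBy(key_columns).count().filter(F.col("count") > 1).count()

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("validation_utils").getOrCreate()
        sales = spark.read.parquet("/data/curated/sales/")  # hypothetical dataset
        print(null_counts(sales, ["order_id", "store_id", "amount"]))
        print(duplicate_count(sales, ["order_id"]))
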
