
Power BI / Google Cloud Resume - Tampa, FL

Candidate Information
Title: Power BI / Google Cloud
Target Location: US-FL-Tampa

Candidate's Name
Ph.: PHONE NUMBER AVAILABLE | Email: EMAIL AVAILABLE
12126 NE 191st Street, Apt 131, Bothell, Washington, 98011.

Profile:
- 4+ years of IT experience on data engineering and data analytics platforms using Python, Go, and SQL technologies.
- Experience with Google Cloud Platform services including Dataflow, Dataproc, Google Cloud Storage, FHIR Viewer, and Whistle DTL.
- Hands-on experience with FHIR.
- Worked in an agile development environment; participated in daily scrums and other design-related meetings.
- Good knowledge of Tableau report testing.
- Proficient in SQL for querying, data extraction/transformation, and developing queries for a wide range of applications.
- Utilized the Spark SQL API in PySpark to extract and load data and run SQL queries (see the sketch after this list).
- Used the Cloud Shell SDK in GCP to configure services such as Dataproc and Cloud Storage.
- Implemented secure electronic health record (EHR) systems to maintain patient privacy and compliance with regulatory requirements.
- Collaborated with teams on the successful execution of projects, ensuring alignment with project objectives and client expectations.
- Facilitated regular team meetings to encourage idea-sharing and problem-solving, enhancing overall team cohesion.
- Experience in extracting, transforming, and loading (ETL) data from various sources into data warehouses, as well as data processing (collecting, aggregating, and moving data from various sources) using Power BI and Microsoft SSIS.
- Experienced in building automated regression scripts in Python to validate ETL processes between databases such as Oracle and SQL Server.
- Verified data accuracy by cross-referencing Power BI results against the source data; ensured that data transformations, calculations, and aggregations were correct.
- Conducted thorough testing of Power BI reports and dashboards to ensure data accuracy, consistency, and compliance with business requirements.
- Identified and resolved performance bottlenecks in Power BI reports, optimizing query performance and enhancing overall system responsiveness.
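As a concrete illustration of the PySpark extract-and-query work described above, here is a minimal sketch; the bucket path, file, and column names are hypothetical assumptions, not taken from the resume.

```python
# Minimal PySpark sketch: load source data, register it as a temp view,
# and run a Spark SQL query. Paths and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-extract-sketch").getOrCreate()

# Hypothetical source extract: a CSV file landed in cloud storage.
claims = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("gs://example-bucket/landing/claims.csv")  # assumed path
)

# Expose the DataFrame to Spark SQL and aggregate with a plain SQL query.
claims.createOrReplaceTempView("claims")
summary = spark.sql("""
    SELECT member_id, COUNT(*) AS claim_count, SUM(amount) AS total_amount
    FROM claims
    GROUP BY member_id
""")

# Load step: write the aggregated result back out as Parquet.
summary.write.mode("overwrite").parquet("gs://example-bucket/curated/claim_summary")
```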
Technical Skills:
Cloud: Google Cloud Platform (GCP), AWS
ETL & Data Integration: Informatica PowerCenter, Data Pipeline
Programming Languages: Python, Go, Shell/Bash, Whistle DTL
BI (Business Intelligence) Tools: Tableau, Power BI
RDBMS: MySQL, SQL Server 2012
Databases & Data Warehousing: Microsoft SQL Server, Snowflake
IDEs: Notepad++, IntelliJ, Visual Studio
Version Control Tools: GitHub
MS Office: PowerPoint

Education:
Master's in Information Technology and Management, Lindsey Wilson College, Columbia, KY.
Bachelor of Technology in Biomedical Engineering, SRM University, Chennai, Tamil Nadu, India.

Experience:

Client: CVS Pharmacy, Dallas, Texas.
Role: Data Engineer, Dec '21 - Present
Tech stack: Python, Google Cloud Platform (GCP), Spark, and Whistle DTL.
Roles & responsibilities:
- Provided data engineering solutions to pharmaceutical companies.
- Used Python to build batch-processing data pipelines.
- Used the Whistle Data Transformation Language (DTL) to transform data and map it to FHIR standards.
- Worked on both batch and streaming processes used to build data pipelines.
- Performed testing to evaluate data mappings from one schema to another.
- Worked with various Google Cloud Platform services including Dataflow, Dataproc, FHIR Viewer, and Google Cloud Storage.
- Worked with FHIR, a standard designed to facilitate the exchange of healthcare data in a standardized format, making it easier for different healthcare systems and applications to communicate and share patient information.
- Built and deployed applications that handle and exchange healthcare data securely and efficiently; for example, an application hosted on GCP can use FHIR to access and process patient data in a standardized way, ensuring interoperability between different healthcare systems and stakeholders.
- Ingested FHIR data into GCP, using Google Cloud Storage to store FHIR resources securely; FHIR data can be stored in JSON or XML format, depending on preference.
- Used FHIR Viewer to process and index FHIR resources for efficient querying; used Google Cloud Pub/Sub, Cloud Functions, and Dataflow to process and transform the data into a format suitable for viewing.
- Collaborated with a team of volunteers to plan and execute community events, demonstrating strong teamwork and communication skills.
- Transformed source data into FHIR resources, the structured data elements that represent different aspects of healthcare information such as Patient, Encounter, Observation, and Medication; the transformation involves mapping data fields from the source to the corresponding fields in the FHIR resources (see the sketch after this section).
- Performed Tableau report testing using SQL queries.
- Loaded data into FHIR via Cloud Pub/Sub, a messaging service that reliably ingests and distributes real-time data streams and can serve as a data source for stream-processing pipelines.
- Tested FHIR resources within the application's code by writing unit tests that validate the generation, parsing, and manipulation of FHIR resources; FHIR libraries in various programming languages can assist with generating and parsing FHIR resources for testing.
Environment: Dataflow, Dataproc, FHIR Viewer, Google Cloud Storage, Airflow, etc.
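Below is a minimal sketch of the source-to-FHIR mapping and Cloud Storage upload flow described in this role; the source field names, bucket name, and record content are illustrative assumptions, and the mapping is deliberately simplified.

```python
# Sketch: map a source patient record to a FHIR R4 Patient resource (JSON)
# and store it in Google Cloud Storage. Field names and bucket are hypothetical.
import json
from google.cloud import storage  # pip install google-cloud-storage

def to_fhir_patient(row: dict) -> dict:
    """Map hypothetical source fields to the corresponding FHIR Patient fields."""
    return {
        "resourceType": "Patient",
        "id": row["patient_id"],                     # assumed source column
        "name": [{"family": row["last_name"],
                  "given": [row["first_name"]]}],
        "gender": row["gender"].lower(),             # FHIR expects lowercase codes
        "birthDate": row["dob"],                     # expected as YYYY-MM-DD
    }

def upload_resource(bucket_name: str, resource: dict) -> None:
    """Persist one FHIR resource as JSON in Cloud Storage."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(f"fhir/Patient/{resource['id']}.json")
    blob.upload_from_string(json.dumps(resource),
                            content_type="application/fhir+json")

record = {"patient_id": "p-001", "last_name": "Doe", "first_name": "Jane",
          "gender": "Female", "dob": "1990-04-12"}
upload_resource("example-fhir-bucket", to_fhir_patient(record))
```

A downstream consumer (e.g., a Dataflow or Pub/Sub-triggered Cloud Function) could then index these JSON resources for querying, as outlined above.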
Client: Thomson Reuters, Richmond, Virginia.
Role: Data Engineer, Aug '21 - Oct '21
Roles & responsibilities:
- Experience with cloud platforms and services such as AWS and Azure.
- Proficient in Python syntax, data types, and language features; experienced with popular Python libraries and frameworks, including NumPy and Pandas.
- Analyzed and visualized data using Python for tasks such as exploratory data analysis, reporting, and storytelling.
- Automated repetitive tasks and workflows using Python scripts.
- Collaborated with teams using platforms like GitHub and GitLab for code sharing, code reviews, and issue tracking.
- Designed and implemented data models and schemas for relational databases, data warehouses, and data lakes.
- Extracted data from diverse sources such as databases, flat files (CSV, Excel), APIs, and web scraping.
- Developed SQL queries, stored procedures, and scripts to extract data from source systems efficiently.
- Used ETL tools such as Informatica PowerCenter to extract data from multiple sources.
- Developed complex transformations using SQL, Python, and scripting languages (Shell, Bash).
- Loaded transformed data into target systems such as Snowflake data warehouses.
- Implemented error-handling mechanisms to capture and log errors during the ETL process.
- Collaborated with cross-functional teams, including data engineers and analysts, to understand requirements and deliver solutions.
- Experienced with data visualization tools (Tableau, Power BI) and version control systems such as Git.
- Worked with sources like Access, Excel, CSV, Oracle, and flat files using the connectors, tasks, and transformations provided by AWS Data Pipeline.
- Hands-on knowledge of SQL queries (analytical functions) and of writing and optimizing SQL queries.
- Experience with shell scripting and Agile software development practices.
Environment: Python, AWS, ETL, shell scripting, Git, Snowflake, SQL, Bash, Tableau, and Power BI.

Client: Amzur Technologies, Visakhapatnam, AP, India.
Role: MySQL Developer, Jan '19 - Dec '19
Roles & responsibilities:
- Worked for an IT management software and solutions company with expertise across all IT environments (client name confidential).
- Developed data-load scripts to pull data from multiple sources, such as SQL Server databases and spreadsheets uploaded to an SFTP server.
- Developed stored procedures to support application database operations and to generate confidential, complex reports.
- Wrote data validation scripts for data integrity and quality checks (see the sketch after this section).
- Developed the data layer for the application.
- Generated custom SQL to verify dependencies for daily, weekly, and monthly jobs.
- Developed SQL statements to extract data, and packaged/encrypted data for delivery to customers.
Environment: MySQL, SQL Server, MS SQL Server 2012.
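A minimal sketch of the kind of data validation script described above, comparing row counts between a source and target table and checking a key column for NULLs; the hosts, credentials, table, and column names are all hypothetical assumptions.

```python
# Sketch: validate an ETL load by comparing source and target row counts
# and checking a key column for NULLs. DSNs and table names are hypothetical.
import pymysql  # pip install pymysql

CHECKS = [
    ("row_count", "SELECT COUNT(*) FROM {table}"),
    ("null_ids",  "SELECT COUNT(*) FROM {table} WHERE customer_id IS NULL"),
]

def run_checks(conn, table: str) -> dict:
    """Run each validation query against one table and collect the results."""
    results = {}
    with conn.cursor() as cur:
        for name, sql in CHECKS:
            # Table names are trusted constants here; only a sketch, not
            # safe for untrusted input.
            cur.execute(sql.format(table=table))
            results[name] = cur.fetchone()[0]
    return results

source = pymysql.connect(host="source-db", user="etl", password="...",
                         database="staging")    # assumed connection details
target = pymysql.connect(host="target-db", user="etl", password="...",
                         database="reporting")

src = run_checks(source, "orders_raw")  # assumed source table
tgt = run_checks(target, "orders")      # assumed target table

# Counts must match, and loaded keys must be non-null, for the load to pass.
assert src["row_count"] == tgt["row_count"], f"row count mismatch: {src} vs {tgt}"
assert tgt["null_ids"] == 0, "target contains NULL customer_id values"
print("ETL validation passed:", tgt)
```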
