Data Entry Clerk Resume Dallas, TX
Candidate Information
Title Data Entry Clerk
Target Location US-TX-Dallas
Alekhya
Phone number: PHONE NUMBER AVAILABLE
Email: EMAIL AVAILABLE
LinkedIn: LINKEDIN LINK AVAILABLE

PROFESSIONAL SUMMARY:
An experienced IT professional with 6+ years of experience and in-depth knowledge of software application analysis, design, development, testing and deployment.
Experience in building data pipelines using Azure Data Factory and Azure Databricks, loading data to Azure Data Lake, Azure SQL Database and Azure SQL Data Warehouse, and controlling and granting database access.
Experience in developing Spark applications using Spark SQL, PySpark and Delta Lake in Databricks for data extraction, transformation and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (see the sketch after this summary).
Good understanding of Spark and MPP architectures, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors and tasks.
Productionized models in cloud environments, including automated processes, CI/CD pipelines, monitoring/alerting and issue troubleshooting, and presented models and results to technical and non-technical audiences.
Designed, built, deployed and presented BI solutions in close cross-functional collaboration with departments such as Sales, Marketing, Product Management and Product Development.
Led business data analysis, reporting, data quality, test coverage, data validation and presentations to various business stakeholders and executives.
Integrated and automated inbound/outbound data flows of various enterprise source systems using ETL and data warehousing concepts.
Curated data sourced from the Lake into Databricks across different environments and performed Delta operations; strong data engineering experience in Spark and Azure Databricks, running notebooks using ADF.
Experienced with the configuration management tool Chef, developing Chef cookbooks to configure, deploy and manage web applications, config files, databases, mount points and packages of existing infrastructure.
Experience in creating Docker containers leveraging existing Linux containers and AMIs, in addition to creating Docker containers from scratch.
Experienced application security researcher with broad hands-on experience.
Hands-on experience implementing Model-View-Controller (MVC) architecture using server-side frameworks such as Django and Flask for developing web applications.
Proficient in JSON-based RESTful web services and XML-based SOAP web services.
Used transformations such as Derived Column, Conditional Split, Aggregate, Sort, Data Conversion, Merge Join and Union All.
Implemented error handling while moving data in SSIS/ADF (Data Flows).
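
The Databricks/Delta Lake work summarized above follows a common extract-transform-aggregate pattern. Below is a minimal illustrative PySpark sketch, not code from the candidate's projects: the mount paths, column names and the daily-usage aggregation are assumptions, and Delta Lake is assumed to be available on the cluster.

```python
# Illustrative sketch only: ingest CSV and JSON usage data, standardize,
# aggregate per customer per day, and write the result as a Delta table.
# Paths, column names and the output location are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("usage-aggregation-sketch")
         .getOrCreate())

# Read two hypothetical landing-zone datasets in different file formats.
csv_df = spark.read.option("header", True).csv("/mnt/landing/usage_csv/")
json_df = spark.read.json("/mnt/landing/usage_json/")

# Align the schemas and union the two sources.
events = (csv_df.select("customer_id", "event_type", "event_ts")
          .unionByName(json_df.select("customer_id", "event_type", "event_ts")))

# Aggregate to daily usage counts per customer.
daily_usage = (events
               .withColumn("event_date", F.to_date("event_ts"))
               .groupBy("customer_id", "event_date")
               .agg(F.count("*").alias("event_count")))

# Persist the curated result as a Delta table (requires Delta Lake on the cluster).
daily_usage.write.format("delta").mode("overwrite").save("/mnt/curated/daily_usage")
```
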
TECHNICAL SKILLS:
Programming Languages: Azure PowerShell, PySpark, Python, U-SQL, T-SQL, Linux shell scripting.
Databases: Azure SQL Data Warehouse, Azure SQL DB, Teradata, RDBMS, MySQL, Oracle, Microsoft SQL Server.
Big Data Technologies: Apache Spark, Hadoop, HDFS, Hive.
Azure Cloud Resources: Azure Data Lake Storage Gen1 & Gen2, Azure SQL DB, Azure Stream Analytics, Azure Event Hub, Key Vault, Azure App Services, Logic Apps, Event Grid, Service Bus, Azure DevOps, ARM Templates.
CI/CD: Azure DevOps, GitHub, Jenkins.
Software Methodology: Agile, Waterfall, SDLC.
IDE Tools: SSMS, Microsoft Visual Studio, JIRA.

Mindtree, India Mar 2021 - Mar 2022
Role: Azure Data Engineer
Responsibilities:
Held meetings with business/user groups to understand the business process, gather requirements, and carry out analysis, design, development and implementation according to client requirements.
Designed and developed Azure Data Factory (ADF) pipelines extensively for ingesting data from different source systems, both relational and non-relational, to meet business functional requirements.
Designed and developed event-driven architectures using blob triggers and Data Factory.
Created pipelines, data flows and complex data transformations and manipulations using ADF and PySpark with Databricks.
Automated jobs using different triggers (event, schedule and tumbling window) in ADF.
Created and provisioned different Databricks clusters, notebooks, jobs and autoscaling.
Ingested a huge volume and variety of data from disparate source systems into Azure Data Lake Gen2 using Azure Data Factory V2.
Created several Databricks Spark jobs with PySpark to perform table-to-table operations.
Performed data flow transformations using the Data Flow activity.
Implemented Azure and self-hosted integration runtimes in ADF.
Developed streaming pipelines using Apache Spark with Python (see the sketch after this section).
Created and provisioned multiple Databricks clusters needed for batch and continuous streaming data processing and installed the required libraries on the clusters.
Improved performance by optimizing the computing time needed to process the streaming data and saved the company cost by optimizing cluster run time.
Performed ongoing monitoring, automation and refinement of data engineering solutions.
Designed and developed a new solution to process near-real-time (NRT) data using Azure Stream Analytics, Azure Event Hub and Service Bus queues.
Created a linked service to land data from an SFTP location into Azure Data Lake.
Extensively used the SQL Server Import and Export Data tool.
Worked with complex SQL views, stored procedures, triggers and packages in large databases across various servers.
Experience working with both Agile and Waterfall methods in a fast-paced manner.
Generated alerts on daily event metrics for the product team.
Extensively used SQL queries to verify and validate database updates.
Suggested fixes for complex issues by performing thorough analysis of the root cause and impact of each defect.
Provided 24/7 on-call production support for various applications, provided resolutions for night-time production jobs, and attended conference calls with business operations and system managers to resolve issues.
Environment: Azure Data Factory (ADF v2), Azure SQL Database, Azure Function Apps, Azure Data Lake, Blob Storage, SQL Server, Windows Remote Desktop, UNIX shell scripting, Azure PowerShell, Databricks, Python, ADLS Gen2, Azure Cosmos DB, Azure Event Hub.
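
As an illustration of the streaming ingestion described in this role, here is a hedged Spark Structured Streaming sketch. It is not the actual production pipeline: the landing path, schema, checkpoint location and Delta target are assumptions, and the Event Hub/Stream Analytics parts of the real solution are not shown.

```python
# Hedged Structured Streaming sketch: continuously pick up JSON files landed
# in a mounted ADLS Gen2 folder and append them to a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("nrt-ingest-sketch").getOrCreate()

# Streaming file sources require an explicit schema (fields are hypothetical).
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", StringType()),
    StructField("event_ts", TimestampType()),
])

stream = (spark.readStream
          .schema(schema)
          .json("/mnt/raw/events/"))          # hypothetical landing folder

query = (stream.writeStream
         .format("delta")
         .outputMode("append")
         .option("checkpointLocation", "/mnt/chk/events/")  # enables restart/recovery
         .start("/mnt/bronze/events/"))

query.awaitTermination()
```

The checkpoint location is what lets the stream resume from where it left off after a cluster restart, which is part of keeping cluster run time (and cost) down.
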
Cognizant, India May 2019 - July 2020
Role: Data Warehouse Developer
Responsibilities:
Collaborated with business users and analysts to gather requirements and convert them into technical specifications.
Performed data analysis to extract useful data, find patterns and regularities in the sources, and develop conclusions.
Interacted with functional/end users to gather requirements for the core reporting system, understand the exceptional features users expect from the ETL and reporting system, and successfully implement business logic.
Created a dimensional model (star schema) for the data warehouse and used Erwin to design the business process, grain, dimensions and measured facts.
Extracted data from flat files and other RDBMS databases into a staging area and populated it into the data warehouse.
Developed a number of complex Informatica mapplets and reusable transformations to implement the business logic and load data incrementally; created table mappings to load fact and dimension tables.
Developed Informatica mappings using Aggregator transformations, SQL overrides in Lookups, source filters in Source Qualifiers, and data flow management into multiple targets using Router transformations.
Used the PowerCenter Server Manager/Workflow Manager for session management, database connection management and scheduling of batch jobs via the Control-M auto-scheduling tool.
Migrated mappings, sessions and workflows from development to testing and then to production environments.
Created multiple Type 2 mappings in the Customer mart for both dimension and fact tables, implementing both date-based and flag-based versioning logic (an illustrative sketch follows this section).
Performed scheduled loading and clean-ups, and monitored and troubleshot batches and sessions for weekly and monthly extracts from various data sources across all platforms to the target database.
Worked with session logs, the Informatica Debugger and performance logs for error handling of workflow and session failures.
Identified the flow of information, analyzed the existing systems, evaluated alternatives and chose the most appropriate alternative.
Worked with tools like TOAD to write queries and generate results.
Used SQL overrides in the Source Qualifier to customize SQL and filter data according to requirements.
Wrote pre- and post-SQL commands in session properties to manage constraints, which improved performance, and wrote SQL queries and PL/SQL procedures to perform database operations according to business requirements.
Performed pushdown optimization to increase read and write throughput.
Carried out end-to-end system testing (unit and system integration testing) as part of the SDLC process and performed tuning at the mapping, session, source and target levels.
Environment: Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, and Workflow Monitor), Windows XP, Oracle 10g, TOAD, PL/SQL Developer, SQL*Plus.
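
The Type 2 versioning in this role was implemented in Informatica mappings; purely to illustrate the same date- and flag-based logic, here is a hedged PySpark sketch. The table and column names (dim_customer, stg_customer, customer_id, address, is_current, effective_date, end_date) are hypothetical, and writing the results back is omitted.

```python
# Illustrative Type 2 slowly changing dimension logic (not the Informatica
# implementation described above): expire changed current rows and build
# new current versions effective from today.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

dim = spark.table("dim_customer")   # existing dimension (assumed to exist)
stg = spark.table("stg_customer")   # staged source extract (assumed to exist)

today = F.current_date()

# Join current dimension rows to staged rows on the business key and keep
# only rows whose tracked attribute changed.
cur = dim.filter(F.col("is_current") == 1).alias("d")
src = stg.alias("s")
changed = (cur.join(src, F.col("d.customer_id") == F.col("s.customer_id"))
              .filter(F.col("d.address") != F.col("s.address")))

# Flag-based expiry: close out the old version with an end date.
expired = (changed.select("d.*")
                  .withColumn("is_current", F.lit(0))
                  .withColumn("end_date", today))

# Date-based new versions, flagged current and effective from today.
new_versions = (changed.select("s.*")
                       .withColumn("is_current", F.lit(1))
                       .withColumn("effective_date", today)
                       .withColumn("end_date", F.lit(None).cast("date")))

# Merging expired + new_versions back into the dimension table is omitted here.
```
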
LINK, India Nov 2018 - Jan 2019
Role: ETL Consultant
Responsibilities:
Implemented an end-to-end data solution on Azure using Azure Databricks, ADF, DW and Power BI.
Designed a robust data modelling environment using Databricks on Azure, enabling consumers to easily operate highly descriptive notebooks in a fully governed environment.
Migrated large data sets to Databricks (Spark); created and administered clusters, loaded data, configured data pipelines, and loaded data from ADLS Gen2 to Databricks using ADF pipelines.
Extensive hands-on experience writing notebooks in Databricks using Python/Spark SQL for complex data aggregations, transformations and schema operations; good familiarity with Databricks Delta and DataFrame concepts.
Extensive hands-on experience designing and implementing scalable ETL pipelines to process a variety of data types (structured, unstructured) and file formats (JSON, CSV, delimited text).
Collaborated with analysts to prepare complex data sets that can be used to meet data needs.
Built pipelines to take data from various telemetry streams and data sources to craft a unified data model for analytics and reporting.
Created temporary views and loaded curated data into destination tables.
Configured Databricks jobs and refactored ETL Databricks notebooks.
Created various pipelines to load data from Azure Data Lake into a staging SQL DB and then into Azure SQL DB (see the sketch after this section).
Knowledge of retrieving, analyzing and presenting data using Azure Data Explorer/Kusto.
Experience in lift-and-shift of existing SSIS packages using ADF.
Created pipelines to load data from the Lake to Databricks and from Databricks to Azure SQL DB.
Created Databricks notebooks to streamline and curate data for various business use cases.
Designed various Azure Data Factory pipelines to pull data from various data sources and load it into an Azure SQL database.
Created triggers for the pipelines to run on a day-to-day basis.
Created various tabular model cubes on top of the Azure SQL database, which are consumed for various reporting needs.
Migrated data from the existing on-premises database into Azure SQL.
Worked on huge data transfers to and from SQL Server databases using utilities/tools such as DTS, SSIS and BULK INSERT, and used configuration files and variables for production deployment.
Created SSIS packages to transform source data into dimension and fact tables.
Developed complex stored procedures and views and ingested them into SSIS packages; implemented slowly changing dimensions while transforming data in SSIS.
Designed and loaded data into tabular model cubes to connect Power BI reports.
Created pipelines using ADF to run SQL scripts.
Created database tables and stored procedures as required for reporting and ETL needs.
Created/modified existing SQL views as needed.
Created Power BI reports and dashboards per business requirements using different data sources.
Extensively used DAX to create complex calculated measures and columns in Power BI and cubes.
Implemented row-level security within Power BI to create user-specific views of the data within a report based on role.
Environment: MS SQL Server 2012/2014, SSIS 2012/2014, SSAS, ADF, Databricks and ADL.
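
To illustrate the Lake-to-Databricks-to-Azure SQL DB hop described in this role, here is a hedged PySpark sketch of a Databricks-style notebook step. The storage path, temporary view, SQL query, JDBC server, table name and credentials are all placeholders, and the cluster is assumed to have the SQL Server JDBC driver available.

```python
# Hedged sketch: read curated data from the lake, shape it with Spark SQL via
# a temporary view, and write it to an Azure SQL DB staging table over JDBC.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("curate-to-sql-sketch").getOrCreate()

# Read a curated Delta dataset from a hypothetical mounted lake path.
raw = spark.read.format("delta").load("/mnt/lake/silver/orders")

# Register a temporary view so the curation step can be expressed in Spark SQL.
raw.createOrReplaceTempView("orders_raw")
curated = spark.sql("""
    SELECT customer_id,
           CAST(order_ts AS DATE) AS order_date,
           SUM(amount)            AS daily_amount
    FROM orders_raw
    GROUP BY customer_id, CAST(order_ts AS DATE)
""")

# Placeholder connection details; in practice these came from a secret store.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"

(curated.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.daily_orders")     # hypothetical staging table
    .option("user", "<sql-user>")
    .option("password", "<sql-password>")
    .mode("overwrite")
    .save())
```
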
ADP, UAL, India April 2014 - June 2017
Role: Data Engineer
Responsibilities:
Involved in understanding the requirements of end users/business analysts and developed strategies for ETL processes.
Used Agile methodology for data warehouse development, using Kanbanize.
Developed data pipelines using Spark, Hive and HBase to ingest customer behavioral data and financial histories into a Hadoop cluster for analysis.
Working experience on the Azure Databricks cloud, organizing data into notebooks and making it easy to visualize using dashboards.
Performed ETL on data from different source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL with Azure Data Lake Analytics.
Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
Worked on managing Spark Databricks clusters on Azure through proper troubleshooting, estimation and monitoring of the clusters.
Implemented data ingestion from various source systems using Sqoop and PySpark.
Hands-on experience implementing Spark and Hive job performance tuning.
Performed data aggregation and validation on Azure HDInsight using Spark scripts written in Python.
Performed monitoring and management of the Hadoop cluster using Azure HDInsight.
Involved in extraction, transformation and loading of data directly from different source systems (flat files/Excel/Oracle/SQL) using SAS/SQL and SAS macros.
Generated PL/SQL scripts for data manipulation, validation and materialized views for remote instances.
Created partitioned tables in Hive, designed a data warehouse using Hive external tables, and created Hive queries for analysis.
Created and modified several database objects such as tables, views, indexes, constraints, stored procedures, packages, functions and triggers using SQL and PL/SQL.
Created large datasets by combining individual datasets using various inner and outer joins in SAS/SQL and dataset sorting and merging techniques using SAS/Base.
Extensively worked on shell scripts for running SAS programs in batch mode on UNIX.
Wrote Python scripts to parse XML documents and load the data into a database (see the sketch at the end of this resume).
Used Python to extract weekly information from XML files.
Integrated NiFi with Snowflake to optimize client session handling.
Used Hive, Impala and Sqoop utilities and Oozie workflows for data extraction and loading.
Performed file system management and monitoring of Hadoop log files.
Used the Spark API over Hadoop YARN to perform analytics on data in Hive.
Created stored procedures to import data into the Elasticsearch engine.
Used Spark SQL to process huge amounts of structured data to aid better analysis for our business teams.
Implemented optimized joins across different data sets to get top claims by state using MapReduce.
Created HBase tables to store various formats of data coming from different sources.
Responsible for importing log files from various sources into HDFS using Flume.
Worked on SAS Visual Analytics and SAS Web Report Studio for data presentation and reporting.
Extensively used SAS macros to parameterize reports so that users could choose the summary and subsetting variables to be used from the web application.
Responsible for translating business and data requirements into logical data models in support of enterprise data models, ODS, OLAP, OLTP and operational data structures.
Created SSIS packages to migrate data from heterogeneous sources such as MS Excel, flat files and CSV files.
Provided thought leadership for the architecture and design of Big Data analytics solutions for customers, and actively drove Proof of Concept (POC) and Proof of Technology (POT) evaluations to implement Big Data solutions.
Environment: Azure Data Factory (V2), Azure Databricks, PySpark, Snowflake, Azure SQL, Azure Data Lake, Azure Blob Storage, Azure ML.

Education Details:
Masters from Trine University (May 2024).
Bachelors from Malineni Lakmaiah Engineering College (2013).
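
Relating to the XML parsing and database loading mentioned in the ADP, UAL role above, here is a hedged stand-alone Python sketch. The element and field names are hypothetical, and SQLite stands in for whichever database was actually used.

```python
# Illustrative sketch: parse <record> elements from an XML file and insert
# them into a database table (SQLite used here for a self-contained example).
import sqlite3
import xml.etree.ElementTree as ET

def load_weekly_xml(xml_path: str, db_path: str) -> int:
    """Parse records from an XML file and insert them into a table."""
    tree = ET.parse(xml_path)
    rows = [
        (rec.findtext("id"), rec.findtext("name"), rec.findtext("amount"))
        for rec in tree.getroot().iter("record")
    ]

    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS weekly_data (id TEXT, name TEXT, amount TEXT)"
        )
        conn.executemany("INSERT INTO weekly_data VALUES (?, ?, ?)", rows)
    conn.close()
    return len(rows)

if __name__ == "__main__":
    # Example usage with hypothetical file names.
    print(load_weekly_xml("weekly.xml", "weekly.db"))
```
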
