
Data Engineer Senior Resume Columbus, OH

Candidate Information
Title Data Engineer Senior
Target Location US-OH-Columbus

Candidate's Name
PHONE NUMBER AVAILABLE | EMAIL AVAILABLE
Senior Data Engineer

PROFESSIONAL SUMMARY:
- 6+ years of experience as a Data Engineer with a proven track record in large-scale data processing, data architecture, and analytics.
- Expertise in Python development, ETL processes, and data warehousing solutions.
- Proficient in a wide range of big data technologies, including Apache Spark, Hadoop, and Kafka.
- Skilled in cloud-based solutions, especially AWS and Azure data services.
- Strong background in developing scalable data pipelines and stream-processing systems.
- Experienced in data modeling, data analytics, and creating data-driven solutions.
- Proficient in SQL and NoSQL databases, including MongoDB, PostgreSQL, and SQL Server.
- Hands-on experience with Apache Airflow and Luigi for workflow management.
- Knowledgeable in data security and compliance; adept with Azure Active Directory and AWS Key Vault.
- Proficient in containerization and orchestration using Docker and Kubernetes.
- Experienced in developing CI/CD pipelines with Jenkins and Azure DevOps.
- Expert in cloud storage solutions such as AWS S3 and Azure Blob Storage.
- Skilled in data integration tools such as Informatica PowerCenter and Talend.
- Experienced with machine learning frameworks, including TensorFlow.
- Strong scripting and automation skills using Python and other scripting languages.
- Demonstrated ability in data governance and ensuring data quality.
- Agile methodology proponent with experience in Agile project management.
- Excellent problem-solving skills and the ability to work in fast-paced environments.
- Strong communication skills; adept at stakeholder management and team leadership.
- Continuously updating skills with the latest data engineering trends and technologies.

TECHNICAL SKILLS:
Languages: Python, Scala, SQL
Big Data Technologies: Apache Spark, Apache Hadoop, Apache Kafka, Apache Hive, Apache Flume
Data Warehousing: AWS Redshift, Azure SQL Database, PostgreSQL, Microsoft SQL Server
Cloud Platforms: AWS (EMR, Glue, S3, Data Pipeline), Azure (HDInsight, Cosmos DB, Data Factory)
Containers & Orchestration: Docker, Kubernetes
CI/CD & Version Control: Jenkins, Git, Azure DevOps
ETL Tools: Informatica PowerCenter, Talend, AWS Glue, SSIS
Scripting & Automation: Python, Shell Scripting
Data Security & Governance: Azure Active Directory, AWS Key Vault, Data Governance
Machine Learning: TensorFlow
Workflow Management: Apache Airflow, Luigi
Infrastructure as Code: Terraform
Operating Systems: Linux
Methodologies: Agile, Data Analytics, Data Modeling, Documentation

PROFESSIONAL EXPERIENCE:
Client: HDFC Bank Jun 2017 to Sep 2020
Role: Cloud Data Engineer
Roles & Responsibilities:
- Spearheaded cloud-based data engineering projects in the banking domain using Azure and AWS platforms.
- Implemented data warehousing solutions with Azure SQL Database and Azure Data Factory (ADF).
- Developed scalable ETL pipelines using Talend and Azure Data Factory.
- Utilized Azure HDInsight for big data processing and analytics.
- Employed Azure Cosmos DB for NoSQL database solutions and storage optimization.
- Architected data lake solutions in Azure, integrating diverse data sources.
- Streamlined data workflows with Databricks for advanced analytics.
- Managed containerized applications using Docker for consistent deployment.
- Ensured data security with Azure Active Directory and Azure Key Vault.
- Applied TensorFlow for machine learning projects within data pipelines.
- Utilized Azure DevOps for continuous integration and delivery in cloud projects.
- Enhanced data analytics capabilities with SQL and Python scripting.
- Produced comprehensive documentation for all data engineering projects.
- Employed Agile methodologies for efficient project management and delivery.
- Led data migration projects to the Azure cloud platform.
- Implemented data governance and compliance measures in banking data projects.
- Optimized data storage and retrieval with Azure Cosmos DB.
- Designed and managed data warehouses for banking analytics.
- Coordinated cross-functional teams for end-to-end data solutions.
- Innovated data strategies to support banking operations and decision-making.
- Applied advanced SQL techniques for complex data querying and analysis.
- Implemented data governance standards in airline data management.
- Optimized data storage and processing using AWS cloud services.
- Developed scalable data pipelines for real-time airline data processing.
- Ensured data quality and consistency in aviation data projects.
- Coordinated with cross-functional teams for integrated data solutions.
- Innovated data strategies to enhance airline operational efficiency.
Environment: ETL, DAX, Power BI, Agile, Azure, Synapse, Azure DevOps, SSMS, Azure SQL DB, ADLS Gen2

Client: Sonata Software, Pune, India Oct 2020 to Nov 2022
Role: Spark Engineer
Roles & Responsibilities:
- Managed the Apache Hadoop ecosystem for large-scale data processing in software projects.
- Utilized Apache Hive for data warehousing and SQL-like querying.
- Led Apache Spark projects for fast and efficient big data processing.
- Employed Python for scripting and automation in data engineering tasks.
- Integrated Apache Flume for efficient data collection and aggregation.
- Implemented Jenkins for continuous integration and deployment in data projects.
- Managed Docker containers for consistent and scalable application deployment.
- Administered PostgreSQL for relational database solutions in software applications.
- Employed Linux as the primary operating system for data engineering environments.
- Utilized Git for version control in collaborative software development.
- Applied SQL skills for data querying and analysis in software projects.
- Developed scripting solutions for automation and efficiency in data tasks.
- Implemented best practices in data security and governance in software data projects.
Environment: YAML, Teradata, Spark Scala, PySpark, JSON, PL/SQL, Log4j, Jenkins, JIRA, IntelliJ, Git

EDUCATION:
Bachelor of Technology (B.Tech) in Computer Science and Information Technology, Kakatiya University, Warangal, Telangana, India, Aug 2012 - 2016
Master's in Computer Science, Campbellsville University, Louisville, Kentucky, USA, Jan 2023 - Dec 2024
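The ETL pipeline work listed above follows a standard extract-transform-load shape with data-quality checks. Below is a minimal, framework-agnostic sketch of that pattern in plain Python; the column names, records, and `warehouse` target are hypothetical illustrations, not taken from the resume:

```python
import csv
import io

def extract(csv_text):
    """Extract stage: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform stage: cast types and drop records failing quality checks."""
    out = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality check: discard malformed records
        out.append({"account": row["account"].strip(), "amount": round(amount, 2)})
    return out

def load(rows, target):
    """Load stage: append validated rows to the target store."""
    target.extend(rows)
    return len(rows)

# Hypothetical raw feed: one clean row, one malformed row, one clean row.
raw = "account,amount\nACC1, 100.456\nACC2,bad\nACC3,7.1\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)        # 2 -- the malformed row was filtered out
print(warehouse[0])  # {'account': 'ACC1', 'amount': 100.46}
```

In a production pipeline these three stages would typically map onto Spark transformations, ADF activities, or Glue jobs rather than in-memory lists, but the staging and validation logic is the same.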
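The workflow-management experience (Apache Airflow, Luigi) centers on one idea: tasks declare upstream dependencies, and a scheduler runs them in a valid order. A minimal sketch of that idea using only the standard library's `graphlib`; the task names form a hypothetical extract → transform → load DAG, not an Airflow API:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the set of tasks it depends on.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform"},
}

def run(dag):
    """Return (and would execute) tasks in dependency order."""
    order = []
    for task in TopologicalSorter(dag).static_order():
        order.append(task)  # a real scheduler would invoke the task here
    return order

sequence = run(dag)
print(sequence)  # both extracts appear before "transform", which precedes "load_warehouse"
```

Airflow adds scheduling, retries, and operators on top, but dependency-ordered execution is the core contract this sketch captures.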
