JEEVAN KUMAR TIRUMALAGIRI
Sr. AWS, GCP, Cloud & Data Engineer
Reach me: PHONE NUMBER AVAILABLE
C2C - Corp to Corp Only

Summary of Qualifications:
Results-driven IT professional with 9+ years of experience in developing scalable data pipelines and ETL processes using AWS Glue, Apache Kafka, and Spark, enhancing data flow and storage solutions.
Proficient in managing and analyzing large datasets using Amazon S3, SQL, NumPy, and Pandas, driving insightful business decisions.
Demonstrated expertise in Spark 1.6/2.0 and PySpark for complex data processing, contributing to significant improvements in data analysis and processing speed.
Versatile in handling data lake architectures and AWS cloud services, ensuring optimized data storage and accessibility for diverse applications.
Expert in Python programming, automating data workflows and integrating machine learning models for predictive analytics and data insights.
Extensive experience with the Cloudera Stack, managing HBase, Hive, Impala, and Pig for robust big data ecosystem support.
Proficient in streamlining data flows with NiFi and Spark Streaming, enabling real-time data processing and analytics.
Skilled in implementing ELK/Splunk for log management and data visualization, enhancing operational intelligence and data-driven strategies.
Advanced knowledge of RESTful APIs, JSON, XML, and SOAP UI for efficient data integration and web services development.
Deep understanding of database management using MySQL, Cassandra, and MongoDB, ensuring data integrity and performance.
Expertise in cloud-based data warehousing and analytics using GCP BigQuery, Azure Data Lake, and Snowflake, providing scalable and cost-effective solutions.
Proficient in Jenkins, Docker, and Kubernetes for CI/CD pipelines, containerization, and orchestration, enhancing deployment efficiency and scalability.
Strong background in data visualization and reporting tools such as Tableau, Power BI, and MicroStrategy, translating complex data into actionable insights.
Strong experience with GCP Dataproc, Dataflow, and Azure Data Factory for cloud-native data processing and integration services.
Skilled in leveraging DBT to streamline and automate ETL processes, enhancing data transformation, testing, and documentation within data warehousing environments.
Expert in utilizing DBT's capabilities for modular, version-controlled, and collaborative data workflows, ensuring efficient and reliable data pipeline development.
Proficient in using DataStage for complex data integration projects, with extensive experience in designing, developing, and managing high-performance ETL processes.
Demonstrated ability to handle large-scale data transformations, data cleansing, and integration tasks, optimizing data flow and quality for business intelligence and analytics.
Expert in NoSQL database technologies with extensive experience in designing, implementing, and managing scalable, high-performance MongoDB databases.
Proficient in developing robust data models, performing data migration, and optimizing database performance in MongoDB and other NoSQL platforms.
Skilled in integrating NoSQL databases with various data processing and analytics tools, enhancing data-driven decision-making and operational efficiency.
Expertise in data security and compliance, utilizing Azure Storage, Cloud Spanner, and Cloud SQL for secure data handling and transactions.
Expertise in financial data engineering and analytics, capable of developing and optimizing real-time data processing systems for market data, supporting decision-making in fast-paced financial environments such as Bloomberg.
Skilled in Apache Airflow and Oozie for workflow scheduling, automating and managing data pipelines for improved efficiency and reliability.
Advanced user of Terraform and Ansible for infrastructure as code and configuration management, streamlining cloud infrastructure provisioning and maintenance.
Adept in leveraging Salesforce for CRM data integration and analytics, enhancing customer engagement and business processes through data-driven insights.
Strong analytical and problem-solving skills, with a proven track record of improving database systems to meet the dynamic needs of businesses.

Technical Competencies:
Cloud Platforms & Services: AWS Glue, AWS Cloud, Amazon S3, GCP, GCS, Azure, Azure Data Lake, Azure Storage, Cloud Composer, Cloud Pub/Sub, Cloud Storage Transfer Service, Cloud Spanner, Cloud SQL, AWS Neptune, Azure Cosmos DB
Data Processing & Analytics: Apache Kafka, Spark, Spark 1.6/2.0, PySpark, DBT, SPARQL, ETL, Data Lake, DataStage, NiFi, Spark Streaming, GCP Dataproc, BigQuery, GCP Dataflow, Azure Data Factory, Data Flow, ETL Pipelines
Database Management: SQL Database, MongoDB, MySQL, Cassandra, Snowflake, SQL, SnowSQL
Big Data Technologies: Hadoop, Hive, Cloudera Stack, HBase, Impala, Pig, ELK/Splunk, Athena, Redshift
DevOps & CI/CD: Jenkins, Docker, Ansible, Terraform, Maven, Git
Programming Languages: Python, NumPy, Pandas, Scala, Golang, R, Shell Scripting
Data Visualization: Tableau, Power BI, MicroStrategy, QuickSight, MS Office
APIs & Web Services: RESTful API, JSON, JAXB, XML, WSDL, SOAP UI
Machine Learning & Statistics: Cross-Validation
Software & Tools: JMeter, ElasticSearch, Logstash, Kibana, Spring, Hibernate, Apache Airflow, Oozie, WebSphere, Splunk, Tomcat, Linux, Bloomberg, Red Hat, Salesforce, Databricks, GCP Databricks, Azure Databricks

Professional Experience:

Client: Ascena Retail Group - Pataskala, OH    Nov 2022 - Present
Role: Sr. AWS Cloud Engineer
Roles & Responsibilities:
Developed and maintained ETL processes using AWS Glue and Apache Kafka to ensure efficient data flow and storage across various platforms, including Amazon S3 and data lake architectures.
Engineered and optimized SQL queries and Spark scripts to perform complex data analysis, enhancing data retrieval efficiency and supporting data-driven decision-making.
Leveraged AWS cloud services to deploy and manage scalable data infrastructure, improving system reliability and performance.
Designed and implemented robust data pipelines using PySpark and Spark 1.6/2.0, facilitating the processing of large datasets with high velocity and variety.
Utilized Python and NumPy for data manipulation and analysis, enabling the extraction of meaningful insights from structured and unstructured data.
Configured and managed the Cloudera Stack, including HBase, Hive, Impala, and Pig, to support big data ecosystems and analytics applications.
Automated data workflows using Apache Airflow and Oozie, ensuring efficient and error-free data processing cycles.
Developed RESTful API services using JAX-RS, Spring, and Hibernate to facilitate seamless data integration and exchange between systems.
Implemented data indexing and search solutions using ElasticSearch, Logstash, and Kibana (ELK), enhancing data visibility and accessibility.
Managed HDFS systems, ensuring data integrity, scalability, and accessibility in distributed computing environments.
Administered MySQL and Cassandra databases, optimizing data storage, retrieval, and management processes.
Employed Spark Streaming for real-time data processing, enabling instant data analysis and insights.
Designed and maintained data lakes, centralizing raw data storage and providing a scalable data management solution.
Utilized Tableau, MicroStrategy, and QuickSight for data visualization, presenting complex data in an easily understandable format for business stakeholders.
Integrated SnowSQL and Snowflake technologies to enhance data warehousing capabilities, supporting scalable and efficient data storage solutions.
Orchestrated data pipeline automation and monitoring using Jenkins, ensuring continuous integration and deployment (CI/CD) of data-driven applications.
Implemented Athena and Redshift for efficient data querying and analysis in cloud environments, supporting scalable analytics solutions.
Developed and enforced data quality frameworks using Spark and ETL processes, ensuring data accuracy and reliability.
Configured NiFi flows for efficient data routing, transformation, and system integration, enhancing operational efficiency.
Developed and optimized data pipelines for integrating external data sources into graph-based systems, leveraging tools like Apache Kafka and Spark for real-time and batch processing.
Designed and implemented ontology-based data models to structure data within graph databases, enhancing semantic querying and data retrieval efficiency.
Wrote advanced SPARQL queries for extracting insights and facilitating complex analytical tasks within graph databases.
Collaborated with cross-functional teams to translate business requirements into scalable graph-based solutions, improving data connectivity and insights.
Employed Scala for application development and data processing, leveraging functional programming paradigms for efficient data handling.
Managed XML, JSON, JAXB, and WSDL for data interchange and service-oriented architecture (SOA) implementations, facilitating system interoperability.
Utilized ELK/Splunk for data logging and analysis, enhancing system monitoring and operational intelligence.
Optimized HBase and Impala configurations for high-performance data querying, supporting real-time analytics and decision-making.
Leveraged Apache Kafka for building scalable, fault-tolerant messaging systems, enabling efficient data streaming and processing.
Environment: AWS Glue, Apache Kafka, Amazon S3, SQL, Spark, AWS Cloud, ETL, NumPy, Spark 1.6/2.0, PySpark, Data Lake, Python, Cloudera Stack, HBase, Hive, Impala, Pig, NiFi, SnowSQL, Spark Streaming, ElasticSearch, Logstash, SPARQL, Kibana, JAX-RS, Spring, Hibernate, Apache Airflow, Oozie, RESTful API, JSON, JAXB, XML, WSDL, MySQL, Cassandra, HDFS, ELK/Splunk, Athena, Tableau, Redshift, Scala, Snowflake, Jenkins, MicroStrategy, QuickSight

Client: Truist Bank - Charlotte, NC    Dec 2021 - Oct 2022
Role: Sr. GCP, Hadoop, Cloud Engineer
Roles & Responsibilities:
Developed scalable data pipelines in GCP Dataflow and GCP Dataproc for real-time and batch data processing, ensuring timely and accurate financial reporting.
Utilized PySpark and Hadoop for processing large datasets, improving the efficiency of data analysis tasks.
Engineered and maintained SQL Database and MongoDB systems for optimized data storage and retrieval, supporting various banking operations.
Leveraged DataStage to design and execute complex data integration workflows, enabling effective data extraction, transformation, and loading across multiple banking systems.
Implemented DBT for data modeling and transformation tasks, optimizing data structures and enhancing analytics capabilities within the banking sector.
Orchestrated data transformation and workflow management using DBT, establishing standardized practices for version control, testing, and deployment in the bank's data ecosystem.
Utilized DataStage to facilitate real-time data integration and processing, enhancing the bank's ability to respond swiftly to market changes and regulatory requirements.
Developed and maintained scalable data pipelines for real-time processing of financial market data, similar to the systems used at Bloomberg, ensuring timely and accurate analytics for capital markets operations.
Enhanced the bank's data infrastructure for capital markets activities, implementing best practices in data security, distributed computing, and big data management, reflecting the high standards observed in Bloomberg's data operations.
Developed scalable data pipelines to support real-time processing of digital wallet transactions, leveraging GCP Dataflow and Apache Beam for efficient batch and stream data processing.
Designed conceptual, logical, and physical data models to accurately represent digital wallet transactions, customer interactions, and financial data, facilitating seamless integration with existing banking systems.
Implemented SAS analytics for advanced financial modeling, contributing to strategic decision-making processes.
Configured and managed Teradata systems, enhancing data warehousing capabilities and supporting complex query execution.
Implemented CI/CD pipelines for the digital wallet's backend systems, using tools like Jenkins and Spinnaker to automate testing and deployment processes, improving development efficiency and product reliability.
Leveraged GCP's BigQuery for fast, economical, and scalable analytics, enabling effective data-driven insights.
Utilized Hive and Sqoop for data aggregation and transformation, facilitating seamless data integration across platforms.
Developed Python scripts for data manipulation and analysis, automating routine data processing tasks.
Managed the Snowflake cloud data warehouse, optimizing data storage and computation for financial analytics.
Created dynamic visualizations using Power BI, providing actionable insights into financial trends and patterns.
Orchestrated data ingestion and processing workflows using Cloud Composer, ensuring smooth and efficient data pipeline operations.
Implemented Cloud Pub/Sub for event-driven data integration, enhancing data availability and accessibility.
Utilized Cloud Storage Transfer Service for efficient data migration between different cloud storage services.
Configured Cloud Spanner and Cloud SQL for highly available and scalable database services, supporting critical banking applications.
Employed Data Catalog for metadata management, improving data discoverability and governance.
Developed and maintained GCP Databricks environments for collaborative data science and engineering projects.
Engineered financial data models using GCS and BigQuery, facilitating advanced data analysis and reporting.
Automated data cleansing and quality checks using GCP Dataprep, ensuring high data integrity and reliability.
Managed data security and compliance within GCP and Cloud SQL environments, adhering to financial industry regulations.
Utilized Dataflow for stream and batch data processing, optimizing financial data analysis and insights.
Implemented GCS for secure and scalable cloud storage solutions, ensuring data availability and disaster recovery.
Developed data integration solutions using Sqoop and Cloud Dataflow, streamlining data exchange between disparate data sources.
Optimized BigQuery and Snowflake performance for financial data analytics, reducing query execution time and costs.
Leveraged Cloud Spanner for globally distributed database management, ensuring consistency and reliability across financial operations.
Automated financial report generation using Power BI and GCP Data Studio, enhancing reporting efficiency and accuracy.
Environment: GCP, PySpark, SAS, Hive, Sqoop, Teradata, GCP Dataproc, BigQuery, Hadoop, GCS, Python, Snowflake, Power BI, DBT, Dataflow, SQL Database, DataStage, MongoDB, Bloomberg, Databricks, GCP Dataprep, GCP Dataflow, Cloud Composer, Cloud Pub/Sub, Cloud Storage Transfer Service, Cloud Spanner, Cloud SQL, Data Catalog, GCP Databricks

Client: AbbVie - Vernon Hills, IL    July 2018 - Nov 2021
Role: Cloud Engineer
Roles & Responsibilities:
Engineered data integration pipelines using Azure Data Factory, streamlining data flow from research datasets to Azure Data Lake, supporting immunology research projects.
Managed and optimized Azure Storage solutions for secure and scalable storage of large-scale genomic data, enhancing data accessibility for oncology research.
Developed and maintained robust data processing frameworks using Azure Databricks, facilitating advanced analytics on clinical trial data.
Implemented JMeter and Kafka for real-time data ingestion and processing, improving data quality and speed for gastroenterology research outcomes.
Automated deployment and configuration processes using Ansible and Jenkins, ensuring reliable and efficient application updates within research environments.
Containerized research applications and data processing tools using Docker, enhancing portability and scalability across computing environments.
Managed build and deployment pipelines using Maven and Git, streamlining code integration and version control for data engineering projects.
Administered Linux and Red Hat servers hosting data-intensive applications, ensuring high availability and performance for data analysis tools.
Wrote and optimized Python scripts for data manipulation and analysis, extracting insights from complex biomedical data.
Developed shell scripting routines for automating data processing tasks, reducing manual effort and increasing efficiency in data management.
Configured and maintained MySQL databases, supporting structured data storage for research findings and clinical data.
Implemented ElasticSearch for fast and scalable search capabilities across vast repositories of research documents and data.
Utilized Golang for developing high-performance data processing tools, enhancing data throughput for large-scale datasets.
Managed WebSphere and Tomcat servers, ensuring robust hosting environments for web-based research data applications.
Integrated Splunk for log management and analysis, monitoring data processing pipelines and ensuring system health.
Automated testing and validation of web services using SOAP UI, ensuring data integrity and reliability in research data exchange.
Orchestrated containerized environments using Kubernetes, facilitating scalable and manageable deployment of data applications.
Employed Terraform for infrastructure as code (IaC) management, automating cloud infrastructure provisioning and ensuring reproducibility.
Developed PowerShell scripts for automation and configuration tasks, enhancing operational efficiency in cloud and on-premises environments.
Ensured data pipeline integrity and security through continuous integration and delivery (CI/CD) practices using Jenkins and Git.
Optimized data query performance and analysis using Azure Data Lake and Azure Databricks, supporting fast-paced research and development.
Implemented secure data exchange and APIs with Azure Data Factory, facilitating seamless data integration across research platforms.
Automated environment setup and application deployment using Docker and Kubernetes, reducing setup times for data processing environments.
Utilized Ansible for configuration management, ensuring consistent environments across development, testing, and production.
Integrated Splunk for real-time monitoring and analytics of data operations, enhancing visibility and insights into data processing performance.
Environment: Azure Data Factory, Azure Data Lake, Azure Storage, Azure Databricks, JMeter, Kafka, Ansible, Jenkins, Docker, Maven, Linux, Red Hat, Git, Python, Shell Scripting, MySQL, ElasticSearch, Golang, WebSphere, Splunk, Tomcat, SOAP UI, Kubernetes, Terraform, PowerShell

Client: NetEnrich Technologies - Telangana, India    Dec 2016 - Mar 2018
Role: Data Analyst, Hadoop Engineer
Roles & Responsibilities:
Developed Spark applications within Databricks to extract, transform, and aggregate data from various file formats using Spark SQL, enabling in-depth analysis of customer usage, consumption trends, and behavior.
Demonstrated proficiency in dimensional modeling, encompassing Snowflake schema, Star schema, transactional modeling, and Slowly Changing Dimensions (SCD), contributing to robust model construction.
Engaged actively in model development by identifying, collecting, exploring, and cleansing data, ensuring its quality and relevance for modeling purposes.
Conducted thorough data cleaning and scaling operations to bolster data quality and prepare it for further analysis.
Developed statistical models for diagnostics, prediction, and prescriptive solutions, operating in both distributed and standalone environments.
Applied Python libraries including NumPy, Scikit-learn, and Matplotlib for data analysis, visualization, interpretation, and reporting of key insights.
Designed and implemented NoSQL database schemas using MongoDB, optimizing for performance, scalability, and reliability.
Managed MongoDB clusters, ensuring high availability, efficient indexing, and optimal shard configuration for distributed data processing.
Led the migration of legacy systems to MongoDB, ensuring seamless data transfer, integrity, and consistency across different storage systems.
Integrated MongoDB with various data sources and applications using ETL processes, facilitating real-time data synchronization and analytics.
Leveraged leading text mining, data mining, and analytical tools, alongside open-source software, to conduct comprehensive research.
Developed and maintained complex data models in NoSQL environments, addressing the needs for high-speed transactions and large-scale data storage.
Utilized MongoDB's aggregation framework for data analysis and reporting, optimizing queries for faster response times and reduced server load.
Optimized ETL procedures and implemented appropriate transformations to enhance data migration performance, aligning with project requirements.
Employed cross-validation, log loss, ROC curves, and AUC for feature selection and model evaluation, ensuring rigorous assessment of model effectiveness.
Generated dummy variables for specific datasets to facilitate regression analysis and improve model accuracy.
Showcased strong data visualization skills using tools such as Matplotlib and the Seaborn package.
Utilized Tableau to craft visually engaging data visualizations, dashboards, and comprehensive reports, effectively communicating findings to both the team and stakeholders.
Environment: NumPy, Pandas, Tableau, MongoDB, ETL, Cross-Validation, Python

Client: PalTech - Hyderabad, India    Aug 2014 - Nov 2016
Role: Data Analyst
Roles & Responsibilities:
Facilitated communication between IT technical teams and end users, acting as a liaison to understand and convey specific needs and requirements effectively.
Utilized advanced data analysis techniques to predict variations aligned with market demands, contributing to informed decision-making.
Developed deep product knowledge, enabling accurate estimation of product costs for clients.
Interpreted and analyzed results using various techniques and tools, ensuring a comprehensive understanding of data outcomes.
Played a pivotal role in supporting the data warehouse by aligning and revising reporting requirements.
Conducted test runs, implemented the latest software updates, and contributed to strategic decision-making processes.
Monitored daily activities and performance using Salesforce reports and analysis, ensuring operational efficiency.
Worked on ETL tools, pipelining, and data warehousing to enhance overall data management capabilities.
Automated ETL transformations and executed complex SQL queries, resulting in a 40% improvement in report generation, data preparation, and predictive analytics for business growth.
Proactively troubleshot database report maintenance issues, ensuring smooth and uninterrupted data operations.
Prepared detailed and comprehensive reports using Tableau, facilitating easy comprehension of project status and outcomes.
Created presentations and dashboards using Tableau, MS Excel, and other MS tools to effectively meet client requirements.
Environment: R, SQL Script, Salesforce, Tableau, ETL Pipelines, Data Warehouse, MS Office

Education:
Bachelor's Degree in Information Technology (Graduated in 2014)
JNT University - AP, India

References: Provided upon request