Candidate Information
Title: AWS DevOps Engineer
Target Location: US-MD-Germantown
Parimala P
AWS DevOps Engineer
Email id: EMAIL AVAILABLE | Contact no: PHONE NUMBER AVAILABLE
Professional Summary
- Over 9 years of experience in DevOps, AWS Cloud, and Build and Release Engineering roles, with strong skills in configuration, automation, build management, software integration, deployment to servers, and support and maintenance on Linux/Windows platforms.
- Expertise in AWS cloud administration across services such as RDS, IAM, Auto Scaling, EC2, S3, EBS, VPC, ELB, AMI, SNS, CloudFront, CloudWatch, CloudTrail, OpsWorks, and Security Groups. Used Route 53 to configure DNS in the AWS cloud. Created AWS CloudFormation templates in YAML to provision VPCs, subnets, NAT gateways, and EC2 instances of specific sizes.
- Experience with AWS Route 53 to direct traffic among availability zones. Configured Elastic Load Balancing (ELB) to route traffic between zones, and deployed and supported Memcached and AWS ElastiCache.
- Expertise in Terraform, Terraform Cloud, and Terraform Vault, including key features such as Infrastructure as Code (IaC), execution plans, resource graphs, and change automation.
- Hands-on experience with DevOps tools such as GitHub, Maven, Jenkins, Docker, Kubernetes, Chef, Ansible, Puppet, Vagrant, Packer, and Terraform.
- Created reproducible builds of Kubernetes applications, templatized Kubernetes manifests, provided configuration parameters to customize deployments, and managed releases of Helm packages.
- Experience with Docker Engine, Docker Hub, Docker images, Docker Compose, Docker Swarm, and Docker Registry. Used containerization to make the application platform consistent and adaptable across environments.
- Used Docker and Kubernetes with application teams to containerize legacy and monolithic applications and onboard them to cloud-based infrastructure.
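The CloudFormation work described above can be illustrated with a minimal template. This is a sketch, not a template from the original projects: the logical names, CIDR ranges, and SSM AMI parameter are illustrative assumptions.

```yaml
# Minimal CloudFormation sketch: a VPC, one public subnet, and an EC2 instance.
# All names, CIDRs, and the AMI parameter are illustrative placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative VPC + subnet + EC2 template
Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: !Ref LatestAmiId
      SubnetId: !Ref PublicSubnet
```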
- Experience with Ansible playbooks for virtual and physical instance provisioning, configuration management, and patching.
- Skilled in migrating on-premises databases to Amazon RDS, including database schema conversion, data migration, and compatibility testing.
- Worked with development teams using monitoring technologies such as Prometheus, Kibana, Splunk, Nagios, and Grafana to optimize application performance and diagnose issues.
- Expertise in CI/CD deployment using Jenkins on Amazon Web Services; managed AWS IAM for user and group administration. Automated build procedures for new projects using Maven POMs, integrated with Git, GitHub, SonarQube, Jenkins pipelines, and Nexus.
- Proficient in version handling across environments through branching, merging, and tagging with SCM tools such as SVN, TFS, Git, and CVS.
- Proficient in setting up, configuring, and managing Amazon RDS instances for database engines such as MySQL, PostgreSQL, Oracle, and SQL Server.
- Developed PySpark applications for large-scale data processing and analytics, leveraging Python's ecosystem and Spark's distributed computing capabilities.
- Implemented complex data transformations, aggregations, and machine learning algorithms using PySpark RDDs, DataFrames, and MLlib.
- Hands-on experience with AWS services and scripting languages such as Python, Ruby, and Bash, as well as configuration management tools such as Chef and Puppet.
- Proficient in build automation principles and tools, including Ant, Maven, Jenkins, TeamCity, QuickBuild, BuildForge, and Bamboo, for continuous integration.
- Wrote PowerShell/SQL scripts to automate maintenance tasks such as database clones, monitoring, and backups.
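A patching playbook of the kind described above might look like this minimal sketch; the "webservers" inventory group and the RHEL-family assumption are placeholders, not details from the original work.

```yaml
# Illustrative Ansible playbook: patch a group of Linux hosts.
# The "webservers" inventory group is a placeholder.
- name: Patch web servers
  hosts: webservers
  become: true
  tasks:
    - name: Apply all pending updates (RHEL-family hosts)
      ansible.builtin.yum:
        name: '*'
        state: latest
      when: ansible_facts['os_family'] == 'RedHat'
    - name: Reboot to pick up kernel updates
      ansible.builtin.reboot:
        reboot_timeout: 600
```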
- Experience with log analytics tools such as New Relic, Elasticsearch, Logstash, Kibana (ELK), and AWS Kinesis; also used Splunk, CloudWatch, and Nagios to monitor servers.
- Good experience with Azure cloud services: Azure Storage, IIS, Azure Active Directory (AD), Azure Resource Manager (ARM), Azure Blob Storage, Azure VMs, SQL Database, Azure Functions, Azure Service Fabric, Azure Monitor, and Azure Service Bus.
- Expertise in managing SQL Server applications, providing operational support, and running queries against NoSQL databases such as MongoDB and DynamoDB, RDBMSs such as MySQL and SQL Server, and data warehouses such as Redshift.
- Experience with SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service) solutions.
- Proficient in implementing and configuring ticket management systems such as ServiceNow, Jira Service Desk, Zendesk, and Freshdesk to meet organizational needs.
- Expertise with CloudWatch, Nagios, and Splunk for monitoring IT infrastructure.
- Experience with Bitbucket and Git as version control systems to manage code updates; used tools such as Jenkins and CircleCI for CI/CD workflows.
- Experience with application servers in Windows, Linux, and UNIX environments, such as Apache Tomcat, WebLogic, and WebSphere.
- Hands-on experience with JIRA for creating bug tickets and workflows, pulling dashboard reports, and creating and planning sprints.
- Experience with branching techniques appropriate for Agile, Scrum, and software development life cycle (SDLC) models.

Education:
Bachelor's in Computer Science, Jawaharlal Nehru University, Hyderabad, India, 2012

Technical Skills
Cloud Technologies: AWS & Azure, DevOps
AWS: RDS, IAM, Auto Scaling, EC2, S3, EBS, VPC, ELB, AMI, SNS, CloudFront, CloudWatch, CloudTrail, OpsWorks, Security Groups
Container Tools: Docker, Docker Swarm, Kubernetes
Web/App Servers: Apache Tomcat, WebSphere, WebLogic
CI & CD Tools: Jenkins, Ansible, Chef, Puppet, Terraform
Version Control: Git, SVN, Bitbucket
Methodologies: Agile, Scrum, Waterfall
Monitoring Tools: Nagios, Splunk, Prometheus, CloudWatch, Grafana
Build and Release: Ant, Maven, Gradle
Scripting Languages: Bash, Python, Shell, Golang
Databases: NoSQL, MongoDB, MySQL, DynamoDB, Redshift
Artifact Repositories: Nexus, Artifactory
Tracking Tools: JIRA, Remedy
Platforms: Windows, UNIX, and Linux

Professional Experience

Client: McKinsey, San Diego, CA                Duration: July 2021 - Till Date
Role: AWS DevOps Engineer
Description: This project's objective was to collaborate with the IT team to use AWS Cloud, VPCs, Lambda functions, and EC2 virtual machines to unify the company's DevOps tools, previously spread across several service providers, into a single platform. Ansible and Terraform accelerated the migration. The outcome was a newly built DevOps pipeline running on the AWS cloud.
Responsibilities:
- Designed and implemented highly available Kubernetes clusters in Dev, QA, and Prod, and owned all Kubernetes clusters.
- Worked across site reliability, Infrastructure as Code (IaC), and CI/CD tooling: Bamboo, AWS DevOps, Jenkins, CircleCI, New Relic, OpsGenie, Snowflake, RDS, Python, Bash, Groovy, Golang, Terraform, SRE.
- Installed multi-master, highly available Kubernetes clusters on OpenStack.
- Created and configured infrastructure using Heat templates in OpenStack to create instances, security groups, networks, VIPs, etc.
- Set up, configured, and managed Amazon RDS instances for database engines such as MySQL, PostgreSQL, Oracle, and SQL Server.
- Developed templates and scripts to automate everyday developer and operations functions.
- Deployed, configured, and maintained MongoDB databases and replica sets across multiple environments; planned and performed MongoDB database upgrades and migrations.
- Monitored MongoDB databases using Ops Manager and MongoDB Cloud Manager and tuned their performance.
- Good knowledge of performance tuning of MongoDB instances, configuration parameters, schema design, and indexing, both on premises and on AWS (RDS, EC2, S3).
- Worked with application teams to understand their applications' database needs and optimize them using MongoDB.
- Created Docker images for microservices applications and automated the entire flow using Jenkins pipelines, Spark, PySpark, EKS, and the AWS environment, along with Dremio and Databricks.
- Wrote Jenkins pipelines and shared libraries to automate end-to-end pipelines.
- Designed, implemented, and managed AWS cloud infrastructure using Terraform, AWS CloudFormation, and Golang.
- Hosted and managed J2EE applications on AWS infrastructure, leveraging services such as Amazon EC2, Amazon ECS, and AWS Elastic Beanstalk.
- Automated CI/CD for microservice applications using Stash/GitLab, Maven, JUnit, SonarQube with quality gates, Docker, Kubernetes, Selenium, and JFrog Artifactory.
- Wrote Dockerfiles, automated Docker image builds using Jenkins, and deployed the images to Kubernetes.
- Migrated data from Google Storage buckets to Google BigQuery; ingested XMLs from source GS buckets to intermediate GS buckets in Parquet format using Spark and Scala.
- Migrated data from AWS S3 to GCS using Airflow/Cloud Composer, enabling schema-on-read for AWS S3 JSON files, and loaded the data into BigQuery.
- Ingested JSON files from source AWS S3 buckets to intermediate Google Storage buckets in JSON format using Airflow.
- Automated application deployments to Kubernetes using YAML manifests, later migrated them to Helm charts, and maintained all Helm charts in the relevant repositories.
- Audited and analyzed resource usage across all microservices and fine-tuned them for optimal performance.
- Worked with New Relic for monitoring and auditing.
- Designed and deployed canary-based deployments on AWS with custom CRDs to support custom tweaks.
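The YAML-manifest deployments mentioned above (before the migration to Helm) typically look like this minimal sketch; the service name, image path, and resource figures are illustrative placeholders.

```yaml
# Illustrative Kubernetes Deployment for a containerized microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```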
- Designed an approach to deploy and maintain global properties for Java microservices to replace Spring Config Server.
- Deployed Jenkins with dynamic slaves on Kubernetes and configured it with external Jenkins servers for triggering jobs.
- Set up custom dashboards and alerts to monitor PySpark job status, resource utilization, and throughput, enabling proactive issue detection and resolution.
- Designed and implemented a Git strategy for maintaining feature and release branches.
- Automated Jira issue creation when errors occur during Jenkins builds.
- Installed, configured, and deployed Neo4j, Redis, and Bonita BPM.
- Installed and configured proxies and reverse proxies using Apache and Nginx.
- Installed, configured, and deployed SiteMinder SSO on Kubernetes and performance-tuned its engine and Apache.
- Specified Node Group configurations such as instance types, desired capacity, maximum capacity, and scaling settings based on workload requirements.
- Generated reports, dashboards, and analytics from ticket management system data to track key performance indicators (KPIs), identify trends, and make data-driven decisions for process improvement.
- Installed, configured, and deployed GlusterFS with Heketi for storage on Kubernetes.
- Wrote Vagrantfiles and shell scripts to automate local servers for developers.
- Used Rundeck to schedule jobs and for regular operations.
- Implemented a logging cluster (EFK: Elasticsearch, Fluentd, and Kibana) on Kubernetes to collect logs and create alerts on top of them.
- Implemented a monitoring cluster with various exporters to monitor every aspect of the cluster via Prometheus, Grafana, and Alertmanager.
- Implemented remote node-exporter and SNMP exporter, and integrated a Java service for collecting SNMP traps and pushing them to Prometheus.
- Designed and automated Jenkins pipelines to generate various configurations of Prometheus exporters and deploy them to Kubernetes.
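A Jenkins pipeline of the kind described in this role (build an image, push it, deploy to Kubernetes) can be sketched as a declarative Jenkinsfile. This is an assumption-laden sketch: the registry URL, image name, and deployment name are placeholders, not values from the original pipelines.

```groovy
// Illustrative declarative Jenkinsfile: build, push, deploy.
// Registry URL, image name, and deployment name are placeholders.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .'
            }
        }
        stage('Push image') {
            steps {
                sh 'docker push registry.example.com/app:${BUILD_NUMBER}'
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                sh 'kubectl set image deployment/app app=registry.example.com/app:${BUILD_NUMBER}'
            }
        }
    }
}
```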
- Created Grafana dashboards for custom cluster metrics.
Environment: Linux, AWS, OpenStack, Docker, Kubernetes, GlusterFS, Jenkins, SonarQube, Quality Gates, Vagrant, Shell, Ansible, Prometheus, Grafana, Alertmanager, New Relic, Rundeck, Fluentd, Elasticsearch, Kibana, Artifactory, Redis, Jira, Confluence, Stash, GitLab.

Client: Macy's, Atlanta, GA                Duration: Nov 2019 - June 2021
Role: AWS DevOps Engineer
Description: Provided clients with infrastructure management and DevOps services to manage internal tooling and projects. Used a variety of tools, including Docker, GitLab, and Terraform, to provision, script, and deploy production instances to AWS.
Responsibilities:
- Worked on AWS services EC2, IAM, S3, Lambda, CloudWatch, DynamoDB, SNS, Elastic Beanstalk, VPC, ELB, RDS, EBS, Route 53, ECS, and Auto Scaling.
- Developed safe, highly scalable, and adaptable systems that managed both anticipated and unforeseen load spikes using AWS cloud services such as EC2, Auto Scaling, and VPC.
- Automated infrastructure provisioning with Terraform and AWS CloudFormation templates for consistency and repeatability, and integrated AWS IAM for access control management and security best practices.
- Implemented automated backup strategies using Amazon RDS automated backups and manual snapshots to ensure data integrity and disaster recovery preparedness.
- Set up and operated Route 53 for AWS web instances, ELB, and CloudFront in an AWS environment.
- Implemented application tracing and debugging for J2EE applications on AWS using tools such as AWS X-Ray, enabling visibility into application performance and behavior.
- Managed container clusters, deployments, and scaling of Golang microservices using AWS-managed Kubernetes services such as Amazon EKS, optimizing resource utilization and performance.
- Migrated on-premises databases to Amazon RDS, including database schema conversion, data migration, and compatibility testing.
- Automated deployment processes for J2EE applications on AWS using tools such as AWS CodeDeploy, Jenkins, and GitLab CI/CD pipelines.
- Developed pods for applications using Kubernetes and Docker, and used Kubernetes to launch a web application across a multi-node Kubernetes cluster.
- Provisioned and managed AWS cloud resources using infrastructure as code (IaC) tools such as AWS CloudFormation and Terraform to deploy PySpark clusters and supporting services.
- Crafted playbooks for deployment automation and incorporated Ansible playbooks into Jenkins tasks for a continuous delivery architecture.
- Developed serverless applications and functions using AWS Lambda and Golang, leveraging event-driven architecture, scalability, and cost-effectiveness.
- Optimized database performance through parameter tuning, index optimization, query optimization, and instance scaling based on workload demands.
- Designed project workflows and pipelines for continuous integration and deployment onto various web/application servers using Jenkins.
- Developed Docker images using Dockerfiles, handled Docker volume management, worked on Docker container snapshots, and applied Docker automation solutions for the CI/CD paradigm.
- Managed and executed upgrades of Node Groups to newer Kubernetes versions or Amazon EKS platform versions to leverage new features, enhancements, and security patches.
- Used Ansible automation to set up Nagios for EC2 Linux instance monitoring.
- Wrote Python and shell scripts to automate routine tasks as part of the CI/CD processes; deployed applications to Tomcat and WebLogic servers using Chef and Jenkins.
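Routine-task automation like the Python scripting mentioned above might look like the following sketch, which enforces a retention period on a directory of files; the directory layout and the 7-day cutoff are illustrative assumptions.

```python
# Illustrative routine-task automation: delete files older than N days.
# The retention period and directory layout are assumptions, not project values.
import os
import time
from pathlib import Path


def purge_old_files(directory: str, max_age_days: int) -> list:
    """Remove regular files older than max_age_days; return the paths removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed


if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        old = Path(tmp) / "old.log"
        new = Path(tmp) / "new.log"
        old.write_text("stale")
        new.write_text("fresh")
        # Backdate the old file by 10 days so the 7-day policy removes it.
        ten_days_ago = time.time() - 10 * 86400
        os.utime(old, (ten_days_ago, ten_days_ago))
        print(purge_old_files(tmp, max_age_days=7))
```

In a Jenkins job the same function would typically be invoked from a small CLI wrapper so the retention period can be passed as a build parameter.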
- Configured and managed an Elasticsearch ELK stack for the collection, search, and analysis of log files across the servers; assessed system logs using the ELK stack.
- Set up Nexus for artifact repository management; created, shared, and discussed projects and content using Confluence.
- Trained application teams to create and deliver code using GitHub, Jenkins, and Ansible.
- Developed a continuous integration and on-demand build system from scratch with Jenkins, Ant, and Maven; performed branching, tagging, and release activities in version control tools such as Bitbucket.
- Monitored deployments with Nagios and Splunk; handled the code-deployment phase with Puppet, Chef, and Ansible, along with testing and debugging.
- Implemented CRUD (Create, Read, Update, Delete) operations, data modeling, and query optimization techniques in Golang applications interacting with AWS databases.
- Installed a Docker Registry for local upload and download of Docker images, including from Docker Hub, using Jenkins pipelines; deployed a Windows application using Bamboo.
- Developed custom scripts using Python, Perl, and shell (Bash) to automate jobs.
- Used Jira to manage workflows and tickets.
- Maintained, administered, and supported Red Hat Enterprise Linux servers.
Environment: AWS, Ansible, Chef, Jenkins, GitHub, Docker, Kubernetes, Splunk, ECS, Python, Maven, Ant, Jira, Apache Tomcat, Nagios, Linux, Bamboo, Windows, Perl, Shell, Bash, Red Hat.

Client: Fannie Mae, Reston, Virginia                Duration: Aug 2017 - Oct 2019
Role: DevOps Engineer
Description: As a DevOps engineer, my primary responsibilities included managing the infrastructure and the CI/CD pipelines. The project's primary goal was to use Python to automate the process.
Used Terraform with AWS and built a CI/CD pipeline to provision the infrastructure.
Responsibilities:
- Wrote Terraform scripts to improve the AWS infrastructure; configured Jenkins jobs to spin up infrastructure using Terraform scripts and modules.
- Used Jenkins to establish a continuous integration and continuous deployment (CI/CD) pipeline that manages application deployment from development to production.
- Used Prometheus and Grafana to establish infrastructure and service monitoring.
- Worked on clustering, managed local Kubernetes deployments, and installed and configured Kubernetes.
- Created deployment scripts, templates, and automation workflows to streamline the deployment of J2EE applications, ensuring consistency and reliability across environments.
- Implemented and owned the continuous delivery (CD) and continuous integration (CI) processes; automated routine chores using Python and shell scripts along with Bitbucket and push tooling.
- Managed container clusters and deployments of J2EE microservices using AWS-managed Kubernetes services such as Amazon EKS, optimizing resource utilization and performance.
- Implemented DevOps practices and automation workflows for PySpark applications, ensuring repeatability, scalability, and reliability in production environments.
- Used shell scripts in Jenkins jobs to create, distribute, and deploy packages into Tomcat application servers for Git-administered systems; Jenkins served as a full-cycle continuous delivery tool.
- Managed directory structures and containers: attached to running containers, removed images, and took Docker container snapshots.
- Created playbooks using Ansible and the Automation Agent after installing Nagios on Windows and Linux servers.
- Leveraged AWS services such as Amazon EMR (Elastic MapReduce) for scalable, cost-effective big data processing, and Amazon S3 for data storage and retrieval in PySpark applications.
- Implemented CI for end-to-end automation of all builds and deployments using Bamboo and TeamCity; created automation scripts in Perl, shell, and Bash.
- Integrated the ELK (Elasticsearch, Logstash, Kibana) stack into an existing appliance framework for real-time log aggregation, analysis, and querying: Elasticsearch for deep search and data analytics, Logstash and Splunk for centralized logging, log enrichment, and parsing, and Kibana for powerful data visualizations.
- Automated the build process, built source code, and produced end users' dynamic views and snapshot views using Ant and Maven scripts.
- Integrated SonarQube into Jenkins and Azure DevOps tooling for static code analysis.
- Created projects in JIRA, assigned permissions to users and groups, and created mail handlers and notification schemes.
- Created post-commit and pre-push hooks using Python in SVN and Git.
- Performed Linux backup/restore with tar, including disk partitioning and formatting.
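The tar-based backup/restore routine above can be scripted; since this resume leans on Python for automation, here is a minimal stdlib sketch. The paths and archive name are illustrative, not taken from the original environment.

```python
# Illustrative backup/restore helper using the stdlib tarfile module.
# Source, archive, and destination paths are placeholders.
import tarfile
from pathlib import Path


def backup(src_dir: str, archive_path: str) -> None:
    """Create a gzip-compressed tar archive of src_dir."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=Path(src_dir).name)


def restore(archive_path: str, dest_dir: str) -> None:
    """Extract the archive into dest_dir."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)
```

In a cron or Jenkins context, `backup()` would run nightly and the archive would then be copied off-host (e.g. to S3) by a separate step.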
- Configured and installed Apache web servers on various Linux and UNIX servers.
Environment: AWS, EC2, S3, EBS, AMIs, AWS IAM, VPC, Jira, Terraform, CloudFormation, Jenkins, CI/CD pipeline, Docker, Kubernetes, Ansible, microservices, Chef, shell scripting, Bitbucket, Git, Ant, Maven, CloudWatch, Shell, Bash, Perl, Nagios, Azure, Python, Grafana, Prometheus.

Client: Knoah Solutions Pvt Ltd                Duration: March 2014 - Nov 2016
Role: DevOps Engineer
Description: Responsible for developing and delivering scripts and automation tools used to create, integrate, and deploy software releases to various platforms, as well as providing tool awareness and services to help product management and project teams manage and deploy releases into production.
Responsibilities:
- Worked on Terraform as a tool for building, changing, and versioning infrastructure safely and efficiently, using its key features: Infrastructure as Code, execution plans, resource graphs, and change automation.
- Containerized the application and all of its dependencies using Docker: managed Docker volumes and created Dockerfiles, Docker Compose files, and Docker container snapshots, all with the help of Ansible.
- Used Kubernetes to create a service mesh for production traffic management and to manage deployment rollouts and rollbacks.
- Created Git, Jenkins, and Ansible test automation processes; improved continuous integration (CI) and continuous deployment (CD); and developed DevOps automation.
- Established Jenkins automation and created Terraform templates (Infrastructure as Code) to stand up staging and production environments.
- Worked on Jenkins-powered infrastructure automation, including software and service configuration using Ansible playbooks. Developed Ansible playbooks for software deployment, patching, virtual and physical instance provisioning, and configuration management.
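Terraform usage of the kind described here (declare resources, review an execution plan, apply) can be sketched as a minimal configuration; the region, provider version, and CIDR ranges are illustrative assumptions.

```hcl
# Illustrative Terraform sketch: provider, VPC, and a subnet.
# Region, version constraint, and CIDRs are placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```

`terraform plan` renders the execution plan mentioned above before `terraform apply` makes any change, which is what makes the change-automation workflow reviewable.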
- Production experience in large environments using configuration management tools such as Chef, Puppet, and Ansible to achieve continuous integration (CI)/continuous delivery (CD).
- Experience with Maven and Ant build repository managers such as Nexus and Artifactory.
- Created Docker images using Dockerfiles and configured Kubernetes for high availability; deployed and managed applications in the production environment using Ansible.
- Automated the DevOps service server infrastructure setup using shell, Python, and Puppet/Ansible scripts.
- Worked with distributed system monitoring and cloud operations instrumentation using AppDynamics, Splunk, Nagios, and other tools.
- Used Jira ticketing to launch new projects and include new application components.
- Proficient in web services, API gateways, and application integration development and design, with knowledge of application lifecycle management, Scrum, and Agile development.
- Wrote Perl and shell scripts for UNIX that allowed manual deployment of code to different environments and emailed the team once the build finished; maintained vital servers running Linux, Solaris UNIX, and web services.
Environment: AWS, Elasticsearch, CloudWatch, AWS CLI, Amazon API, S3, IAM, Jenkins, DevOps, Ansible, Terraform, GitHub, SonarQube, OpenShift, Kubernetes, Python, WebLogic, UNIX, Jira.

Client: Enquero, India                Duration: Sep 2011 - Feb 2014
Role: Build & Release Engineer
Description: As a member of the Release Engineering group, redefined processes and implemented tools for software builds, patch creation, source control, and release tracking and reporting on the UNIX platform.
Responsibilities:
- Built and released software baselines, performed code merges, created branches and labels in Subversion/Git, and interfaced between development and infrastructure.
- Collaborated with developers to establish effective branching and labeling conventions using Subversion (SVN) source control.
- Demonstrated proficiency in build management tools such as Ant and Maven for crafting build.xml and pom.xml files.
- Conducted AWS usage cost estimations and identified operational cost control mechanisms to optimize resource allocation.
- Automated Linux production server setup using Puppet scripts; designed and implemented a Puppet-based configuration management system for all new Linux machines (physical and virtual).
- Automated legacy infrastructure in the interim while working through Chef; managed user accounts and roles across Rally, MySQL, and production and staging servers.
- Implemented Python and Perl scripts for release and build automation; adapted and automated scripts to suit the requirements.
- Developed and maintained UNIX/Perl/Ant scripts for build and release tasks.
Environment: Subversion, Git, Ant, Maven, AWS, Puppet, Chef, Rally, MySQL, Python, Perl, UNIX.
