Candidate Information
Title: Senior Cloud/DevOps Architect
Target Location: US-TX-Dallas
Candidate's Name
Sr. Cloud Architect Professional
Phone: PHONE NUMBER AVAILABLE | Email: EMAIL AVAILABLE

Profile Summary:
Dedicated Senior Cloud Architect with a proven track record of 17+ years in Information Technology, including 11+ years specializing in cloud technologies. An expert in AWS and Azure cloud architecture, adept at orchestrating seamless migrations from monolithic to microservices architectures, with a strong academic background in Information Assurance & Cybersecurity.
- Extensive experience in AWS and Azure cloud architecture.
- Specializes in migrating organizations from monolithic to microservices architectures.
- In-depth knowledge of AWS and Azure services, including EC2/Virtual Machines, EKS, VPC, RDS, ELB, IAM, Azure AD, Route 53, Direct Connect, and ExpressRoute.
- Expertise in Docker, Kubernetes, and EKS; skilled in deploying and managing containerized applications at scale.
- Proficient in Bash, Python, and Groovy scripting.
- Automates build, deploy, and CI/CD workflows using Terraform and CloudFormation.
- Extensive experience with core AWS database services; designs and implements robust, scalable database solutions.
- Proven ability to secure cloud environments with IAM, CloudWatch, and Splunk; proactive monitoring with Nagios for optimal uptime.
- Deep understanding of key AWS components, with successful implementations of advanced services such as EKS and CloudFront.
- Track record of deploying web applications on AWS S3 via CloudFront and Route 53 with CloudFormation.
- Comprehensive understanding and execution of the full Software Development Life Cycle (SDLC) with a DevOps and Agile approach.
- Proficient in Linux administration (Ubuntu, Red Hat) and Windows environments.
- Proficient in business process analysis, use case creation, and requirements definition.
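The S3/CloudFront/CloudFormation deployment pattern mentioned above can be sketched in Python as a template generator. This is an illustrative sketch only, not a complete, deployable template; the bucket name and resource logical IDs are hypothetical, and a production distribution would need a full cache policy and origin access configuration.

```python
import json

def static_site_template(bucket_name: str) -> dict:
    """Build a minimal CloudFormation template skeleton for a static site:
    an S3 bucket fronted by a CloudFront distribution (illustrative only)."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "SiteBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            },
            "SiteDistribution": {
                "Type": "AWS::CloudFront::Distribution",
                "Properties": {
                    "DistributionConfig": {
                        "Enabled": True,
                        "DefaultRootObject": "index.html",
                        "Origins": [{
                            # Reference the bucket's regional domain name
                            "Id": "s3-origin",
                            "DomainName": {"Fn::GetAtt": ["SiteBucket", "RegionalDomainName"]},
                            "S3OriginConfig": {},
                        }],
                        "DefaultCacheBehavior": {
                            "TargetOriginId": "s3-origin",
                            "ViewerProtocolPolicy": "redirect-to-https",
                        },
                    }
                },
            },
        },
    }

# Bucket name is illustrative; serialize for deployment via the AWS CLI or SDK.
template_json = json.dumps(static_site_template("my-site-assets"), indent=2)
```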
- Drives innovation in services and businesses.
- Engineered end-to-end cloud system solutions prioritizing resilience, security, performance, availability, and scalability for operational success.
- Hands-on experience with monitoring tools such as CloudWatch, Prometheus, Datadog, Grafana, and Nagios.
- Exceptional verbal and written communication skills; engages effectively with technical and non-technical stakeholders across all organizational levels.

Technical Skills:
DevOps and Containers: Jenkins, Kubernetes, Docker, Ansible, Git, AWS CodePipeline/Azure DevOps, EKS/AKS, ECR/ACR, ECS/ACS, ELK, JFrog Artifactory, SonarQube
Programming Languages: Java/Groovy, Python, Bash, NodeJS/JavaScript
Cloud: AWS, Azure, Site Reliability Engineering (SRE)
Markup: YAML, XML, JSON, HCL
Operating Systems: Unix/Linux, Windows, macOS
Monitoring and Security: CloudWatch, CloudTrail, Splunk, ELK, Azure Monitor, Sentinel, Defender for Cloud, Datadog, Grafana
Network Protocols: TCP/IP, UDP, DNS, DHCP, SMTP, SNMP, ICMP
Databases: SQL Server, MySQL, PostgreSQL, Amazon Redshift, Cassandra, NoSQL, MongoDB
IaC: CloudFormation, Azure Resource Manager, Terraform

Professional Experience

Sr. DevSecOps Engineer | Southwest Airlines, Dallas, Texas | Aug 2023 to Present
Project Summary: As a DevSecOps Engineer for the Southwest Mytrips NextGen experience, I led the design and implementation of scalable, cloud-native microservices built on AWS Lambda and API Gateway, leveraging a broad set of AWS services to enhance the user experience while upholding DevSecOps principles.
Responsibilities:
- Architected scalable, cloud-native microservices for the Southwest Mytrips NextGen experience using AWS Lambda and API Gateway, leveraging various AWS services to elevate the user experience while adhering to DevSecOps principles.
- Implemented cloud-native microservices on both Microsoft Azure (Functions and API Management) and AWS (Lambda and API Gateway) to enhance the Mytrips platform's scalability and agility.
- Mastered core AWS services (EC2, S3, IAM, Lambda, CloudFormation, Route 53, VPC, RDS) for practical applications and seamless compatibility across environments.
- Established and maintained real-time application monitoring dashboards (Sumo Logic and Dynatrace) across Microsoft and Linux platforms for continuous security and performance optimization.
- Enhanced security by integrating comprehensive tooling, including AWS Shield, AWS WAF, and network ACLs.
- Conducted rigorous static and dynamic security analyses with Veracode in Linux environments, and managed secrets efficiently with AWS Secrets Manager, fortifying application safety within DevSecOps practices.
- Conducted comprehensive security assessments (Azure Security Center and Azure DevOps) to strengthen the security posture of Linux applications deployed on Azure, aligning with DevSecOps best practices.
- Configured and managed AWS CloudTrail to capture and log API activity across AWS services, providing comprehensive visibility into user actions, resource changes, and security events.
- Communicated technical information effectively to diverse audiences, fostering clear communication within DevSecOps teams.
- Orchestrated containerized applications using Kubernetes on AWS, ensuring scalability, reliability, and efficient resource utilization.
- Contributed to continuous improvement of DevOps processes and infrastructure automation on AWS, using Terraform, Ansible, and Jenkins for CI/CD pipelines and infrastructure-as-code (IaC) practices.
- Designed, implemented, and managed Kubernetes clusters on AWS EC2 instances, leveraging Auto Scaling groups and Elastic Load Balancing to optimize performance and availability.
- Configured and managed an Elasticsearch, Logstash, and Kibana (ELK) stack on AWS EC2 instances for centralized log aggregation, analysis, and visualization.
- Created sequence, use case, and activity diagrams for the Mytrips application, providing valuable input for NextGen development decisions.
- Practiced Agile methodologies to deliver software iteratively and incrementally, fostering collaboration, flexibility, and rapid response to changing requirements.
- Implemented the Scrum framework as a Scrum Master and team member, facilitating daily stand-ups, sprint planning, sprint reviews, and retrospectives to drive team productivity and continuous improvement.
- Completed a rigorous AWS cloud services training program.

Sr. AWS & Azure Architect | Travelers Co., New York City, NY | Jun 2021 to Jul 2023
Project Summary: As a Sr. AWS & Azure Architect for Travelers, I orchestrated the design and deployment of secure, scalable, and highly available cloud environments on AWS, ensuring seamless support for critical Travelers applications and data. Leveraging Site Reliability Engineering (SRE) principles, I prioritized reliability and fault tolerance to uphold the highest standards of operational excellence.
Responsibilities:
- Designed and deployed secure, scalable, and highly available cloud environments on AWS and Azure to support core Travelers applications and data.
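The Lambda-behind-API-Gateway microservice pattern described in the Southwest role above might look like the following minimal handler. This is a hedged sketch: the `/trips` route, payload shape, and event structure (API Gateway HTTP API proxy format) are illustrative assumptions, not the actual Mytrips code.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway HTTP API proxy
    integration: route on method and path, return a JSON response.
    Route names and payloads are hypothetical examples."""
    method = event.get("requestContext", {}).get("http", {}).get("method", "GET")
    path = event.get("rawPath", "/")

    if method == "GET" and path == "/trips":
        status, body = 200, {"trips": []}  # placeholder payload
    else:
        status, body = 404, {"error": "not found"}

    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

# Local invocation with a synthetic API Gateway event:
response = lambda_handler(
    {"rawPath": "/trips", "requestContext": {"http": {"method": "GET"}}},
    None,
)
```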
- Utilized SRE principles for reliability and fault tolerance.
- Supported infrastructure-as-code (IaC) practices with CloudFormation and Terraform for efficient, automated deployments across critical Travelers infrastructure (e.g., VPC, RDS, S3) on both AWS and Azure.
- Implemented IaC on Azure using Azure Resource Manager (ARM) templates, automating the provisioning and configuration of Azure resources for consistency and efficiency.
- Streamlined software development lifecycles for internal applications by automating builds (Maven/Gradle) and deployments (Jenkins) with IaC tools on both AWS and Azure; managed Dev, Staging, Prod, and DR environments for smooth operations across both platforms.
- Migrated on-premises applications to AWS and Azure, optimizing scalability with Elastic Load Balancers (ELBs), Azure Load Balancers, and auto-scaling policies to meet Travelers' business needs.
- Deployed and managed containerized applications on Azure Kubernetes Service (AKS), leveraging Kubernetes orchestration for a scalable and resilient microservices architecture.
- Set up and configured an Azure Log Analytics workspace to collect, analyze, and visualize logs and telemetry from Azure resources, enabling centralized logging and advanced analytics for operational insights and compliance reporting.
- Automated deployment, scaling, and management of containerized applications using Kubernetes Helm charts and Kubernetes Operators on AWS infrastructure.
- Implemented monitoring and logging for Kubernetes clusters on AWS using Prometheus, Grafana, and AWS CloudWatch, ensuring real-time visibility into system health and performance metrics.
- Established best practices for CI/CD workflows, including version control, automated testing, code quality checks, and deployment strategies, to accelerate the release cycle and ensure consistent software delivery.
- Provided guidance and support to teams on AWS infrastructure provisioning, networking, security, and integration with Kubernetes and ELK stack components.
- Implemented robust infrastructure automation using Ansible playbooks, Python scripts, and configuration management techniques for centralized control and efficiency across both AWS and Azure environments.
- Configured Azure Monitor to gain insight into the performance and health of Azure resources, leveraging metrics, alerts, and dashboards for proactive monitoring and troubleshooting.
- Proactively monitored and optimized cloud infrastructure costs with tools such as Nagios and Datadog to ensure efficient resource allocation on both AWS and Azure.
- Provided comprehensive support for AWS and Azure services specific to Travelers' cloud environment, resolving technical issues swiftly to maintain smooth operations.
- Designed and implemented CloudTrail logging configurations to capture and retain audit trails of API calls, enabling compliance with regulatory requirements (e.g., PCI DSS, HIPAA, GDPR) and enhancing security posture.
- Automated repository management, testing, and deployment workflows using Python and Bash scripts, driving efficiency and streamlining processes for IT teams across both AWS and Azure environments.
- Enabled collaborative project management and API integration through Jira and Confluence for improved communication and development practices, integrating with AWS and Azure services as needed.

Sr. DevOps Engineer | Merck & Co., Rahway, New Jersey | Apr 2019 to May 2021
Project Summary: As a Sr. DevOps Engineer at Merck, I orchestrated the provisioning of AWS resources, leveraging technologies such as EC2 and ECS clusters to ensure scalable and reliable deployments for critical applications. My focus on efficiency and scalability led me to champion containerization with Docker, optimizing resource utilization and achieving high scalability through load balancers and distributed architectures.
Responsibilities:
- Led cloud infrastructure provisioning on AWS, leveraging EC2 and ECS clusters to ensure scalable and reliable deployments for critical Merck applications.
- Advocated containerization with Docker for production systems, optimizing resource utilization and achieving high scalability with load balancers and distributed architectures.
- Spearheaded a successful migration from ECS to Kubernetes for enhanced resource management and improved scalability to meet Merck's growing needs.
- Integrated CI/CD pipelines with AWS services such as AWS CodeCommit, AWS CodeDeploy, and AWS Elastic Beanstalk to automate code deployment in a reliable and scalable manner.
- Collaborated on secure and performant VPC design and implementation, optimizing network configuration with subnets, Availability Zones, and security best practices to protect sensitive Merck data.
- Implemented proactive monitoring with CloudWatch and CloudTrail, ensuring optimal system health, identifying security risks, and upholding compliance.
- Promoted DevOps practices by automating infrastructure provisioning and configuration across environments with Ansible, Python, and Bash scripts.
- Streamlined code management and collaboration by managing Git repositories for efficient version control.
- Integrated DevOps practices into development and operations workflows, fostering collaboration, automation, and continuous delivery to accelerate time to market and enhance product quality.
- Developed custom CloudTrail event filters and alerts using Amazon CloudWatch Events and AWS Lambda functions to automate incident response, threat detection, and compliance enforcement workflows.
- Authored infrastructure-as-code (IaC) templates using CloudFormation and Terraform to ensure efficient and repeatable cloud resource management for Merck's infrastructure.
- Utilized the Kanban method to visualize and optimize workflow, manage work in progress (WIP), and identify bottlenecks, ensuring smooth and efficient delivery of features and enhancements.
- Optimized serverless computing by implementing event-driven AWS Lambda functions for cost-effective task processing.
- Delivered data migration solutions aligned with Merck's data management strategies, ensuring a seamless transition to the cloud.
- Designed and implemented secure cloud solutions prioritizing data integrity and confidentiality, adhering to Merck's strict compliance requirements.
- Established and managed CI/CD pipelines with Jenkins, Git, and a Docker registry, enabling efficient, automated code integration and deployment for faster development lifecycles.
- Managed database deployments in the cloud, ensuring optimal performance, scalability, and data integrity for critical Merck applications.
- Utilized collaboration tools to foster effective communication and coordination among team members, promoting a collaborative and efficient cloud environment.

Sr. AWS Architect | General Motors, Detroit, Michigan | Apr 2017 to Mar 2019
Project Summary: As the driving force behind cloud infrastructure initiatives at General Motors, I spearheaded the large-scale migration of Linux environments to AWS, meticulously orchestrating migration plans and deploying EC2 instances within secure Virtual Private Clouds (VPCs).
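The custom CloudTrail event filtering described in the Merck role above can be sketched as a Lambda-style function over CloudTrail records. This is an illustrative sketch: the watched event names and the output shape are assumptions, not the actual filters used.

```python
# Example watch list of API calls that commonly signal risky changes
# (illustrative; a real deployment would tune this list).
SENSITIVE_EVENTS = {
    "DeleteTrail",
    "StopLogging",
    "PutUserPolicy",
    "AuthorizeSecurityGroupIngress",
}

def flag_sensitive_calls(cloudtrail_payload: dict) -> list:
    """Return summaries of CloudTrail records whose eventName is on the
    watch list, e.g. for a Lambda to forward to an alerting topic."""
    flagged = []
    for record in cloudtrail_payload.get("Records", []):
        if record.get("eventName") in SENSITIVE_EVENTS:
            flagged.append({
                "eventName": record.get("eventName"),
                "user": record.get("userIdentity", {}).get("arn", "unknown"),
                "sourceIP": record.get("sourceIPAddress"),
            })
    return flagged
```

In practice a function like this would run inside the Lambda handler triggered by CloudWatch Events (now EventBridge), with the flagged records published to SNS or a ticketing system.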
My focus on security was unwavering: I designed and implemented robust measures using AWS CloudFormation templates and Ansible modules, configuring security groups, NACLs, IAM profiles, and roles to ensure least-privilege access.
Responsibilities:
- Led the large-scale migration of Linux environments to AWS, orchestrating migration plans and deploying EC2 instances within secure Virtual Private Clouds (VPCs).
- Designed and implemented robust security measures using AWS CloudFormation templates and Ansible modules, including security groups, network ACLs, IAM profiles, and roles for least-privilege access.
- Applied Lean principles to eliminate waste, improve flow, and maximize value delivery, focusing on customer needs and feedback to drive continuous improvement and innovation.
- Collaborated with development and application teams to ensure optimal database capacity, assess suitable instance classes for workloads, and fulfill specific application requirements.
- Supported DevOps practices by automating continuous deployment with Ansible: crafted YAML-based playbooks, orchestrated Ansible Tower for playbook scheduling and centralized management, and maintained configuration files and remote machine deployments under Git version control.
- Orchestrated performance tests and failover evaluations for Pivotal Cloud Foundry (PCF) applications connecting to highly available RDS Multi-AZ instances; established performance baselines and implemented failover strategies to ensure application resilience.
- Deployed build and maintenance scripts using Docker, Jenkins, and Maven; leveraged Nexus and JFrog Artifactory repositories for secure, efficient artifact storage.
- Engineered and managed Elastic Load Balancers (ELBs) and EC2 Auto Scaling groups; utilized CloudWatch alerts and metrics to fine-tune Auto Scaling launch configurations for optimal resource utilization.
- Authored Ansible playbooks and Puppet manifests for efficient server and application provisioning; integrated Ansible with Jenkins for automated deployments and configuration management.
- Developed Lambda functions for S3 bucket object categorization, strengthening cloud security with tailored configurations; conducted code analysis to identify and address potential vulnerabilities.
- Analyzed logs for performance and database troubleshooting, utilizing CI systems such as Jenkins and Bamboo for automated builds and changelist management.
- Customized Jenkins and Bamboo with various plugins and tools, integrating Maven for automated continuous integration.
- Orchestrated Puppet configurations across diverse systems to manage installations, upgrades, and configurations.
- Managed Kubernetes clusters, oversaw Docker containers, and automated Kubernetes deployments with Ansible playbooks.
- Utilized Ansible Tower to automate software development processes; crafted Terraform templates for virtual network provisioning and infrastructure-as-code (IaC) management.
- Coordinated testing while managing release schedules and reporting in JIRA.
- Actively participated in Change Approval Board (CAB) meetings, contributing to discussions and decisions on production application changes.

AWS Cloud Engineer | Target Corporation, Minneapolis, Minnesota | Feb 2015 to Mar 2017
Project Summary: As a key architect within Target Corporation's cloud initiatives, I played a pivotal role in designing and building highly scalable production systems on AWS. Specializing in load balancers, caching (Memcached), and distributed architectures, I ensured Target's infrastructure could handle high traffic volumes with ease.
Responsibilities:
- Designed and built highly scalable production systems for Target on AWS, specializing in load balancers, caching (Memcached), and distributed (master/slave) architectures to handle high traffic volumes.
- Implemented comprehensive monitoring with CloudWatch and CloudTrail for proactive performance and security management, safeguarding Target's critical infrastructure.
- Led successful cloud migrations by analyzing and strategically moving legacy on-premises applications to AWS, ensuring a seamless transition and optimized performance for a smooth user experience.
- Leveraged AWS provisioning tools (EC2 and ECS) to manage Target's cloud infrastructure, maintaining a continuous integration and delivery (CI/CD) environment for rapid development and deployment.
- Contributed to a secure and performant VPC design for Target's cloud environment, optimizing network configuration with subnets, Availability Zones, and security best practices.
- Promoted DevOps practices by automating tasks across environments with tools such as Ansible and Bash/Python scripts; automated builds, deployments, and releases to streamline development lifecycles.
- Streamlined code management and containerization by managing Git repositories and crafting Docker containers for optimized Linux environments and Amazon Machine Images (AMIs) for Target's applications.
- Configured robust network settings using Route 53, DNS, Elastic Load Balancers (ELBs), IP addresses, and CIDR blocks to ensure optimal connectivity for Target's applications.
- Ensured best practices throughout the development lifecycle for cloud initiatives, promoting quality and efficiency in deploying and debugging cloud solutions.
- Contributed to the AWS community by engaging with customers in the AWS Containers Area of Depth Technical Feedback Community, educating them on containerization solutions.
- Optimized application deployments using Elastic Beanstalk and leveraged event-driven AWS Lambda functions to trigger resource allocation based on specific events, ensuring efficient resource utilization.
- Facilitated seamless data migration from on-premises environments to AWS.
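The VPC subnet and CIDR planning mentioned in the Target role above can be sketched with Python's standard `ipaddress` module. The base CIDR block and subnet sizes are illustrative assumptions, not Target's actual network layout.

```python
import ipaddress

def plan_subnets(vpc_cidr: str, new_prefix: int, count: int) -> list:
    """Carve `count` equal-sized subnets out of a VPC CIDR block,
    e.g. to spread workloads across Availability Zones."""
    net = ipaddress.ip_network(vpc_cidr)
    subnets = list(net.subnets(new_prefix=new_prefix))
    if count > len(subnets):
        raise ValueError("CIDR block too small for requested subnet count")
    return [str(s) for s in subnets[:count]]

# Example: three /24 subnets (one per AZ) from an illustrative 10.0.0.0/16 VPC.
print(plan_subnets("10.0.0.0/16", 24, 3))
# → ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
```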
- Resolved application issues using a combination of services including Amazon Kinesis, Lambda, SQS, SNS, and SWF.
- Established a robust CI/CD pipeline by integrating Jenkins with GitHub and Bitbucket, using various plugins to orchestrate multiple jobs within the build pipeline for a smooth, automated development process.
- Provided expert troubleshooting and support for Kubernetes clusters, databases (RDS and EC2-hosted), and storage solutions (S3, EBS, EFS, and Glacier) to maintain optimal performance within Target's cloud environment.
- Engineered highly available applications for Target using AWS features such as Multi-AZ deployments, read replicas, and ECS to ensure business continuity and minimize downtime.

Build & Release Engineer | Goldman Sachs, New York City, NY | Jan 2013 to Jan 2015
Project Summary: As a pivotal figure within Goldman Sachs' technology landscape, I led the orchestration of Java application build and deployment processes across development, integration, and user acceptance testing (UAT) environments. My expertise in setting up Jenkins on Linux proved invaluable as I configured primary and secondary builds, optimizing build times and ensuring efficient concurrent processing.
Responsibilities:
- Orchestrated the building and deployment of Java applications across development, integration, and UAT environments.
- Set up Jenkins on Linux, configuring primary and secondary builds for concurrent processing and optimized build times.
- Developed automated build scripts using Maven, Perl, and Bash for QA, staging, and production deployments.
- Utilized Bash and Python scripting to automate system administration tasks, improving efficiency and consistency.
- Guided developers on Git branching strategies, labeling, and conflict resolution practices for effective code management.
- Implemented a centralized Maven repository using Nexus to manage dependencies and ensure version control with Git.
- Established and maintained automated build systems using Jenkins, ClearCase, and Perl/Python scripts, streamlining the CI/CD pipeline.
- Defined branching strategies for Subversion to maintain code stability and manage user issues.
- Deployed WebLogic application artifacts using WLST scripts, managing and maintaining Linux environments for optimal application performance.
- Oversaw the implementation of Configuration Management and Change Management policies on Linux systems for centralized control and compliance.
- Led release management meetings, facilitating collaboration and ensuring seamless synchronization between teams for successful deployments.
- Designed and implemented Subversion metadata elements to manage release versions and coordinated cross-team releases for efficient delivery.
- Actively participated in change control meetings, securing approvals for deployments in minor and major release events.

Data Analyst | Teradata, San Diego, CA | Jan 2007 to Dec 2012
Project Summary: As a data analyst at Teradata, I leveraged Teradata tools and SQL queries to conduct in-depth analysis of large datasets, extracting actionable insights to drive decision-making. My expertise extended to designing and implementing efficient data models within Teradata environments, ensuring optimal performance and scalability to meet diverse business requirements.
Responsibilities:
- Conducted in-depth analysis of large datasets using Teradata tools and SQL queries to extract actionable insights and drive decision-making.
- Designed and implemented efficient data models in Teradata to support business requirements, ensuring optimal performance and scalability.
- Optimized queries, database structures, and ETL processes in Teradata to improve performance and reduce latency, ensuring efficient data processing.
- Integrated data from various sources, including data warehouses and data lakes, into Teradata environments to create unified, consistent datasets for analysis.
- Developed interactive dashboards and reports using tools such as Tableau, Power BI, and Teradata Vantage to visualize insights and communicate findings to stakeholders effectively.
- Implemented data quality checks and validation procedures to ensure accuracy, completeness, and consistency of data in Teradata environments.
- Applied advanced statistical and machine learning techniques to analyze data and build predictive models for forecasting, segmentation, and other business use cases.
- Collaborated with cross-functional teams, including data engineers, business analysts, and stakeholders, to understand business requirements, define KPIs, and deliver data-driven solutions.
- Documented data analysis processes, methodologies, and findings, ensuring transparency, reproducibility, and compliance with regulatory requirements.
- Stayed current with the latest trends, technologies, and best practices in data analytics, Teradata, and related domains, driving continuous improvement initiatives within the team.
- Provided guidance, mentorship, and knowledge sharing to junior data analysts and team members, fostering a culture of learning and development.
- Managed end-to-end data analysis projects, including requirements gathering, planning, execution, and delivery, ensuring adherence to timelines, budget, and quality standards.

Certifications
- AWS Certified DevOps Engineer - Professional
- AWS Certified Solutions Architect - Professional
- AWS Certified Security - Specialty

Education Details
- Bachelor of Science in Cloud Computing, Western Governors University
- Cybersecurity Career Studies Certificate, Northern Virginia Community College
- Bachelor of Science in Medical Technology, Norfolk State University
