Candidate's Name
1200 EVERGREEN PT RD, Medina, WA 98039
PHONE NUMBER AVAILABLE | EMAIL AVAILABLE

SUMMARY
Motivated software engineer with a strong computer science background, a passion for innovative solutions, and a commitment to continuous learning. Seeking a dynamic, collaborative environment in which to contribute engineering expertise and drive technological advancement.

TECHNICAL SKILLS
Programming Languages: Python, R, Java
Databases: SQL (MySQL, Oracle, PostgreSQL, PL/SQL), NoSQL (MongoDB, Cassandra, Redis)
Cloud Technologies: AWS (Glue, S3, Lambda, IAM, EMR, KMS, Lake Formation, Kinesis, Step Functions, CloudWatch, SQS, Athena), Microsoft Azure, GCP, Databricks
Big Data Tools: Hadoop, Apache Spark, Hive, Kafka, Pandas, NumPy, Matplotlib, Scikit-Learn
Data Visualization: Tableau, Power BI, SSRS, Power Pivot
Operational Skills: Waterfall, Scrum, Agile SDLC, MS Word, Excel, PowerPoint, JIRA

WORK EXPERIENCE

Software Developer, UnitedHealthcare, May 2024 - Present
Utilized Docker and Kubernetes to containerize applications and orchestrated them on Azure Kubernetes Service (AKS) for high availability and scalability.
Conducted performance tuning and optimization of SQL queries and data processing tasks, resulting in a 30% increase in overall system performance.
Collaborated with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
Developed streaming data pipelines using Azure Databricks and Spark Streaming to deliver real-time insights to stakeholders on Power BI dashboards.
Implemented data security best practices, including data encryption, network security, and access control, using Azure Key Vault and Azure Active Directory.

Software Development Engineer (Big Data Engineering), Amazon, Seattle, WA, October 2022 - May 2024
Developed a data retrieval system using Python and the DynamoDB API to generate recommendations from customer impressions, efficiently filtering and analyzing data for enhanced user insights.
Collaborated with cross-functional teams to build customized dashboards in Salesforce CRM that provided actionable insights, contributing to a 20% increase in sales productivity and revenue growth.
Developed automated CI/CD pipelines, increasing deployment frequency by 40%.
Implemented CloudWatch alarms and dashboards to monitor service performance, CPU utilization, disk usage, and related metrics, upholding operational and monitoring standards.
Created complex ETL scripts with PySpark to integrate data from Kinesis for AWS sellers.
Implemented monitoring and alerting systems for containerized DevOps applications on Amazon EKS.
Contributed to the team's operational excellence by resolving 200+ bugs, including service failures, and performing root cause analysis.
Utilized AWS Glue and Apache Spark for automated ETL processes and Athena for real-time ad hoc data analysis, optimizing data workflows and enabling actionable insights for business decisions.
Interacted with API Gateway to access a data warehouse, facilitating real-time changes for over 2,000 AWS sellers.
Performed system-related analytical activities across all phases of the Software Development Life Cycle (SDLC) under both Waterfall and Agile methodologies.
Authored and implemented comprehensive system designs, threat models, and project development tracking documents, enforcing Agile practices to streamline development.

Software Engineer, Tata Consultancy Services, Hyderabad, India, January 2019 - August 2021
Implemented Azure Data Factory extensively to ingest data from relational and unstructured source systems to meet business functional requirements.
Created and provisioned Databricks clusters, notebooks, jobs, and autoscaling.
Migrated data from on-premises SQL Server to cloud databases (Azure Synapse Analytics and Azure SQL DB).
Maintained version control of code using Azure DevOps and a Git repository.
Developed Spark applications using Spark SQL in Databricks for data extraction, aggregation, and transformation across multiple file formats, uncovering insights into usage.
Collaborated with cross-functional teams to optimize Apache Kafka configurations for specific use cases, resulting in a 30% increase in system performance and overall efficiency.
Utilized a data lake to store structured and unstructured datasets ingested from multiple data sources.
Developed data pipelines using Amazon EMR, S3, and EC2 to facilitate complex ETL logic.

CERTIFICATIONS
AWS Certified Solutions Architect, Azure Fundamentals, AWS Cloud Practitioner, Python (Coursera)

EDUCATION
Master's in Data Science, University at Buffalo, NY, GPA: 3.5, August 2021 - August 2022
Bachelor of Engineering, Mahatma Gandhi Institute of Technology, Hyderabad, India, GPA: 3.6, May 2015 - May 2019