Candidate Information
Title: Python Developer Software Development
Target Location: US-TX-Irving
Akanksha
Sr. AWS / Python Developer
EMAIL AVAILABLE, PHONE NUMBER AVAILABLE
LinkedIn: LINKEDIN LINK AVAILABLE

Summary:
- Around 9 years of professional IT experience in the analysis, design, development, testing, quality analysis, and audits of enterprise applications and database development.
- Experience in Python software development (libraries used: Beautiful Soup, NumPy, SciPy, Matplotlib, markdown, ReportLab, Pandas DataFrames, network, urllib2, and MySQLdb for database connectivity) with IDEs such as Sublime Text, Spyder, and PyCharm, along with experience in the analysis, design, and development of stand-alone, client-server, and web-based applications using Python 3.7 and Django.
- Worked with core AWS services (S3, EC2, ELB, RDS, EBS, Route 53, VPC, Auto Scaling, etc.), deployment services (Elastic Beanstalk, OpsWorks, and CloudFormation), and security practices (IAM, CloudWatch, and CloudTrail).
- Hands-on experience developing web applications implementing MVT/MVC architecture using the Django, Flask, webapp2, and Spring web application frameworks.
- Created methods (GET, POST, PUT, DELETE) to make requests to the API server and tested RESTful APIs using Postman (see the request sketch after this summary). Also loaded CloudWatch Logs to S3 and then into Kinesis Streams for data processing.
- Designed and optimized ETL pipelines using a combination of Pandas, NumPy, and PyArrow, ensuring efficient data extraction, transformation, and loading processes.
- Expert in writing complex PL/SQL queries.
- Developed custom directives, pipes, and reusable components in Angular, promoting code reusability and maintainability across projects.
- Developed and managed ETL pipelines in Databricks using Apache Spark to efficiently extract, transform, and load large datasets, ensuring data integrity and consistency across various data sources.
- Developed interactive and responsive front-end interfaces using JavaScript frameworks such as React, Angular, and Vue.js, enhancing user experience and usability.
- Developed and maintained data warehouses using Snowflake to support large-scale data analytics and business intelligence applications.
- Extensive experience designing, developing, and optimizing ETL (Extract, Transform, Load) jobs using AWS Glue to automate data workflows and pipeline processes.
- Integrated FastAPI with databases such as PostgreSQL, MySQL, and MongoDB, implementing asynchronous database operations for improved performance.
- Implemented automation scripts and tools to streamline development, testing, and deployment processes.
- Designed and implemented high-performance microservices using Go, ensuring low latency and high throughput for critical applications.
- Extensively worked on push and pull architectures, Ansible, Docker, and Kubernetes.
- Proficient in Docker for containerization and Kubernetes for orchestration, optimizing application deployment, scaling, and management in dynamic DevOps environments.
- Hands-on experience with Docker for creating, deploying, and managing containerized applications.
- Worked on end-to-end application-level performance tuning.
- Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
- Strong understanding of, and experience with, financial platforms.
- Strong time management skills, with a proven ability to manage work to tight deadlines.
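As a hedged illustration of the GET/POST/PUT/DELETE requests and Postman-style API testing mentioned in the summary, the sketch below uses the Python requests library; the base URL, payload fields, and response shape are placeholder assumptions, not details from the resume.

```python
import requests

# Placeholder endpoint and payload fields, assumed purely for illustration.
BASE_URL = "https://api.example.com/items"

def crud_roundtrip():
    # POST: create a resource and capture the id the server assigns
    created = requests.post(BASE_URL, json={"name": "sample"}, timeout=10)
    created.raise_for_status()
    item_id = created.json()["id"]

    # GET: read the resource back
    requests.get(f"{BASE_URL}/{item_id}", timeout=10).raise_for_status()

    # PUT: update the resource
    requests.put(f"{BASE_URL}/{item_id}", json={"name": "updated"}, timeout=10).raise_for_status()

    # DELETE: remove the resource
    requests.delete(f"{BASE_URL}/{item_id}", timeout=10).raise_for_status()

if __name__ == "__main__":
    crud_roundtrip()
```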
Education:
Bachelor's in Computer Science, GITAM University, India, Aug 2011 - May 2015

Skillset:
Programming Languages: Python 3.7 & 2.7, PL/SQL, Perl, Java, and Shell Scripting
Python Libraries: Django, Flask, Beautiful Soup, httplib2, Jinja2, NumPy, Pandas, Matplotlib, Pickle, PySide, SciPy, PyTables, Pytest, PyArrow, PySpark
Frameworks: Flask, Django, FastAPI
Web Technologies: HTML, CSS, DOM, SAX, JavaScript, jQuery, AJAX, XML, AngularJS
Version Control, IDEs & Development Tools: Git (GitHub), SVN, NetBeans, Android Studio, PyCharm, Visual Studio, VS Code, Eclipse, Sublime Text, Jenkins, Docker
Tracking Tools & Methodologies: Bugzilla, JIRA; Agile, Scrum, Waterfall
Databases: PL/SQL, MySQL, PostgreSQL, Oracle, Amazon DynamoDB, MariaDB, Cassandra, MongoDB
Reporting & Monitoring Tools: SSRS, Tableau, Power BI, CloudWatch, Datadog, Splunk
Cloud Environments: AWS services (EC2, ELB, VPC, RDS, IAM, CloudFormation, S3), Azure, GCP
Operating Systems: Unix/Linux, Windows, macOS

Work Experience:

Client: Charles Schwab, Austin, TX    Sep 2022 - Present
Role: AWS / Python Developer
Responsibilities:
- Responsible for developing, monitoring, and maintaining Amazon AWS cloud infrastructure and its solutions for the internal cloud-based framework application.
- Implemented FastAPI to develop high-performance APIs, leveraging asynchronous capabilities to handle concurrent requests efficiently.
- Used the Django framework to develop the application and built all database mapping classes using Django models.
- Wrote Python scripts to parse XML documents and load the data into a database.
- Utilized PyUnit, the Python unit test framework, for all Python applications and used Django configuration to manage URLs and application parameters.
- Designed and developed RESTful APIs using FastAPI, ensuring optimal performance and scalability for frontend and backend interactions (see the sketch after this section).
- Created methods (GET, POST, PUT, DELETE) to make requests to the API server and tested RESTful APIs using Postman.
- Used the Amazon EC2 command line interface along with Bash/Python to automate repetitive work.
- Used AWS IAM to create roles, users, and groups and implemented Multi-Factor Authentication (MFA) to provide additional security for the AWS account and its resources.
- Explored FastAPI's GraphQL capabilities, integrating GraphQL endpoints alongside REST APIs to provide flexible data querying options for clients.
- Implemented end-to-end payment gateways for seamless transactions.
- Designed and implemented ETL pipelines to ingest and transform data from various sources into Snowflake, ensuring data integrity and consistency.
- Deployed FastAPI applications on AWS infrastructure, leveraging services such as EC2, Docker, and Kubernetes for horizontal scaling and high availability.
- Implemented ORM within MVC frameworks such as Django and Flask, separating concerns between data models (using the ORM), business logic, and presentation layers; this architectural pattern enhances the maintainability and scalability of web applications.
- Worked on data transition programs from DynamoDB to AWS Redshift (ETL process) using AWS Lambda, creating Python functions for specific events based on use cases.
- Used AWS Elastic Beanstalk for deploying and scaling web applications and services.
- Utilized FastAPI's data validation and serialization features to ensure data integrity and security across API endpoints.
- Utilized Go's concurrency model with goroutines and channels to build efficient, scalable applications capable of handling numerous simultaneous tasks.
- Automated backup and restore processes, log rotation, and system monitoring tasks with Unix shell scripts.
- Integrated Databricks with AWS and Azure cloud services, enabling seamless data movement and processing across cloud platforms.
- Utilized S3 and ADLS for data storage and retrieval within Databricks workflows.
- Performed performance profiling and optimization during refactoring, identifying and addressing bottlenecks to enhance application speed and responsiveness.
- Connected the continuous integration system to the Git version control repository to build continually as developer check-ins arrive, and performed continuous integration with Jenkins.
- Utilized encryption and tokenization techniques to secure sensitive payment information.
- Implemented a full CI/CD pipeline by integrating SCM (Git) with the automated testing tool Gradle, deployed with Jenkins and Dockerized containers in production, and worked with DevOps tools such as Ansible, Chef, AWS CloudFormation, AWS CodePipeline, Terraform, and Kubernetes.
- Worked in an Agile/Scrum development team to deliver end-to-end continuous integration and continuous deployment in the SDLC.
Environment: Python 3.9, AWS, EC2, EBS, S3, RDS, VPC, Lambda, DynamoDB, PyCharm, HTML, CSS, JavaScript, JSON, Bootstrap, FastAPI, MongoDB, Jenkins, Docker, Git, MySQL, Unix, PostgreSQL.
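A minimal sketch of the kind of asynchronous FastAPI endpoint work described in this role; the Account model, routes, and in-memory store are assumptions made for the example rather than details from the project, which would use a real database behind FastAPI's validation layer.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory store standing in for a real database (illustrative only).
accounts = {}

class Account(BaseModel):
    id: int
    owner: str
    balance: float = 0.0

@app.post("/accounts", status_code=201)
async def create_account(account: Account):
    # Pydantic has already validated and parsed the request body at this point.
    accounts[account.id] = account
    return account

@app.get("/accounts/{account_id}")
async def read_account(account_id: int):
    if account_id not in accounts:
        raise HTTPException(status_code=404, detail="Account not found")
    return accounts[account_id]
```

Such an app would typically be served with an ASGI server such as uvicorn, which is what allows the async handlers to process many concurrent requests efficiently.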
Client: Meijer, Grand Rapids, MI    Mar 2021 - Aug 2022
Role: Sr. Python Developer
Responsibilities:
- Worked on a fully automated continuous integration system using Git, Jenkins, MySQL, and custom tools developed in Python and Bash.
- Managed, developed, and designed a dashboard control panel for customers and administrators using Django, Oracle DB, PostgreSQL, Redshift, and Boto3 API calls with the resource and client libraries.
- Created an entire application using Python, Django, MySQL, and Linux, and implemented SonarQube to monitor code quality.
- Created RESTful APIs in Go, adhering to best practices for performance and security, and integrated them with frontend applications.
- Worked on cyber security implementation using AWS WAF and AWS CloudFront by integrating API gateways.
- Involved in web services backend development using Python (CherryPy, Django, SQLAlchemy).
- Developed internal auxiliary web apps using the Python Flask framework.
- Developed the required XML Schema documents and implemented the framework for parsing XML documents.
- Experience using version control systems and hosting platforms such as Git, CVS, GitHub, Heroku, and Amazon EC2.
- Optimized query performance in Snowflake by tuning SQL queries, leveraging clustering keys, and utilizing Snowflake's micro-partitioning and caching features.
- Implemented real-time data streaming solutions using Spark Streaming, Kafka, and other streaming technologies.
- Integrated Azure services such as Azure Functions, Azure Cosmos DB, and Azure Data Explorer to enhance application scalability and performance.
- Wrote Unix shell scripts to manage files and directories, including tasks such as searching, sorting, compressing, and transferring files.
- Developed Lambda functions encapsulating business logic, integrated with Step Functions to define state transitions and error handling within complex workflows.
- Built features with object-oriented Python, Flask, SQL, Beautiful Soup, httplib2, Jinja2, HTML/CSS, Bootstrap, jQuery, Linux, Sublime Text, and Git.
- Integrated Datadog for centralized log management and analysis, improving the ability to troubleshoot and resolve issues efficiently.
- Managed large datasets using the Pandas API ecosystem to analyze customer segments based on location (see the sketch after this section).
- Used Databricks notebooks to create and share interactive data analytics and machine learning experiments.
- Collaborated with data scientists and analysts to develop, test, and deploy data models.
- Integrated PySpark with other Python libraries and frameworks such as Pandas, NumPy, and SciPy for advanced analytics and machine learning tasks.
- Developed and maintained unit tests using Python's unittest framework, ensuring code correctness and functionality; wrote unit and integration tests for JavaScript applications using frameworks such as Jest, Mocha, and Chai to ensure code reliability and maintainability.
- Integrated React applications with backend services and RESTful APIs, ensuring seamless data retrieval and updating UI components based on API responses.
- Leveraged FastAPI's automatic API documentation generation (Swagger UI and ReDoc) to provide clear and comprehensive API documentation for developers and stakeholders.
- Utilized Tableau Prep for data cleaning, transformation, and blending, preparing datasets for efficient analysis and visualization.
- Automated data extraction, transformation, and loading (ETL) processes using Unix shell scripting.
- Utilized Azure DevOps for managing CI/CD pipelines, ensuring efficient build, test, and deployment workflows.
- Actively resolved conflicts and impediments within the team, applying Scrum principles to foster a collaborative and focused work environment.
- Employed JIRA and Confluence to manage Agile workflows, track project progress, and maintain transparency across teams, facilitating continuous improvement and adherence to Agile principles throughout the SDLC.
Environment: Python 3.6, Git, GitHub, Lambda, Amazon EC2, RDS, MySQL, Angular.js, Pandas, PyUnit, HTML, CSS, jQuery, JavaScript, Apache, Jira, Linux, FastAPI, Windows, Azure, Microservices, S3, SQS, SNS, API Gateway.
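A small sketch of the Pandas-based customer segmentation by location mentioned in this role; the input file, column names, and metrics are hypothetical, chosen only to show the groupby/aggregate pattern.

```python
import pandas as pd

# Hypothetical input: one row per customer with location and spend columns.
customers = pd.read_csv("customers.csv")  # assumed columns: customer_id, state, city, total_spend

# Segment customers by location and summarize each segment.
segments = (
    customers
    .groupby(["state", "city"], as_index=False)
    .agg(
        customer_count=("customer_id", "nunique"),
        avg_spend=("total_spend", "mean"),
        total_spend=("total_spend", "sum"),
    )
    .sort_values("total_spend", ascending=False)
)

print(segments.head(10))
```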
Client: State Farm, Bloomington, IL    Dec 2019 - Feb 2021
Role: Sr. Python Developer
Responsibilities:
- Coordinated with cloud engineers to analyze and design all aspects of AWS environments and topologies.
- Built database models, views, and APIs using Python for interactive web-based solutions.
- Involved in web services backend development using Python (CherryPy, Django, SQLAlchemy).
- Used the Pandas library for statistical analysis and worked with the Python OpenStack API.
- Used PyUnit, the Python unit test framework, for all Python applications.
- Wrote Python modules to view and connect to the Apache Cassandra instance.
- Leveraged Go in cloud-based environments, particularly with AWS and Kubernetes, to develop, deploy, and maintain cloud-native applications.
- Created and managed S3 buckets and stored database logs and backups.
- Implemented a microservices architecture using Node.js, breaking down monolithic applications into smaller, manageable services to improve scalability and maintainability.
- Implemented AWS security, privacy, performance, and monitoring solutions, including automated responses.
- Implemented data migration, replication processes, and real-time data processing with NoSQL databases.
- Responsible for importing data from DynamoDB to Redshift in batches using AWS Glue and the TWS scheduler, and built a CI/CD pipeline using Jenkins.
- Orchestrated infrastructure provisioning across multiple cloud providers using Terraform, enabling hybrid and multi-cloud deployments that provide flexibility and resilience by leveraging AWS alongside other cloud platforms such as Azure and Google Cloud.
- Designed and implemented Lambda functions triggered by various AWS services such as S3, DynamoDB, and API Gateway, adhering to event-driven architecture principles (see the sketch after this section).
- Developed Unix shell scripts to manage user accounts, groups, and permissions, ensuring secure access control.
- Integrated AWS Glue with other AWS services such as S3, Redshift, RDS, DynamoDB, and Athena to streamline data processing and analytics workflows.
- Proficient in developing Lambda functions in multiple languages, including Python, Node.js, Java, and Go, adapting to diverse project requirements.
- Built application and database servers using AWS EC2 and RDS for Oracle DB.
- Designed and implemented custom visuals in Power BI to meet specific business requirements and enhance the user experience.
- Employed PySpark to process large volumes of unstructured data, such as JSON and text files, leveraging its distributed computing capabilities to enhance performance and scalability.
- Developed and maintained data extracts, live connections, and data blending techniques to optimize Tableau performance and ensure up-to-date insights.
- Implemented event-driven architecture in FastAPI applications using AWS Lambda and AWS SNS/SQS for real-time data processing and notifications.
- Implemented real-time analytics solutions using Databricks Structured Streaming, processing streaming data from sources such as Kafka and Kinesis to deliver real-time insights.
- Automated infrastructure provisioning and deployment processes using Python scripts, Docker, and Kubernetes.
- Exported and imported data between different data sources using SQL Server Management Studio and Oracle.
- Utilized Scrum metrics such as sprint velocity and burndown charts to measure team performance and drive continuous improvement and accountability.
- Worked in a fast-paced Agile environment with two-week sprints: attended sprint planning at the start of each sprint and a retrospective at the end, with a mid-sprint review/product backlog review (PBR) meeting to go over the backlog, prioritize user stories, and estimate points.
Environment: Python, Git, SVN, GitHub, Lambda, DynamoDB, Redshift, EC2, IAM, S3, CloudWatch, Django 1.5, MySQL, Angular.JS, Pandas, PyArrow, Flask, PyUnit, OpenStack, HTML, CSS, jQuery, Jenkins, Nexus, JavaScript, Apache, Jira, Linux, Cassandra, Windows, SSMS, Informatica, Oracle.
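A hedged sketch of an S3-triggered, event-driven Lambda handler of the kind described in this role; the DynamoDB table name and the metadata written are assumptions for illustration, and the real functions would carry whatever business logic the pipeline needed.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("processed-objects")  # hypothetical table name

def lambda_handler(event, context):
    """Handle S3 ObjectCreated events and record basic object metadata in DynamoDB."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Look up object metadata and persist a small tracking item.
        head = s3.head_object(Bucket=bucket, Key=key)
        table.put_item(Item={
            "object_key": key,
            "bucket": bucket,
            "size_bytes": head["ContentLength"],
        })

    return {"statusCode": 200, "body": json.dumps("ok")}
```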
Client: PayPal, Austin, TX    Sep 2017 - Nov 2019
Role: Sr. Python Developer
Responsibilities:
- Automated workflows that had been initiated manually, using Python scripts and Unix shell scripting.
- Created, activated, and programmed in Anaconda environments.
- Used Python unit and functional testing modules such as unittest, unittest2, mock, and custom frameworks, in line with Agile software development methodologies.
- Developed Sqoop scripts to handle change data capture, processing incremental records between newly arrived and existing data in RDBMS tables.
- Managed datasets using Pandas DataFrames and MySQL; ran MySQL queries from Python using the Python MySQL connector and the MySQLdb package to retrieve information.
- Generated Python Django forms to record data from online users and used Pytest for writing test cases.
- Implemented and modified various PL/SQL queries, functions, cursors, and triggers per client requirements.
- Developed and maintained scalable data storage solutions leveraging NoSQL databases, ensuring high performance and availability.
- Created Unix shell scripts for parsing and analyzing log files, extracting relevant information for troubleshooting and reporting.
- Used Pandas to organize data in time series and tabular formats for manipulation and retrieval.
- Performed PL/SQL-like operations and complex DataFrame manipulations using PySpark SQL for data analysis and reporting.
- Implemented data quality checks within Glue jobs to validate data integrity and ensure that only clean data enters the analytics pipeline.
- Worked on card issuing processes, including physical card production, virtual card generation, and card management.
- Ensured smooth data flow and transaction processing between systems through robust API integration.
- Experience with Python, Jupyter, and the scientific computing stack (NumPy, SciPy, Pandas, and Matplotlib).
- Integrated NoSQL databases with Python applications, utilizing drivers and ORMs to streamline data operations.
- Generated various graphical capacity planning reports using Python packages such as NumPy and Matplotlib.
- Automated deployment of Lambda functions using the AWS CLI, SDKs, and CI/CD pipelines (e.g., Jenkins, AWS CodePipeline), ensuring rapid and reliable deployment cycles.
- Ensured data security and compliance within Databricks by configuring secure access controls and encryption protocols and complying with industry standards and regulations.
- Designed and maintained databases using Python and developed a Python-based RESTful web service API using Flask, SQLAlchemy, and PostgreSQL (see the sketch after this section).
- Collaborated closely with cross-functional teams to prioritize and deliver user stories, applying Agile methodologies such as Scrum and Kanban to streamline development processes and enhance product delivery efficiency.
- Managed code versioning with GitHub and Bitbucket and deployments to staging and production servers, and implemented MVC architecture in the web application with the Django framework.
- Used tools such as Terraform and Kubernetes YAML manifests to define and manage cloud infrastructure.
Environment: Python 2.7, Django, HTML5/CSS, PostgreSQL, MS SQL Server 2013, MySQL, GraphQL, JavaScript, Jupyter Notebook, Vim, PyCharm, Shell Scripting, Angular.JS, JIRA.
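A minimal sketch of a Flask + SQLAlchemy REST endpoint backed by PostgreSQL, as referenced in this role; the Card model, routes, and connection string are illustrative assumptions, not an actual project schema.

```python
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder connection string; real credentials would come from configuration.
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://user:password@localhost/cards"
db = SQLAlchemy(app)

class Card(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    holder = db.Column(db.String(80), nullable=False)
    card_type = db.Column(db.String(20), default="virtual")

@app.route("/cards", methods=["POST"])
def create_card():
    payload = request.get_json()
    card = Card(holder=payload["holder"], card_type=payload.get("card_type", "virtual"))
    db.session.add(card)
    db.session.commit()
    return jsonify({"id": card.id, "holder": card.holder, "card_type": card.card_type}), 201

@app.route("/cards/<int:card_id>", methods=["GET"])
def get_card(card_id):
    card = Card.query.get_or_404(card_id)
    return jsonify({"id": card.id, "holder": card.holder, "card_type": card.card_type})

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run()
```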
Client: Stock One Technologies, India    Aug 2015 - Mar 2017
Role: Data Engineer
Responsibilities:
- Experience building distributed high-performance systems using Spark and Scala.
- Experience developing Scala applications for loading and streaming data into NoSQL databases (MongoDB) and HDFS.
- Performed T-SQL tuning and optimized queries and packages.
- Designed distributed algorithms for identifying trends in data and processing them effectively.
- Created SSIS packages to import data from SQL tables into different Excel sheets.
- Used Spark and Scala to develop machine learning algorithms that analyze clickstream data.
- Used Spark SQL for data pre-processing, cleaning, and joining very large data sets (see the sketch after this section).
- Conducted data cleansing, validation, and enrichment within AWS Glue jobs to prepare high-quality datasets for analytics and reporting.
- Applied machine learning algorithms with scikit-learn for predictive modeling and data mining tasks, enhancing decision-making processes and business insights.
- Performed data validation with Redshift and constructed pipelines designed to handle over 100 TB per day.
- Co-developed the SQL Server database system to maximize performance benefits for clients.
- Assisted senior-level data scientists in the design of ETL processes, including SSIS packages.
- Performed database migrations from traditional data warehouses to Spark clusters.
- Ensured the data warehouse was populated only with quality entries by performing regular cleaning and integrity checks.
- Used Oracle relational tables in process design.
- Leveraged Databricks' built-in visualization tools to create comprehensive data visualizations, aiding data-driven decision-making.
- Developed SQL queries to extract data from existing sources and check format accuracy.
- Implemented user access controls and permissions in Tableau to ensure data security and compliance with organizational policies.
- Installed a Linux-based Cisco server, performed regular updates and backups, and used MS Excel functions for data validation.
- Coordinated data security issues and instructed other departments on secure data transmission and encryption.
Environment: T-SQL, MongoDB, HDFS, Scala, Spark SQL, relational databases, Redshift, SSIS, PL/SQL, Linux, data validation, MS Excel.
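The Spark SQL pre-processing and joining in this role was done with Spark and Scala, per the bullets above; as a hedged sketch, the same clean-and-join pattern is shown below in PySpark (the resume's primary language), with hypothetical paths and column names.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-prep").getOrCreate()

# Hypothetical sources and columns, used only to illustrate the pattern.
clicks = spark.read.json("s3://example-bucket/clickstream/")   # event_id, user_id, event_ts
users = spark.read.parquet("s3://example-bucket/users/")       # user_id, segment

# Basic cleaning: drop duplicate events, remove rows with no user, derive a date column.
cleaned = (
    clicks
    .dropDuplicates(["event_id"])
    .filter(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Join clickstream events to user attributes and aggregate daily activity per segment.
daily_activity = (
    cleaned.join(users, on="user_id", how="left")
    .groupBy("event_date", "segment")
    .agg(
        F.countDistinct("user_id").alias("active_users"),
        F.count("event_id").alias("events"),
    )
)

daily_activity.write.mode("overwrite").parquet("s3://example-bucket/daily_activity/")
```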
