

Candidate Information
Title: Data Engineer Machine Learning
Target Location: US-MO-Kansas City

SAI VARMA EDLA
DATA ENGINEER
LOCATION: KANSAS CITY, MO | EMAIL: EMAIL AVAILABLE | PHONE: PHONE NUMBER AVAILABLE | LinkedIn

SUMMARY
- Seasoned Data Engineer with 3 years of hands-on experience developing and maintaining robust data infrastructure solutions.
- Proficient in Python, C/C++, SQL, and Java; adept at building efficient data pipelines and performing data analysis.
- Skilled in web technologies such as HTML/HTML5, CSS3/CSS (including Tailwind CSS, LESS, Bootstrap, and Sass), JavaScript, TypeScript, and frameworks such as Django, Flask, React JS, and Angular JS for developing web-based data applications.
- Extensive knowledge of database management systems including MySQL, PostgreSQL, SQL Server, and MongoDB, with expertise in designing and optimizing database schemas and queries.
- Well-versed in Agile (SCRUM) and Waterfall methodologies, with experience managing projects through the Software Development Life Cycle (SDLC) from inception to deployment.
- Proficient in cloud technologies such as AWS (EC2, S3, Lambda, ECS, ECR, CloudFront, CloudWatch, CloudFormation, and ETL tooling), Amazon Athena, Azure, and GCP, with hands-on experience deploying and managing data solutions on cloud platforms.
- Strong testing and automation skills with Git, GitHub, Jenkins, Docker, Kubernetes, Redis, Jest, Tableau, and CircleCI, ensuring code quality, reliability, and scalability.
- Experienced with machine learning libraries including Pandas, NumPy, Matplotlib, Scikit-learn, SciPy, Plotly, and Seaborn, and with advanced analytics techniques such as Deep Learning, Natural Language Processing (NLP), Data Analytics, and Data Visualization for deriving actionable insights from data.

EDUCATION
Master's in Computer Science, Jan 2023 - May 2024
University of Missouri-Kansas City, USA
Bachelor's in Electronics and Computer Engineering, Jun 2017 - Jun 2021
Sreenidhi Institute of Science and Technology, India

TECHNICAL SKILLS
Languages: Python, C/C++, SQL, Java
Web Technologies: HTML/HTML5, CSS3/CSS (Tailwind CSS, LESS, Bootstrap, Sass), JavaScript, TypeScript, JSON, Web Services (REST, SOAP)
Frameworks: Django, Flask, React JS, Angular JS
Databases: MySQL, PostgreSQL, SQL Server, MongoDB
Methodologies: SDLC, Agile (SCRUM), Waterfall
Cloud Technologies: AWS (EC2, S3, Lambda, ECS, ECR, CloudFront, CloudWatch, CloudFormation), Azure, GCP
Testing: Git, GitHub, Jenkins, Docker, Kubernetes, Redis, Jest, Tableau, CircleCI
Libraries & ML: Pandas, NumPy, Matplotlib, Scikit-learn, SciPy, Plotly, Seaborn, Machine Learning, Deep Learning, Natural Language Processing, Data Analytics, Data Visualization

WORK EXPERIENCE

HEALTH CARE, USA - DATA ENGINEER, Feb 2024 - Present
- Designed and implemented data pipelines using Python, SQL, and Java to support the organization's data infrastructure needs.
- Developed web-based data applications using the Django and Flask frameworks, incorporating HTML/HTML5, CSS3/CSS, JavaScript, and TypeScript.
- Implemented a microservices architecture on Linux, utilizing DynamoDB for full-stack application development.
- Leveraged Git for version control and collaborated with the team on GitHub for code review and sharing.
- Utilized cloud technologies such as AWS (EC2, S3, Lambda, Redshift, ECS) and Azure to deploy and manage scalable data solutions, reducing operational costs by 25% through effective use of cloud services and infrastructure optimization.
- Applied Agile (SCRUM) methodologies to iteratively develop and deliver data solutions, ensuring alignment with business requirements.
- Utilized Terraform for infrastructure as code in e-commerce environments, performed statistical analysis using BigQuery, and implemented Apache tooling for scalable data processing solutions.
- Built distributed systems with Spark, Kafka, Flink, and Airflow, leveraging Protobuf and Parquet for efficient data processing, and solved complex problems using Bash scripting and NoSQL databases.
- Conducted data analysis and visualization using Tableau and Plotly, delivering real-time visualizations that supported decision-making and led to a 10% improvement in operational efficiency.
- Automated testing and deployment processes using Jenkins, Docker, and Kubernetes to ensure code reliability and scalability.
- Applied machine learning techniques with libraries such as Pandas, NumPy, Scikit-learn, and TensorFlow for predictive analytics and modeling.

TATA CONSULTANCY SERVICES - DATA ENGINEER, Jul 2021 - Oct 2022
- Designed and developed data pipelines using Java, Scala, and Apache Spark to process and analyze large-scale datasets, achieving a 25% reduction in data processing time.
- Implemented ETL processes using AWS Glue jobs for efficient data extraction, transformation, and loading into data lakes and warehouses.
- Leveraged Amazon Athena for ad-hoc querying and analysis of data stored in Amazon S3, optimizing query performance and resource utilization.
- Implemented web applications using AngularJS and Node.js, incorporating RESTful APIs for data retrieval and manipulation.
- Managed databases including PostgreSQL, SQL Server, RDS, and MongoDB, optimizing queries and ensuring data integrity.
- Developed responsive applications backed by relational databases, integrated machine learning models with AI capabilities, and utilized Excel for data analysis.
- Experienced in Unix shell scripting and TWS for collaborative Teradata system management and automation.
- Implemented Azure Synapse for efficient data warehousing, enhancing analytics capabilities and decision-making processes.
- Demonstrated strong teamwork in collaborative environments, fostering synergy to achieve shared goals and deliver successful outcomes.
- Facilitated Agile (SCRUM) ceremonies, including sprint planning, daily stand-ups, and retrospectives, to deliver data solutions on time and within scope.
- Utilized cloud services on AWS (Lambda, CloudFront, CloudWatch) and Azure to build scalable and resilient data platforms.
- Conducted unit testing with Jest and integration testing with CircleCI to validate data pipelines and applications.
- Developed interactive dashboards using Tableau and Power BI to visualize key performance indicators and trends.
- Applied advanced analytics techniques such as natural language processing and deep learning for text and image data analysis.

I-SPARROW, India - DATA ENGINEER, Mar 2020 - Jun 2021
- Developed data processing pipelines using Python and SQL, integrating with MySQL and MongoDB databases for efficient data storage and retrieval.
- Implemented web services (REST, SOAP) for data integration and communication between systems, utilizing JSON for data interchange and enhancing system interoperability by 30%.
- Designed and implemented data visualization solutions using Matplotlib, Seaborn, and Plotly to communicate insights to stakeholders.
- Dedicated software developer proficient in problem-solving and committed to continuous improvement in software development practices.
- Handled Big Data processing and analytics, ensuring efficient handling and analysis of terabytes of data.
- Experienced in project management and process improvement within information technology, specializing in DevOps practices and fostering collaboration across teams.
- Produced technical documentation for Spring-based API development in fintech, ensuring compliance with industry regulations.
- Managed cloud resources on AWS (S3, EC2, EMR) and Google Cloud Platform (GCP) for storing and processing large datasets, including the configuration and optimization of cloud-based data warehouses for efficient storage, querying, and analytics.
- Implemented data ingestion and real-time streaming solutions using Apache Kafka for high-throughput, fault-tolerant messaging.
- Contributed to SDLC processes, including requirements gathering, design, security, development, testing, and deployment phases.
- Utilized Hadoop ecosystem components, including Hive, for large-scale data processing and query execution, improving data processing efficiency by 25%.
- Deployed and managed containerized applications using Docker, Kubernetes, and Redis for scalability and fault tolerance.
- Applied machine learning techniques for data analysis, data modeling, and prediction, leveraging libraries such as SciPy and Scikit-learn.

CERTIFICATIONS
- AWS Certified Developer - Associate: Link
- Associate Hands On Essentials - Data Warehouse, issued by Snowflake: Link
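The extract-transform-load pattern that recurs throughout the experience above (Python plus SQL pipelines feeding a relational store) can be sketched minimally as follows. This is an illustrative example only, not code from the candidate's projects: the file contents, field names, and `patient_totals` schema are all hypothetical, and SQLite stands in for the production databases mentioned in the resume.

```python
# Minimal ETL sketch: extract CSV rows, transform (aggregate), load into SQL.
# All data and the schema below are hypothetical.
import csv
import io
import sqlite3

RAW_CSV = """patient_id,visit_date,charge
P001,2024-02-01,120.50
P002,2024-02-01,89.00
P001,2024-02-03,45.25
"""

def extract(text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast charge to float and total charges per patient."""
    totals = {}
    for row in rows:
        totals[row["patient_id"]] = totals.get(row["patient_id"], 0.0) + float(row["charge"])
    return sorted(totals.items())

def load(pairs, conn):
    """Load: write the aggregates into a SQL table."""
    conn.execute("CREATE TABLE patient_totals (patient_id TEXT PRIMARY KEY, total REAL)")
    conn.executemany("INSERT INTO patient_totals VALUES (?, ?)", pairs)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract(RAW_CSV)), conn)
    for row in conn.execute("SELECT * FROM patient_totals ORDER BY patient_id"):
        print(row)
```

In a production pipeline of the kind described, the same three stages would typically read from S3 or Kafka, run the transform in Spark or a Glue job, and load into Redshift or Synapse; the stage separation shown here is what makes each step independently testable.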
