Computer Science Machine Learning Resume...

Candidate Information
Title: Computer Science Machine Learning
Target Location: US-AZ-Tempe

Candidate's Name
Phone: PHONE NUMBER AVAILABLE | Email: EMAIL AVAILABLE | LinkedIn: LINKEDIN LINK AVAILABLE | Google Scholar: 225+ citations

Experienced AI/NLP researcher with 3+ years in LLM evaluation, dataset development, and logical reasoning enhancement. Proven expertise in leveraging novel frameworks to boost AI model performance and contribute to state-of-the-art research in the NLP, healthcare, and finance domains.

EDUCATION:
Master of Science in Computer Science (GPA: 4.0/4.0), Arizona State University, Tempe, Arizona, Aug 2022 - May 2024
Bachelor of Technology in Computer Science (GPA: 8.12/10.00), Nirma University, Ahmedabad, India, Jul 2018 - May 2022

TECHNICAL SKILLS:
Programming Languages: Python, Java, C, C++, C#, HTML/CSS, PHP, JavaScript, R, SQL
Tools: Git, Postman, MongoDB, Atlas, DynamoDB, Docker, Kubernetes, AWS, GCP, Streamlit, FastAPI, LangChain, Spark, Hadoop
Frameworks/Libraries: TensorFlow, Keras, PyTorch, Hugging Face, NLTK, spaCy, Scikit-Learn, OpenCV, Matplotlib, NumPy
Concepts: Attention Mechanism, Deep Learning, Machine Learning, Data Science, Large Language Modeling, Prompting, Big Data Analysis, Software Development, Artificial Intelligence, Design Patterns, Code Reviews, System Design, Back-end, Vector Databases
Research Skills: Hypothesis Design, Data Creation, Literature Reviews, Experimental Design, Statistical Evaluation, Technical Writing
Soft Skills: Communication, Collaboration, Critical Thinking, Active Learning, Time Management, Creativity, Leadership, Innovation

PROFESSIONAL EXPERIENCE:
Cogint Lab (Arizona State University), Tempe, USA - Research Assistant, Aug 2022 - Present
- Spearheaded the development of LogicBench, a natural language question-answering dataset for enhancing logical reasoning proficiency in LLMs, with performance improvements validated across diverse datasets.
- Validated enhanced logical reasoning proficiency with an average 28% performance improvement across the LogicNLI, FOLIO, LogiQA, and ReClor datasets, affirming the effectiveness of LLMs trained on LogicBench.
- Conducted detailed evaluations of LLMs on 25 diverse reasoning patterns in propositional, first-order, and non-monotonic (NM) logic.
- Evaluated GPT-family models, Gemini, and Llama-2, revealing difficulties with complex reasoning tasks at 55% accuracy.
- Proposed and developed Multi-LogiEval, a comprehensive evaluation dataset for multi-step logical reasoning with over 30 inference rules and more than 60 combinations at varying depths, covering propositional, first-order, and non-monotonic logic.
- Evaluated a range of LLMs, including GPT-4, ChatGPT, Gemini, Gemini-Pro, Llama-2, Yi, Orca, and Mistral, applying chain-of-thought, few-shot, zero-shot, self-discover, and step-back prompting techniques to analyze their logical reasoning performance (see the prompting sketch following this section).
- Executed detailed experimental analyses showing that existing large language models struggle significantly with complex reasoning tasks, negations, and the integration of contextual information.
- Identified significant performance drops in large language models as reasoning steps and depth increased, with average accuracy decreasing from approximately 68% at depth-1 to 43% at depth-5.
- Performed in-depth analysis of reasoning chains generated by LLMs, uncovering critical insights into the limitations of their logical reasoning capabilities.
- Enhanced multi-step logical evaluation of LLMs by addressing gaps in recent benchmarks, specifically for non-monotonic logic.
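A minimal sketch of how zero-shot and chain-of-thought prompting might be compared when probing an open LLM on a single logical-reasoning item. The model name, prompt wording, and example premise are illustrative assumptions, not the actual LogicBench/Multi-LogiEval evaluation harness.

```python
# Illustrative sketch: zero-shot vs. chain-of-thought prompting on one yes/no
# reasoning item. Model, prompts, and example are assumptions, not the
# candidate's evaluation code.
from transformers import pipeline

# Any instruction-tuned causal LM could be substituted here.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

context = "If it rains, the ground gets wet. It rained last night."
question = "Did the ground get wet? Answer yes or no."

prompts = {
    "zero-shot": f"Context: {context}\nQuestion: {question}\nAnswer:",
    "chain-of-thought": (
        f"Context: {context}\nQuestion: {question}\n"
        "Let's think step by step before giving a final yes/no answer.\nAnswer:"
    ),
}

for name, prompt in prompts.items():
    out = generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
    # Drop the prompt prefix so only the model's continuation is inspected.
    answer = out[len(prompt):].strip()
    print(f"[{name}] {answer}")
```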
Samsung Research and Development Institute, Noida, India - Research & Development Intern, Jan 2022 - Jun 2022
- Built a software engineering module for failure call log analysis capable of processing one million logs in parallel in under a second.
- Automated the analysis process, decreasing human intervention by 68% and using the results to improve call quality.
- Developed an ML system to categorize possible call failures based on predefined threshold domains, effectively reducing call failures by 25%.
- Coordinated data collection efforts for senior researchers by setting up experiments and analyzing results, improving experiment and data-analysis turnaround.
- Collaborated with cross-functional teams to update and enhance research materials, ensuring accuracy and relevance to current project requirements and facilitating seamless knowledge sharing and project continuity.

Sudeep Tanwar's Research Lab, Ahmedabad, India - Research Assistant, Aug 2020 - Aug 2022
- Led the development of DL-GuesS, a hybrid framework for cryptocurrency price prediction that incorporates interdependencies among cryptocurrencies and market sentiment, using a GRU-LSTM hybrid model to enhance the predictive accuracy for Litecoin.
- Integrated price history and social media sentiment from platforms such as Twitter to improve the accuracy of cryptocurrency price predictions, addressing the volatile and stochastic nature of prices and emphasizing reliable forecasting.
- Conducted extensive model validation using various loss functions, ensuring reliability and achieving an average validation accuracy of 85%.
- Implemented gradient encryption in federated learning (FL) to protect user privacy in autonomous vehicle (AV) learning ecosystems, reducing data transfer by nearly three times compared to traditional FL methods.
- Built a sign recognition system using a CNN and GeFL, achieving 98% accuracy, 2% higher than conventional FL-based systems, within a secure, optimized framework.

PROJECTS:
Interactive PDF Chatbot for Extractive QA
- Developed an interactive PDF chatbot using Streamlit, integrating RAG techniques such as ConversationalRetrievalChain and session management with ConversationBufferMemory (a sketch of this pipeline follows the Publications section).
- Leveraged the Mistral 7B LLM and a FAISS vector store for efficient document processing and retrieval, improving the accuracy and relevance of responses.
- Optimized document processing by employing PyPDFLoader and RecursiveCharacterTextSplitter to extract and segment PDF content effectively.
- Implemented FAISS for robust document storage and retrieval, ensuring precise answers grounded in the document.

Conversational AI with Streamlit and an SQL Database
- Orchestrated the integration of LangChain and Google Gemini Pro into a Streamlit-based AI platform, optimizing user interaction and response quality.
- Managed SQL databases to ensure efficient data retrieval and seamless query handling for better responsiveness.
- Engineered the chatbot with Streamlit for intuitive interface design and dynamic session management, ensuring a user-friendly experience.
- Implemented Google Gemini Pro to convert AI outputs into human-readable responses with history tracking, structured data storage, and real-time retrieval, ensuring accurate and responsive AI interactions and optimizing system efficiency.

PUBLICATIONS:
Contributed to significant innovations in NLP and AI, with research publications cited 225+ times according to Google Scholar.
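A minimal sketch, assuming a classic (pre-0.2) LangChain installation with FAISS and sentence-transformers available, of the retrieval-augmented PDF-chat pipeline described in the project above. The file path, embedding model, and LLM wrapper are placeholders, not the candidate's actual implementation.

```python
# Minimal RAG sketch for a PDF chatbot, assuming classic LangChain (<0.2) imports;
# paths and model names are placeholders.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import HuggingFaceHub  # placeholder LLM wrapper; any chat LLM works here

# 1. Load the PDF and segment it into overlapping chunks.
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(docs)

# 2. Embed the chunks and index them in a FAISS vector store.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)

# 3. Wire retriever, LLM, and chat history into a conversational retrieval chain.
llm = HuggingFaceHub(repo_id="mistralai/Mistral-7B-Instruct-v0.2")  # assumed backend
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
    memory=memory,
)

# 4. Ask a question; the memory object keeps the running chat history.
print(qa_chain({"question": "What is the main conclusion of the document?"})["answer"])
```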
