
What we’re about
This group brings together natural language processing enthusiasts from industry and academia to share inspiring ideas and practical experience in the field, and to create new opportunities and connections within the community.
Upcoming events (1)
NLP IL x Nvidia - April 2025 Meetup - NVIDIA, Tel Aviv
Agenda:
18:00-18:40 - Gathering, food, and drinks
18:40-18:45 - Opening words
18:45-19:15 - Lior Cohen (Nvidia) - Navigating the Complexities of LLMs Evaluation
19:15-19:45 - Oren Sultan (Lightricks) - Visual Editing with LLM-based Tool Chaining
19:45-20:15 - Dana Sinai, PhD (Laguna) - Real Clinical Conversations: Making Sense with NLP
Abstracts:
## Lecture 1 - Navigating the Complexities of LLMs Evaluation
Lecturer: Lior Cohen, Senior GenAI Solution Architect @ NVIDIA
Abstract: Large Language Models are transforming how we interact with technology. But how do we ensure they're truly up to the task? This session begins by exploring the landscape of LLM evaluation, covering key tools and strategies for assessing the performance of LLMs, RAG systems, and agentic workflows. We'll then cover a case study of the NVIDIA GTC bot evaluation process, sharing key steps in developing our evaluation pipeline and highlighting Synthetic Data Generation as a main component.
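To make the idea concrete, here is a minimal, illustrative sketch of the kind of evaluation loop such a pipeline might use: synthetic question-answer pairs scored against the system's responses. The function names, the stub system, and the scoring rule are hypothetical placeholders, not NVIDIA's actual tooling.

```python
# Illustrative sketch of a synthetic-data-driven evaluation loop for a RAG bot.
# `ask_bot` and `judge` are hypothetical stand-ins for the system under test and
# a scoring step; they are placeholders, not NVIDIA's actual pipeline.

from dataclasses import dataclass

@dataclass
class EvalCase:
    question: str   # synthetically generated from the source documents
    reference: str  # expected answer grounded in those documents

def ask_bot(question: str) -> str:
    """Placeholder for the RAG/agentic system under evaluation."""
    return "RAG stands for retrieval-augmented generation."

def judge(reference: str, answer: str) -> float:
    """Placeholder scoring step (a real pipeline might use an LLM-as-a-judge)."""
    return 1.0 if reference.lower() in answer.lower() else 0.0

def evaluate(cases: list[EvalCase]) -> float:
    """Mean correctness over a synthetic evaluation set."""
    scores = [judge(c.reference, ask_bot(c.question)) for c in cases]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    synthetic_set = [EvalCase("What does RAG stand for?", "retrieval-augmented generation")]
    print(f"mean correctness: {evaluate(synthetic_set):.2f}")
```

In a real pipeline, `ask_bot` would call the deployed bot and `judge` would prompt a separate model with a grading rubric; the sketch only shows how synthetic cases, system outputs, and scores fit together.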
## Lecture 2 - Visual Editing with LLM-based Tool Chaining
Lecturer: Oren Sultan, AI Researcher @ Lightricks; CS PhD Researcher @ HUJI
Abstract: We present a practical distillation approach to fine-tune LLMs for invoking tools in real-time applications. We focus on visual editing tasks; specifically, we modify images and videos by interpreting stylistic user requests specified in natural language (e.g., "golden hour") and using an LLM to select the appropriate tools and their parameters to achieve the desired visual effect. We found that proprietary LLMs such as GPT-3.5-Turbo show potential in this task, but their high cost and latency make them unsuitable for real-time applications. In our approach, we fine-tune a (smaller) student LLM with guidance from a (larger) teacher LLM and behavioral signals. We introduce offline metrics to evaluate student LLMs. Both online and offline experiments show that our student models match the performance of our teacher model (GPT-3.5-Turbo) while significantly reducing cost and latency. Lastly, we show that data augmentation improves fine-tuning by 25% in low-data regimes.
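As a rough illustration of tool chaining in this setting, the sketch below maps a stylistic request to a JSON plan of editing tool calls and compares it to a teacher plan with a simple exact-match offline metric. The tool names, parameters, and the rule-based stand-in for the student model are assumptions for illustration, not Lightricks' actual system or metrics.

```python
# Minimal sketch of LLM-based tool chaining for visual editing: a model maps a
# stylistic request ("golden hour") to a sequence of tool calls with parameters.
# Tool names, arguments, and the rule-based stand-in for the student LLM are
# illustrative placeholders only.

import json

def plan_edits(request: str) -> list[dict]:
    """Stand-in for the (student) LLM: returns a tool-call plan for the request."""
    if "golden hour" in request.lower():
        return [
            {"tool": "adjust_color_temperature", "args": {"kelvin_shift": -800}},
            {"tool": "adjust_exposure", "args": {"stops": 0.3}},
            {"tool": "apply_vignette", "args": {"strength": 0.2}},
        ]
    return []

def exact_plan_match(student_plan: list[dict], teacher_plan: list[dict]) -> bool:
    """One possible offline metric: does the student reproduce the teacher's plan?"""
    return json.dumps(student_plan, sort_keys=True) == json.dumps(teacher_plan, sort_keys=True)

if __name__ == "__main__":
    plan = plan_edits("make it look like golden hour")
    print(json.dumps(plan, indent=2))
    print("matches teacher plan:", exact_plan_match(plan, plan))
```

The structured plan is what makes offline comparison between student and teacher outputs straightforward; in practice the plans would come from model generations rather than the hard-coded rule above.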
## Lecture 3 - Real Clinical Conversations: Making Sense with NLP
Lecturer: Dana Sinai, PhD, VP of Conversational AI @ Laguna
Abstract: This talk will present Laguna Health's methodology for building AI solutions that address real challenges in healthcare conversations. I'll share how our unique approach combines clinical expertise with technical innovation to analyze interactions between care managers and patients. The presentation will highlight our closed-loop system that improves through user feedback while maintaining rigorous quality standards. Using concrete examples of our approach to content generation and quality verification, I'll demonstrate how domain expertise, human validation, and data-driven iteration produce AI solutions that healthcare professionals genuinely trust. The talk includes practical lessons and results from implementations with real healthcare users.
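As a loose illustration of the closed-loop idea described above, the sketch below routes generated content through a quality check and records feedback for later iteration. All names, checks, and data are hypothetical placeholders, not Laguna Health's implementation.

```python
# Illustrative sketch of a feedback-gated generation loop: generated content is
# verified before release, and feedback is stored so the system can be iterated
# on. Every name and rule here is a hypothetical placeholder.

from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    records: list[dict] = field(default_factory=list)

    def log(self, summary: str, approved: bool, note: str = "") -> None:
        # Stored feedback later drives prompt/model iteration.
        self.records.append({"summary": summary, "approved": approved, "note": note})

def generate_summary(transcript: str) -> str:
    """Placeholder for an LLM call summarizing a care-manager/patient conversation."""
    return f"Summary: {transcript[:60]}..."

def passes_quality_checks(summary: str, transcript: str) -> bool:
    """Placeholder verification step, e.g. simple grounding and length checks."""
    return len(summary) > 10 and summary.removeprefix("Summary: ")[:20] in transcript

if __name__ == "__main__":
    store = FeedbackStore()
    transcript = "Patient reports improved sleep after medication change; follow-up in two weeks."
    summary = generate_summary(transcript)
    if passes_quality_checks(summary, transcript):
        store.log(summary, approved=True)   # released for human review
    else:
        store.log(summary, approved=False, note="failed automated checks")
    print(store.records)
```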