What we’re about
SensemakersAMS is a volunteer-based community dedicated to connecting people and sharing knowledge, ideas & hands-on experience with new technology, mostly IoT and AI.
Everybody is welcome: technical, non-technical, or just interested. If you'd like to know more, get involved, or are simply curious, just come to one of our meetups:
• 1st Wednesday: DIY hands-on in the Makerspace at the OBA; there are 3D printers, laser cutters, sensors, Arduinos, Raspberry Pis, etc. We also learn by collaborating on projects that impact the city (water quality, sound) with e.g. Marineterrein, Waag & SensingClues
• 3rd Wednesday: sharing knowledge, ideas & connecting people at Codam College
• Random: hands-on workshops, excursions, or whatever comes up :-)
As we want to bring you interesting speakers who can share their thoughts, experiences and/or wisdom, we'd appreciate a tip when you know of, or have heard about, someone!
Presentations from earlier meetups can be found on our website.
Follow us on Twitter
or join us on Slack by sending us your email address in a DM.
Upcoming events (4+)
Running Large Language Models locally 2/3 (new date!) - Amsterdam Public Library (OBA), Amsterdam
New date!
Due to circumstances (a closed OBA because of a demonstration at Booking.com next door) we were forced to cancel Wednesday's meetup on very late notice. We hope for your understanding.
Fortunately we have found a new date: Wednesday May 29th, 19h-21h.
Workshop 2/3 (May 29th): Continue learning about Ollama & build with it
We will continue our learning path of running LLMs locally with Ollama. We will learn about running models from Hugging Face (for example multilingual or Dutch models), dive deeper into Vision Language Models, and build a chatbot that can chat with your documents (also known as RAG, Retrieval-Augmented Generation).
- The workshop is open to beginners and advanced programmers alike. Some basic knowledge of Python is advised, but if you are new to the language we'll have you covered.
- New users can start with the basics.
- If you were at the first workshop, you can continue working on your specific use case with the code examples we'll provide.
- Show and tell
- Find all info at www.github.com/MichielBbal/ollama
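As a taste of the RAG part, the core retrieval step ("find the document chunk most similar to the question") can be sketched like this. This is a minimal illustration, not the workshop's reference code: the toy two-dimensional vectors stand in for real embeddings, which you would get from an embedding model served by Ollama.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, chunks):
    """Return the document chunk whose embedding is closest to the query.
    `chunks` is a list of (text, embedding) pairs."""
    return max(chunks, key=lambda c: cosine(query_vec, c[1]))[0]

# Toy vectors stand in for real embeddings from an embedding model.
docs = [("Ollama runs LLMs locally.", [0.9, 0.1]),
        ("The OBA is a library in Amsterdam.", [0.1, 0.9])]
print(retrieve([0.85, 0.2], docs))  # → Ollama runs LLMs locally.
```

In the real chatbot, the retrieved chunk is then pasted into the prompt so the model can answer from your document.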
-----
Next to tinkering in the Makerspace of the OBA again, you can also join the workshop on running LLMs locally with the Ollama app.
Being able to run a Large Language Model locally has a lot of advantages: next to not paying for a pro plan or API costs, it also means not sharing your chat data. Thanks to recent developments ('quantization') we now have models like Mixtral 8x7B that run on your laptop! There are also many products that support you in running, creating and sharing LLMs locally from the command line, like the open-source app Ollama.
In this series of workshops we want to help you set up Ollama and run LLMs locally. Ollama supports a range of models like Mistral, Llama 2 and Phi. Every workshop consists of an introduction and offers challenges at different levels to help you get started and broaden your knowledge, so it is interesting for both beginners and intermediate participants. The idea is that participants also help and learn from each other. The evenings run from 19:00 to 21:30.
For beginners:
We assume you know how to work with the command line on your laptop. Please install Ollama beforehand. You can then experiment locally with models & prompting.
For intermediate:
We assume you're familiar with GitHub and have basic knowledge of Python and Jupyter. An example challenge is developing a web interface (also part of the second workshop).
More advanced challenges (for those experienced in Python): develop a personalised assistant, run it on a Raspberry Pi, or use a webcam to take photos and have the LLM describe the images with LLaVA.
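The image-description challenge might start from something like this sketch, assuming a local Ollama server with a LLaVA-style vision model already pulled; the filename "photo.jpg" is a placeholder for your own webcam capture:

```python
import base64
import json
import urllib.request

def describe_image(path: str, model: str = "llava") -> str:
    """Ask a local vision model to describe a photo (e.g. taken with a webcam)."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    payload = json.dumps({
        "model": model,
        "prompt": "Describe this image.",
        "images": [image_b64],  # Ollama accepts base64-encoded images
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server and a vision model such as LLaVA.
    print(describe_image("photo.jpg"))
```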
Running Large Language Models locally 3/3 - Amsterdam Public Library (OBA), Amsterdam
Workshop 3/3 (June 19th): Customize your LLM with your data
For example: working with a predefined database of questions and answers that the model uses (provided you create such a database beforehand). More advanced participants can also try to fine-tune the model locally on your way of communicating, for example by training it on your emails.
- Or if you only started in May: use & modify the Python code (provided with the model) to adapt it to your specific use case.
- New users can start with the basics.
- Show and tell
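One simple way to read the predefined question/answer idea (a sketch under our own assumptions, not the workshop's reference code): look the question up in a small Q&A database first, and only fall back to the model when there is no match. The `fallback` parameter is a hypothetical hook for a function that queries your local model.

```python
def answer(question: str, qa_db: dict, fallback=None) -> str:
    """Answer from a predefined Q&A database; fall back to the LLM otherwise.
    `fallback` would be a function that queries the local model (e.g. via Ollama)."""
    key = question.strip().lower()
    if key in qa_db:
        return qa_db[key]       # exact match in the predefined database
    if fallback is not None:
        return fallback(question)  # let the local model handle the rest
    return "I don't know yet."

qa = {"when is the workshop?": "Wednesday June 19th, 19:00-21:30."}
print(answer("When is the workshop?", qa))  # → Wednesday June 19th, 19:00-21:30.
```

A real version would match on embeddings rather than exact strings, which ties this back to the retrieval ideas from workshop 2/3.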