What we’re about
SensemakersAMS is a volunteer-based community dedicated to connecting people and sharing knowledge, ideas & hands-on experience around new technology, mostly IoT and AI.
Everybody is welcome: technical, non-technical or just interested. If you'd like to know more, get involved, or are simply curious, just come to one of our meetups:
• 1st Wednesday: DIY hands-on in the Makerspace at the OBA, with 3D printers, laser cutters, sensors, Arduinos, Raspberry Pis, etc. We also learn by collaborating on projects that impact the city (water quality, sound) with e.g. Marineterrein, Waag & SensingClues.
• 3rd Wednesday: Sharing knowledge, ideas & connecting people at Codam College
• Random: hands-on workshops, excursions or whatever comes up :-)
As we want to bring you interesting speakers who can share their thoughts, experiences and/or wisdom, we would appreciate a tip-off if you know of or have heard about someone!
Presentations from earlier meetups can be found on our website.
Follow us on Twitter
or join us on Slack by sending us your email address in a DM.
Upcoming events
- Running Large Language Models locally 2/3, at the Amsterdam Public Library (OBA), Amsterdam
Besides tinkering in the Makerspace of the OBA again, you can also join a workshop series on running LLMs locally with the Ollama app, which we're organizing to celebrate the Appril Festival.
Being able to run a Large Language Model locally has a lot of advantages: besides not paying for a pro plan or API costs, it also means not sharing your chat data. Thanks to recent developments ('quantization') we now have models like Mixtral 8x7B that run on your laptop! There are also many tools that support you in running, creating and sharing LLMs locally from the command line, like the open-source app Ollama.
In this series of workshops we want to help you set up Ollama and run your local LLMs. Ollama supports a range of models such as Mistral, Llama 2 and Phi. Every workshop consists of an introduction plus challenges at different levels to help you get started and broaden your knowledge, so the evenings are interesting for both beginners and intermediate-level participants. The idea is that participants also help and learn from each other. The evenings run from 19:00 to 21:30.
For beginners:
We assume you know how to work with the command line (terminal) on your laptop. Please install Ollama beforehand; you can then experiment locally with models & prompting.
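To give you an idea of what a first session can look like, here is a rough sketch (the model name and prompt are just examples, and it assumes Ollama is installed and running on your laptop):

    # From the terminal (illustrative):
    #   ollama pull mistral
    #   ollama run mistral "Explain quantization in one sentence."
    #
    # The same thing from Python, via Ollama's local HTTP API (default port 11434):
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",   # any model you have pulled locally
            "prompt": "Explain quantization in one sentence.",
            "stream": False,      # ask for one complete answer instead of a token stream
        },
    )
    print(resp.json()["response"])  # the answer, generated entirely on your own machine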
For intermediate:
We assume you're familiar with GitHub and have basic knowledge of Python and Jupyter. An example of a challenge is developing a web interface (also part of the second workshop).
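As a rough idea of where the web interface challenge could start (the route and names below are our own invention, not workshop material), a tiny Flask app could forward a question to the locally running Ollama server:

    # Hypothetical starting point for a local LLM web interface (Flask + Ollama's HTTP API).
    from flask import Flask, request, jsonify
    import requests

    app = Flask(__name__)
    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

    @app.route("/ask", methods=["POST"])
    def ask():
        question = request.json.get("question", "")
        answer = requests.post(
            OLLAMA_URL,
            json={"model": "mistral", "prompt": question, "stream": False},
        ).json()["response"]
        return jsonify({"answer": answer})  # nothing leaves your laptop

    if __name__ == "__main__":
        app.run(port=5000)

You could then send {"question": "..."} to http://localhost:5000/ask from a simple HTML page or with curl.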
More advanced challenges (you need to be experienced in Python): developing a personalised assistant, running a model on a Raspberry Pi, or using a webcam to take photos and having the LLM describe the images with LLaVA.
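For the webcam idea, a sketch along these lines could be a starting point (it assumes you have pulled the llava model and installed the opencv-python and ollama packages; the details may differ from what we do in the workshop):

    # Hypothetical sketch: take one webcam photo and let LLaVA describe it, all locally.
    import cv2      # opencv-python
    import ollama   # the ollama Python package

    cam = cv2.VideoCapture(0)          # default webcam
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("Could not read from the webcam")
    cv2.imwrite("snapshot.jpg", frame)

    reply = ollama.chat(
        model="llava",                 # multimodal model, pulled with: ollama pull llava
        messages=[{
            "role": "user",
            "content": "Describe what you see in this photo.",
            "images": ["snapshot.jpg"],  # the ollama package accepts image file paths here
        }],
    )
    print(reply["message"]["content"])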
Workshop 2/3 (May 15th): making the most of Ollama on a variety of devices
Beginners: depending on the knowledge you acquired and the interests you shared in the first workshop, we'll help you build on them.
- Using & modifying the Python code (provided with the model) to adapt it to your specific use case.
- New users can start with basics.
- Show & tell
Workshop 3/3 (June 19th): customize your LLM with your data
For example: working with a predefined database of questions and answers for the model to use (provided you create such a database beforehand); see the sketch after this outline. More advanced participants can also try to fine-tune the model locally on their own way of communicating, for example by training it on their emails.
- New users can start with the basics.
- Show and tell
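To give an idea of the first (non-finetuning) route, the sketch below loads a small, self-made question/answer file and hands it to the model as context; the file name, format and example question are made up:

    # Hypothetical sketch: let a local model answer from your own Q&A pairs via the system prompt.
    import json
    import requests

    with open("my_qa.json") as f:      # e.g. [{"q": "...", "a": "..."}, ...], created beforehand
        qa_pairs = json.load(f)

    context = "\n".join(f"Q: {p['q']}\nA: {p['a']}" for p in qa_pairs)

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "system": "Answer using only the question/answer pairs below.\n" + context,
            "prompt": "On which evenings does the meetup run?",  # an example question about your own data
            "stream": False,
        },
    )
    print(resp.json()["response"])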
Follow us on Twitter: https://twitter.com/sensemakersa
or join us on Slack by sending us your email address.
- Running Large Language Models locally 3/3, at the Amsterdam Public Library (OBA), Amsterdam
Besides tinkering in the Makerspace of the OBA again, you can also join the final workshop on running LLMs locally with the Ollama app. This final workshop focuses on customizing your LLM with your own data.
Being able to run a Large Language Model locally has a lot of advantages: besides not paying for a pro plan or API costs, it also means not sharing your chat data. Thanks to recent developments ('quantization') we now have models like Mixtral 8x7B that run on your laptop! There are also many tools that support you in running, creating and sharing LLMs locally from the command line, like the open-source app Ollama.
In this series of workshops we want to help you set up Ollama and run your local LLMs. Ollama supports a range of models such as Mistral, Llama 2 and Phi. Every workshop consists of an introduction plus challenges at different levels to help you get started and broaden your knowledge, so the evenings are interesting for both beginners and intermediate-level participants. The idea is that participants also help and learn from each other. The evenings run from 19:00 to 21:30.
For beginners:
We assume you know how to work with the command line (terminal) on your laptop. Please install Ollama beforehand; you can then experiment locally with models & prompting.
For intermediate:
We assume you're familiar with GitHub and have basic knowledge of Python and Jupyter. An example of a challenge is developing a web interface (also part of the second workshop).
More advanced challenges (you need to be experienced in Python): developing a personalised assistant, running a model on a Raspberry Pi, or using a webcam to take photos and having the LLM describe the images with LLaVA.
Workshop 3/3 (June 19th): customize your LLM with your data
For example: working with a predefined database of questions and answers for the model to use (provided you create such a database beforehand). More advanced participants can also try to fine-tune the model locally on their own way of communicating, for example by training it on their emails.
- Or, if you only started in May: using & modifying the Python code (provided with the model) to adapt it to your specific use case.
- New users can start with basics.
- Show and tell
Follow us on Twitter: https://twitter.com/sensemakersa
or join us on Slack by sending us your email address.