#ai
Imagine a worldwide internet collapse. There is no way to get all the data you need to survive in this new world. How do you build a windmill? When and how do you plant crops? You have no clue, and you are slowly realizing that you are not that smart after all.
Well, even if you are not a fan of end-of-the-world scenarios, you may still think it is cool to have AI on your own machine, especially an uncensored model able to answer all questions, even those criminal in nature. Heck, you can even create your own Jarvis with a couple of free speech-recognition libraries, just like Tony Stark.
But let us not waste more time and get our hands dirty. So, what is Ollama? In short, Ollama is a platform, an interface that makes downloading and running LLMs fairly easy.
Just go to the official Ollama page and follow the download instructions, which are basically just a command for Linux systems or a click of a button for Windows. Since we are using Linux, we will be focusing on that environment.
For safety purposes, it is suggested to run a VM and install Ollama in the VM instead of directly on your main system. Even if you do not believe that AI could take control over your computer, you may want to avoid potential system crashes if the LLM exceeds your system's RAM.
RAM usage will mostly depend on the type of model you choose, but 16 GB should be all you need. Note that if you want to build a Jarvis with more advanced text-to-speech (TTS) and speech-to-text libraries, you are safer with 32 GB of RAM.
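To see why 16 GB is usually enough, you can do a back-of-the-envelope estimate: a model's weights take roughly (parameter count × bits per weight / 8) bytes, and Ollama models are typically 4-bit quantized. The overhead factor below is a rough guess for runtime buffers, not an official Ollama figure.

```python
# Rough RAM estimate for a quantized LLM.
# Assumption: weights dominate memory; the 1.2x overhead factor
# (KV cache, runtime buffers) is a ballpark guess, not an exact value.

def estimate_ram_gb(params_billion: float, bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    """Estimate RAM in GB for a model with the given parameter count."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization fits comfortably in 16 GB:
print(round(estimate_ram_gb(7), 1))    # ~4.2 GB
# A 70B model at 4-bit would not:
print(round(estimate_ram_gb(70), 1))   # ~42.0 GB
```

By this estimate, anything up to roughly a 13B model is comfortable on a 16 GB machine, which is why the tiny and mid-sized models are the usual choice for home setups.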
Open a terminal and paste this line:
curl -fsSL https://ollama.com/install.sh | sh
It may take some time to download the interface, so be patient. Once the download is finished, you can run the interface by executing commands in the terminal. You can list all available models or just visit the Ollama site to check them out. I suggest trying one of the Dolphin models or Llama 3, and if you need something lightweight, there are tiny models available too.
The following commands are all you will need:
#List installed models
ollama list
#Simple and easy way to download models
ollama pull <model-name>
#When model download is finished, run model
ollama run <model-name>
#Delete model
ollama rm <model-name>
#Help pages
ollama help
The run command will start your local AI, and you can begin entering your queries. To interrupt a long LLM response, press CTRL+C, and to exit the Ollama runtime, press CTRL+D. Congratulations, your household is now AI-ready for a global internet collapse. Try out different models; each is trained on different data and offers unique capabilities.
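You are not limited to the interactive terminal, either: the Ollama server listens on localhost port 11434 and exposes a REST API you can call from scripts. A minimal sketch, assuming you have already pulled a model named "llama3" (swap in whichever model you installed):

```python
# Query a locally running Ollama server over its REST API.
# Assumes the server is up (it starts with Ollama) and the model is pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server running and the model pulled):
# print(ask("llama3", "How do I build a windmill?"))
```

This is handy when you want to wire your local AI into other tools, like that Jarvis project with speech recognition feeding prompts in and TTS reading answers back.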
Stay connected.
[root@techtoapes]$ Author Luka