Download the DeepSeek R1 Model Locally for Free | AI RAG with LlamaIndex, Local Embeddings and Ollama
In this article I will explain, step by step, how to download and use the DeepSeek R1 model locally with Ollama for free, and how to set up AI-powered Retrieval-Augmented Generation (RAG) using the nomic-embed-text:latest embedding model while running the DeepSeek R1 model locally via Ollama.

The prerequisites for this example are as follows:

- Visual Studio Code
- Python
- Ollama

Open Visual Studio Code and create a file named "sample.py". Then go to the Terminal menu and click New Terminal to open a new terminal. In the terminal, run the command below to install the LlamaIndex library along with the LlamaIndex Ollama LLM and LlamaIndex Ollama embedding packages:

pip install llama-index llama-index-llms-ollama llama-index-embeddings-ollama

Create a folder named "doc" in the root directory of the application, as shown in the image below, and store the documents you want to query in it. Now enter the code below in sample.py.

from llama_inde...