to complete the environment setup.
I tried running Llama 3. Although it consumed significant system resources and slowed down other applications, the overall experience was still acceptable.
Ollama supports various large language models (https://ollama.com/library).
The latest version adds support for embedding models, which, combined with a vector database such as Chroma, lets developers build applications based on Retrieval-Augmented Generation (RAG). This approach generates content by combining text prompts with existing documents or other data. Gaia, which I shared previously, offers similar functionality.
The specific steps are as follows:
1. Generate embedding vectors
2. Retrieve relevant documents
3. Generate content
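The steps above can be sketched in a few lines of Python. This is a minimal, self-contained illustration of the embed-retrieve-generate flow: the `embed` function here is a toy bag-of-words stand-in (in a real pipeline you would call an embedding model through Ollama's API and store vectors in a database such as Chroma), and the final prompt would be sent to a language model rather than printed.

```python
import math

# Toy stand-in for an embedding model call; a real RAG pipeline would
# request vectors from an embedding model served by Ollama instead.
def embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Ollama runs large language models locally.",
    "Chroma is an open-source vector database.",
]
vocab = sorted({w for d in documents for w in d.lower().split()})

# Step 1: generate embedding vectors for the existing documents.
index = [(d, embed(d, vocab)) for d in documents]

# Step 2: retrieve the document most similar to the question.
question = "Which tool runs models locally?"
q_vec = embed(question, vocab)
best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

# Step 3: combine the retrieved context with the prompt; in a real
# pipeline this augmented prompt goes to the language model.
prompt = f"Using this context: {best_doc}\nAnswer this question: {question}"
print(best_doc)
```

The retrieval step is what distinguishes RAG from plain prompting: the model answers from documents it was never trained on, because the relevant text is injected into the prompt at query time.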