
Tuesday, April 16, 2024

AnythingLLM with Ollama

1. Install Ollama first
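
Once installed, a quick sanity check (assuming Ollama listens on its default port 11434):

  PS>> ollama --version
  PS>> curl http://localhost:11434/ (the response body should be "Ollama is running")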

2. Install Chroma using Docker

  • PS>> docker pull ghcr.io/chroma-core/chroma:0.4.24
  • PS>> docker run -p 8000:8000 ghcr.io/chroma-core/chroma:0.4.24
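
To verify the Chroma container is up, you can hit its heartbeat endpoint (the /api/v1 path below matches the 0.4.x image pulled above):

  PS>> curl http://localhost:8000/api/v1/heartbeat (should return a nanosecond heartbeat value)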

3. Install AnythingLLM using Docker

  • Create the directory C:/anythingllm/storage and an empty file C:/anythingllm/env.txt (see the PowerShell sketch after this list)
  • PS>> docker run -d -p 3001:3001 --cap-add SYS_ADMIN -v C:/anythingllm/storage:/app/server/storage -v C:/anythingllm/env.txt:/app/server/.env -e STORAGE_DIR="/app/server/storage" --add-host=host.docker.internal:host-gateway --name anythingllm mintplexlabs/anythingllm
  • Open http://localhost:3001, and configure the LLM settings to use http://host.docker.internal:11434
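
A minimal PowerShell sketch for the storage directory and env file from the first bullet (New-Item -ItemType Directory also creates parent directories; skip the second line if env.txt already exists, since -ItemType File would recreate it empty):

  PS>> New-Item -ItemType Directory -Path C:\anythingllm\storage -Force
  PS>> New-Item -ItemType File -Path C:\anythingllm\env.txt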

2024/8/4

UPDATE
  • Stop the anythingllm container on Docker
  • PS> docker pull mintplexlabs/anythingllm
  • Recreate and start the anythingllm container (see the note below)
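
Note that restarting the stopped container would keep running the old image; to pick up the pulled update, the container has to be recreated, for example:

  PS>> docker stop anythingllm
  PS>> docker rm anythingllm
  PS>> docker pull mintplexlabs/anythingllm
  PS>> (re-run the full docker run command from step 3 above)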

Saturday, March 2, 2024

Use Ollama with Chatbot-Ollama to run local LLM model files.

Just follow these steps:

  • Download and install Ollama from ollama.com or ollama.ai
    • Open http://127.0.0.1:11434/ to confirm it shows "Ollama is running"
    • Use PowerShell to run ollama commands
      command: ollama list (lists all models)
      command: ollama rm modelname (deletes a model)
  • Install a Docker manager (e.g. Docker Desktop) and use it to install chatbot-ollama
    • Open http://127.0.0.1:3000/
  • Use already-downloaded model files
    • Create a Modelfile named your_modelname, and set the LLM path inside it like below
      FROM c:\path\your_modelname.gguf
      SYSTEM ""
    • Open PowerShell, change to the directory containing your_modelname, and run the commands below (a worked example follows this list)
      ollama create your_modelname -f ./your_modelname
      ollama run your_modelname (type /bye to exit)
  • Open http://127.0.0.1:3000/zh and chat with the LLM
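
A worked example of the whole flow, using a hypothetical file c:\models\mymodel.gguf and the placeholder model name mymodel (both names are made up for illustration):

  Modelfile saved as mymodel (no extension):
      FROM c:\models\mymodel.gguf
      SYSTEM "You are a helpful assistant."

  PS>> cd c:\models
  PS>> ollama create mymodel -f ./mymodel
  PS>> ollama run mymodel (then /bye to exit)
  PS>> ollama list (mymodel should now appear in the list)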