
Thursday, March 28, 2024

LLaMA-Factory

 For Windows


  • Download LLaMA-Factory from https://github.com/hiyouga/LLaMA-Factory and unzip it
  • Install rustup-init.exe from https://rustup.rs/
  • Use Anaconda to create and activate an environment
    • Install Anaconda ( https://www.anaconda.com/download )
    • Install Python ( https://www.python.org/downloads/ )
    • Install PyTorch ( https://pytorch.org/get-started/locally/ ; use conda to install the CUDA 12.x build )
      • Test CUDA with the GPU (a fuller sanity check follows this list):
        python
        >>> import torch
        >>> torch.cuda.is_available()
        True
  • cd into the LLaMA-Factory directory
    • pip install -r requirements.txt
    • pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.39.1-py3-none-win_amd64.whl
    • python .\src\train_web.py
  • Open http://localhost:7860 and train your LLM
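Before launching the web UI, it can help to confirm the pieces above actually fit together. Here is a minimal sanity-check sketch in Python; the file name check_env.py is my own choice, and it only imports packages installed in the steps above (PyTorch and the bitsandbytes Windows wheel).

  # check_env.py -- hypothetical helper; verifies the setup from the steps above
  import torch

  print("torch version:", torch.__version__)
  print("CUDA available:", torch.cuda.is_available())
  if torch.cuda.is_available():
      # Name of the first CUDA device, i.e. your NVIDIA GPU
      print("GPU:", torch.cuda.get_device_name(0))

  # The bitsandbytes wheel above is Windows-specific; a clean import
  # confirms it matches this Python/CUDA combination
  import bitsandbytes
  print("bitsandbytes imported OK")

Run it with python check_env.py ; every line should print without errors before you start train_web.py.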

Saturday, March 2, 2024

Use Ollama with Chatbot-Ollama to run local LLM files.

Just follow these steps:

  • Download and install Ollama from ollama.com or ollama.ai
    • Open http://127.0.0.1:11434/ ; it should show "Ollama is running"
    • Use PowerShell to drive Ollama
      command: ollama list , to list all models
      command: ollama rm modelname , to delete a model
  • Install Docker (e.g. Docker Desktop) and use it to run chatbot-ollama
    • Open http://127.0.0.1:3000/
  • Use already-downloaded model files
    • Create a Modelfile named your_modelname, and point FROM at the model path like below
      FROM c:\path\your_modelname.gguf
      SYSTEM ""
    • Open PowerShell, cd into the directory containing your_modelname, and run the commands below
      ollama create your_modelname -f ./your_modelname
      ollama run your_modelname (type /bye to exit)
  • Open http://127.0.0.1:3000/zh and chat with the LLM
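Besides the web UI, the model can also be queried from a script. The sketch below talks to Ollama's REST API on 127.0.0.1:11434 via the /api/generate endpoint (part of Ollama's documented API); the file name ask_ollama.py and the prompt are my own, and your_modelname stands for the model created above.

  # ask_ollama.py -- hypothetical helper; assumes Ollama is running locally
  import json
  import urllib.request

  payload = {
      "model": "your_modelname",   # the model created with `ollama create` above
      "prompt": "Hello! Introduce yourself in one sentence.",
      "stream": False,             # one complete JSON reply instead of a stream
  }
  req = urllib.request.Request(
      "http://127.0.0.1:11434/api/generate",
      data=json.dumps(payload).encode("utf-8"),
      headers={"Content-Type": "application/json"},
  )
  with urllib.request.urlopen(req) as resp:
      body = json.load(resp)

  print(body["response"])   # the model's answer as plain text

This uses only the Python standard library, so it runs on any Python 3 install with no extra packages.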