
Tuesday, August 27, 2024

Convert a Hugging Face model to GGUF

Conda

  • conda create -n gguf  #one time
  • conda activate gguf
  • pip install huggingface_hub

Create

  • mkdir convert #one time
  • cd convert

Project and update

  • git clone https://github.com/ggml-org/llama.cpp.git  #one time
  • update step 1: cd llama.cpp
    • git pull origin
  • update step 2, in the conda env: pip install -r requirements.txt

HF

  • go to Hugging Face and create an access token
  • in the conda env, run the command below and key in the access token
    huggingface-cli login

Model

  • in the convert directory: mkdir model
  • create a file download.py in the model directory  #one time
    from huggingface_hub import snapshot_download
    model_id = "[hf model name like xxx/xxxxxx]"
    snapshot_download(repo_id=model_id, local_dir="model/model_name", local_dir_use_symlinks=False, revision="main")
  • edit ./model/download.py to set
    model_id = "[hf model name like xxx/xxxxxx]"
    local_dir = "model/model_name"
  • in the conda env: python ./model/download.py

Example: download a model from HF (fill in model_id and local_dir)

  • create download.py with the following and save
    from huggingface_hub import snapshot_download
    model_id = "Qwen/Qwen2-1.5B-Instruct"
    snapshot_download(repo_id=model_id, local_dir="Qwen2-1.5B-Instruct", local_dir_use_symlinks=False, revision="main")
  • python download.py
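The two download.py variants above can be folded into one sketch in which the local directory is derived from the model id, so only the id needs editing. This is a convenience sketch, not the original script: it assumes huggingface_hub is installed, and the actual download call is left commented out.

```python
# Sketch of model/download.py: derive local_dir from the model id so only
# MODEL_ID needs to be edited. Assumes huggingface_hub is installed.
MODEL_ID = "Qwen/Qwen2-1.5B-Instruct"  # any "owner/name" Hugging Face id


def local_dir_for(model_id: str) -> str:
    """Map an "owner/name" id to a folder under model/, e.g. model/Qwen2-1.5B-Instruct."""
    return "model/" + model_id.split("/")[-1]


def download(model_id: str) -> None:
    # Imported lazily so local_dir_for() works even without huggingface_hub.
    from huggingface_hub import snapshot_download
    snapshot_download(
        repo_id=model_id,
        local_dir=local_dir_for(model_id),
        local_dir_use_symlinks=False,
        revision="main",
    )


# download(MODEL_ID)  # uncomment to perform the actual download
```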

Convert

  • python ./llama.cpp/convert_hf_to_gguf.py -h
  • python ./llama.cpp/convert_hf_to_gguf.py ./model/[model_name]  --outtype q8_0 --verbose --outfile ./[model_name]-q8_0.gguf
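For repeated conversions, the long command line above can be assembled by a small helper. A sketch assuming the directory layout of these notes (the subprocess call is left commented out because it needs llama.cpp and a downloaded model in place):

```python
# Sketch: build the convert_hf_to_gguf.py command for a model under ./model/.
import subprocess


def convert_cmd(model_name: str, outtype: str = "q8_0") -> list:
    """Return the argv to convert ./model/<model_name> into <model_name>-<outtype>.gguf."""
    return [
        "python", "./llama.cpp/convert_hf_to_gguf.py",
        "./model/" + model_name,
        "--outtype", outtype,
        "--verbose",
        "--outfile", "./" + model_name + "-" + outtype + ".gguf",
    ]


# subprocess.run(convert_cmd("Qwen2-1.5B-Instruct"), check=True)  # uncomment to run
```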
Ollama
  • cd .ollama/models
  • create a Modelfile named [model_name] containing
    FROM C:\path\convert\model_name.gguf
  • cmd: ollama create [model_name] -f ./[model_name]
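A slightly fuller Modelfile sketch: FROM and PARAMETER are standard Modelfile keywords, but the path and the parameter value below are placeholders, not values from this setup.

```text
# Modelfile for a locally converted GGUF (placeholder path)
FROM C:\path\convert\model_name-q8_0.gguf

# optional generation default (illustrative value)
PARAMETER temperature 0.7
```

After `ollama create [model_name] -f ./[model_name]`, test it with `ollama run [model_name]`.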
Update on 2025/8/28

Friday, August 16, 2024

perplexica update

Check

  • docker volume ls  #confirm perplexica_backend-dbstore is listed
  • docker run -it --rm -v perplexica_backend-dbstore:/data busybox ls -l /data

Backup

  • docker run --rm -v perplexica_backend-dbstore:/data -v C:/backup:/backup busybox tar cvf /backup/perplexica_backend-dbstore.tar /data

Remove

  • remove the old containers: go to the perplexica directory and run docker compose down

Update

  • go to the perplexica directory and run git pull origin master
  • check config.toml and docker-compose.yaml

ReBuild

  • go to the perplexica directory and run docker compose up -d --build

Recovery

  • docker run --rm -v perplexica_backend-dbstore:/data -v C:/backup:/backup busybox tar xvf /backup/perplexica_backend-dbstore.tar -C /  #members are stored as data/..., so extract at / (not /data)
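A note on why the restore should use -C / rather than -C /data: tar (GNU and busybox alike) strips the leading "/" when creating the archive, so /data is stored as data/...; extracting that archive inside /data nests everything one level too deep. A small sketch with Python's standard tarfile module shows the path arithmetic:

```python
# Demonstrate why extracting a "data/..." archive inside /data nests the tree.
import io
import tarfile
import tempfile
from pathlib import Path

# Build an archive whose member is named "data/config.db" -- the form
# produced by "tar cvf backup.tar /data" after tar strips the leading "/".
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"dummy"
    info = tarfile.TarInfo(name="data/config.db")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

root = Path(tempfile.mkdtemp())
(root / "data").mkdir()

# Extracting with -C /data nests the tree: /data/data/config.db
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    tar.extractall(root / "data")
assert (root / "data" / "data" / "config.db").exists()

# Extracting with -C / restores the original layout: /data/config.db
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    tar.extractall(root)
assert (root / "data" / "config.db").exists()
```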

Check

  • docker run -it --rm -v perplexica_backend-dbstore:/data busybox ls -l /data

anythingllm update (update 2024.09.23)

Get-New

  • docker pull mintplexlabs/anythingllm

Remove

  • remove the old container: docker rm -f anythingllm

Run

  • docker run -d -p 3001:3001 --cap-add SYS_ADMIN -v C:/anythingllm/storage:/app/server/storage -v C:/anythingllm/env.txt:/app/server/.env -e STORAGE_DIR="/app/server/storage" --add-host=host.docker.internal:host-gateway --name anythingllm mintplexlabs/anythingllm
---------------------------------------
use the GitHub source to build with docker and update
  • git clone https://github.com/Mintplex-Labs/anything-llm
  • copy ./docker to ./anythingllm
  • in ./anything-llm/anythingllm, rename .env.example to .env
  • PS> docker compose up -d (--build)
change port number
  • .env
    SERVER_PORT=3001
  • docker-compose.yml
    ports:
    - "3001:3001"
change name
  • docker-compose.yml
    name: anythingllm
    container_name: anythingllm


Chroma update

 

Backup

  • docker volume ls
  • docker run --rm -v chroma_chroma-data:/data -v C:/backup:/backup busybox tar cvf /backup/chroma_chroma-data.tar /data

Remove

  • remove the old Chroma container: docker rm -f [container_name]

Get-New

  • check for the new version at https://github.com/chroma-core/chroma/releases
  • docker pull ghcr.io/chroma-core/chroma:0.5.4
  • docker run -p 8000:8000 ghcr.io/chroma-core/chroma:0.5.4

Recovery

  • docker run --rm -v chroma_chroma-data:/data -v C:/backup:/backup busybox tar xvf /backup/chroma_chroma-data.tar -C /  #members are stored as data/..., so extract at / (not /data)
  • docker run -it --rm -v chroma_chroma-data:/data busybox ls -l /data

Sunday, August 11, 2024

SenseVoice

Get 

  • git clone https://github.com/FunAudioLLM/SenseVoice.git

Env

  • conda create -n sensevoice python=3.8
  • conda activate sensevoice
Install
  • cd SenseVoice
  • pip install -r requirements.txt

Run

  • python webui.py

Change port

  • in webui.py, change demo.launch() to demo.launch(server_port=your_portnumber, share=True, server_name='your_ip')

Chroma for AI (update 2024.09.23)

Get1

  •  docker pull ghcr.io/chroma-core/chroma:0.5.4

Get2

  • git clone https://github.com/chroma-core/chroma.git
  • docker compose up -d
Update1
  • check the latest version at https://github.com/chroma-core/chroma
  • docker pull ghcr.io/chroma-core/chroma:[version]
Update2
  • go to the chroma directory
  • PS> docker compose up -d --build
-----------------------------------------------

you can also use the GitHub source to build with docker
  • git clone https://github.com/chroma-core/chroma
  • cd ./chroma and copy compose-env.windows to .env
  • rename docker-compose.yml to docker-compose-any.yml
  • copy docker-compose.server.example.yml to docker-compose.yml
  • PS> docker compose up -d (--build)

Tuesday, August 6, 2024

LangFlow

Github

  • git clone https://github.com/langflow-ai/langflow.git

Docker

  • cd langflow
  • copy ./docker_example to ./langflow (or your_name_for_docker)
  • open langflow/langflow (or your_name_for_docker)/docker-compose.yml and change the port if needed
  • PowerShell>
    cd langflow/langflow  (or your_name_for_docker)
    docker compose up -d (--build)
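Changing the port in docker-compose.yml is the usual Compose ports mapping. A sketch only: the service name and the host port below are illustrative, so check the actual file.

```yaml
services:
  langflow:            # service name as found in the example compose file
    ports:
      - "7861:7860"    # host:container -- change only the left-hand side
```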

Run

  • open http://localhost:7860/store and click API_Key
  • log in at https://www.langflow.store/ and get an API key

ChatTTS

It sounds better than Bark.

Github

  • git clone https://github.com/2noise/ChatTTS.git
  • cd ChatTTS

Env

  • conda create -n chattts python=3.8
  • conda activate chattts

Install

  • pip install --upgrade -r requirements.txt

Run

  • python examples/web/webui.py

Change port, public link

  • open ChatTTS\examples\web\webui.py
  • find
    demo.launch()
  • and add the args inside the call
    demo.launch(server_name='0.0.0.0', server_port=7860, share=True)