
Showing posts with label ai.

Wednesday, January 29, 2025

searxng

  • docker pull searxng/searxng
  • docker run -d -p 4000:8080 -e "BASE_URL=http://localhost:4000/" -e "INSTANCE_NAME=searxng" searxng/searxng
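
The same run command can be captured in a compose file for easier restarts. A sketch only: the image, port mapping, and environment mirror the command above; the service name is arbitrary.

```yaml
# docker-compose.yml (sketch; service name is an assumption)
services:
  searxng:
    image: searxng/searxng
    ports:
      - "4000:8080"
    environment:
      - BASE_URL=http://localhost:4000/
      - INSTANCE_NAME=searxng
```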

Sunday, November 3, 2024

F5-TTS

Env

  • conda create -n f5tts python=3.10
  • conda activate f5tts

(apt install -y ffmpeg)

Python

  • pip uninstall torch torchvision torchaudio transformers
  • pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  • pip install transformers

Git

  • git clone https://github.com/SWivid/F5-TTS.git

Run

  • cd F5-TTS
  • pip install -e .
  • f5-tts_infer-gradio --port 7865 --host 0.0.0.0 --share

Thursday, October 31, 2024

Kotaemon AI/RAG

Install on Docker

  • docker run -e GRADIO_SERVER_NAME=0.0.0.0 -e GRADIO_SERVER_PORT=7860 -p 7860:7860 --name kotaemon -d ghcr.io/cinnamon/kotaemon:main-full 
  • do not use -it or --rm here; -d keeps the container running detached

Monday, September 16, 2024

use axolotl for training

Env

  • offline environment (no internet)
  • qlora_root: C:\qlora
  • gguf_root: C:\gguf

Dataset 

  • c:/qlora/output_dataset/instruction_dataset.parquet

PS:Axolotl

  • docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest

PS:Dataset

  • docker cp C:\qlora\output_dataset\instruction_dataset.parquet container_name:/workspace/axolotl/examples/instruction_dataset.parquet

PS:LLM

  • docker cp C:\Meta-Llama-3.1-8B container_name:/workspace/axolotl/examples/Meta-Llama-3.1-8B/

Axolotl:Qlora

  • open ./examples/llama-3/qlora.yml and update the paths to the LLM and the parquet dataset
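
A minimal sketch of those edits, assuming the stock example layout. The key names match axolotl's qlora.yml; the paths are the ones used in the docker cp steps above, and the dataset `type` is an assumption that must match the parquet's actual column schema.

```yaml
# examples/llama-3/qlora.yml (excerpt; sketch only)
base_model: ./examples/Meta-Llama-3.1-8B
datasets:
  - path: ./examples/instruction_dataset.parquet
    type: alpaca        # assumption: set to the dataset's real format
output_dir: ./outputs/qlora-out
```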

Axolotl:Training

  • CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/llama-3/qlora.yml
  • accelerate launch -m axolotl.cli.train examples/llama-3/qlora.yml

Axolotl:Test (needs internet)

  • accelerate launch -m axolotl.cli.inference examples/llama-3/qlora.yml --lora_model_dir="./outputs/qlora-out" --gradio

Axolotl:Merged

  • python3 -m axolotl.cli.merge_lora examples/llama-3/qlora.yml --lora_model_dir="./outputs/qlora-out"

PS:Export

  • docker cp container_name:/workspace/axolotl/outputs/qlora-out/merged C:\merged

GGUF

  • convert the merged model to merged_f16.gguf (e.g. with llama.cpp's convert_hf_to_gguf.py and --outtype f16)

Ollama

  • ollama run merged
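
The `ollama run merged` step assumes the GGUF has already been registered with Ollama via a Modelfile. A minimal sketch; the file path and model name here are assumptions.

```
# Modelfile (sketch)
FROM ./merged_f16.gguf
```

Register it with `ollama create merged -f Modelfile`, then `ollama run merged`.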


Friday, September 6, 2024

unsloth

Conda

  • conda install -c conda-forge xformers
  • pip install xformers
  • conda config --add channels conda-forge
  • conda update conda

Env

  • conda create --name unsloth python=3.11 pytorch-cuda=12.1 pytorch cudatoolkit -c pytorch -c nvidia -y
  • conda activate unsloth
  • pip install xformers

Install

  • pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
  • pip install --no-deps trl peft accelerate bitsandbytes

Friday, August 16, 2024

Perplexica update

Check

  • docker volume ls (confirm perplexica_backend-dbstore exists)
  • docker run -it --rm -v perplexica_backend-dbstore:/data busybox ls -l /data

Backup

  • docker run --rm -v perplexica_backend-dbstore:/data -v C:/backup:/backup busybox tar cvf /backup/perplexica_backend-dbstore.tar /data

Remove

  • stop and remove the old container (docker stop <name> && docker rm <name>)

Update

  • go to the Perplexica directory and run git pull origin master
  • check config.toml and docker-compose.yaml

ReBuild

  • go to the Perplexica directory and run docker compose up -d --build

Recovery

  • docker run --rm -v perplexica_backend-dbstore:/data -v C:/backup:/backup busybox tar xvf /backup/perplexica_backend-dbstore.tar -C /data
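
The backup and recovery commands above follow a tar round-trip that can be sketched locally. Plain tar stands in for the busybox container here, and the directory names are illustrative.

```shell
# Local sketch of the volume backup/restore round-trip.
mkdir -p data backup restore
echo "db contents" > data/store.db
tar cvf backup/dbstore.tar data            # backup: archive the data dir
tar xvf backup/dbstore.tar -C restore      # restore: unpack under a new root
ls -l restore/data
```

In the Docker variants, the `-v volume:/data` and `-v C:/backup:/backup` mounts play the roles of the local directories.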

Check

  • docker run -it --rm -v perplexica_backend-dbstore:/data busybox ls -l /data

AnythingLLM update (updated 2024.09.23)

Get-New

  • docker pull mintplexlabs/anythingllm

Remove

  • stop and remove the old container (docker stop <name> && docker rm <name>)

Run

  • docker run -d -p 3001:3001 --cap-add SYS_ADMIN -v C:/anythingllm/storage:/app/server/storage -v C:/anythingllm/env.txt:/app/server/.env -e STORAGE_DIR="/app/server/storage" --add-host=host.docker.internal:host-gateway --name anythingllm mintplexlabs/anythingllm
---------------------------------------
Alternatively, use GitHub to pull, build in Docker, and update:
  • git clone https://github.com/Mintplex-Labs/anything-llm
  • copy ./docker to ./anythingllm
  • cd into ./anything-llm/anythingllm and rename .env.example to .env
  • PS>docker compose up -d (--build)
change port number
  • .env
    SERVER_PORT=3001
  • docker-compose.yml
    ports:
    - "3001:3001"
change name
  • docker-compose.yml
    name: anythingllm
    container_name: anythingllm


Chroma update

Backup

  • docker volume ls
  • docker run --rm -v chroma_chroma-data:/data -v C:/backup:/backup busybox tar cvf /backup/chroma_chroma-data.tar /data

Remove

  • stop and remove the old container (docker stop <name> && docker rm <name>)

Get-New

  • get new version from https://github.com/chroma-core/chroma/releases
  • docker pull ghcr.io/chroma-core/chroma:0.5.4
  • docker run -p 8000:8000 ghcr.io/chroma-core/chroma:0.5.4

Recovery

  • docker run --rm -v chroma_chroma-data:/data -v C:/backup:/backup busybox tar xvf /backup/chroma_chroma-data.tar -C /data
  • docker run -it --rm -v chroma_chroma-data:/data busybox ls -l /data

Sunday, August 11, 2024

SenseVoice

Get 

  • git clone https://github.com/FunAudioLLM/SenseVoice.git

Env

  • conda create -n sensevoice python=3.8
  • conda activate sensevoice

Install

  • cd SenseVoice
  • pip install -r requirements.txt

Run

  • python webui.py

Change port

  • edit the demo.launch() call in webui.py: demo.launch(server_port=your_port, share=True, server_name='your_ip')

Chroma for AI (updated 2024.09.23)

Get1

  •  docker pull ghcr.io/chroma-core/chroma:0.5.4

Get2

  • git clone https://github.com/chroma-core/chroma.git
  • docker compose up -d

Update1

  • check the latest version on https://github.com/chroma-core/chroma
  • docker pull ghcr.io/chroma-core/chroma:version

Update2

  • go to the chroma directory
  • PS>docker compose up -d --build
-----------------------------------------------

You can use GitHub to pull and build in Docker:
  • git clone https://github.com/chroma-core/chroma
  • cd ./chroma and copy compose-env.windows to .env
  • rename docker-compose.yml to docker-compose-any.yml
  • copy docker-compose.server.example.yml to docker-compose.yml
  • PS>docker compose up -d (--build)

Tuesday, August 6, 2024

LangFlow

Github

  • git clone https://github.com/langflow-ai/langflow.git

Docker

  • cd langflow
  • copy ./docker_example to ./langflow (or your_name_for_docker)
  • open and configure (change the port in) langflow/langflow (or your_name_for_docker)/docker-compose.yml
  • PowerShell>
    cd langflow/langflow (or your_name_for_docker)
    docker compose up -d (--build)

Run

  • Open http://localhost:7860/store and click API Key
  • Log in at https://www.langflow.store/ and get an API key

ChatTTS

It sounds better than Bark.

Github

  • git clone https://github.com/2noise/ChatTTS.git
  • cd ChatTTS

Env

  • conda create -n chattts python=3.8
  • conda activate chattts

Install

  • pip install --upgrade -r requirements.txt

Run

  • python examples/web/webui.py

Change port, public link

  • open ChatTTS\examples\web\webui.py
  • find demo.launch() and change it to
    demo.launch(server_name='0.0.0.0', server_port=7860, share=True)


Friday, July 26, 2024

LivePortrait


  • ffmpeg (https://www.ffmpeg.org/)
    download > win > Windows builds from gyan.dev > release builds > ffmpeg-release-full.7z
    move bin/*.exe to windows/system32/
  • conda create -n liveportrait python=3.9 and activate it
  • d1: git clone https://github.com/KwaiVGI/LivePortrait
  • d2: git clone from Hugging Face https://huggingface.co/KwaiVGI/LivePortrait
    move both directories, insightface and liveportrait, into d1 ./pretrained_weights/
  • run from cmd
    first: python inference.py to create a sample in d1 ./animations
    custom: python inference.py -s .\assets\examples\source\?.jpg -d .\assets\examples\driving\?.mp4, then see the output in d1 ./animations
  • run the web GUI
    python app.py and open 127.0.0.1:8890
    public: in d1 .\LivePortrait\src\config\argument_config.py set server_ip, server_port, and share to True

Wednesday, July 10, 2024

Perplexica - AI Search

Before

  • install git & lfs
  • install docker
  • install ollama and llm

Install

  • git clone https://github.com/ItzCrazyKns/Perplexica
  • go to the Perplexica directory
  • rename .\Perplexica\sample.config.toml to .\Perplexica\config.toml
  • powershell> docker compose up -d (--build)
  • open localhost:3000

Default Port Change [3001]

  • open config.toml
    • find [GENERAL] PORT = 3001 # Port to run the server on
    • change 3001 to your_port
  • open docker-compose.yaml
    • find ports: - 3001:3001
    • change 3001 to your_port on both sides
    • find the two lines containing NEXT_PUBLIC
    • change 3001 to your_port
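
A sketch of the resulting docker-compose.yaml edits, using 8090 as an example port. The service and variable names are assumptions and vary by Perplexica version, so check them against your copy.

```yaml
# docker-compose.yaml (excerpt; sketch only)
ports:
  - "8090:8090"                 # was "3001:3001"
environment:
  - NEXT_PUBLIC_API_URL=http://127.0.0.1:8090/api
  - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:8090
```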

Run on public IP

  • open config.toml
    • find [API_ENDPOINTS] SEARXNG = and OLLAMA =
    • change to your_ip
  • open docker-compose.yaml
    • find the two lines containing NEXT_PUBLIC_
    • change them to your_public_ip
  • set the Windows environment variable OLLAMA_HOST=0.0.0.0

UPDATE

  • stop perplexica on docker
  • go to the Perplexica directory
  • git pull origin master
  • check config.toml and docker-compose.yaml
  • powershell> docker compose up -d --build


Tuesday, June 25, 2024

Bark


  • Git
    git clone https://github.com/C0untFloyd/bark-gui
  • Anaconda
  • Install
    pip install .
    pip install -r requirements.txt
  • Run
    python webui.py
  • WEB
    Open http://127.0.0.1:7860
  • Fix for ERROR: Audio.__init__() got an unexpected keyword argument 'source'
    pip uninstall gradio
    pip install gradio==3.50.2