
Sunday, November 3, 2024

F5-TTS

Env

  • conda create -n f5tts python=3.10
  • conda activate f5tts

(apt install -y ffmpeg)

Python

  • pip uninstall torch torchvision torchaudio transformers
  • pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  • pip install transformers
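
A quick sanity check (minimal sketch, run inside the activated f5tts env) confirms the CUDA 11.8 build of PyTorch is installed and the GPU is visible:
    # should print a +cu118 build string and True
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"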

Git

  • git clone https://github.com/SWivid/F5-TTS.git

Run

  • cd F5-TTS
  • pip install -e .
  • f5-tts_infer-gradio --port 7865 --host 0.0.0.0 --share
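
Once the app is running it should answer on the port given above; a minimal reachability check from another shell (adjust host/port if you changed them):
    curl -I http://localhost:7865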

Thursday, October 31, 2024

Kotaemon AI/RAG

Install on docker

  • docker run -e GRADIO_SERVER_NAME=0.0.0.0 -e GRADIO_SERVER_PORT=7860 -p 7860:7860 --name kotaemon -d ghcr.io/cinnamon/kotaemon:main-full 
  • do not use -it or --rm; keep -d so the container stays running in the background
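
Because the container runs detached, verify it came up before opening the browser (plain Docker commands):
    # the container should be listed as Up
    docker ps --filter name=kotaemon
    # follow the startup logs until the Gradio server reports it is listening
    docker logs -f kotaemon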

Wednesday, October 30, 2024

iRedMail Old to New

Ubuntu 24.04

Install iRedMail : https://docs.iredmail.org/install.iredmail.on.debian.ubuntu.html

Check : Important things you MUST know after installation

Conf : https://docs.iredmail.org/file.locations.html

Addition: https://spiderd.io/

  • Roundcube webmail: https://your_server/mail/
  • SOGo Groupware: https://your_server/SOGo
  • Web admin panel (iRedAdmin): https://your_server/iredadmin/
Ubuntu: 
  • sudo ufw enable (and sudo systemctl enable ufw so the firewall is loaded at boot)
  • sudo ufw allow smtps
  • sudo ufw allow pop3s
  • sudo reboot
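To confirm the firewall rules survived the reboot, check the status (plain ufw query, nothing iRedMail-specific):
    sudo ufw status verbose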
Use old dkim : https://docs.iredmail.org/sign.dkim.signature.for.new.domain.html#use-existing-dkim-key-for-new-mail-domain
  • copy the DKIM key (.pem) from /var/lib/dkim/ on the old server to the same path on the new server (become root first: sudo su)
  • on the new server, edit /etc/amavis/conf.d/50-user and add the block below before the "# Disclaimer settings" section (values taken from the old server's /etc/amavisd/amavisd.conf)
    dkim_key('domain.com', 'dkim', '/var/lib/dkim/domain.com.pem');
    @dkim_signature_options_bysender_maps = ({
        # 'd' defaults to a domain of an author/sender address,
        # 's' defaults to whatever selector is offered by a matching key
        # Per-domain dkim key
        #"domain.com"  => { d => "domain.com", a => 'rsa-sha256', ttl => 10*24*3600 },
        # catch-all (one dkim key for all domains)
        '.' => {d => 'domain.com',
                   a => 'rsa-sha256',
                   c => 'relaxed/simple',
                   ttl => 30*24*3600 },
        });
  • sudo reboot
  • sudo amavisd-new testkeys (each domain should report "pass")
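The published DNS record can also be checked directly; a minimal sketch assuming the 'dkim' selector used in the config above (replace domain.com with the real domain):
    # should return the v=DKIM1 ... p=<public key> TXT record
    dig +short dkim._domainkey.domain.com TXT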
Fail2Ban
  • modify /etc/fail2ban/jail.local
  • modify /etc/postfix/helo_access.pcre
  • sudo su, then cd /opt/iredapd/tools
    # on the old server: list the existing whitelist entries
    python wblist_admin.py --list --whitelist
    # on the new server: list what is currently there
    sudo python3 wblist_admin.py --list --whitelist
    # on the new server: re-add each IP or domain taken from the old server's list
    sudo python3 wblist_admin.py --add --whitelist <ip or domain from oldsvr>
Create Cert

Let's Encrypt offers free SSL certificates.
https://docs.iredmail.org/letsencrypt.html
  • sudo apt install -y certbot
  • sudo certbot certonly --webroot --dry-run -w /var/www/html -d mail.domain.com
  • sudo certbot certonly --webroot -w /var/www/html -d mail.domain.com
Backup Cert
  • mv /etc/ssl/certs/iRedMail.crt /etc/ssl/certs/iRedMail.crt.bak
  • mv /etc/ssl/private/iRedMail.key /etc/ssl/private/iRedMail.key.bak
Use New Cert
  • ln -s /etc/letsencrypt/live/mail.domain.com/fullchain.pem /etc/ssl/certs/iRedMail.crt
  • ln -s /etc/letsencrypt/live/mail.domain.com/privkey.pem /etc/ssl/private/iRedMail.key
Restart Service
  • sudo systemctl restart postfix dovecot nginx
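
Let's Encrypt certificates expire after roughly 90 days; a hedged sketch of letting certbot's scheduled renewal restart the services automatically (the deploy-hook directory is standard certbot, the script name is arbitrary):
    printf '#!/bin/sh\nsystemctl restart postfix dovecot nginx\n' | sudo tee /etc/letsencrypt/renewal-hooks/deploy/restart-mail.sh
    sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/restart-mail.sh
    # verify the renewal path works end to end
    sudo certbot renew --dry-run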

Monday, September 16, 2024

use axolotl for training

Env

  • no internet
  • qlora_root c:\qlora
  • gguf_root c:\gguf

Dataset 

  • c:/qlora/output_dataset/instruction_dataset.parquet

PS:Axolotl

  • docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest
PS:Dataset
  • docker cp C:\qlora\output_dataset\instruction_dataset.parquet container_name:/workspace/axolotl/examples/instruction_dataset.parquet

PS:LLM

  • docker cp C:\Meta-Llama-3.1-8B container_name:/workspace/axolotl/examples/Meta-Llama-3.1-8B/

Axolotl:Qlora

  • open ./examples/llama-3/qlora.yml and update the paths to the base LLM and the parquet dataset (see the sketch below)
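
Key names vary between axolotl versions, so treat this as a rough sketch of the fields to change, using the paths from the docker cp steps above (the dataset type is an assumption and must match the prompt format of your parquet):
    base_model: ./examples/Meta-Llama-3.1-8B
    datasets:
      - path: examples/instruction_dataset.parquet
        type: alpaca
    output_dir: ./outputs/qlora-out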

Axolotl:Training

  • CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/llama-3/qlora.yml
  • accelerate launch -m axolotl.cli.train examples/llama-3/qlora.yml

Axolotl:Test (needs internet)

  • accelerate launch -m axolotl.cli.inference examples/llama-3/qlora.yml --lora_model_dir="./outputs/qlora-out" --gradio

Axolotl:Merged

  • python3 -m axolotl.cli.merge_lora examples/llama-3/qlora.yml --lora_model_dir="./outputs/qlora-out"

PS:Export

  • docker cp container_name:/workspace/axolotl/outputs/qlora-out/merged C:\merged

GGUF

  • convert the merged model to merged_f16.gguf with llama.cpp's convert_hf_to_gguf.py (same flow as the Huggingface-to-GGUF entry of August 27 below)

Ollama

  • ollama run merged
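
ollama run expects a registered model, so the exported GGUF has to be imported first; a minimal sketch assuming merged_f16.gguf sits in the current directory:
    # Modelfile pointing Ollama at the converted weights
    printf 'FROM ./merged_f16.gguf\n' > Modelfile
    ollama create merged -f Modelfile
    ollama run merged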


Friday, September 6, 2024

unsloth

Conda

  • conda install -c conda-forge xformers
  • pip install xformers
  • conda config --add channels conda-forge
  • conda update conda

Env

  • conda create --name unsloth python=3.11 pytorch-cuda=12.1 pytorch cudatoolkit -c pytorch -c nvidia -y
  • conda activate unsloth
  • pip install xformers

Install

  • pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
  • pip install --no-deps trl peft accelerate bitsandbytes
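
A quick import check (minimal sketch, run inside the unsloth env) confirms that the CUDA build of PyTorch, xformers and unsloth load together:
    python -c "import torch, xformers, unsloth; print(torch.__version__, torch.cuda.is_available())"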

Tuesday, August 27, 2024

Convert a Huggingface model to GGUF

config env

  • mkdir model && cd model
  • conda create -n gguf
  • conda activate gguf
  • pip install huggingface_hub

download model from HF: set model_id, local_dir

  • create download.py with the following content and save it
    from huggingface_hub import snapshot_download
    # download the full model snapshot into ./Qwen2-1.5B-Instruct (main branch, real files instead of symlinks)
    model_id = "Qwen/Qwen2-1.5B-Instruct"
    snapshot_download(repo_id=model_id, local_dir="Qwen2-1.5B-Instruct", local_dir_use_symlinks=False, revision="main")
  • python download.py
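
The same download can also be done without a script, using the CLI that ships with huggingface_hub (same model id and target directory as above):
    huggingface-cli download Qwen/Qwen2-1.5B-Instruct --local-dir Qwen2-1.5B-Instruct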

config llama.cpp

  • cd ..
  • git clone https://github.com/ggerganov/llama.cpp.git
  • cd llama.cpp
  • pip install -r requirements.txt

run python llama.cpp/convert_hf_to_gguf.py -h to see the supported options

convert

  • cd ..
    check that the model directory path used below matches the folder the model was downloaded into
  • #full f16 or f32
    python ./llama.cpp/convert_hf_to_gguf.py ./model/qwen2_1.5b_instruct  --outtype f32 --verbose --outfile ./Qwen2-1.5B-Instruct_f32.gguf
  • # convert_hf_to_gguf.py supports f32/f16/bf16/q8_0; k-quants (q2_k, q3_k_l/m/s, q4_0/q4_1/q4_k_m/q4_k_s, q5_0/q5_1/q5_k_m/q5_k_s, q6_k) are produced afterwards with llama-quantize, see the sketch below
    python ./llama.cpp/convert_hf_to_gguf.py ./model/qwen2_1.5b_instruct  --outtype q8_0 --verbose --outfile ./Qwen2-1.5B-Instruct_q8_0.gguf
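
The k-quant formats listed above are not produced by convert_hf_to_gguf.py itself; a hedged sketch of the usual second step with llama.cpp's quantize tool (requires building llama.cpp first; the binary location can differ by version):
    cd llama.cpp
    cmake -B build && cmake --build build --config Release
    cd ..
    # re-quantize the f32 (or f16) GGUF down to Q4_K_M
    ./llama.cpp/build/bin/llama-quantize ./Qwen2-1.5B-Instruct_f32.gguf ./Qwen2-1.5B-Instruct_q4_k_m.gguf Q4_K_M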