
2024-12-29 (Sunday)

postgresql

  • install PostgreSQL on Ubuntu 24.04
  • sudo apt install postgresql postgresql-contrib  (answer Y when prompted)
  • sudo systemctl enable postgresql.service
  • sudo systemctl start postgresql.service
  • test
    • sudo -i -u postgres
    • psql
    • \q
    • exit
  • create user
    • sudo -i -u postgres
    • createuser --interactive
    • > myuser  (role name)
    • > n, n, n  (not superuser; no createdb; no createrole)
  • createdb mydb  (still as the postgres user)
  • psql
    • sudo -u postgres psql
    • ALTER USER myuser WITH PASSWORD 'pass';
    • GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
    • \q
  • connect config
    • sudo nano /etc/postgresql/16/main/postgresql.conf  (16 = your PostgreSQL version)
      • listen_addresses = '*'
    • sudo nano /etc/postgresql/16/main/pg_hba.conf
      • under the IPv4 section add a line:
      • host    <database>    <user>    <client_ip>    md5  (or trust, which skips password auth)
  • firewall config
    • sudo ufw enable  (ufw has no "start" subcommand; enable activates it)
    • sudo ufw allow ssh
    • sudo ufw allow 5432/tcp
    • sudo ufw status numbered
    • sudo ufw logging on
    • debugging: cannot connect
    • sudo lsof -i :5432
    • test connect
    • psql -h out_host -U user -d database  
  • Note
    • $ sudo -u postgres psql
    • postgres=# CREATE DATABASE yourdbname;
    • postgres=# CREATE USER youruser WITH ENCRYPTED PASSWORD 'yourpass';
    • postgres=# GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;
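
As a worked example of the connect config above, a sketch of what the two edited files might contain. The PostgreSQL 16 paths, database/user names, and the LAN subnet are assumptions; substitute your own:

```ini
# /etc/postgresql/16/main/postgresql.conf
listen_addresses = '*'        # listen on all interfaces
port = 5432

# /etc/postgresql/16/main/pg_hba.conf
# TYPE  DATABASE  USER    ADDRESS           METHOD
host    mydb      myuser  192.168.1.0/24    md5
# 'trust' instead of 'md5' skips passwords entirely; only for throwaway local setups
```

After editing, restart with sudo systemctl restart postgresql and retest with psql -h server_ip -U myuser -d mydb.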

iTOP

  •  https://www.combodo.com/itop-193
  • docker run -d -p 8080:80 --name=my-itop -v my-itop-conf-volume:/var/www/html/conf -v my-itop-db-volume:/var/lib/mysql vbkunin/itop
  • docker exec my-itop chown www-data:www-data /var/www/html/conf
  • get the MySQL password needed during install:
    • Windows>
      • docker logs my-itop | Select-String -Pattern "Your MySQL user 'admin' has password:" -Context 1,7
    • Linux>
      • docker logs my-itop | grep -A7 -B1 "Your MySQL user 'admin' has password:"
  • open http://localhost:8080/, run the installer, and choose the ITIL configuration
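
The docker run command above can equivalently be kept as a small compose file (a sketch; image, ports, and volume names are taken from the command above):

```yaml
# docker-compose.yml equivalent of the docker run line above
services:
  my-itop:
    image: vbkunin/itop
    container_name: my-itop
    ports:
      - "8080:80"
    volumes:
      - my-itop-conf-volume:/var/www/html/conf
      - my-itop-db-volume:/var/lib/mysql
volumes:
  my-itop-conf-volume:
  my-itop-db-volume:
```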

Wazuh

  •  git clone https://github.com/wazuh/wazuh-docker.git -b v4.9.2
  • cd wazuh-docker/single-node/
    • modify docker-compose.yml
      • # change 443 to your port
      • # modify memory or CPU limits
  • docker-compose -f generate-indexer-certs.yml run --rm generator
  • docker-compose up -d
  • open https://ip:your_port (or https://ip if you kept 443) to check it is running
    • default login: admin:SecretPassword
    • change the default password

  • Agent
    • https://wazuh.com/install/
    • docker ps
    • docker exec -it single-node-wazuh.manager-1 /bin/bash  (or use the container ID from docker ps)
    • bash-5.2# /var/ossec/bin/manage_agents
      • ****************************************
      • * Wazuh v4.9.2 Agent manager.          *
      • * The following options are available: *
      • ****************************************
      •    (A)dd an agent (A).
      •    (E)xtract key for an agent (E).
      •    (L)ist already added agents (L).
      •    (R)emove an agent (R).
      •    (Q)uit.
      • Choose your action: A,E,L,R or Q:
    • 1. Press (A) to add a new agent: name (the computer's name), IP (any)
    • 2. Press (L) to find its ID (00x)
    • 3. Press (E) to extract the auth key
      • paste the auth key into the agent on Windows or Linux
      • on Windows, check that the Wazuh service is running in services.msc
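
Besides the key, the agent must know where the manager is. A sketch of the relevant ossec.conf fragment; the path is the usual Linux agent location and the address is a placeholder manager IP:

```xml
<!-- /var/ossec/etc/ossec.conf (fragment, agent side) -->
<ossec_config>
  <client>
    <server>
      <address>192.168.1.10</address>  <!-- manager IP: placeholder -->
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
  </client>
</ossec_config>
```

Restart the agent service after editing so it registers with the manager.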

install ubuntu

  • install Ubuntu 24.04.1
  • sudo apt-get install openssh-server
  • sudo systemctl start ssh
  • sudo systemctl enable ssh
  • sudo ufw enable
  • sudo ufw allow ssh
  • sudo reboot
  • sudo apt update && sudo apt upgrade -y

clamav for linux

  • sudo apt update && sudo apt upgrade
  • sudo apt install clamav clamav-daemon -y 
  • sudo systemctl stop clamav-freshclam.service
  • sudo freshclam
  • sudo systemctl start clamav-freshclam.service
  • #sudo clamscan -r /path/to/folder
  • sudo systemctl enable clamav-daemon
  • sudo systemctl enable clamav-freshclam.service
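
To run the commented-out clamscan on a schedule, one option is a cron entry; the scan path, schedule, and log location below are assumptions:

```ini
# /etc/cron.d/clamav-scan -- weekly scan of /home, Sundays at 03:00
0 3 * * 0  root  /usr/bin/clamscan -r --infected --log=/var/log/clamav/weekly-scan.log /home
```

--infected keeps the output to hits only; check the log afterwards.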

create ocs on docker

  •  git clone https://github.com/OCSInventory-NG/OCSInventory-Docker-Image
  • cd OCSInventory-Docker-Image
  • cd new_version
  • modify docker-compose.yml to configure the ports
  • PS>docker-compose up -d

2024-11-03 (Sunday)

F5-TTS

Env

  • conda create -n f5tts python=3.10
  • conda activate f5tts

(apt install -y ffmpeg)

Python

  • pip uninstall torch torchvision torchaudio transformers
  • pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  • pip install transformers

Git

  • git clone https://github.com/SWivid/F5-TTS.git

Run

  • cd F5-TTS
  • pip install -e .
  • f5-tts_infer-gradio --port 7865 --host 0.0.0.0 --share

2024-10-31 (Thursday)

Kotaemon AI/RAG

Install on docker

  • docker run -e GRADIO_SERVER_NAME=0.0.0.0 -e GRADIO_SERVER_PORT=7860 -p 7860:7860 --name kotaemon -d ghcr.io/cinnamon/kotaemon:main-full 
  • note: run detached (-d); do not add -it --rm

2024-10-30 (Wednesday)

iRedMail Old to New

Ubuntu 24.04

Install iRedMail : https://docs.iredmail.org/install.iredmail.on.debian.ubuntu.html

Check : Important things you MUST know after installation

Conf : https://docs.iredmail.org/file.locations.html

Addition: https://spiderd.io/

  • Roundcube webmail: https://your_server/mail/
  • SOGo Groupware: https://your_server/SOGo
  • Web admin panel (iRedAdmin): https://your_server/iredadmin/
Ubuntu: 
  • sudo systemctl enable ufw 
  • sudo ufw allow smtps pop3s
  • sudo reboot
Use old dkim : https://docs.iredmail.org/sign.dkim.signature.for.new.domain.html#use-existing-dkim-key-for-new-mail-domain
  • copy the pem from oldsvr (/var/lib/dkim/domain.pem) to newsvr (use sudo su)
  • on newsvr, modify /etc/amavis/conf.d/50-user before "# Disclaimer settings" (see oldsvr /etc/amavisd/amavisd.conf)
    dkim_key('domain.com', 'dkim', '/var/lib/dkim/domain.com.pem');
    @dkim_signature_options_bysender_maps = ({
        # 'd' defaults to a domain of an author/sender address,
        # 's' defaults to whatever selector is offered by a matching key
        # Per-domain dkim key
        #"domain.com"  => { d => "domain.com", a => 'rsa-sha256', ttl => 10*24*3600 },
        # catch-all (one dkim key for all domains)
        '.' => {d => 'domain.com',
                   a => 'rsa-sha256',
                   c => 'relaxed/simple',
                   ttl => 30*24*3600 },
        });
  • sudo reboot
  • sudo amavisd testkeys (=>pass)
Fail2Ban
  • modify /etc/fail2ban/jail.local
  • modify /etc/postfix/helo_access.pcre
  • sudo su, then cd /opt/iredapd/tools
    on oldsvr: python wblist_admin.py --list --whitelist   (list existing entries)
    on newsvr: sudo python3 wblist_admin.py --list --whitelist
    >> sudo python3 wblist_admin.py --add --whitelist <ip or domain taken from oldsvr>
Create Cert

Let's Encrypt offers FREE SSL certificate.
https://docs.iredmail.org/letsencrypt.html
  • sudo apt install -y certbot
  • sudo certbot certonly --webroot --dry-run -w /var/www/html -d mail.domain.com
  • sudo certbot certonly --webroot -w /var/www/html -d mail.domain.com
Backup Cert
  • mv /etc/ssl/certs/iRedMail.crt /etc/ssl/certs/iRedMail.crt.bak
  • mv /etc/ssl/private/iRedMail.key /etc/ssl/private/iRedMail.key.bak
Use New Cert
  • ln -s /etc/letsencrypt/live/mail.domain.com/fullchain.pem /etc/ssl/certs/iRedMail.crt
  • ln -s /etc/letsencrypt/live/mail.domain.com/privkey.pem /etc/ssl/private/iRedMail.key
Restart Service
  • sudo systemctl restart postfix dovecot nginx
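
Since iRedMail.crt/.key are symlinks into /etc/letsencrypt/live/, renewals update the cert files in place, but the services still need a reload. A certbot deploy hook can automate this; a sketch (the hook directory is certbot's standard location, service names as above):

```sh
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/reload-mail.sh  (chmod +x)
# certbot runs scripts here after every successful renewal
systemctl reload postfix dovecot nginx
```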

2024-09-16 (Monday)

use axolotl for training

Env

  • no internet
  • qlora_root c:\qlora
  • gguf_root c:\gguf

Dataset 

  • c:/qlora/output_dataset/instruction_dataset.parquet

PS:Axolotl

  • docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest
PS:Dataset
  • docker cp C:\qlora\output_dataset\instruction_dataset.parquet container_name:/workspace/axolotl/examples/instruction_dataset.parquet

PS:LLM

  • docker cp C:\Meta-Llama-3.1-8B container_name:/workspace/axolotl/examples/Meta-Llama-3.1-8B/

Axolotl:Qlora

  • open ./examples/llama-3/qlora.yml find and modify path of llm and parquet
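
A sketch of what the edited qlora.yml fields might look like, using the container paths from the docker cp steps above; the dataset `type` is an assumption to adjust for your parquet's schema:

```yaml
# examples/llama-3/qlora.yml (fragment) -- point at the copied files
base_model: /workspace/axolotl/examples/Meta-Llama-3.1-8B
datasets:
  - path: /workspace/axolotl/examples/instruction_dataset.parquet
    type: alpaca          # assumption: match your dataset's instruction format
output_dir: ./outputs/qlora-out
```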

Axolotl:Training

  • CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/llama-3/qlora.yml
  • accelerate launch -m axolotl.cli.train examples/llama-3/qlora.yml

Axolotl:Test(need internet)

  • accelerate launch -m axolotl.cli.inference examples/llama-3/qlora.yml --lora_model_dir="./outputs/qlora-out" --gradio

Axolotl:Merged

  • python3 -m axolotl.cli.merge_lora examples/llama-3/qlora.yml --lora_model_dir="./outputs/qlora-out"

PS:Export

  • docker cp container_name:/workspace/axolotl/outputs/qlora-out/merged C:\merged

GGUF

  • convert to merged_f16.gguf

Ollama

  • ollama run merged
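
ollama run merged assumes the model was registered first; a minimal Modelfile sketch for the exported GGUF (filename taken from the GGUF step above):

```ini
# Modelfile -- register the merged model with Ollama
FROM ./merged_f16.gguf
```

Then: ollama create merged -f ./Modelfile, followed by ollama run merged.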


2024-09-06 (Friday)

unsloth

Conda

  • conda install -c conda-forge xformers
  • pip install xformers
  • conda config --add channels conda-forge
  • conda update conda

Env

  • conda create --name unsloth python=3.11 pytorch-cuda=12.1 pytorch cudatoolkit -c pytorch -c nvidia -y
  • conda activate unsloth
  • pip install xformers

Install

  • pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
  • pip install --no-deps trl peft accelerate bitsandbytes

2024-08-27 (Tuesday)

Huggingface model convert gguf

config env

  • mkdir model && cd model
  • conda create -n gguf
  • conda activate gguf
  • pip install huggingface_hub

dl model from hf:get model_id,local_dir

  • create file:download.py and save
    from huggingface_hub import snapshot_download
    model_id="Qwen/Qwen2-1.5B-Instruct"
    snapshot_download(repo_id=model_id, local_dir="Qwen2-1.5B-Instruct",local_dir_use_symlinks=False, revision="main")
  • python download.py

config llama.cpp

  • cd ..
  • git clone https://github.com/ggerganov/llama.cpp.git
  • cd llama.cpp
  • pip install -r requirements.txt

run python llama.cpp/convert_hf_to_gguf.py -h

convert

  • cd ..
    check the path to the model directory (here ./model/Qwen2-1.5B-Instruct, matching download.py)
  • # full precision: f16 or f32
    python ./llama.cpp/convert_hf_to_gguf.py ./model/Qwen2-1.5B-Instruct --outtype f32 --verbose --outfile ./Qwen2-1.5B-Instruct_f32.gguf
  • # q8_0 is the only quantized type convert_hf_to_gguf.py writes directly
    python ./llama.cpp/convert_hf_to_gguf.py ./model/Qwen2-1.5B-Instruct --outtype q8_0 --verbose --outfile ./Qwen2-1.5B-Instruct_q8_0.gguf
  • # other types (q2_k, q3_k_l/m/s, q4_0/q4_1/q4_k_m/q4_k_s, q5_0/q5_1/q5_k_m/q5_k_s, q6_k) are produced from an f16/f32 gguf with the llama-quantize tool built in llama.cpp

2024-08-16 (Friday)

perplexica update

Check

  • docker volume ls   (look for perplexica_backend-dbstore)
  • docker run -it --rm -v perplexica_backend-dbstore:/data busybox ls -l /data

Backup

  • docker run --rm -v perplexica_backend-dbstore:/data -v C:/backup:/backup busybox tar cvf /backup/perplexica_backend-dbstore.tar /data

Remove

  • stop and remove the old container (docker stop <name> && docker rm <name>)

Update

  • go to the Perplexica directory and run git pull origin master
  • check config.toml and docker-compose.yaml

ReBuild

  • go to the Perplexica directory and run docker compose up -d --build

Recovery

  • docker run --rm -v perplexica_backend-dbstore:/data -v C:/backup:/backup busybox tar xvf /backup/perplexica_backend-dbstore.tar -C /data

Check

  • docker run -it --rm -v perplexica_backend-dbstore:/data busybox ls -l /data

anythingllm update (updated 2024.09.23)

Get-New

  • docker pull mintplexlabs/anythingllm

Remove

  • stop and remove the old container (docker stop <name> && docker rm <name>)

Run

  • docker run -d -p 3001:3001 --cap-add SYS_ADMIN -v C:/anythingllm/storage:/app/server/storage -v C:/anythingllm/env.txt:/app/server/.env -e STORAGE_DIR="/app/server/storage" --add-host=host.docker.internal:host-gateway --name anythingllm mintplexlabs/anythingllm
---------------------------------------
use GitHub to pull, build on Docker, and update
  • git clone https://github.com/Mintplex-Labs/anything-llm
  • copy ./docker to ./anythingllm
  • go into ./anything-llm/anythingllm and rename .env.example to .env
  • PS>docker compose up -d (--build)
change port number
  • .env
    SERVER_PORT=3001
  • docker-compose.yml
    ports:
    - "3001:3001"
change name
  • docker-compose.yml
    name: anythingllm
    container_name: anythingllm


Chroma update

 

Backup

  • docker volume ls
  • docker run --rm -v chroma_chroma-data:/data -v C:/backup:/backup busybox tar cvf /backup/chroma_chroma-data.tar /data

Remove

  • stop and remove the old container (docker stop <name> && docker rm <name>)

Get-New

  • get new version from https://github.com/chroma-core/chroma/releases
  • docker pull ghcr.io/chroma-core/chroma:0.5.4
  • docker run -p 8000:8000 ghcr.io/chroma-core/chroma:0.5.4

Recovery

  • docker run --rm -v chroma_chroma-data:/data -v C:/backup:/backup busybox tar xvf /backup/chroma_chroma-data.tar -C /data
  • docker run -it --rm -v chroma_chroma-data:/data busybox ls -l /data

2024-08-11 (Sunday)

SenseVoice

Get 

  • git clone https://github.com/FunAudioLLM/SenseVoice.git

Env

  • conda create -n sensevoice python=3.8
  • conda activate sensevoice
Install
  • cd SenseVoice
  • pip install -r requirements.txt

Run

  • python webui.py

Change port

  • demo.launch(server_port=your_portnumber,share=True,server_name='your_ip')

Chroma for AI(update 2024.09.23)

Get1

  •  docker pull ghcr.io/chroma-core/chroma:0.5.4

Get2

  • git clone https://github.com/chroma-core/chroma.git
  • docker compose up -d
Update1
  • watch version on https://github.com/chroma-core/chroma
  • docker pull ghcr.io/chroma-core/chroma:version
Update2
  • go to the chroma directory
  • PS>docker compose up -d --build
-----------------------------------------------

you can use the GitHub repo to build on Docker
  • git clone https://github.com/chroma-core/chroma
  • cd ./chroma and copy compose-env.windows to .env
  • rename docker-compose.yml to docker-compose-any.yml
  • copy docker-compose.server.example.yml to docker-compose.yml
  • PS>docker compose up -d (--build)

2024-08-06 (Tuesday)

LangFlow

Github

  • git clone https://github.com/langflow-ai/langflow.git

Docker

  • cd langflow
  • copy ./docker_example to ./langflow (or your_name_for_docker)
  • open langflow/langflow (or your_name_for_docker)/docker-compose.yml and change the port
  • PowerShell>
    cd langflow/langflow (or your_name_for_docker)
    docker compose up -d (--build)

Run

  • Open http://localhost:7860/store and Click API_Key
  • Login https://www.langflow.store/ and get API-Key

ChatTTS

It sounds better than Bark.

Github

  • git clone https://github.com/2noise/ChatTTS.git
  • cd ChatTTS

Env

  • conda create -n chattts python=3.8
  • conda activate chattts

Install

  • pip install --upgrade -r requirements.txt

Run

  • python examples/web/webui.py

Change port, public link

  • open ChatTTS\examples\web\webui.py
  • find
    demo.launch()
  • and add args
    server_name='0.0.0.0'
    server_port=7860
    share=True


2024-07-26 (Friday)

LivePortrait


  • ffmpeg (https://www.ffmpeg.org/)
    download > win > Windows builds from gyan.dev > release builds > ffmpeg-release-full.7z
    move bin/*.exe to windows/system32/
  • conda create -n liveportrait python=3.9 and activate it
  • d1: git clone https://github.com/KwaiVGI/LivePortrait
  • d2: git clone the weights from Hugging Face: https://huggingface.co/KwaiVGI/LivePortrait
    move both directories, insightface and liveportrait, into d1 ./pretrained_weights/
  • run from cmd
    first: python inference.py to create a sample in d1 ./animations
    custom: python inference.py -s .\LivePortrait\assets\examples\source\?.jpg -d .\LivePortrait\assets\examples\driving\?.mp4 (paths relative to d1), then see the output in d1 ./animations
  • run the web GUI
    python app.py and open 127.0.0.1:8890
    public: modify d1 .\LivePortrait\src\config\argument_config.py (server_ip, server_port, and set share to True)

2024-07-11 (Thursday)

Moodle

Get

  • git clone -b MOODLE_404_STABLE git://git.moodle.org/moodle.git into /pathto/moodle/
  • chown -R root /pathto/moodle/
  • chmod -R 0755 /pathto/moodle/
  • find /pathto/moodle/ -type f -exec chmod 0644 {} \;

MoodleData

  • mkdir /pathto/moodledata
  • chmod 0777 /pathto/moodledata
  • setfacl -R -m u:$(whoami):rwX /pathto/moodledata
  • setfacl -R -d -m u:$(whoami):rwX /pathto/moodledata

Security

  • go to /pathto/moodledata (e.g. /home/moodle/moodledata) and create a .htaccess file containing
    order deny,allow
    deny from all
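
Note the order/deny directives above are Apache 2.2 syntax; on Apache 2.4 (the default on current Ubuntu) the equivalent .htaccess is:

```ini
# .htaccess for Apache 2.4+
Require all denied
```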

Install

  • open localhost/moodle to install
  • the installer creates moodle/config.php
    chmod 0644 moodle/config.php


mysql upgrade 5 to 8

Backup

  • mkdir mysqlbk
  • mysqldump --all-databases --routines --events > ~/mysqlbk/mysql_backup.sql
  • cp /etc/my.cnf ~/mysqlbk/my.cnf
  • cp -R /etc/my.cnf.d/ ~/mysqlbk/etc_my.cnf.d/
  • cp -R /var/lib/mysql/ ~/mysqlbk/var_lib_mysql/
  • mysql -u root -p
    mysql> use mysql;
    mysql> SELECT User, Host FROM mysql.user;  # list users and back them up

Upgrade

  • rpm -e --nodeps mysql57-community-release
  • yum update -y
  • rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
  • yum install -y https://repo.mysql.com/mysql84-community-release-el7-1.noarch.rpm
  • yum makecache
  • systemctl stop mysqld.service
  • yum remove -y mysql-community-client-5.7* mysql-community-common-5.7* mysql-community-libs-5.7* mysql-community-server-5.7* 
  • yum install mysql-community-server
  • systemctl start mysqld.service

ERROR:Data Dictionary initialization failed.

  • rm -rf /var/lib/mysql  # destroys the data directory; restore from the backup afterwards
  • mysqld --initialize --console
  • chown -R mysql:mysql /var/lib/mysql
  • systemctl start mysqld

Config

  • grep 'temporary password' /var/log/mysqld.log
  • mysql_secure_installation


2024-07-10 (Wednesday)

Perplexica - AI Search

Before

  • install git & lfs
  • install docker
  • install ollama and llm

Install

  • git clone https://github.com/ItzCrazyKns/Perplexica
  • go to the Perplexica directory
  • rename .\Perplexica\sample.config.toml to .\Perplexica\config.toml
  • powershell> docker compose up -d (--build)
  • open localhost:3000

Default Port Change [3001]

  • open config.toml
    • find [GENERAL] PORT = 3001 # Port to run the server on
    • change 3001 to your_port
  • open docker-compose.yaml
    • find ports: - 3001:3001
    • change 3001 to your_port both
    • find the two lines containing NEXT_PUBLIC
    • change 3001 to your_port in both

Run on public IP

  • open config.toml
    • find [API_ENDPOINTS] SEARXNG = and OLLAMA =
    • change to your_ip
  • open docker-compose.yaml
    • find the two lines containing NEXT_PUBLIC_
    • change them to your_public_ip
  • config your win's env 
    • OLLAMA_HOST=0.0.0.0
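
A sketch of the resulting config.toml fragment; the IPs and ports are placeholders to replace with your own (11434 is Ollama's default port):

```toml
# config.toml (fragment) -- example values only
[GENERAL]
PORT = 3001                              # or your_port

[API_ENDPOINTS]
SEARXNG = "http://192.168.1.20:32768"    # placeholder ip:port
OLLAMA = "http://192.168.1.20:11434"     # placeholder ip; default Ollama port
```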

UPDATE

  • stop perplexica on docker
  • go to the Perplexica directory
  • git pull origin master
  • check config.toml and docker-compose.yaml
  • powershell> docker compose up -d --build


2024-06-25 (Tuesday)

Bark


  • Git
    git clone https://github.com/C0untFloyd/bark-gui
  • Anaconda
  • Install
    pip install .
    pip install -r requirements.txt
  • Run
    python webui.py
  • WEB
    Open http://127.0.0.1:7860
  • ERROR:bark Audio.__init__() got an unexpected keyword argument 'source'
    pip uninstall gradio
    pip install gradio==3.50.2


2024-04-16 (Tuesday)

AnythingLLM with Ollama

1.Install Ollama First

2.chroma install use docker

  • PS>> docker pull ghcr.io/chroma-core/chroma:0.4.24
  • PS>> docker run -p 8000:8000 ghcr.io/chroma-core/chroma:0.4.24

3.anythingllm install use docker

  • create directory C:/anythingllm/storage and file C:/anythingllm/env.txt
  • PS>> docker run -d -p 3001:3001 --cap-add SYS_ADMIN -v C:/anythingllm/storage:/app/server/storage -v C:/anythingllm/env.txt:/app/server/.env -e STORAGE_DIR="/app/server/storage" --add-host=host.docker.internal:host-gateway --name anythingllm mintplexlabs/anythingllm
  • Open localhost:3001, and config llm setting to use http://host.docker.internal:11434

2024/8/4

UPDATE
  • Stop anythingllm on docker
  • PS> docker pull mintplexlabs/anythingllm
  • Start anythingllm on docker

2024-03-28 (Thursday)

LLama-Factory

 For Windows


  • Download and unzip the release from https://github.com/hiyouga/LLaMA-Factory
  • Install rustup‑init.exe from https://rustup.rs/
  • Use Anaconda to create an env and activate it
    • Install Anaconda ( https://www.anaconda.com/download )
    • Install Python ( https://www.python.org/downloads/ )
    • Install Pytorch ( https://pytorch.org/get-started/locally/ , just use conda to install 12.x )
      • # Test CUDA with GPU
      • python
      • >> import torch
      • >> torch.cuda.is_available()
      • True
  • cd llama-factory directory
    • pip install -r requirements.txt
    • pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.39.1-py3-none-win_amd64.whl
    • python .\src\train_web.py
  • Open localhost:7860 and Train your LLM

2024-03-02 (Saturday)

Use Ollama with Chatbot-Ollama to use local LLM files.

Just go

  • Download and Install ollama from ollama.com or ollama.ai
    • Open http://127.0.0.1:11434/ to show [ollama is running]
    • Use PowerShell to use ollama
      command: ollama list  (lists all models)
      command: ollama rm modelname  (deletes a model)
  • Install a Docker manager and use it to install chatbot-ollama
    • Open http://127.0.0.1:3000/
  • Use already-downloaded model files
    • Create a Modelfile named your_modelname, and set the LLM path like below
      FROM c:\path\your_modelname.gguf
      SYSTEM ""
    • Open PowerShell, go to the directory containing the Modelfile, and run the commands below
      ollama create your_modelname -f ./your_modelname
      ollama run your_modelname  (>> /bye to exit)
  • Open http://127.0.0.1:3000/zh , chat with llm