Commit 9dd6cac4 authored by QYG2297248353

Synced apps from source repository via GitHub Actions

Parent: b1dab161
# Data persistence path [required]
DIFY_ROOT_PATH=/home/dify
# WebUI port [required]
PANEL_APP_PORT_HTTP=8080
# WebUI SSL port [required]
PANEL_APP_PORT_HTTPS=8443
# Database port [required]
EXPOSE_DB_PORT=5432
# Plugin debugging port [required]
EXPOSE_PLUGIN_DEBUGGING_PORT=5003
# Milvus API port [required]
MILVUS_STANDALONE_API_PORT=19530
# Milvus server port [required]
MILVUS_STANDALONE_SERVER_PORT=9091
# MyScale port [required]
MYSCALE_PORT=8123
# Elasticsearch port [required]
ELASTICSEARCH_PORT=9200
# Kibana port [required]
KIBANA_PORT=5601
# Launching new servers with SSL certificates
## Short description
Docker Compose certbot configuration with backward compatibility (it still works without the certbot container).
Use `docker compose --profile certbot up` to use this feature.
## The simplest way to launch new servers with SSL certificates
1. Get Let's Encrypt certificates.
Set the following `.env` values:
```properties
NGINX_SSL_CERT_FILENAME=fullchain.pem
NGINX_SSL_CERT_KEY_FILENAME=privkey.pem
NGINX_ENABLE_CERTBOT_CHALLENGE=true
CERTBOT_DOMAIN=your_domain.com
CERTBOT_EMAIL=example@your_domain.com
```
Execute the command:
```shell
docker network prune
docker compose --profile certbot up --force-recreate -d
```
Then, after the containers have launched:
```shell
docker compose exec -it certbot /bin/sh /update-cert.sh
```
2. Edit the `.env` file and run `docker compose --profile certbot up` again.
Additionally set this `.env` value:
```properties
NGINX_HTTPS_ENABLED=true
```
Execute the command:
```shell
docker compose --profile certbot up -d --no-deps --force-recreate nginx
```
Then you can access your server over HTTPS.
[https://your_domain.com](https://your_domain.com)
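You can verify that the certificate is actually being served, for example (replace the domain with your own):
```shell
curl -vI https://your_domain.com 2>&1 | grep -E 'issuer|subject|HTTP'
```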
## SSL certificates renewal
To renew the SSL certificates, execute the commands below:
```shell
docker compose exec -it certbot /bin/sh /update-cert.sh
docker compose exec nginx nginx -s reload
```
## Options for certbot
The `CERTBOT_OPTIONS` key can be helpful for testing, e.g.:
```properties
CERTBOT_OPTIONS=--dry-run
```
To apply changes to `CERTBOT_OPTIONS`, regenerate the certbot container before updating the certificates.
```shell
docker compose --profile certbot up -d --no-deps --force-recreate certbot
docker compose exec -it certbot /bin/sh /update-cert.sh
```
Then, reload the nginx container if necessary.
```shell
docker compose exec nginx nginx -s reload
```
## For legacy servers
To keep using the cert file directory `nginx/ssl` as before, simply launch the containers WITHOUT the `--profile certbot` option.
```shell
docker compose up -d
```
#!/bin/sh
set -e
printf '%s\n' "Docker entrypoint script is running"
printf '%s\n' "\nChecking specific environment variables:"
printf '%s\n' "CERTBOT_EMAIL: ${CERTBOT_EMAIL:-Not set}"
printf '%s\n' "CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-Not set}"
printf '%s\n' "CERTBOT_OPTIONS: ${CERTBOT_OPTIONS:-Not set}"
printf '%s\n' "\nChecking mounted directories:"
for dir in "/etc/letsencrypt" "/var/www/html" "/var/log/letsencrypt"; do
if [ -d "$dir" ]; then
printf '%s\n' "$dir exists. Contents:"
ls -la "$dir"
else
printf '%s\n' "$dir does not exist."
fi
done
printf '%s\n' "\nGenerating update-cert.sh from template"
sed -e "s|\${CERTBOT_EMAIL}|$CERTBOT_EMAIL|g" \
-e "s|\${CERTBOT_DOMAIN}|$CERTBOT_DOMAIN|g" \
-e "s|\${CERTBOT_OPTIONS}|$CERTBOT_OPTIONS|g" \
/update-cert.template.txt > /update-cert.sh
chmod +x /update-cert.sh
printf '%s\n' "\nExecuting command:" "$@"
exec "$@"
#!/bin/bash
set -e
DOMAIN="${CERTBOT_DOMAIN}"
EMAIL="${CERTBOT_EMAIL}"
OPTIONS="${CERTBOT_OPTIONS}"
CERT_NAME="${DOMAIN}" # use the domain name as the certificate name
# Check if the certificate already exists
if [ -f "/etc/letsencrypt/renewal/${CERT_NAME}.conf" ]; then
echo "Certificate exists. Attempting to renew..."
certbot renew --noninteractive --cert-name ${CERT_NAME} --webroot --webroot-path=/var/www/html --email ${EMAIL} --agree-tos --no-eff-email ${OPTIONS}
else
echo "Certificate does not exist. Obtaining a new certificate..."
certbot certonly --noninteractive --webroot --webroot-path=/var/www/html --email ${EMAIL} --agree-tos --no-eff-email -d ${DOMAIN} ${OPTIONS}
fi
echo "Certificate operation successful"
# Note: Nginx reload should be handled outside this container
echo "Please ensure to reload Nginx to apply any certificate changes."
FROM couchbase/server:latest AS stage_base
# FROM couchbase:latest AS stage_base
COPY init-cbserver.sh /opt/couchbase/init/
RUN chmod +x /opt/couchbase/init/init-cbserver.sh
#!/bin/bash
# used to start couchbase server - can't get around this as docker compose only allows you to start one command - so we have to start couchbase like the standard couchbase Dockerfile would
# https://github.com/couchbase/docker/blob/master/enterprise/couchbase-server/7.2.0/Dockerfile#L88
/entrypoint.sh couchbase-server &
# track if setup is complete so we don't try to setup again
FILE=/opt/couchbase/init/setupComplete.txt
if ! [ -f "$FILE" ]; then
# used to automatically create the cluster based on environment variables
# https://docs.couchbase.com/server/current/cli/cbcli/couchbase-cli-cluster-init.html
echo $COUCHBASE_ADMINISTRATOR_USERNAME ":" $COUCHBASE_ADMINISTRATOR_PASSWORD
sleep 20s
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1 \
--cluster-username $COUCHBASE_ADMINISTRATOR_USERNAME \
--cluster-password $COUCHBASE_ADMINISTRATOR_PASSWORD \
--services data,index,query,fts \
--cluster-ramsize $COUCHBASE_RAM_SIZE \
--cluster-index-ramsize $COUCHBASE_INDEX_RAM_SIZE \
--cluster-eventing-ramsize $COUCHBASE_EVENTING_RAM_SIZE \
--cluster-fts-ramsize $COUCHBASE_FTS_RAM_SIZE \
--index-storage-setting default
sleep 2s
# used to auto create the bucket based on environment variables
# https://docs.couchbase.com/server/current/cli/cbcli/couchbase-cli-bucket-create.html
/opt/couchbase/bin/couchbase-cli bucket-create -c localhost:8091 \
--username $COUCHBASE_ADMINISTRATOR_USERNAME \
--password $COUCHBASE_ADMINISTRATOR_PASSWORD \
--bucket $COUCHBASE_BUCKET \
--bucket-ramsize $COUCHBASE_BUCKET_RAMSIZE \
--bucket-type couchbase
# create file so we know that the cluster is setup and don't run the setup again
touch $FILE
fi
# docker compose will stop the container from running unless we do this
# known issue and workaround
tail -f /dev/null
#!/bin/bash
set -e
if [ "${VECTOR_STORE}" = "elasticsearch-ja" ]; then
# Check if the ICU tokenizer plugin is installed
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin list | grep -q analysis-icu; then
printf '%s\n' "Installing the ICU tokenizer plugin"
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-icu; then
printf '%s\n' "Failed to install the ICU tokenizer plugin"
exit 1
fi
fi
# Check if the Japanese language analyzer plugin is installed
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin list | grep -q analysis-kuromoji; then
printf '%s\n' "Installing the Japanese language analyzer plugin"
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-kuromoji; then
printf '%s\n' "Failed to install the Japanese language analyzer plugin"
exit 1
fi
fi
fi
# Run the original entrypoint script
exec /bin/tini -- /usr/local/bin/docker-entrypoint.sh
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
server {
listen ${NGINX_PORT};
server_name ${NGINX_SERVER_NAME};
location /console/api {
proxy_pass http://api:5001;
include proxy.conf;
}
location /api {
proxy_pass http://api:5001;
include proxy.conf;
}
location /v1 {
proxy_pass http://api:5001;
include proxy.conf;
}
location /files {
proxy_pass http://api:5001;
include proxy.conf;
}
location /explore {
proxy_pass http://web:3000;
include proxy.conf;
}
location /e/ {
proxy_pass http://plugin_daemon:5002;
proxy_set_header Dify-Hook-Url $scheme://$host$request_uri;
include proxy.conf;
}
location / {
proxy_pass http://web:3000;
include proxy.conf;
}
# placeholder for acme challenge location
${ACME_CHALLENGE_LOCATION}
# placeholder for https config defined in https.conf.template
${HTTPS_CONFIG}
}
#!/bin/bash
if [ "${NGINX_HTTPS_ENABLED}" = "true" ]; then
# Check if the certificate and key files for the specified domain exist
if [ -n "${CERTBOT_DOMAIN}" ] && \
[ -f "/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_FILENAME}" ] && \
[ -f "/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_KEY_FILENAME}" ]; then
SSL_CERTIFICATE_PATH="/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_FILENAME}"
SSL_CERTIFICATE_KEY_PATH="/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_KEY_FILENAME}"
else
SSL_CERTIFICATE_PATH="/etc/ssl/${NGINX_SSL_CERT_FILENAME}"
SSL_CERTIFICATE_KEY_PATH="/etc/ssl/${NGINX_SSL_CERT_KEY_FILENAME}"
fi
export SSL_CERTIFICATE_PATH
export SSL_CERTIFICATE_KEY_PATH
# set the HTTPS_CONFIG environment variable to the content of the https.conf.template
HTTPS_CONFIG=$(envsubst < /etc/nginx/https.conf.template)
export HTTPS_CONFIG
# Substitute the HTTPS_CONFIG in the default.conf.template with content from https.conf.template
envsubst '${HTTPS_CONFIG}' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
fi
if [ "${NGINX_ENABLE_CERTBOT_CHALLENGE}" = "true" ]; then
ACME_CHALLENGE_LOCATION='location /.well-known/acme-challenge/ { root /var/www/html; }'
else
ACME_CHALLENGE_LOCATION=''
fi
export ACME_CHALLENGE_LOCATION
# Ensure HTTPS_CONFIG is always exported (empty when HTTPS is disabled) so the
# substitution below replaces the ${HTTPS_CONFIG} placeholder instead of leaving it behind
export HTTPS_CONFIG="${HTTPS_CONFIG:-}"
env_vars=$(printenv | cut -d= -f1 | sed 's/^/$/g' | paste -sd, -)
envsubst "$env_vars" < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
envsubst "$env_vars" < /etc/nginx/proxy.conf.template > /etc/nginx/proxy.conf
# Restrict substitution to exported variables so nginx runtime variables
# (e.g. $scheme, $host, $request_uri in the templates) are left intact
envsubst "$env_vars" < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
# Start nginx in the foreground
exec nginx -g 'daemon off;'
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
listen ${NGINX_SSL_PORT} ssl;
ssl_certificate ${SSL_CERTIFICATE_PATH};
ssl_certificate_key ${SSL_CERTIFICATE_KEY_PATH};
ssl_protocols ${NGINX_SSL_PROTOCOLS};
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
user nginx;
worker_processes ${NGINX_WORKER_PROCESSES};
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout ${NGINX_KEEPALIVE_TIMEOUT};
#gzip on;
client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
include /etc/nginx/conf.d/*.conf;
}
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
proxy_read_timeout ${NGINX_PROXY_READ_TIMEOUT};
proxy_send_timeout ${NGINX_PROXY_SEND_TIMEOUT};
#!/bin/bash
# Modified based on Squid OCI image entrypoint
# This entrypoint aims to forward the squid logs to stdout to assist users of
# common container related tooling (e.g., kubernetes, docker-compose, etc) to
# access the service logs.
# Moreover, it invokes the squid binary, leaving all the desired parameters to
# be provided by the "command" passed to the spawned container. If no command
# is provided by the user, the default behavior (as per the CMD statement in
# the Dockerfile) will be to use Ubuntu's default configuration [1] and run
# squid with the "-NYC" options to mimic the behavior of the Ubuntu provided
# systemd unit.
# [1] The default configuration is changed in the Dockerfile to allow local
# network connections. See the Dockerfile for further information.
echo "[ENTRYPOINT] re-create snakeoil self-signed certificate removed in the build process"
if [ ! -f /etc/ssl/private/ssl-cert-snakeoil.key ]; then
/usr/sbin/make-ssl-cert generate-default-snakeoil --force-overwrite > /dev/null 2>&1
fi
tail -F /var/log/squid/access.log 2>/dev/null &
tail -F /var/log/squid/error.log 2>/dev/null &
tail -F /var/log/squid/store.log 2>/dev/null &
tail -F /var/log/squid/cache.log 2>/dev/null &
# Replace environment variables in the template and output to the squid.conf
echo "[ENTRYPOINT] replacing environment variables in the template"
awk '{
while(match($0, /\${[A-Za-z_][A-Za-z_0-9]*}/)) {
var = substr($0, RSTART+2, RLENGTH-3)
val = ENVIRON[var]
$0 = substr($0, 1, RSTART-1) val substr($0, RSTART+RLENGTH)
}
print
}' /etc/squid/squid.conf.template > /etc/squid/squid.conf
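# "-Nz" initializes any missing cache (swap) directories in the foreground and exits before the real start below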
/usr/sbin/squid -Nz
echo "[ENTRYPOINT] starting squid"
/usr/sbin/squid -f /etc/squid/squid.conf -NYC 1
acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
# acl SSL_ports port 1025-65535 # Enable the configuration to resolve this issue: https://github.com/langgenius/dify/issues/12792
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
include /etc/squid/conf.d/*.conf
http_access deny all
################################## Proxy Server ################################
http_port ${HTTP_PORT}
coredump_dir ${COREDUMP_DIR}
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern . 0 20% 4320
# cache_dir ufs /var/spool/squid 100 16 256
# upstream proxy, set to your own upstream proxy IP to avoid SSRF attacks
# cache_peer 172.1.1.1 parent 3128 0 no-query no-digest no-netdb-exchange default
################################## Reverse Proxy To Sandbox ################################
http_port ${REVERSE_PROXY_PORT} accel vhost
cache_peer ${SANDBOX_HOST} parent ${SANDBOX_PORT} 0 no-query originserver
acl src_all src all
http_access allow src_all
#!/usr/bin/env bash
DB_INITIALIZED="/opt/oracle/oradata/dbinit"
#[ -f ${DB_INITIALIZED} ] && exit
#touch ${DB_INITIALIZED}
if [ -f ${DB_INITIALIZED} ]; then
echo 'Database already initialized; skipping first-time setup.'
exit
else
echo 'First-time startup: initializing the database.'
"$ORACLE_HOME"/bin/sqlplus -s "/ as sysdba" @"/opt/oracle/scripts/startup/init_user.script";
touch ${DB_INITIALIZED}
fi
show pdbs;
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
alter session set container= freepdb1;
create user dify identified by dify DEFAULT TABLESPACE users quota unlimited on users;
grant DB_DEVELOPER_ROLE to dify;
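-- Oracle Text preference: tokenize Chinese text with the V-gram lexer (for full-text indexes)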
BEGIN
CTX_DDL.CREATE_PREFERENCE('my_chinese_vgram_lexer','CHINESE_VGRAM_LEXER');
END;
/
# PD Configuration File reference:
# https://docs.pingcap.com/tidb/stable/pd-configuration-file#pd-configuration-file
[replication]
max-replicas = 1
# TiFlash tiflash-learner.toml Configuration File reference:
# https://docs.pingcap.com/tidb/stable/tiflash-configuration#configure-the-tiflash-learnertoml-file
log-file = "/logs/tiflash_tikv.log"
[server]
engine-addr = "tiflash:4030"
addr = "0.0.0.0:20280"
advertise-addr = "tiflash:20280"
status-addr = "tiflash:20292"
[storage]
data-dir = "/data/flash"
# TiFlash tiflash.toml Configuration File reference:
# https://docs.pingcap.com/tidb/stable/tiflash-configuration#configure-the-tiflashtoml-file
listen_host = "0.0.0.0"
path = "/data"
[flash]
tidb_status_addr = "tidb:10080"
service_addr = "tiflash:4030"
[flash.proxy]
config = "/tiflash-learner.toml"
[logger]
errorlog = "/logs/tiflash_error.log"
log = "/logs/tiflash.log"
[raft]
pd_addr = "pd0:2379"
services:
pd0:
image: pingcap/pd:v8.5.1
# ports:
# - "2379"
volumes:
- ./config/pd.toml:/pd.toml:ro
- ./volumes/data:/data
- ./volumes/logs:/logs
command:
- --name=pd0
- --client-urls=http://0.0.0.0:2379
- --peer-urls=http://0.0.0.0:2380
- --advertise-client-urls=http://pd0:2379
- --advertise-peer-urls=http://pd0:2380
- --initial-cluster=pd0=http://pd0:2380
- --data-dir=/data/pd
- --config=/pd.toml
- --log-file=/logs/pd.log
restart: on-failure
tikv:
image: pingcap/tikv:v8.5.1
volumes:
- ./volumes/data:/data
- ./volumes/logs:/logs
command:
- --addr=0.0.0.0:20160
- --advertise-addr=tikv:20160
- --status-addr=tikv:20180
- --data-dir=/data/tikv
- --pd=pd0:2379
- --log-file=/logs/tikv.log
depends_on:
- "pd0"
restart: on-failure
tidb:
image: pingcap/tidb:v8.5.1
# ports:
# - "4000:4000"
volumes:
- ./volumes/logs:/logs
command:
- --advertise-address=tidb
- --store=tikv
- --path=pd0:2379
- --log-file=/logs/tidb.log
depends_on:
- "tikv"
restart: on-failure
tiflash:
image: pingcap/tiflash:v8.5.1
volumes:
- ./config/tiflash.toml:/tiflash.toml:ro
- ./config/tiflash-learner.toml:/tiflash-learner.toml:ro
- ./volumes/data:/data
- ./volumes/logs:/logs
command:
- --config=/tiflash.toml
depends_on:
- "tikv"
- "tidb"
restart: on-failure
<clickhouse>
<users>
<default>
<password></password>
<networks>
<ip>::1</ip> <!-- change to ::/0 to allow access from all addresses -->
<ip>127.0.0.1</ip>
<ip>10.0.0.0/8</ip>
<ip>172.16.0.0/12</ip>
<ip>192.168.0.0/16</ip>
</networks>
<profile>default</profile>
<quota>default</quota>
<access_management>1</access_management>
</default>
</users>
</clickhouse>
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;
---
# Copyright OpenSearch Contributors
# SPDX-License-Identifier: Apache-2.0
# Description:
# Default configuration for OpenSearch Dashboards
# OpenSearch Dashboards is served by a back end server. This setting specifies the port to use.
# server.port: 5601
# Specifies the address to which the OpenSearch Dashboards server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# server.host: "localhost"
# Enables you to specify a path to mount OpenSearch Dashboards at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell OpenSearch Dashboards if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
# server.basePath: ""
# Specifies whether OpenSearch Dashboards should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576
# The OpenSearch Dashboards server's name. This is used for display purposes.
# server.name: "your-hostname"
# The URLs of the OpenSearch instances to use for all your queries.
# opensearch.hosts: ["http://localhost:9200"]
# OpenSearch Dashboards uses an index in OpenSearch to store saved searches, visualizations and
# dashboards. OpenSearch Dashboards creates a new index if the index doesn't already exist.
# opensearchDashboards.index: ".opensearch_dashboards"
# The default application to load.
# opensearchDashboards.defaultAppId: "home"
# Setting for an optimized healthcheck that only uses the local OpenSearch node to do Dashboards healthcheck.
# This setting should be used for large clusters or for clusters with ingest heavy nodes.
# It allows Dashboards to only healthcheck using the local OpenSearch node rather than fan out requests across all nodes.
#
# It requires the user to create an OpenSearch node attribute with the same name as the value used in the setting
# This node attribute should assign all nodes of the same cluster an integer value that increments with each new cluster that is spun up
# e.g. in opensearch.yml file you would set the value to a setting using node.attr.cluster_id:
# Should only be enabled if there is a corresponding node attribute created in your OpenSearch config that matches the value here
# opensearch.optimizedHealthcheckId: "cluster_id"
# If your OpenSearch is protected with basic authentication, these settings provide
# the username and password that the OpenSearch Dashboards server uses to perform maintenance on the OpenSearch Dashboards
# index at startup. Your OpenSearch Dashboards users still need to authenticate with OpenSearch, which
# is proxied through the OpenSearch Dashboards server.
# opensearch.username: "opensearch_dashboards_system"
# opensearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the OpenSearch Dashboards server to the browser.
# server.ssl.enabled: false
# server.ssl.certificate: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of OpenSearch Dashboards to OpenSearch and are required when
# xpack.security.http.ssl.client_authentication in OpenSearch is set to required.
# opensearch.ssl.certificate: /path/to/your/client.crt
# opensearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your OpenSearch instance.
# opensearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
# opensearch.ssl.verificationMode: full
# Time in milliseconds to wait for OpenSearch to respond to pings. Defaults to the value of
# the opensearch.requestTimeout setting.
# opensearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or OpenSearch. This value
# must be a positive integer.
# opensearch.requestTimeout: 30000
# List of OpenSearch Dashboards client-side headers to send to OpenSearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# opensearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to OpenSearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the opensearch.requestHeadersWhitelist configuration.
# opensearch.customHeaders: {}
# Time in milliseconds for OpenSearch to wait for responses from shards. Set to 0 to disable.
# opensearch.shardTimeout: 30000
# Logs queries sent to OpenSearch. Requires logging.verbose set to true.
# opensearch.logQueries: false
# Specifies the path where OpenSearch Dashboards creates the process ID file.
# pid.file: /var/run/opensearchDashboards.pid
# Enables you to specify a file where OpenSearch Dashboards stores log output.
# logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
# logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
# ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
# i18n.locale: "en"
# Set the allowlist to check input graphite Url. Allowlist is the default check list.
# vis_type_timeline.graphiteAllowedUrls: ['https://www.hostedgraphite.com/UID/ACCESS_KEY/graphite']
# Set the blocklist to check input graphite Url. Blocklist is an IP list.
# Below is an example for reference
# vis_type_timeline.graphiteBlockedIPs: [
# //Loopback
# '127.0.0.0/8',
# '::1/128',
# //Link-local Address for IPv6
# 'fe80::/10',
# //Private IP address for IPv4
# '10.0.0.0/8',
# '172.16.0.0/12',
# '192.168.0.0/16',
# //Unique local address (ULA)
# 'fc00::/7',
# //Reserved IP address
# '0.0.0.0/8',
# '100.64.0.0/10',
# '192.0.0.0/24',
# '192.0.2.0/24',
# '198.18.0.0/15',
# '192.88.99.0/24',
# '198.51.100.0/24',
# '203.0.113.0/24',
# '224.0.0.0/4',
# '240.0.0.0/4',
# '255.255.255.255/32',
# '::/128',
# '2001:db8::/32',
# 'ff00::/8',
# ]
# vis_type_timeline.graphiteBlockedIPs: []
# opensearchDashboards.branding:
# logo:
# defaultUrl: ""
# darkModeUrl: ""
# mark:
# defaultUrl: ""
# darkModeUrl: ""
# loadingLogo:
# defaultUrl: ""
# darkModeUrl: ""
# faviconUrl: ""
# applicationTitle: ""
# Set the value of this setting to true to capture region blocked warnings and errors
# for your map rendering services.
# map.showRegionBlockedWarning: false
# Set the value of this setting to false to suppress search usage telemetry
# for reducing the load of OpenSearch cluster.
# data.search.usageTelemetry.enabled: false
# 2.4 renames 'wizard.enabled: false' to 'vis_builder.enabled: false'
# Set the value of this setting to false to disable VisBuilder
# functionality in Visualization.
# vis_builder.enabled: false
# 2.4 New Experimental Feature
# Set the value of this setting to true to enable the experimental multiple data source
# support feature. Use with caution.
# data_source.enabled: false
# Set the value of these settings to customize crypto materials to encryption saved credentials
# in data sources.
# data_source.encryption.wrappingKeyName: 'changeme'
# data_source.encryption.wrappingKeyNamespace: 'changeme'
# data_source.encryption.wrappingKey: [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
# 2.6 New ML Commons Dashboards Feature
# Set the value of this setting to true to enable the ml commons dashboards
# ml_commons_dashboards.enabled: false
# 2.12 New experimental Assistant Dashboards Feature
# Set the value of this setting to true to enable the assistant dashboards
# assistant.chat.enabled: false
# 2.13 New Query Assistant Feature
# Set the value of this setting to false to disable the query assistant
# observability.query_assist.enabled: false
# 2.14 Enable Ui Metric Collectors in Usage Collector
# Set the value of this setting to true to enable UI Metric collections
# usageCollection.uiMetric.enabled: false
opensearch.hosts: [https://localhost:9200]
opensearch.ssl.verificationMode: none
opensearch.username: admin
opensearch.password: 'Qazwsxedc!@#123'
opensearch.requestHeadersWhitelist: [authorization, securitytenant]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: [Private, Global]
opensearch_security.readonly_mode.roles: [kibana_read_only]
# Use this setting if you are running opensearch-dashboards without https
opensearch_security.cookie.secure: false
server.host: '0.0.0.0'
app:
port: 8194
debug: True
key: dify-sandbox
max_workers: 4
max_requests: 50
worker_timeout: 5
python_path: /usr/local/bin/python3
enable_network: True # please make sure there is no network risk in your environment
allowed_syscalls: # please leave it empty if you have no idea how seccomp works
proxy:
socks5: ''
http: ''
https: ''
app:
port: 8194
debug: True
key: dify-sandbox
max_workers: 4
max_requests: 50
worker_timeout: 5
python_path: /usr/local/bin/python3
python_lib_path:
- /usr/local/lib/python3.10
- /usr/lib/python3.10
- /usr/lib/python3
- /usr/lib/x86_64-linux-gnu
- /etc/ssl/certs/ca-certificates.crt
- /etc/nsswitch.conf
- /etc/hosts
- /etc/resolv.conf
- /run/systemd/resolve/stub-resolv.conf
- /run/resolvconf/resolv.conf
- /etc/localtime
- /usr/share/zoneinfo
- /etc/timezone
# add more paths if needed
python_pip_mirror_url: https://pypi.tuna.tsinghua.edu.cn/simple
nodejs_path: /usr/local/bin/node
enable_network: True
allowed_syscalls:
- 1
- 2
- 3
# add all the syscalls which you require
proxy:
socks5: ''
http: ''
https: ''
additionalProperties:
formFields:
- default: "/home/dify"
edit: true
envKey: DIFY_ROOT_PATH
labelZh: 数据持久化路径
labelEn: Data persistence path
required: true
type: text
- default: 8080
edit: true
envKey: PANEL_APP_PORT_HTTP
labelZh: WebUI 端口
labelEn: WebUI port
required: true
rule: paramPort
type: number
- default: 8443
edit: true
envKey: PANEL_APP_PORT_HTTPS
labelZh: WebUI SSL 端口
labelEn: WebUI SSL port
required: true
rule: paramPort
type: number
- default: 5432
edit: true
envKey: EXPOSE_DB_PORT
labelZh: 数据库端口
labelEn: Database port
required: true
rule: paramPort
type: number
- default: 5003
edit: true
envKey: EXPOSE_PLUGIN_DEBUGGING_PORT
labelZh: 插件调试端口
labelEn: Plugin debugging port
required: true
rule: paramPort
type: number
- default: 19530
disabled: true
edit: true
envKey: MILVUS_STANDALONE_API_PORT
labelZh: Milvus 接口端口
labelEn: Milvus API port
required: true
rule: paramPort
type: number
- default: 9091
disabled: true
envKey: MILVUS_STANDALONE_SERVER_PORT
labelZh: Milvus 服务端口
labelEn: Milvus server port
required: true
rule: paramPort
type: number
- default: 8123
edit: true
envKey: MYSCALE_PORT
labelZh: MyScale 端口
labelEn: MyScale port
required: true
rule: paramPort
type: number
- default: 9200
edit: true
envKey: ELASTICSEARCH_PORT
labelZh: Elasticsearch 端口
labelEn: Elasticsearch port
required: true
rule: paramPort
type: number
- default: 5601
edit: true
envKey: KIBANA_PORT
labelZh: Kibana 端口
labelEn: Kibana port
required: true
rule: paramPort
type: number
# Copyright © 2024 XinJiang Ms Studio
ENV_FILE=.env
# Copyright © 2024 XinJiang Ms Studio
TZ=Asia/Shanghai
#!/bin/bash
if [ -f .env ]; then
source .env
# setup-1 add default values
CURRENT_DIR=$(pwd)
# remove stale entries so repeated runs do not append duplicates
sed -i '/^ENV_FILE=/d' .env
sed -i '/^GLOBAL_ENV_FILE=/d' .env
sed -i '/^APP_ENV_FILE=/d' .env
echo "ENV_FILE=${CURRENT_DIR}/.env" >> .env
echo "GLOBAL_ENV_FILE=${CURRENT_DIR}/envs/global.env" >> .env
echo "APP_ENV_FILE=${CURRENT_DIR}/envs/dify.env" >> .env
# setup-2 update dir permissions
mkdir -p "$DIFY_ROOT_PATH"
cp -r conf/. "$DIFY_ROOT_PATH/"
# setup-3 sync environment variables
env_source="envs/dify.env"
if [ -f "$env_source" ]; then
while IFS='=' read -r key value; do
if [[ -z "$key" || "$key" =~ ^# ]]; then
continue
fi
if ! grep -q "^$key=" .env; then
echo "$key=$value" >> .env
fi
done < "$env_source"
fi
echo "Check Finish."
else
echo "Error: .env file not found."
fi
#!/bin/bash
if [ -f .env ]; then
source .env
echo "Check Finish."
else
echo "Error: .env file not found."
fi
#!/bin/bash
if [ -f .env ]; then
source .env
# setup-1 add default values
CURRENT_DIR=$(pwd)
# remove stale entries so repeated runs do not append duplicates
sed -i '/^ENV_FILE=/d' .env
sed -i '/^GLOBAL_ENV_FILE=/d' .env
sed -i '/^APP_ENV_FILE=/d' .env
echo "ENV_FILE=${CURRENT_DIR}/.env" >> .env
echo "GLOBAL_ENV_FILE=${CURRENT_DIR}/envs/global.env" >> .env
echo "APP_ENV_FILE=${CURRENT_DIR}/envs/dify.env" >> .env
# setup-2 update dir permissions
mkdir -p "$DIFY_ROOT_PATH"
if [ -d "conf" ]; then
find conf -type f | while read -r file; do
dest="$DIFY_ROOT_PATH/${file#conf/}"
if [ ! -e "$dest" ]; then
mkdir -p "$(dirname "$dest")"
cp "$file" "$dest"
fi
done
echo "Conf files copied to $DIFY_ROOT_PATH."
else
echo "Warning: conf directory not found."
fi
# setup-3 sync environment variables
env_source="envs/dify.env"
if [ -f "$env_source" ]; then
while IFS='=' read -r key value; do
if [[ -z "$key" || "$key" =~ ^# ]]; then
continue
fi
if ! grep -q "^$key=" .env; then
echo "$key=$value" >> .env
fi
done < "$env_source"
fi
echo "Check Finish."
else
echo "Error: .env file not found."
fi
# Dify
Dify is an open-source LLM application development platform. Its intuitive interface combines AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more, letting you move quickly from prototype to production.
![Dify](https://file.lifebus.top/imgs/dify_cover.png)
![](https://img.shields.io/badge/%E6%96%B0%E7%96%86%E8%90%8C%E6%A3%AE%E8%BD%AF%E4%BB%B6%E5%BC%80%E5%8F%91%E5%B7%A5%E4%BD%9C%E5%AE%A4-%E6%8F%90%E4%BE%9B%E6%8A%80%E6%9C%AF%E6%94%AF%E6%8C%81-blue)
## Introduction
### Workflow
Build and test powerful AI workflows on a visual canvas, leveraging all of the features below and more.
### Comprehensive model support
Seamless integration with hundreds of proprietary and open-source LLMs across dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible model.
### Prompt IDE
An intuitive interface for crafting prompts, comparing model performance, and adding extra capabilities such as text-to-speech to chat-based applications.
### RAG Pipeline
Extensive RAG capabilities covering everything from document ingestion to retrieval, with out-of-the-box support for extracting text from PDFs, PPTs, and other common document formats.
### Agents
You can define agents based on LLM function calling or ReAct, and add prebuilt or custom tools for them. Dify provides more than 50 built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
### LLMOps
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
### Backend as a Service
All of Dify's features come with corresponding APIs, so you can easily integrate Dify into your own business logic.
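For example, here is a minimal sketch of calling an app's service API with `curl`; the host, the `app-...` API key, and the request values are placeholders for your own deployment (`/v1` is the service API prefix proxied by the bundled nginx):

```bash
# Hypothetical example: send a chat message to a Dify chat app.
# Replace the host and API key with values from your deployment.
curl -X POST 'http://your_server_ip/v1/chat-messages' \
  -H 'Authorization: Bearer app-your-api-key' \
  -H 'Content-Type: application/json' \
  -d '{"inputs": {}, "query": "Hello", "response_mode": "blocking", "user": "demo-user"}'
```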
## Feature comparison
<table style="width: 100%;">
<tr>
<th align="center">Feature</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistant API</th>
</tr>
<tr>
<td align="center">Programming approach</td>
<td align="center">API + app-oriented</td>
<td align="center">Python code</td>
<td align="center">App-oriented</td>
<td align="center">API-oriented</td>
</tr>
<tr>
<td align="center">Supported LLMs</td>
<td align="center">Rich variety</td>
<td align="center">Rich variety</td>
<td align="center">Rich variety</td>
<td align="center">OpenAI only</td>
</tr>
<tr>
<td align="center">RAG engine</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">Agent</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center">Workflow</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Observability</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Enterprise features (SSO / access control)</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td align="center">❌</td>
<td align="center">❌</td>
</tr>
<tr>
<td align="center">Local deployment</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
</table>
## Installation notes
Before installing Dify, make sure your machine meets the following minimum system requirements:
+ CPU >= 2 cores
+ RAM >= 4 GiB
## Modifying the configuration
After the app is installed, edit the `.env` file in the application directory if you need to adjust any settings.
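If you manage the stack manually with Docker Compose (as this package does), a typical way to apply `.env` changes is to recreate the containers:

```bash
# Re-read .env and recreate any containers whose configuration changed
docker compose up -d
```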
---
![Ms Studio](https://file.lifebus.top/imgs/ms_blank_001.png)
additionalProperties:
key: dify
name: Dify
tags:
- WebSite
- Local
shortDescZh: Dify 是一个开源的 LLM 应用开发平台
shortDescEn: Dify is an open-source LLM application development platform
type: website
crossVersionUpdate: true
limit: 0
website: https://dify.ai/
github: https://github.com/langgenius/dify/
document: https://docs.dify.ai/
# Postgres service (pre-check) [required]
PANEL_POSTGRES_TYPE=postgresql
# Redis service (pre-check) [required]
PANEL_REDIS_TYPE=redis
# Data persistence path [required]
MASTODON_ROOT_PATH=/home/mastodon
# WebUI port [required]
PANEL_APP_PORT_HTTP=3000
# Stream port [required]
PANEL_APP_PORT_STREAM=4000
# Database encryption key [required]
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=
# Database encryption salt [required]
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=
# Database encryption primary key [required]
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=
# Secret key [required]
SECRET_KEY_BASE=
# OTP secret [required]
OTP_SECRET=
# Database host [required]
DB_HOST=127.0.0.1
# Database port [required]
DB_PORT=5432
# Database name [required]
DB_NAME=mastodon
# Database username [required]
DB_USER=mastodon
# Database password [required]
DB_PASS=
# Redis host [required]
REDIS_HOST=127.0.0.1
# Redis database index (0-20) [required]
REDIS_PORT=6379
# Redis index (0-20) [required]
REDIS_DB=0
# Redis username
REDIS_USERNAME=
# Redis password
REDIS_PASSWORD=
# Elasticsearch host
ES_HOST=127.0.0.1
# Elasticsearch port
ES_PORT=9200
# Elasticsearch username
ES_USER=elastic
# Elasticsearch password
ES_PASS=
additionalProperties:
formFields:
- child:
default: ""
envKey: PANEL_POSTGRES_SERVICE
required: true
type: service
default: postgresql
envKey: PANEL_POSTGRES_TYPE
labelZh: Postgres 服务 (前置检查)
labelEn: Postgres Service (Pre-check)
required: true
type: apps
values:
- label: PostgreSQL
value: postgresql
- child:
default: ""
envKey: PANEL_REDIS_SERVICE
required: true
type: service
default: redis
envKey: PANEL_REDIS_TYPE
labelZh: Redis 服务 (前置检查)
labelEn: Redis Service (Pre-check)
required: true
type: apps
values:
- label: Redis
value: redis
- default: "/home/mastodon"
edit: true
envKey: MASTODON_ROOT_PATH
labelZh: 数据持久化路径
labelEn: Data persistence path
required: true
type: text
- default: 3000
edit: true
envKey: PANEL_APP_PORT_HTTP
labelZh: WebUI 端口
labelEn: WebUI port
required: true
rule: paramPort
type: number
- default: 4000
edit: true
envKey: PANEL_APP_PORT_STREAM
labelZh: Stream 端口
labelEn: Stream port
required: true
rule: paramPort
type: number
- default: ""
edit: true
envKey: ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY
labelZh: 数据库加密密钥
labelEn: Database encryption key
required: true
type: text
- default: ""
edit: true
envKey: ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT
labelZh: 数据库加密盐
labelEn: Database encryption salt
required: true
type: text
- default: ""
edit: true
envKey: ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY
labelZh: 数据库加密主键
labelEn: Database encryption primary key
required: true
type: text
- default: ""
edit: true
envKey: SECRET_KEY_BASE
labelZh: 密钥
labelEn: Secret key
required: true
type: text
- default: ""
edit: true
envKey: OTP_SECRET
labelZh: OTP 密钥
labelEn: OTP secret
required: true
type: text
- default: "127.0.0.1"
edit: true
envKey: DB_HOST
labelZh: 数据库 主机地址
labelEn: Database Host
required: true
type: text
- default: 5432
edit: true
envKey: DB_PORT
labelZh: 数据库 端口
labelEn: Database Port
required: true
rule: paramPort
type: number
- default: "mastodon"
edit: true
envKey: DB_NAME
labelZh: 数据库 名称
labelEn: Database Name
required: true
rule: paramCommon
type: text
- default: "mastodon"
edit: true
envKey: DB_USER
labelZh: 数据库 用户名
labelEn: Database Username
required: true
type: text
- default: ""
edit: true
envKey: DB_PASS
labelZh: 数据库 密码
labelEn: Database Password
random: true
required: true
rule: paramComplexity
type: password
- default: "127.0.0.1"
edit: true
envKey: REDIS_HOST
labelZh: Redis 主机
labelEn: Redis Host
required: true
type: text
- default: 6379
edit: true
envKey: REDIS_PORT
labelZh: Redis 端口
labelEn: Redis Port
required: true
rule: paramPort
type: number
- default: 0
edit: true
envKey: REDIS_DB
labelZh: Redis 索引 (0-20)
labelEn: Redis Index (0-20)
required: true
type: number
- default: ""
edit: true
envKey: REDIS_USERNAME
labelZh: Redis 用户名
labelEn: Redis UserName
required: false
type: text
- default: ""
edit: true
envKey: REDIS_PASSWORD
labelZh: Redis 密码
labelEn: Redis Password
required: false
type: password
- default: "127.0.0.1"
edit: true
envKey: ES_HOST
labelZh: ES数据库 主机地址
labelEn: Elasticsearch Host
required: false
type: text
- default: 9200
edit: true
envKey: ES_PORT
labelZh: ES数据库 端口
labelEn: Elasticsearch Port
required: false
rule: paramPort
type: number
- default: "elastic"
edit: true
envKey: ES_USER
labelZh: ES数据库 用户名
labelEn: Elasticsearch Username
required: false
type: text
- default: ""
edit: true
envKey: ES_PASS
labelZh: ES数据库 密码
labelEn: Elasticsearch Password
random: true
required: false
rule: paramComplexity
type: password
networks:
1panel-network:
external: true
services:
mastodon:
command: bundle exec puma -C config/puma.rb
container_name: mastodon
env_file:
- ./envs/global.env
- ./envs/mastodon.env
- .env
environment:
- TZ=Asia/Shanghai
image: ghcr.io/mastodon/mastodon:v4.3.6
labels:
createdBy: Apps
networks:
- 1panel-network
ports:
- ${PANEL_APP_PORT_HTTP}:3000
restart: always
volumes:
- ${MASTODON_ROOT_PATH}/system:/mastodon/public/system
sidekiq-mastodon:
command: bundle exec sidekiq
container_name: sidekiq-mastodon
env_file:
- ./envs/global.env
- ./envs/mastodon.env
- .env
environment:
- TZ=Asia/Shanghai
image: ghcr.io/mastodon/mastodon:v4.3.6
networks:
- 1panel-network
restart: always
volumes:
- ${MASTODON_ROOT_PATH}/system:/mastodon/public/system
streaming-mastodon:
command: node ./streaming/index.js
container_name: streaming-mastodon
env_file:
- ./envs/global.env
- ./envs/mastodon.env
- .env
environment:
- TZ=Asia/Shanghai
image: ghcr.io/mastodon/mastodon-streaming:v4.3.6
networks:
- 1panel-network
ports:
- ${PANEL_APP_PORT_STREAM}:4000
restart: always
# Copyright © 2024 XinJiang Ms Studio
ENV_FILE=.env
# Copyright © 2024 XinJiang Ms Studio
TZ=Asia/Shanghai
# This is a sample configuration file. You can generate your configuration
# with the `bundle exec rails mastodon:setup` interactive setup wizard, but to customize
# your setup even further, you'll need to edit it manually. This sample does
# not demonstrate all available configuration options. Please look at
# https://docs.joinmastodon.org/admin/config/ for the full documentation.
# Note that this file accepts slightly different syntax depending on whether
# you are using `docker-compose` or not. In particular, if you use
# `docker-compose`, the value of each declared variable will be taken verbatim,
# including surrounding quotes.
# See: https://github.com/mastodon/mastodon/issues/16895
# Federation
# ----------
# This identifies your server and cannot be changed safely later
# ----------
LOCAL_DOMAIN=example.com
# Redis
# -----
REDIS_HOST=localhost
REDIS_PORT=6379
# PostgreSQL
# ----------
DB_HOST=/var/run/postgresql
DB_USER=mastodon
DB_NAME=mastodon_production
DB_PASS=
DB_PORT=5432
# Elasticsearch (optional)
# ------------------------
ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200
# Authentication for ES (optional)
ES_USER=elastic
ES_PASS=password
# Secrets
# -------
# Make sure to use `bundle exec rails secret` to generate secrets
# -------
SECRET_KEY_BASE=
OTP_SECRET=
# Encryption secrets
# ------------------
# Must be available (and set to same values) for all server processes
# These are private/secret values, do not share outside hosting environment
# Use `bin/rails db:encryption:init` to generate fresh secrets
# Do NOT change these secrets once in use, as this would cause data loss and other issues
# ------------------
# ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=
# ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=
# ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=
# Web Push
# --------
# Generate with `bundle exec rails mastodon:webpush:generate_vapid_key`
# --------
VAPID_PRIVATE_KEY=
VAPID_PUBLIC_KEY=
# Sending mail
# ------------
SMTP_SERVER=
SMTP_PORT=587
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_FROM_ADDRESS=notifications@example.com
# File storage (optional)
# -----------------------
S3_ENABLED=true
S3_BUCKET=files.example.com
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
S3_ALIAS_HOST=files.example.com
# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952
# Fetch All Replies Behavior
# --------------------------
# When a user expands a post (DetailedStatus view), fetch all of its replies
# (default: false)
FETCH_REPLIES_ENABLED=false
# Period to wait between fetching replies (in minutes)
FETCH_REPLIES_COOLDOWN_MINUTES=15
# Period to wait after a post is first created before fetching its replies (in minutes)
FETCH_REPLIES_INITIAL_WAIT_MINUTES=5
# Max number of replies to fetch - total, recursively through a whole reply tree
FETCH_REPLIES_MAX_GLOBAL=1000
# Max number of replies to fetch - for a single post
FETCH_REPLIES_MAX_SINGLE=500
# Max number of replies Collection pages to fetch - total
FETCH_REPLIES_MAX_PAGES=500
#!/bin/bash
if [ -f .env ]; then
source .env
# setup-1 add default values
CURRENT_DIR=$(pwd)
# remove stale entries so repeated runs do not append duplicates
sed -i '/^ENV_FILE=/d' .env
sed -i '/^GLOBAL_ENV_FILE=/d' .env
sed -i '/^APP_ENV_FILE=/d' .env
echo "ENV_FILE=${CURRENT_DIR}/.env" >> .env
echo "GLOBAL_ENV_FILE=${CURRENT_DIR}/envs/global.env" >> .env
echo "APP_ENV_FILE=${CURRENT_DIR}/envs/mastodon.env" >> .env
# setup-2 add update permission
chown -R 991:991 "$MASTODON_ROOT_PATH"
echo "Check Finish."
else
echo "Error: .env file not found."
fi
#!/bin/bash
if [ -f .env ]; then
source .env
echo "Check Finish."
else
echo "Error: .env file not found."
fi
#!/bin/bash
if [ -f .env ]; then
source .env
# setup-1 add default values
CURRENT_DIR=$(pwd)
# remove stale entries so repeated runs do not append duplicates
sed -i '/^ENV_FILE=/d' .env
sed -i '/^GLOBAL_ENV_FILE=/d' .env
sed -i '/^APP_ENV_FILE=/d' .env
echo "ENV_FILE=${CURRENT_DIR}/.env" >> .env
echo "GLOBAL_ENV_FILE=${CURRENT_DIR}/envs/global.env" >> .env
echo "APP_ENV_FILE=${CURRENT_DIR}/envs/mastodon.env" >> .env
# setup-2 add update permission
chown -R 991:991 "$MASTODON_ROOT_PATH"
echo "Check Finish."
else
echo "Error: .env file not found."
fi
# Mastodon
A free, open-source, decentralized, federated microblogging social network.
![Mastodon](https://file.lifebus.top/imgs/mastodon_cover.png)
![](https://img.shields.io/badge/%E6%96%B0%E7%96%86%E8%90%8C%E6%A3%AE%E8%BD%AF%E4%BB%B6%E5%BC%80%E5%8F%91%E5%B7%A5%E4%BD%9C%E5%AE%A4-%E6%8F%90%E4%BE%9B%E6%8A%80%E6%9C%AF%E6%94%AF%E6%8C%81-blue)
## Introduction
Mastodon is a free, open-source social network server based on ActivityPub where users can follow friends and discover new ones. On Mastodon, users can publish anything they want: links, pictures, text, and video. All Mastodon servers are interoperable as a federated network (users on one server can seamlessly communicate with users on another server, including non-Mastodon software that implements ActivityPub!).
## Features
### No vendor lock-in
Fully interoperable with any conforming platform. It doesn't have to be Mastodon; whatever implements ActivityPub is part of the social network! Learn more
### Real-time, chronological timeline updates
Updates from people you follow appear in real time in the UI via WebSockets. There is also a streaming view!
### Media attachments like images and short videos
Upload and view images and WebM/MP4 videos attached to posts. Videos without an audio track are treated like GIFs; normal videos loop continuously!
### Safety and moderation tools
Mastodon includes private posts, locked accounts, phrase filtering, muting, blocking, and various other features, along with a reporting and moderation system. Learn more
### OAuth2 and a simple REST API
Mastodon acts as an OAuth2 provider, so third-party apps can use the REST and Streaming APIs. This results in a rich ecosystem of applications to choose from!
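As a quick sketch, once your instance is running you can exercise the REST API with `curl`; the domain and access token below are placeholders (tokens come from an OAuth2 app you register on your instance):

```bash
# Public timeline: no authentication required
curl https://your_domain.example/api/v1/timelines/public

# Verify an OAuth2 access token (placeholder token)
curl -H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
  https://your_domain.example/api/v1/accounts/verify_credentials
```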
## Installation notes
> Software versions used by the project:
>
> Redis 4+
>
> PostgreSQL 12+
### Secrets
You must provide unique encryption secrets: `ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY`, `ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY`, and `ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT`.
You can generate them with the command `bin/rails db:encryption:init`.
You can also generate them with the `openssl` command in a terminal:
```bash
echo "ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=$(openssl rand -hex 32)"
echo "ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=$(openssl rand -hex 32)"
echo "ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=$(openssl rand -hex 32)"
```
### Other secrets
After installation, open the installation directory and review the `envs/mastodon.env` file; it documents the remaining secrets you can configure as needed.
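For example, the Web Push key pair (`VAPID_PRIVATE_KEY` / `VAPID_PUBLIC_KEY`) referenced there can be generated with Mastodon's bundled rake task; a sketch using this package's `mastodon` service:

```bash
docker compose run --rm mastodon bundle exec rails mastodon:webpush:generate_vapid_key
```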
---
![Ms Studio](https://file.lifebus.top/imgs/ms_blank_001.png)