redis.conf configuration file, annotated with my own understanding
Published: 2019-06-08


################################## NETWORK ##################################
# Network interfaces to listen on
bind 127.0.0.1
# Protected mode: when enabled and no bind/password is configured, only local connections are accepted
protected-mode yes
# Listening port
port 6379
# Length of the TCP listen backlog queue
tcp-backlog 511
# Close the connection after a client has been idle for this many seconds (0 = never)
timeout 0
# TCP keepalive interval in seconds, used to detect dead peers
tcp-keepalive 300
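A quick way to confirm the bind/port/timeout settings above are in effect is to connect and read them back through CONFIG GET. A minimal sketch, assuming the redis-py client is installed and the server is running locally with the configuration shown:

import redis

# Connect to the address and port configured by bind/port above
r = redis.Redis(host="127.0.0.1", port=6379)
print(r.ping())                       # True if the server is reachable
print(r.config_get("timeout"))        # {'timeout': '0'} -> idle clients are never closed
print(r.config_get("tcp-keepalive"))  # {'tcp-keepalive': '300'}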
################################# GENERAL #################################
# Whether to run as a daemon (background process)
daemonize no
# PID file
pidfile /var/run/redis_6379.pid
# Log level
loglevel notice
# Log file name (empty string = log to standard output)
logfile ""
# Number of databases, 16 by default
databases 16
# Always show the ASCII-art logo at startup
always-show-logo yes
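The databases directive is why a client can SELECT any of databases 0-15. A small sketch, assuming redis-py, showing that keys in one numbered database are invisible from another:

import redis

r0 = redis.Redis(host="127.0.0.1", port=6379, db=0)  # default database 0
r3 = redis.Redis(host="127.0.0.1", port=6379, db=3)  # one of the 16 databases

r0.set("greeting", "hello from db 0")
print(r0.get("greeting"))  # b'hello from db 0'
print(r3.get("greeting"))  # None -- each database is an isolated key space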
################################ SNAPSHOTTING ################################
# RDB save policy: snapshot after <seconds> seconds if at least <changes> keys changed
save 900 1
save 300 10
save 60 10000
# Stop accepting writes if a background save fails
stop-writes-on-bgsave-error yes
# Compress the .rdb file (LZF)
rdbcompression yes
# Append a CRC64 checksum to the .rdb file for validation
rdbchecksum yes
# .rdb file name
dbfilename dump.rdb
# Working directory where the dump file is stored
dir ./
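The save lines only control automatic snapshots; an RDB dump can also be requested explicitly from a client. A hedged redis-py sketch that reads the current policy and triggers a background save:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
print(r.config_get("save"))   # e.g. {'save': '900 1 300 10 60 10000'}
print(r.lastsave())           # time of the last successful RDB save
r.bgsave()                    # ask the server to fork and write dump.rdb in the background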
################################# REPLICATION #################################
# Behavior of a replica that loses the connection to its master:
# yes = keep answering client requests (possibly with stale data), no = reply with an error
replica-serve-stale-data yes
# Replicas are read-only
replica-read-only yes
# How a replica performs the initial sync when it first connects to the master:
# Disk-backed: the master writes the RDB file to disk first, then sends it to the replica
# Diskless: the master streams the RDB directly over the socket while generating it
repl-diskless-sync no
# Delay before starting a diskless sync, so that more replicas can arrive and share the same transfer
repl-diskless-sync-delay 5
# How TCP packets are sent to replicas:
# yes = merged packets, less bandwidth, a bit more latency
# no = lower latency, more bandwidth
repl-disable-tcp-nodelay no
# Under Sentinel, a replica with a lower priority value is preferred for promotion
replica-priority 100
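Whether an instance is currently a master or a replica, and how many replicas are attached, can be read from INFO replication. A minimal redis-py sketch (the field names below are standard INFO replication fields):

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
info = r.info("replication")
print(info["role"])                     # 'master' or 'slave'
print(info.get("connected_slaves", 0))  # number of attached replicas when acting as master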
############################# LAZY FREEING #############################
# Whether to free objects asynchronously (in a background thread) in each of these cases;
# I'm leaving these at their defaults to study later
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
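These directives cover deletions the server performs implicitly (eviction, expiry, replacement, replica flush). The same idea is available explicitly through the UNLINK command, which removes a key without blocking while its memory is reclaimed. A small sketch, assuming a redis-py version recent enough to expose unlink:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
print(r.config_get("lazyfree-lazy-expire"))  # current setting, 'no' by default

r.rpush("biglist", *range(100000))
r.unlink("biglist")  # returns immediately; the memory is freed asynchronously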
############################## APPEND ONLY MODE ##############################
# Enable AOF persistence
appendonly no
# AOF file name
appendfilename "appendonly.aof"
# AOF fsync policy: every write / every second / never (let the OS decide)
# appendfsync always
appendfsync everysec
# appendfsync no
# If an AOF rewrite is running at the same time as normal appends, I/O can become very heavy:
# no = the fsync may block while the rewrite runs, safer persistence
# yes = skip the fsync and keep appends in the buffer during the rewrite, less safe persistence
no-appendfsync-on-rewrite no
# Trigger an automatic rewrite when the AOF has grown by this percentage since the last rewrite
# and is at least this size
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Whether an AOF file with a truncated tail can still be loaded: yes = load it and log a warning, no = refuse with an error
aof-load-truncated yes
# Interesting one: put an RDB preamble at the beginning of the AOF file to speed up rewrites and loading,
# i.e. mixed RDB+AOF persistence
aof-use-rdb-preamble yes
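appendonly can also be flipped at runtime with CONFIG SET, which is handy for enabling AOF on a running instance without a restart; a rewrite can then be requested explicitly. A hedged redis-py sketch:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
r.config_set("appendonly", "yes")   # start AOF persistence on a running server
print(r.config_get("appendfsync"))  # {'appendfsync': 'everysec'}
r.bgrewriteaof()                    # compact the AOF in the background, like the auto-rewrite above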
################################ LUA SCRIPTING ################################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that has not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
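The slow log can be inspected directly from a client, which makes the two parameters above easy to experiment with. A sketch assuming redis-py; the 10000 microsecond threshold matches the configured value:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
r.config_set("slowlog-log-slower-than", 10000)  # log commands slower than 10 ms
entries = r.slowlog_get(10)                     # the 10 most recent slow commands
for e in entries:
    print(e["id"], e["duration"], e["command"])
r.slowlog_reset()                               # free the memory used by the slow log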
" if needed.latency-monitor-threshold 0############################### ADVANCED CONFIG ################################ Hashes are encoded using a memory efficient data structure when they have a# small number of entries, and the biggest entry does not exceed a given# threshold. These thresholds can be configured using the following directives.hash-max-ziplist-entries 512hash-max-ziplist-value 64# Lists are also encoded in a special way to save a lot of space.# The number of entries allowed per internal list node can be specified# as a fixed maximum size or a maximum number of elements.# For a fixed maximum size, use -5 through -1, meaning:# -5: max size: 64 Kb <-- not recommended for normal workloads# -4: max size: 32 Kb <-- not recommended# -3: max size: 16 Kb <-- probably not recommended# -2: max size: 8 Kb <-- good# -1: max size: 4 Kb <-- good# Positive numbers mean store up to _exactly_ that number of elements# per list node.# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),# but if your use case is unique, adjust the settings as necessary.list-max-ziplist-size -2# Lists may also be compressed.# Compress depth is the number of quicklist ziplist nodes from *each* side of# the list to *exclude* from compression. The head and tail of the list# are always uncompressed for fast push/pop operations. Settings are:# 0: disable all list compression# 1: depth 1 means "don't start compressing until after 1 node into the list,# going from either the head or tail"# So: [head]->node->node->...->node->[tail]# [head], [tail] will always be uncompressed; inner nodes will compress.# 2: [head]->[next]->node->node->...->node->[prev]->[tail]# 2 here means: don't compress head or head->next or tail->prev or tail,# but compress all nodes between them.# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]# etc.list-compress-depth 0# Sets have a special encoding in just one case: when a set is composed# of just strings that happen to be integers in radix 10 in the range# of 64 bit signed integers.# The following configuration setting sets the limit in the size of the# set in order to use this special memory saving encoding.set-max-intset-entries 512# Similarly to hashes and lists, sorted sets are also specially encoded in# order to save a lot of space. This encoding is only used when the length and# elements of a sorted set are below the following limits:zset-max-ziplist-entries 128zset-max-ziplist-value 64# HyperLogLog sparse representation bytes limit. The limit includes the# 16 bytes header. When an HyperLogLog using the sparse representation crosses# this limit, it is converted into the dense representation.## A value greater than 16000 is totally useless, since at that point the# dense representation is more memory efficient.## The suggested value is ~ 3000 in order to have the benefits of# the space efficient encoding without slowing down too much PFADD,# which is O(N) with the sparse encoding. The value can be raised to# ~ 10000 when CPU is not a concern, but space is, and the data set is# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.hll-sparse-max-bytes 3000# Streams macro node max size / items. The stream data structure is a radix# tree of big nodes that encode multiple items inside. Using this configuration# it is possible to configure how big a single node can be in bytes, and the# maximum number of items it may contain before switching to a new node when# appending new stream entries. 
# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
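stream-node-max-bytes/entries only affect how stream entries are packed into radix-tree nodes internally; from the client side a stream is used the same way regardless. A minimal redis-py sketch that appends and reads a few entries:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
print(r.config_get("stream-node-max-entries"))  # {'stream-node-max-entries': '100'}

for i in range(3):
    r.xadd("mystream", {"seq": i})    # entries are packed into macro nodes server-side
print(r.xlen("mystream"))             # 3
print(r.xrange("mystream", count=3))  # [(entry_id, {b'seq': b'0'}), ...]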
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal  -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub  -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporarily rise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes
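Both the output buffer limits and hz can be changed at runtime with CONFIG SET, which is convenient when a slow pubsub consumer keeps getting disconnected. A hedged redis-py sketch; the new limits below are purely illustrative values, not recommendations:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
print(r.config_get("hz"))  # {'hz': '10'}

# Raise the pubsub output buffer limits for slow subscribers (illustrative values only)
r.config_set("client-output-buffer-limit", "pubsub 64mb 16mb 90")
print(r.config_get("client-output-buffer-limit"))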
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

Reposted from: https://www.cnblogs.com/haon/p/10942409.html
