When disk utilization exceeds 85% or reaches 100%, the ES cluster or Kibana may fail to provide services normally, and the following problems may occur:
- Writes to an index fail, and an error containing [FORBIDDEN/12/index read-only / allow delete (api)] with "type":"cluster_block_exception" is returned.
- An error containing [FORBIDDEN/13/cluster read-only / allow delete (api)] is returned.
- Disk utilization is high and there are unassigned shards (both can be viewed with the GET _cat/allocation?v command).
The above problems are caused by high disk utilization. The disk utilization of data nodes has the following three thresholds. Exceeding them may affect Elasticsearch or Kibana services.
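To see which watermark thresholds are in effect on your cluster, you can read the cluster settings. In the request below, include_defaults exposes the default values and filter_path (both standard request parameters) narrows the response to the disk allocation settings:

```
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk
```

Explicitly configured values appear under persistent or transient rather than defaults.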
The thresholds correspond to the Elasticsearch default disk watermark settings:
- 85% (cluster.routing.allocation.disk.watermark.low): Elasticsearch stops allocating new shards to the node.
- 90% (cluster.routing.allocation.disk.watermark.high): Elasticsearch tries to relocate shards on the node to other nodes.
- 95% (cluster.routing.allocation.disk.watermark.flood_stage): Elasticsearch sets the read_only_allow_delete attribute for each index that has a shard on the node. At this point, the node cannot write data to those indexes; they can only be read and deleted.
Step 1. Enable batch (wildcard) operations on cluster indexes. Warning: data cannot be recovered after deletion, so proceed with caution. You can also choose to keep the data, but then you need to expand the disk space instead.
PUT _cluster/settings
{
"persistent": {
"action.destructive_requires_name": "false"
}
}
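To confirm the setting took effect, you can read it back; filter_path here just narrows the response to the one setting:

```
GET _cluster/settings?filter_path=persistent.action.destructive_requires_name
```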
Step 2. Delete unneeded index data. You can use a wildcard, such as DELETE NginxLog-12*:
DELETE index-name-*
Step 3. After the data is deleted, remove the read-only blocks on the indexes and on the cluster:
PUT _all/_settings
{
"index.blocks.read_only_allow_delete": null
}
PUT _cluster/settings
{
"persistent": {
"cluster.blocks.read_only_allow_delete": null
}
}
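To verify that the block has been cleared on a particular index, you can inspect its settings (index-name below is a placeholder for your own index):

```
GET index-name/_settings?filter_path=*.settings.index.blocks
```

An empty response means no block attributes remain on the index.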
Then check whether the index has exited the read_only state and whether the index can be written to. To troubleshoot shard allocation issues such as unassigned shards, run:
GET /_cluster/allocation/explain
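Called without a body, GET /_cluster/allocation/explain reports on an arbitrary unassigned shard. To explain a specific shard, you can pass a request body; index-name, the shard number, and the primary flag below are placeholders for your own values:

```
GET /_cluster/allocation/explain
{
  "index": "index-name",
  "shard": 0,
  "primary": true
}
```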
If you don't want to delete cluster data, you can instead expand the disk space on the Cluster Configuration page in the ES console. After the expansion completes, remove the read-only blocks as follows:
PUT _all/_settings
{
"index.blocks.read_only_allow_delete": null
}
PUT _cluster/settings
{
"persistent": {
"cluster.blocks.read_only_allow_delete": null
}
}
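Once the blocks are cleared and shards are reallocated, you can re-check disk utilization with the same command as before:

```
GET _cat/allocation?v
```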