| Type | Restriction Item | Description |
| --- | --- | --- |
| Index | Number of Fields | Up to 300 fields can be added to the key-value index of a single log topic. |
| | Field Name | Letters, digits, and special characters (except `*`, `\`, `"`, and `,`) are supported. The name cannot start with `_` (except the `__CONTENT__` field). A field cannot coexist with its JSON parent or child field, such as `a` and `a.b`. |
| | Field Hierarchy | When a key-value index is configured for multi-level JSON, the key can have no more than 10 levels, such as `a.b.c.d.e.f.g.h.j.k`. |
| | Delimiter | Only English symbols, `\n`, `\t`, `\r`, and the escape character `\` are supported. |
| | Field Length | After statistics are enabled for a field, only the first 32,766 bytes are involved in SQL operations. The excess cannot be processed by SQL, but the log is still stored in full. |
| | Token Length | After a field is tokenized, only the first 10,000 characters of a single word are involved in search. The excess cannot be searched, but the log is still stored in full. |
| | Numeric Field Precision and Range | The long type supports the range -1E15 ~ 1E15; data outside this range may lose precision or become unsearchable. The double type supports the range -1.79E+308 ~ +1.79E+308; if a floating-point value requires more than 64 coding bits, precision is lost.<br>Index suggestions for ultra-long numeric fields:<br>- If the field does not need range-comparison search, store it as the text type.<br>- If the field needs range-comparison search, store it as the double type and accept possible precision loss. |
| | Activation Mechanism | Index configuration takes effect only for newly collected data. Edited index rules apply only to newly written logs; existing data is not updated. To update existing data, rebuild the index. |
| | Modifying Index Configuration | For creating, modifying, and deleting index configurations, a single user can have at most 10 tasks running at the same time. Additional tasks must wait for earlier ones to complete; a single task usually takes no more than 1 minute. |
| | Reindexing | A single log topic can run only one index rebuilding task at a time and can keep at most 10 index rebuilding task records; a new task can be created only after unneeded task records are deleted.<br>For logs within the same time range, the index can be rebuilt only once; delete the previous task record before rebuilding.<br>The write traffic of the logs in the selected time range cannot exceed 5 TB.<br>The rebuilding time range is based on the log time. If the log upload time deviates from the rebuilding time range by more than 1 hour (for example, a 02:00 log is uploaded to CLS at 16:00 and the index for 00:00-12:00 is rebuilt), the log is not reindexed and cannot be retrieved later. Likewise, a log written after its time range has already been reindexed is not indexed and cannot be searched later. |
| Query | Statement Length | Search and analysis statements support up to 12,000 characters. |
| | Query Concurrency | A single log topic supports up to 15 concurrent queries, including search and analysis queries. |
| | Fuzzy Search | Prefix wildcards are not supported, e.g., searching for `error` with `*rror`. |
| | Phrase Search | A wildcard in a phrase search matches at most 128 qualifying words, and all logs containing any of these 128 words are returned. The more precise the specified word, the more accurate the query result. |
| | Logical Group Nesting Depth | Under CQL syntax rules, parentheses used for logical grouping of search conditions can be nested up to 10 levels. This limit does not apply to Lucene syntax rules. For example, `(level:ERROR AND pid:1234) AND service:test` has 2 nesting levels and runs normally, whereas the following statement has 11 levels and reports an error: `status:"499" AND ("0.000" AND (request_length:"528" AND ("https" AND (url:"/api" AND (version:"HTTP/1.1" AND ("2021" AND ("0" AND (upstream_addr:"169.254.128.14" AND (method:"GET" AND (remote_addr:"114.86.92.100"))))))))))` |
| | Memory Usage (Analysis) | A single statistical analysis can occupy at most 3 GB of server memory. This limit is usually triggered by `group by`, `distinct()`, and `count(distinct())` when the grouped or deduplicated field has too many distinct values. Optimize the query by grouping on fields with fewer values, or replace `count(distinct())` with `approx_distinct()`. |
| | Query Result | For raw log queries, each query returns at most 1,000 raw logs. |
| | | For statistical analysis queries, each query returns at most 100 results by default; with a LIMIT clause, up to 1 million results can be returned. |
| | | The data packet returned by a query cannot exceed 49 MB. When calling the API directly, GZIP compression can be enabled with the header `Accept-Encoding: gzip`. |
| | Timeout | The timeout for a single query, including search and analysis, is 55 seconds. |
| | Query Latency | The delay from log reporting to availability for search and analysis is under 1 minute. |
| Download | Log Count | At most 50 million logs can be downloaded at a time. |
| | Task Quantity | For a single log topic, at most 2 tasks can be in the File Generating state; other unfinished tasks queue in the Waiting state. A single log topic can have up to 1,000 tasks at the same time, including completed tasks in the File Generated state. |
| | File Retention Duration | Generated log files are retained for only 3 days. |
| Related External Data | Quantity Limit | A single log topic can be associated with up to 20 external data sources. |
| | Query Timeout | The timeout for querying an external database is 50 seconds. |
| | MySQL Version | Compatible with MySQL 5.7, 8.0, and later versions. MySQL 5.6 has not undergone full compatibility testing; verify in practice whether your SQL statements execute properly. |
| | CSV File Size | The file cannot exceed 50 MB; compression is not supported. |
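The double-type precision behavior described in the table above follows from the IEEE-754 64-bit format: integers larger than 2^53 (about 9.0E15) can no longer all be represented exactly, which is why range-comparison search on a double-typed field may lose precision. A minimal Python sketch of the effect (illustrative only, not CLS-specific code):

```python
# IEEE-754 double has a 53-bit significand, so integers above 2**53
# are no longer all exactly representable -- the source of the
# precision loss described for double-type index fields.
exact_limit = 2 ** 53  # 9007199254740992

# 2**53 + 1 collapses onto 2**53 when stored as a double.
assert float(exact_limit) == float(exact_limit + 1)

# Below the limit, adjacent integers remain distinct.
assert float(exact_limit - 1) != float(exact_limit)

# The long-type range -1E15 ~ 1E15 sits safely inside the exact window.
assert float(10 ** 15) == 10 ** 15
```

This is why the table recommends the text type for ultra-long numeric fields that never need range comparison: text storage sidesteps the rounding entirely.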
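For the 49 MB response-packet limit above, the table notes that GZIP compression can be enabled by sending the header `Accept-Encoding: gzip` when calling the API directly. The sketch below only demonstrates the client-side decompression step with Python's standard `gzip` module; the payload is a stand-in, not a real CLS response:

```python
import gzip

# Stand-in for an API response body; a real response would be JSON log data.
payload = b'{"Results": []}' * 1000

compressed = gzip.compress(payload)     # what travels over the wire with gzip enabled
restored = gzip.decompress(compressed)  # what the client must do on receipt

assert restored == payload
print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes")
```

Highly repetitive JSON log payloads typically compress well, which is what makes the header worth setting for large query results.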
| Restriction Item | Description |
| --- | --- |
| Query Concurrency | A single metric topic supports up to 15 concurrent queries. |
| Query Data Volume | A single query can involve up to 200,000 time series, with at most 11,000 data points per time series in the query results. |
| Timeout | The timeout for a single query, including search and analysis, is 55 seconds. |