Overview
In JSON mode, CLS automatically extracts each key at the first layer of a JSON log as the field name and the corresponding first-layer value as the field value, structuring the entire log. Each complete log ends with a line break (\n).
Prerequisites
Suppose your raw JSON log data is:
{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
After being structured by CLS, the log is changed to the following:
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
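To illustrate the first-layer extraction described in the overview, here is a minimal Python sketch (not the CLS implementation) that parses one raw log line and prints each first-layer key-value pair, mirroring the structured output above:

```python
import json

# One raw JSON log line (shortened for readability).
raw_line = '{"remote_ip":"10.135.46.111","method":"POST","body_sent":23,"response_code":"200"}'

# Parse the line; only the first-layer keys become field names.
record = json.loads(raw_line)

# Each first-layer key/value pair becomes a structured field.
for key, value in record.items():
    print(f"{key}: {value}")
```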
Directions
Logging in to the console
1. Log in to the CLS console.
2. On the left sidebar, click Log Topic to go to the log topic management page.
Creating a log topic
1. Click Create Log Topic.
2. In the pop-up dialog box, enter test-json as Log Topic Name and click Confirm.
Managing the machine group
1. After the log topic is created successfully, click its name to go to the log topic management page.
2. Click the Collection Configuration tab, click Add in LogListener Collection Configuration, and select the format in which you need to collect logs.
3. On the Machine Group Management page, select the machine group to bind the current log topic to and click Next to proceed to collection configuration.
For more information, see Machine Group Management.
Configuring collection
Configuring the log file collection path
On the Collection Configuration page, enter the collection rule name and enter the Collection Path according to the log collection path format.
Log collection path format: [directory prefix expression]/**/[filename expression].
After the log collection path is entered, LogListener will match all common prefix paths that meet the [directory prefix expression] rule and listen for all log files in the directories (including subdirectories) that meet the [filename expression] rule. The parameters are as detailed below:
| Parameter | Description |
|-----------|-------------|
| Directory Prefix | Directory prefix of the log files. Only the wildcard characters * and ? are supported: * matches any number of characters, and ? matches any single character. |
| /**/ | Current directory and all its subdirectories. |
| File Name | Log file name. Only the wildcard characters * and ? are supported: * matches any number of characters, and ? matches any single character. |
Common configuration modes are as follows:
[Common directory prefix]/**/[common filename prefix]*
[Common directory prefix]/**/*[common filename suffix]
[Common directory prefix]/**/[common filename prefix]*[common filename suffix]
[Common directory prefix]/**/*[common string]*
Below are examples:
| Collection Path | Description |
|-----------------|-------------|
| /var/log/nginx/**/access.log | LogListener listens for log files named access.log in all subdirectories under the /var/log/nginx prefix path. |
| /var/log/nginx/**/*.log | LogListener listens for log files suffixed with .log in all subdirectories under the /var/log/nginx prefix path. |
| /var/log/nginx/**/error* | LogListener listens for log files prefixed with error in all subdirectories under the /var/log/nginx prefix path. |
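As an illustration only (not how LogListener is implemented), the following Python sketch uses glob with recursive matching to show which files a collection path such as /var/log/nginx/**/*.log would pick up on a machine:

```python
import glob

# Collection path from the second example above.
collection_path = "/var/log/nginx/**/*.log"

# recursive=True lets "**" match the prefix directory and all of its
# subdirectories, similar to the [directory prefix expression]/**/ rule.
matched_files = glob.glob(collection_path, recursive=True)

for path in matched_files:
    print(path)
```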
Note:
Only LogListener 2.3.9 and later support adding multiple collection paths.
Uploading logs whose content mixes multiple text formats is not supported and may cause write failures, for example: key:"{"substream":XXX}".
We recommend you configure the collection path as log/*.log and rename the old file after log rotation as log/*.log.xxxx.
By default, a log file can only be collected by one log topic. If you want multiple collection configurations for the same file, create a soft link pointing to the source file and add the link to the other collection configuration.
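As a minimal sketch of the soft-link approach (all paths here are hypothetical), you can create the link with Python's standard library and point the second collection configuration at the link:

```python
import os

# Hypothetical source log file that is already collected by one log topic.
source_file = "/var/log/app/app.log"

# Soft link that a second collection configuration can collect instead.
link_path = "/var/log/app_mirror/app.log"

os.makedirs(os.path.dirname(link_path), exist_ok=True)
# Create the symbolic link only if it does not already exist.
if not os.path.islink(link_path):
    os.symlink(source_file, link_path)
```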
Configuring the collection policy
Full collection: When LogListener collects a file, it starts reading data from the beginning of the file.
Incremental collection: When LogListener collects a file, it collects only the newly added content in the file.
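To illustrate the difference between the two policies (a conceptual sketch under assumed file paths, not LogListener's actual mechanism), full collection reads a file from the beginning, while incremental collection remembers an offset and only returns content appended after it:

```python
import os

LOG_FILE = "/var/log/app/app.log"  # hypothetical log file

def full_collection(path):
    """Full collection: read the file from the very beginning."""
    with open(path, "r") as f:
        return f.readlines()

def incremental_collection(path, offset):
    """Incremental collection: read only content appended after `offset`
    and return the new offset for the next round."""
    with open(path, "r") as f:
        f.seek(offset)
        new_lines = f.readlines()
        return new_lines, f.tell()

# Start incremental collection from the current end of the file,
# so only lines written from now on are picked up.
start_offset = os.path.getsize(LOG_FILE)
new_lines, next_offset = incremental_collection(LOG_FILE, start_offset)
```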
Configuring the JSON mode
On the Collection Configuration page, select JSON as the Extraction Mode.
Configuring the collection time
Note:
The log time is measured in seconds. If the log time is entered in an incorrect format, the collection time is used as the log time.
The time attribute of a log is defined in two ways: collection time and original timestamp.
Collection time: The time attribute of a log is determined by the time when CLS collects the log.
Original timestamp: The time attribute of a log is determined by the timestamp in the raw log.
Using the collection time as the time attribute of logs: Keep Collection Time enabled.
Using the original timestamp as the time attribute of logs: Disable Collection Time and enter the time key of the original timestamp and the corresponding time parsing format in Time Key and Time Parsing Format respectively. For more information on the time parsing format, see Configuring Time Format.
Below are examples of how to enter a time parsing format:
Example 1: For the original timestamp 10/Dec/2017:08:00:00, the time parsing format is %d/%b/%Y:%H:%M:%S.
Example 2: For the original timestamp 2017-12-10 08:00:00, the time parsing format is %Y-%m-%d %H:%M:%S.
Example 3: For the original timestamp 12/10/2017, 08:00:00, the time parsing format is %m/%d/%Y, %H:%M:%S.
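These time parsing formats use the usual strftime-style directives, so a quick local check (shown here as an illustration, not part of CLS) can confirm that a sample timestamp matches the format you plan to configure:

```python
from datetime import datetime

# Sample timestamps and the parsing formats from the examples above.
examples = [
    ("10/Dec/2017:08:00:00", "%d/%b/%Y:%H:%M:%S"),
    ("2017-12-10 08:00:00", "%Y-%m-%d %H:%M:%S"),
    ("12/10/2017, 08:00:00", "%m/%d/%Y, %H:%M:%S"),
]

for raw_time, time_format in examples:
    # strptime raises ValueError if the timestamp does not match the format.
    parsed = datetime.strptime(raw_time, time_format)
    print(raw_time, "->", parsed.isoformat())
```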
Configuring filter rules
Filters help you extract valuable log data by adding log collection filter rules based on your business needs. A filter rule is a Perl regular expression, and only logs that match the regular expression will be collected and reported.
You can configure filter rules for JSON logs based on the parsed key-value pairs. For example, if you want to collect only the logs whose response_code field has the value 400 or 500 from the original JSON log file, configure key as response_code and the filter rule as 400|500.
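As an illustration of how such a rule behaves (a sketch only; the exact matching semantics inside LogListener may differ), the following Python snippet applies the 400|500 expression to the response_code field of each parsed log and keeps only the matching entries:

```python
import json
import re

# Filter configuration from the example above.
filter_key = "response_code"
filter_rule = re.compile(r"400|500")

raw_logs = [
    '{"remote_ip":"10.135.46.111","response_code":"200","url":"/event/dispatch"}',
    '{"remote_ip":"10.135.46.112","response_code":"400","url":"/event/dispatch"}',
    '{"remote_ip":"10.135.46.113","response_code":"500","url":"/event/dispatch"}',
]

for raw in raw_logs:
    record = json.loads(raw)
    # Keep only logs whose response_code matches the regular expression.
    if filter_rule.search(str(record.get(filter_key, ""))):
        print("collected:", record)
```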
Note:
Multiple filter rules are combined with "AND" logic. If multiple filter rules are configured for the same key name, earlier rules will be overwritten.
Configuring parsing-failed log upload
We recommend you enable Upload Parsing-Failed Logs. After it is enabled, LogListener will upload all types of parsing-failed logs. If it is disabled, such logs will be discarded.
After this feature is enabled, you need to configure the Key value for parsing failures (LogParseFailure by default). All parsing-failed logs are uploaded with the entered content as the key name (Key) and the raw log content as the key value (Value).
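For example, assuming the default key name LogParseFailure and a raw line that cannot be parsed as JSON, the uploaded entry would conceptually look like the following (illustrative only):
LogParseFailure: <raw log content that failed to parse>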
Configuring indexes
1. Click Next to enter the Index Configuration page.
2. On the Index Configuration page, set the following information:
Index Status: Select whether to enable it.
Note:
Index configuration must be enabled before you can perform searches.
Full-Text Index: Select whether to set it to case-sensitive.
Full-Text Delimiter: It is disabled by default and can be enabled as needed.
Key-Value Index: Enabled by default. You can configure the field type, delimiters, and whether to enable statistical analysis as needed. To disable the key-value index, turn off the switch.
3. Click Submit.