POST /text/auditing HTTP/1.1
Host: <BucketName-APPID>.ci.<Region>.myqcloud.com
Date: <GMT Date>
Authorization: <Auth String>
Content-Length: <length>
Content-Type: application/xml

<body>

<Request>
  <Input>
    <Content></Content>
    <DataId></DataId>
  </Input>
  <Conf>
    <Callback></Callback>
    <CallbackVersion></CallbackVersion>
    <CallbackType></CallbackType>
    <BizType></BizType>
    <Freeze>
      <PornScore></PornScore>
    </Freeze>
  </Conf>
</Request>
Node Name (Keyword) | Parent Node | Description | Type | Required or Not |
Request | No | The specific configuration item for text moderation. | Container | Yes |
Node Name (Keyword) | Parent Node | Description | Type | Required or Not |
Input | Request | Content requiring moderation. | Container | Yes |
Conf | Request | Configuration of moderation rules. | Container | Yes |
Node Name (Keyword) | Parent Node | Description | Type | Required or Not |
Content | Request.Input | When the input is plaintext, it must first be Base64-encoded. The text length before encoding cannot exceed 10,000 UTF-8 characters; if this limit is exceeded, the API returns an error. Note: Currently, detection and moderation are supported for Chinese, mixed Chinese and English, and Arabic numerals. If you need moderation in other languages, please contact us. | String | No |
DataId | Request.Input | This field in the moderation result will return the original content, with a length limit of 512 bytes. You can use this field to uniquely identify the data to be moderated. | String | No |
UserInfo | Request.Input | User business field. | Container | No |
Node Name (Keyword) | Description | Type | Required or Not |
TokenId | It usually indicates account information, limited to 128 bytes in length. | String | No |
Nickname | It usually indicates nickname information, limited to 128 bytes in length. | String | No |
DeviceId | It usually indicates device information, limited to 128 bytes in length. | String | No |
AppId | It usually indicates a unique identifier for the App, limited to 128 bytes in length. | String | No |
Room | It usually indicates room number information, limited to 128 bytes in length. | String | No |
IP | It usually indicates IP address information, limited to 128 bytes in length. | String | No |
Type | It usually indicates the business type, limited to 128 bytes in length. | String | No |
ReceiveTokenId | It usually indicates the user account for receiving messages, limited to 128 bytes in length. | String | No |
Gender | It usually indicates gender data, limited to 128 bytes in length. | String | No |
Level | It usually indicates level information, limited to 128 bytes in length. | String | No |
Role | It usually indicates role details, limited to 128 bytes in length. | String | No |
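As noted above, the Content field carries Base64-encoded plaintext, with the pre-encoding length capped at 10,000 UTF-8 characters. A minimal sketch of preparing that field (the helper name is illustrative, not part of the API):

```python
import base64

def encode_content(text: str) -> str:
    """Base64-encode plaintext for the Content field.

    The API limits the length before encoding to 10,000 UTF-8 characters.
    """
    if len(text) > 10000:
        raise ValueError("Content exceeds the 10,000-character limit")
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

# Decoding the Content value used in the sample request of this document
# ("54uZ5Ye75omL") yields this three-character string:
print(encode_content("狙击手"))  # 54uZ5Ye75omL
```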
Node Name (Keyword) | Parent Node | Description | Type | Required or Not |
BizType | Request.Conf | A unique identifier for the moderation policy. You can configure the scenes you want to moderate, such as pornography, advertising, or illegal content, on the moderation policy page in the console; see Setting Moderation Policy for instructions. BizType can be obtained from the console. When you specify BizType, the request is moderated according to the scenes configured in that policy. If BizType is left blank, the default moderation policy is applied automatically. | String | No |
Callback | Request.Conf | The moderation results can be sent to your callback address as a callback. Addresses starting with http:// or https:// are supported, for example: http://www.callback.com . When the input is plaintext Content, this parameter does not take effect, and results are returned directly. | String | No |
CallbackVersion | Request.Conf | The structure of the callback content. Valid values: Simple (the callback content includes basic information), Detail (the callback content includes detailed information). The default value is Simple. | String | No |
CallbackType | Request.Conf | The type of callback segment. Valid values: 1 (all text segments are recalled), 2 (non-compliant text segments are recalled). The default value is 1. | Integer | No |
Freeze | Request.Conf | This field allows text files to be blocked automatically based on the scores given by the moderation results. It takes effect only when the moderated input is an object, not plaintext Content. | Container | No |
Node Name (Keyword) | Parent Node | Description | Type | Required or Not |
PornScore | Request.Conf.Freeze | The value range is [0,100]. A block operation will be carried out automatically when the pornographic content moderation score equals or exceeds the given score. If it is left blank, an automatic block operation will not occur. The default value is null. | Integer | No |
AdsScore | Request.Conf.Freeze | The value range is [0,100]. A block operation will be carried out automatically when the advertising moderation score equals or exceeds the specified score. If it is left blank, an automatic block operation will not occur. The default value is null. | Integer | No |
IllegalScore | Request.Conf.Freeze | The value range is [0,100]. A block operation will be carried out automatically if the illegal content moderation result equals or exceeds this score. If it is left blank, an automatic block operation will not occur. The default value is null. | Integer | No |
AbuseScore | Request.Conf.Freeze | The value range is [0,100]. A block operation will be carried out automatically if the verbal abuse moderation result equals or exceeds this score. If it is left blank, an automatic block operation will not occur. The default value is null. | Integer | No |
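The Request body described by the tables above can be assembled with the standard library. A sketch under the assumption that only Content, BizType, and an optional Freeze.PornScore are set (the function name is illustrative; a real call also needs the signed Authorization header and bucket endpoint):

```python
import base64
import xml.etree.ElementTree as ET

def build_request_body(text, biz_type="", porn_score=None):
    """Build the XML body for POST /text/auditing as bytes."""
    req = ET.Element("Request")
    inp = ET.SubElement(req, "Input")
    # Content must be Base64-encoded plaintext (max 10,000 chars before encoding).
    ET.SubElement(inp, "Content").text = base64.b64encode(
        text.encode("utf-8")).decode("ascii")
    conf = ET.SubElement(req, "Conf")
    if biz_type:
        ET.SubElement(conf, "BizType").text = biz_type
    if porn_score is not None:
        # Freeze only takes effect when the moderated input is an object.
        freeze = ET.SubElement(conf, "Freeze")
        ET.SubElement(freeze, "PornScore").text = str(porn_score)
    return ET.tostring(req, encoding="utf-8")

body = build_request_body("hello", biz_type="b81d45f94b91a683255e9a9506f45a11")
```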
<Response>
  <JobsDetail>
    <DataId></DataId>
    <JobId></JobId>
    <State></State>
    <CreationTime></CreationTime>
    <Code>Success</Code>
    <Message>Success</Message>
    <SectionCount></SectionCount>
    <Result>1</Result>
    <ContextText></ContextText>
    <PornInfo>
      <HitFlag></HitFlag>
      <Count></Count>
    </PornInfo>
    <Section>
      <StartByte></StartByte>
      <PornInfo>
        <HitFlag></HitFlag>
        <Score></Score>
        <Keywords></Keywords>
      </PornInfo>
    </Section>
  </JobsDetail>
  <RequestId></RequestId>
</Response>
Node Name (Keyword) | Parent Node | Description | Type |
Response | No | The specific response content returned by text moderation. | Container |
Node Name (Keyword) | Parent Node | Description | Type |
JobsDetail | Response | Detailed information of text moderation. | Container |
RequestId | Response | When a request is sent, the server automatically generates an ID specific to the request, which helps locate any problems encountered. | String |
Node Name (Keyword) | Parent Node | Description | Type |
Code | Response.JobsDetail | Error code. It is returned only when State is Failed. | String |
DataId | Response.JobsDetail | Unique business identifier added in the request. | String |
Message | Response.JobsDetail | Error description. It is returned only when State is Failed. | String |
JobId | Response.JobsDetail | Task ID of this text moderation operation. | String |
CreationTime | Response.JobsDetail | Creation time of the text moderation. | String |
State | Response.JobsDetail | Status of the text moderation. Valid values include Success (moderation succeeded) and Failed (moderation failed). | String |
Content | Response.JobsDetail | The submitted text moderation content, which is returned in Base64 encoding. | String |
ContextText | Response.JobsDetail | When the text context correlation moderation capability is activated, this field returns the current text under moderation together with its associated text, in their original forms. Note: When you use the context correlation capability, you must include the UserInfo field when initiating the text moderation operation. The correlated context is specific to a particular user ID; text content uploaded by different user IDs will not be correlated. To enable the context correlation capability, please contact our service team. | String |
Label | Response.JobsDetail | This field returns the moderation results which correspond to the malicious tag with the highest priority and are recommended by the model. It is recommended that you handle different types of violations and suggested values based on the business requirements. Returned values include: Normal: normal, Porn: pornography, Ads: advertising, along with other types of unsafe or inappropriate content. | String |
Result | Response.JobsDetail | This field indicates the moderation result of the current assessment. You can perform subsequent operations based on the results. Valid values: 0 (normal), 1 (sensitive and non-compliant files), and 2 (possibly sensitive, with manual moderation recommended). | Integer |
SectionCount | Response.JobsDetail | The number of content segments for text moderation. The value is fixed at 1. | Integer |
PornInfo | Response.JobsDetail | The moderation result of the pornographic content moderation scene. | Container |
AdsInfo | Response.JobsDetail | The moderation result of the advertising content moderation scene. | Container |
IllegalInfo | Response.JobsDetail | The moderation result of the illegal content moderation scene. | Container |
AbuseInfo | Response.JobsDetail | The moderation result of the abusive content moderation scene. | Container |
Section | Response.JobsDetail | The specific result information of text moderation. | Container Array |
UserInfo | Response.JobsDetail | User business field. | Container |
ListInfo | Response.JobsDetail | Account whitelist/blacklist results. | Container |
Node Name (Keyword) | Parent Node | Description | Type |
HitFlag | Response.JobsDetail.*Info | It is used to return moderation results of the corresponding scene. Returned values: 0: normal, 1: confirmed as violation content of the current scene, 2: suspected as violation content of the current scene. | Integer |
Count | Response.JobsDetail.*Info | The number of segments that match the moderation classification. | Integer |
Node Name (Keyword) | Parent Node | Description | Type |
StartByte | Response.JobsDetail.Section | The location within the text where the segment begins (that is, 10 represents the 11th UTF-8 character). It starts from 0. | Integer |
Label | Response.JobsDetail.Section | This field returns the moderation results which correspond to the malicious tag with the highest priority and are recommended by the model. It is recommended that you handle different types of violations and suggested values based on the business requirements. Returned values include: Normal: normal, Porn: pornography, Ads: advertising, along with other types of unsafe or inappropriate content. | String |
Result | Response.JobsDetail.Section | This field indicates the moderation result of the current assessment. You can perform subsequent operations based on the results. Valid values: 0 (normal), 1 (sensitive and non-compliant files), and 2 (possibly sensitive, with manual moderation recommended). | Integer |
PornInfo | Response.JobsDetail.Section | The moderation result of the pornographic content moderation scene. | Container |
AdsInfo | Response.JobsDetail.Section | The moderation result of the advertising content moderation scene. | Container |
IllegalInfo | Response.JobsDetail.Section | The moderation result of the illegal content moderation scene. | Container |
AbuseInfo | Response.JobsDetail.Section | The moderation result of the abusive content moderation scene. | Container |
Node Name (Keyword) | Parent Node | Description | Type |
HitFlag | Response.JobsDetail.Section.*Info | It is used to return moderation results of the corresponding scene. Returned values: 0: normal, 1: confirmed as violation content of the current scene, 2: suspected as violation content of the current scene. | Integer |
Score | Response.JobsDetail.Section.*Info | The moderation result score within this segment. Higher scores indicate more sensitive content. | Integer |
Keywords | Response.JobsDetail.Section.*Info | Keywords matched under the current moderation scene. Multiple keywords are separated by ",". | String |
LibResults | Response.JobsDetail.Section.*Info | This field is used to return results identified through the risk library. Note: This field is not returned when no samples within the risk library have been matched. | Container Array |
SubLabel | Response.JobsDetail.Section.*Info | This field indicates the specific sub-tags matched in the moderation. For example: the SexBehavior sub-tag under Porn. Note: This field may return null, indicating that no specific sub-tags are matched. | String |
Node Name (Keyword) | Parent Node | Description | Type |
LibType | Response.JobsDetail.Section.*Info.LibResults | Type of the matched risk library. Valid values include 1 (preset white library and black library) and 2 (custom risk library). | Integer |
LibName | Response.JobsDetail.Section.*Info.LibResults | Name of the matched risk library. | String |
Keywords | Response.JobsDetail.Section.*Info.LibResults | Keywords matched in the library. This parameter may return multiple values, indicating multiple keywords matched. | String Array |
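As described for StartByte above, the value is a 0-based character offset into the decoded text (10 means the 11th UTF-8 character), so a hit segment can be located by Base64-decoding the submitted Content and slicing from that offset. A sketch using the sample values from this document (variable names are illustrative):

```python
import base64

content_b64 = "54uZ5Ye75omL"   # Content echoed back in the response
decoded = base64.b64decode(content_b64).decode("utf-8")

start_byte = 0                 # StartByte of the hit Section
segment = decoded[start_byte:] # text from the segment start onward
```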
Node Name (Keyword) | Description | Type | Required or Not |
TokenId | It usually indicates account information, limited to 128 bytes in length. | String | No |
Nickname | It usually indicates nickname information, limited to 128 bytes in length. | String | No |
DeviceId | It usually indicates device information, limited to 128 bytes in length. | String | No |
AppId | It usually indicates a unique identifier for the App, limited to 128 bytes in length. | String | No |
Room | It usually indicates room number information, limited to 128 bytes in length. | String | No |
IP | It usually indicates IP address information, limited to 128 bytes in length. | String | No |
Type | It usually indicates the business type, limited to 128 bytes in length. | String | No |
ReceiveTokenId | It usually indicates the user account for receiving messages, limited to 128 bytes in length. | String | No |
Gender | It usually indicates gender data, limited to 128 bytes in length. | String | No |
Level | It usually indicates level information, limited to 128 bytes in length. | String | No |
Role | It usually indicates role details, limited to 128 bytes in length. | String | No |
Node Name (Keyword) | Parent Node | Description | Type |
ListResults | Response.JobsDetail.ListInfo | Result of all match lists. | Container Array |
Node Name (Keyword) | Parent Node | Description | Type |
ListType | Response.JobsDetail.ListInfo.ListResults | Type of match list. Valid values include 0 (whitelist) and 1 (blacklist). | Integer |
ListName | Response.JobsDetail.ListInfo.ListResults | Name of the match list. | String |
Entity | Response.JobsDetail.ListInfo.ListResults | The matched entry on the list. | String |
POST /text/auditing HTTP/1.1
Authorization: q-sign-algorithm=sha1&q-ak=AKIDZfbOAo7cllgPvF9cXFrJD0a1ICvR****&q-sign-time=1497530202;1497610202&q-key-time=1497530202;1497610202&q-header-list=&q-url-param-list=&q-signature=28e9a4986df11bed0255e97ff90500557e0e****
Host: examplebucket-1250000000.ci.ap-beijing.myqcloud.com
Content-Length: 166
Content-Type: application/xml

<Request>
  <Input>
    <Content>54uZ5Ye75omL</Content>
  </Input>
  <Conf>
    <BizType>b81d45f94b91a683255e9a9506f45a11</BizType>
  </Conf>
</Request>

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 230
Connection: keep-alive
Date: Thu, 15 Jun 2017 12:37:29 GMT
Server: tencent-ci
x-ci-request-id: NTk0MjdmODlfMjQ4OGY3XzYzYzhf****

<Response>
  <JobsDetail>
    <JobId>vab1ca9fc8a3ed11ea834c525400863904</JobId>
    <Content>54uZ5Ye75omL</Content>
    <State>Success</State>
    <CreationTime>2019-07-07T12:12:12+0800</CreationTime>
    <SectionCount>1</SectionCount>
    <Label>Illegal</Label>
    <Result>2</Result>
    <PornInfo>
      <HitFlag>0</HitFlag>
      <Count>0</Count>
    </PornInfo>
    <Section>
      <StartByte>0</StartByte>
      <Label>Illegal</Label>
      <Result>2</Result>
      <PornInfo>
        <HitFlag>0</HitFlag>
        <Score>0</Score>
        <Keywords/>
      </PornInfo>
    </Section>
  </JobsDetail>
  <RequestId>xxxxxxxxxxxxxx</RequestId>
</Response>
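A response like the sample above can be parsed with the standard library to drive follow-up handling based on State and Result (a sketch against an abbreviated copy of the sample XML; routing logic is up to your business):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the sample response body shown in this document.
sample = """<Response><JobsDetail>
  <JobId>vab1ca9fc8a3ed11ea834c525400863904</JobId>
  <Content>54uZ5Ye75omL</Content>
  <State>Success</State>
  <Label>Illegal</Label>
  <Result>2</Result>
  <PornInfo><HitFlag>0</HitFlag><Count>0</Count></PornInfo>
</JobsDetail><RequestId>xxxxxxxxxxxxxx</RequestId></Response>"""

detail = ET.fromstring(sample).find("JobsDetail")
state = detail.findtext("State")          # "Success" or "Failed"
result = int(detail.findtext("Result"))   # 0 normal, 1 non-compliant, 2 suspected
if result == 2:
    pass  # route to manual moderation, per the Result field semantics above
```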