# checkToxicityText

## Description

Checks the toxicity of a piece of text. This functionality is built on the [unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert) model.

## Parameters

| Parameter | Type   | Description     |
| --------- | ------ | --------------- |
| prompt    | string | the text to check for toxicity |

## Response

| Parameter | Type   | Description                                                                                                                                |
| --------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------ |
| message   | string | message about the response                                                                                                                 |
| status    | int    | <p>1 for successful execution</p><p>-1 if any error occurs during the execution of the request</p>                                          |
| response  | object | an object with six boolean properties, one per toxicity class (`toxic`, `severe_toxic`, `obscene`, `threat`, `insult`, `identity_hate`); `true` means the text was flagged for that class, `false` means it was not. |

## Example Request and Response

### Prerequisites

Before making requests with the NO.AI SDK, you must have it installed.

You can install the NO.AI SDK using either `npm` or `yarn`:

```sh
# with npm
npm install @nest25/ai-core-sdk

# or with yarn
yarn add @nest25/ai-core-sdk
```

### Request

Here is an example of how to make a `checkToxicityText` request using the NO.AI SDK:

```javascript
// import the ai-core-sdk
import {AIServices} from '@nest25/ai-core-sdk';

// create a new instance of the sdk
const aiServices = new AIServices();

async function main() {
  // get the result of the test
  const result = await aiServices.checkToxicityText('this is a prompt');
  console.log(result);
}

main();
```

### Response

```json
{
    "message": "Request successful",
    "response": {
        "identity_hate": true,
        "insult": false,
        "obscene": false,
        "severe_toxic": false,
        "threat": false,
        "toxic": true
    },
    "status": 1
}
```
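Because each property of `response` is a boolean, you can collect the flagged toxicity classes with a plain `Object.entries` filter. The sketch below works on a hard-coded copy of the response shown above, so it makes no SDK calls; in real code, `result` would be the value returned by `checkToxicityText`:

```javascript
// Example response object, copied from the sample above
const result = {
  message: 'Request successful',
  response: {
    identity_hate: true,
    insult: false,
    obscene: false,
    severe_toxic: false,
    threat: false,
    toxic: true,
  },
  status: 1,
};

// Treat any status other than 1 as an error (per the Response table, -1 signals failure)
if (result.status !== 1) {
  throw new Error(result.message);
}

// Keep only the classes whose flag is true
const flagged = Object.entries(result.response)
  .filter(([, isToxic]) => isToxic)
  .map(([label]) => label);

console.log(flagged); // ['identity_hate', 'toxic']
```

For this sample response, `flagged` contains `identity_hate` and `toxic`, so the text should be treated as toxic overall.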
