Assessment
The Assessment endpoint is the core functionality of the Compliance Service API. It allows you to evaluate specific input data (documents or URLs) against selected compliance criteria within a framework using a Large Language Model (LLM)-based assessment engine.
This process returns a structured result, including the assessment category, a rationale explaining the decision made by the AI, and citations from the analyzed materials.
Use this endpoint to:
- Run an automated compliance assessment on uploaded documents or URLs.
- Evaluate input data against a specific criterion from a selected framework.
- Retrieve structured assessment results with justifications and source citations.
Run assessment
Content-Type: multipart/form-data
Request Parameters
Parameter | Type | Required | Description |
---|---|---|---|
assessmentConfiguration | JSON object | Yes | Configuration specifying the assessment details |
documents | File array | No* | Multipart files to be assessed |
urls | String array | No* | URLs to be assessed |
\* At least one of `documents` or `urls` must be provided.
Assessment Configuration Structure
Field | Type | Required | Description |
---|---|---|---|
expertId | string | Yes | The ID of the expert who carries out the assessment |
criterionCodeName | string | Yes | The code name of the criterion on which the assessment is based |
frameworkCodeName | string | Yes | The code name of the framework to which the criterion belongs |
assessmentProcessId | string | No | The ID of the assessment process, serves to group assessment responses together. If not provided, a new UUID will be generated |
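As an illustration of the structure above, a configuration object might be built as follows (the identifier values are invented, not real expert, framework, or criterion code names):

```python
import json
import uuid

# All identifier values here are hypothetical examples, not real IDs.
assessment_configuration = {
    "expertId": "expert-001",
    "criterionCodeName": "DATA_MINIMIZATION",
    "frameworkCodeName": "GDPR",
    # Optional: reuse the same ID across calls to group related assessments.
    "assessmentProcessId": str(uuid.uuid4()),
}

# The object is sent as a JSON string in the assessmentConfiguration form field.
config_json = json.dumps(assessment_configuration)
```

Omitting `assessmentProcessId` lets the service generate a fresh UUID; supplying your own is useful when several criteria are assessed as part of one review.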
Example Request
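As a sketch, a client could assemble the multipart request with the Python standard library. The base URL and endpoint path below are placeholders, not documented values, and the field values are invented:

```python
import io
import json
import urllib.request
import uuid

def encode_multipart(fields, files):
    """Encode form fields and (field_name, file_name, bytes) tuples as multipart/form-data."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(f'--{boundary}\r\nContent-Disposition: form-data; name="{name}"\r\n\r\n{value}\r\n'.encode())
    for name, filename, content in files:
        buf.write(f'--{boundary}\r\nContent-Disposition: form-data; name="{name}"; filename="{filename}"\r\n\r\n'.encode())
        buf.write(content + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", buf.getvalue()

config = {
    "expertId": "expert-001",                  # hypothetical
    "criterionCodeName": "DATA_MINIMIZATION",  # hypothetical
    "frameworkCodeName": "GDPR",               # hypothetical
}
content_type, body = encode_multipart(
    fields={
        "assessmentConfiguration": json.dumps(config),
        "urls": "https://example.com/policy",  # repeat the field for multiple URLs
    },
    files=[("documents", "policy.pdf", b"%PDF-1.4 ...")],
)

# Placeholder URL; substitute the service's real endpoint.
request = urllib.request.Request(
    "https://compliance.example.com/assessment",
    data=body,
    headers={"Content-Type": content_type},
    method="POST",
)
# response = urllib.request.urlopen(request)  # not executed here
```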
Example Response
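An illustrative response consistent with the fields documented below; every value here is invented for demonstration:

```python
# Shape of a typical assessment response; all values are illustrative.
example_response = {
    "id": 101,
    "assessmentProcessId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "inputTokens": 12450,
    "outputTokens": 380,
    "processingTimeMs": 8420,
    "fileNames": ["policy.pdf"],
    "urls": ["https://example.com/policy"],
    "frameworkCodeName": "GDPR",
    "criterionCodeName": "DATA_MINIMIZATION",
    "category": "SUFFICIENTLY_REGULATED",
    "rationale": "The document defines retention limits that satisfy the criterion.",
    "citations": [
        {
            "fileName": "policy.pdf",
            "content": "Personal data is retained for no longer than 24 months.",
            "heading": "Data Retention",
            "pageNumber": 3,
            "relStartIndex": 120,
            "relEndIndex": 175,
            "absStartIndex": 5120,
            "absEndIndex": 5175,
        }
    ],
}
```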
Response Fields
Field | Type | Description |
---|---|---|
id | integer | Unique identifier of the assessment response |
assessmentProcessId | string | Unique identifier for the assessment run (useful for audits and logs) |
inputTokens | integer | Number of tokens used for LLM input (useful for performance tracking) |
outputTokens | integer | Number of tokens used for LLM output (useful for performance tracking) |
processingTimeMs | integer | Total time taken to complete the assessment, in milliseconds |
fileNames | string array | List of input file names used during the assessment |
urls | string array | List of URLs used during the assessment |
frameworkCodeName | string | Identifier of the framework used for assessment |
criterionCodeName | string | Identifier of the criterion used for assessment |
category | string | Assessment outcome (see Assessment Categories below) |
rationale | string | Human-readable explanation provided by the AI |
citations | array | Source citations from the analyzed materials (optional) |
Assessment Categories
The assessment can result in one of the following categories:
- `SUFFICIENTLY_REGULATED`: The assessed material adequately meets the compliance criterion.
- `NOT_SUFFICIENTLY_REGULATED`: The assessed material does not adequately meet the compliance criterion.
- `MISSING`: The required information for the criterion is not present in the assessed material.
Citation Structure
When available, citations provide traceability to source materials:
Field | Type | Description |
---|---|---|
fileName | string | Name of the source material (file name or URL) |
content | string | Relevant excerpt from the source material |
heading | string | The heading of the reference within the source material |
pageNumber | integer | The page number of the reference (if applicable) |
relStartIndex | integer | The relative start index of the quoted text (relative to the page) |
relEndIndex | integer | The relative end index of the quoted text (relative to the page) |
absStartIndex | integer | The absolute start index of the quoted text |
absEndIndex | integer | The absolute end index of the quoted text |
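Assuming the absolute indices are zero-based character offsets into the extracted document text (an interpretation of the fields above, not an explicitly stated guarantee), a client could recover the cited span like this:

```python
def extract_quote(full_text: str, citation: dict) -> str:
    """Slice the cited span out of the source text using absolute character offsets."""
    return full_text[citation["absStartIndex"]:citation["absEndIndex"]]

# Build a tiny example so the offsets are correct by construction.
sentence = "Personal data is retained for no longer than 24 months."
full_text = "Intro. " + sentence + " More text."
start = full_text.index(sentence)
citation = {"absStartIndex": start, "absEndIndex": start + len(sentence)}
quote = extract_quote(full_text, citation)
```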
- The `assessmentConfiguration` is central to the request and should be constructed based on available frameworks and criteria.
- Use the same `assessmentProcessId` across related assessments to group them together.
- Display the `rationale`, `category`, and `citations` clearly in your app's UI to ensure results are actionable.
Submit Assessment Feedback
After the assessment result is displayed, users can provide feedback on the AI's decision. This step is a critical component of the human-in-the-loop process, ensuring that automated decisions remain transparent, accountable, and open to continuous improvement.
Providing feedback is strongly encouraged, as it helps refine the AI model over time and adapt it to specific organizational or domain-specific expectations.
Submit Feedback
Content-Type: application/json
Request Body
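Based on the field reference that follows, the JSON body could be built like this (all values illustrative):

```python
import json

# Field values are placeholders; see the field reference for semantics.
feedback_body = {
    "assessmentResponseId": 101,          # ID returned by the assessment call
    "expertId": "expert-001",             # hypothetical expert identifier
    "rationalFeedback": "The cited section covers retention but not deletion duties.",
    "categoryFeedback": "NOT_SUFFICIENTLY_REGULATED",  # optional suggested category
}
payload = json.dumps(feedback_body)
```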
Request Fields
Field | Type | Required | Description |
---|---|---|---|
assessmentResponseId | integer | Yes | ID of the assessment response being reviewed |
expertId | string | Yes | Identifier of the expert submitting the feedback |
rationalFeedback | string | Yes | Detailed rationale justifying agreement or disagreement with the AI assessment |
categoryFeedback | string | No | Suggested correct assessment category (if different from AI assessment) |
Example Request
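A sketch of submitting feedback with the Python standard library; the base URL and endpoint path are placeholders, not documented values:

```python
import json
import urllib.request

feedback = {
    "assessmentResponseId": 101,   # illustrative ID
    "expertId": "expert-001",      # hypothetical expert identifier
    "rationalFeedback": "Agree with the outcome; the cited section supports the decision.",
}

# Placeholder URL; substitute the service's real feedback endpoint.
request = urllib.request.Request(
    "https://compliance.example.com/assessment/feedback",
    data=json.dumps(feedback).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # not executed here
```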
Response
The endpoint returns HTTP 200 OK on successful feedback submission.
Validation is enforced on the `rationalFeedback` field to ensure the feedback is meaningful and actionable.
Why This Matters
This feedback mechanism is designed to integrate human expertise into the compliance evaluation loop. It enhances:
- Trust by giving users control and visibility over AI decisions
- Adaptability through domain-specific refinement and learning
- Auditability by capturing the rationale behind expert decisions
- Quality through continuous improvement of the assessment model
By combining AI-driven automation with structured human input, the Compliance Assessment Service supports a scalable and explainable approach to regulatory evaluations.
Error Handling
The API returns standard HTTP status codes:
- 200 OK - Successful assessment or feedback submission
- 400 Bad Request - Invalid request parameters or missing required fields
- 401 Unauthorized - Invalid or missing authentication
- 409 Conflict - Assessment processing error or system conflict
Error responses include descriptive messages to help troubleshoot issues.
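A client might map these documented codes to troubleshooting hints; the exception type and mapping below are one possible design sketch, not part of the API:

```python
class ComplianceApiError(Exception):
    """Raised for non-success responses from the Compliance Service API."""

# Documented status codes mapped to short troubleshooting hints.
STATUS_HINTS = {
    400: "Check request parameters and required fields.",
    401: "Verify authentication credentials.",
    409: "Assessment processing error or system conflict.",
}

def check_status(status: int) -> None:
    """Pass through 200 OK; raise with a hint for documented error codes."""
    if status == 200:
        return
    hint = STATUS_HINTS.get(status, "Unexpected status code.")
    raise ComplianceApiError(f"HTTP {status}: {hint}")
```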