Automated Review uses Artificial Intelligence to support the analysis of responses submitted in assessments. This feature helps standardize grading criteria, reducing review time and increasing the accuracy of the final score.
Where to Find It
You can activate automated review when creating or editing a question of the following types: Free Text, Project, Programming, or Free Programming.
Access: Library → Questions → New Question → Automated Review
How to Activate
Within the question editor, locate the Automated Review section.
Click Add Criterion.
Fill in the required information:
Name: Define a title for the criterion (e.g., "General Architecture and Scope").
Description: Detail what the AI should evaluate (e.g., "Check if the system delivers a global real-time clock...").
Weight: Assign a relevance score from 1 to 5 according to the criterion's impact on the final score.
If the criterion is purely observational (no impact on the score), toggle the Extra Criterion option.
Click Save.
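The fields above can be sketched as a simple data structure. This is illustrative only; the field names (`name`, `description`, `weight`, `extra`) are assumptions for the example, not the platform's actual schema or API:

```python
# Illustrative sketch of one Automated Review criterion.
# Field names are assumptions, not the platform's real schema.
criterion = {
    "name": "General Architecture and Scope",
    "description": "Check if the system delivers a global real-time clock...",
    "weight": 2,      # relevance from 1 to 5
    "extra": False,   # True = observational only, excluded from the score
}

# Weight must stay within the 1-5 range described above.
assert 1 <= criterion["weight"] <= 5
```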
Best Practices
Use clear and objective descriptions: The more detailed the criterion, the better the AI results.
Combine technical and behavioral criteria (e.g., Code Quality + Adherence to the Proposal).
Review weights: Balance the impact of each item to reflect its importance in the total score.
Score Calculation
The final score for the question is calculated based on the configured criteria and their respective weights. Each criterion receives an individual grade assigned by the AI (from 0 to 100%).
The calculation follows this formula:
$$\text{Final Score} = \frac{\sum (\text{criterion grade} \times \text{criterion weight})}{\sum \text{weights of all criteria}}$$
Example:
| Criterion | Weight | AI Grade (%) | Weighted Result |
|---|---|---|---|
| General Architecture | 2 | 80 | 160 |
| Documentation | 1 | 100 | 100 |
| Main Functionality | 3 | 60 | 180 |
| **Total** | **6** | | **440** |
Calculation: $440 \div 6 = 73.3\%$
In this example, the final score for the question would be 73.3%.
Note: Criteria marked as Extra are not included in the calculation—they are only displayed as observations for the human reviewer.
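The weighted-average formula and the exclusion of Extra criteria can be sketched as follows. The function name and dictionary keys are illustrative assumptions, not part of the platform:

```python
def final_score(criteria):
    """Weighted average of AI grades (0-100%); Extra criteria are skipped."""
    scored = [c for c in criteria if not c.get("extra", False)]
    total_weight = sum(c["weight"] for c in scored)
    weighted_sum = sum(c["grade"] * c["weight"] for c in scored)
    return weighted_sum / total_weight

# Same data as the example table; the Extra criterion does not affect the score.
criteria = [
    {"name": "General Architecture", "weight": 2, "grade": 80},
    {"name": "Documentation",        "weight": 1, "grade": 100},
    {"name": "Main Functionality",   "weight": 3, "grade": 60},
    {"name": "Code Style Notes",     "weight": 1, "grade": 90, "extra": True},
]

round(final_score(criteria), 1)  # 440 / 6 = 73.3
```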
