When you enable integrity features in your assessment, you can review each person’s activity history on the results screen.
Integrity
When you open the result, the following information may be displayed:
Device used: type and model of the device used by the candidate.
Location: approximate location from where the test was taken.
Full screen mode always active?: indicates whether the candidate kept the assessment in full screen mode.
Mouse always on the assessment?: indicates whether the mouse pointer remained within the assessment window.
Completed from a single IP address?: indicates whether the test was taken from a single IP address.
Copy/Paste behavior detected?: indicates whether there were attempts to copy and paste during the test.
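The article does not describe how these signals are collected, but checks like these are typically built on standard browser events. The snippet below is only a minimal illustration of that idea; the names and structure are assumptions for the example, not the product's implementation.

```typescript
// Hypothetical sketch of how browser-side integrity signals could be collected.
// These are standard DOM events; none of the names come from the product.

type IntegrityEvent = {
  type: "fullscreen_exit" | "focus_lost" | "mouse_left" | "copy" | "paste";
  at: number; // timestamp in milliseconds
};

const events: IntegrityEvent[] = [];

function log(type: IntegrityEvent["type"]): void {
  events.push({ type, at: Date.now() });
}

// "Full screen mode always active?" — record every time full screen is left.
document.addEventListener("fullscreenchange", () => {
  if (!document.fullscreenElement) log("fullscreen_exit");
});

// "Mouse always on the assessment?" — record when the window loses focus
// or the pointer leaves the page.
window.addEventListener("blur", () => log("focus_lost"));
document.documentElement.addEventListener("mouseleave", () => log("mouse_left"));

// "Copy/Paste behavior detected?" — record copy and paste attempts.
document.addEventListener("copy", () => log("copy"));
document.addEventListener("paste", () => log("paste"));
```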
Photo identification
If the “photo identification” option is enabled, an image of the candidate will be captured at the beginning of the test.
Image supervision
If the “image supervision” option is enabled, additional images will be captured while the test is running and will also be available in the integrity history.
When you enable AI-based image analysis, the system evaluates the captured images and can flag the following signals:
Person not looking at the screen
Number of people in frame
Face visible
Person not detected
Camera turned off
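The exact schema of these analysis results is not documented here. Purely as an illustration, the signals above could be represented per captured snapshot along these lines (all field names are hypothetical):

```typescript
// Hypothetical per-snapshot record for the AI image analysis signals listed above.
// Field names are illustrative only, not the product's actual schema.
interface SnapshotAnalysis {
  capturedAt: string;        // ISO timestamp of the snapshot
  lookingAtScreen: boolean;  // "Person not looking at the screen"
  peopleInFrame: number;     // "Number of people in frame"
  faceVisible: boolean;      // "Face visible"
  personDetected: boolean;   // "Person not detected"
  cameraOff: boolean;        // "Camera turned off"
}
```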
Severity indicator
In addition to individual events, the system automatically calculates a severity indicator for each assessment attempt.
This indicator appears as a label with one of the following values:
none – no relevant behavior detected
low – low severity
medium – medium severity
high – high severity
Technically, this label is exposed in the integrity_category field and helps reviewers prioritize which attempts deserve closer manual review.
How the indicator is calculated
During the assessment, various integrity events are logged (for example: tab exit, copy/paste attempts, proctoring image captures, etc.).
Each event receives a severity “weight”:
High severity (high) events are worth 15 points
Medium severity (medium) events are worth 5 points
Low severity (low) events are worth 1 point
Neutral (none) events are worth 0.5 points (they register behavior with minimal impact)
When the same type of event happens repeatedly in a short time window (for example, several tab exits within a few seconds), the system applies a frequency multiplier, increasing the score of those repeated events.
Then:
The sum of all points is converted into a 0–100 score (normalization).
This score is classified into ranges (quartiles), generating the final category:
0 to 25 → None
26 to 50 → Low
51 to 75 → Medium
76 to 100 → High
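To make the steps above concrete, here is a minimal sketch of the calculation. The weights and the 0–100 ranges come from this article; the repeat window, the multiplier value, and the normalization rule (capping the raw sum at 100) are assumptions added only to keep the example runnable, not the product's actual rules.

```typescript
// Minimal sketch of the severity calculation described above.
// Weights and ranges follow the article; the frequency multiplier and the
// normalization (capping the raw sum at 100) are assumptions for illustration.

type Severity = "none" | "low" | "medium" | "high";

interface LoggedEvent {
  type: string;      // e.g. "tab_exit", "copy_paste", "snapshot_flag"
  severity: Severity;
  at: number;        // timestamp in milliseconds
}

const WEIGHTS: Record<Severity, number> = {
  high: 15,
  medium: 5,
  low: 1,
  none: 0.5,
};

// Assumption: repeated events of the same type within a 10-second window
// get a 1.5x multiplier. The real window and factor are not documented.
const REPEAT_WINDOW_MS = 10_000;
const REPEAT_MULTIPLIER = 1.5;

function computeScore(events: LoggedEvent[]): number {
  const lastSeen = new Map<string, number>();
  let total = 0;

  for (const ev of [...events].sort((a, b) => a.at - b.at)) {
    let points = WEIGHTS[ev.severity];
    const prev = lastSeen.get(ev.type);
    if (prev !== undefined && ev.at - prev <= REPEAT_WINDOW_MS) {
      points *= REPEAT_MULTIPLIER; // frequency multiplier for quick repeats
    }
    lastSeen.set(ev.type, ev.at);
    total += points;
  }

  // Normalization assumption: clamp the raw sum into the 0–100 range.
  return Math.min(100, Math.round(total));
}

function classify(score: number): Severity {
  if (score <= 25) return "none";
  if (score <= 50) return "low";
  if (score <= 75) return "medium";
  return "high";
}
```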
Optionally, the interface can display:
A numeric_score (0–100), for more granular insight;
A list of technical justifications, showing which events contributed to the score (for example: tab exit, copy/paste, suspicious snapshots), useful for auditing.
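Continuing the sketch above, these optional outputs could be assembled per attempt roughly like this (the names echo numeric_score from this article but are otherwise illustrative):

```typescript
// Illustrative only: packaging the outputs described above for one attempt,
// reusing computeScore and classify from the previous sketch.
const attemptEvents: LoggedEvent[] = [
  { type: "tab_exit", severity: "medium", at: 1_000 },
  { type: "tab_exit", severity: "medium", at: 4_000 },  // quick repeat
  { type: "copy_paste", severity: "high", at: 60_000 },
];

const numeric_score = computeScore(attemptEvents);       // 28 with the assumed rules
const integrity_category = classify(numeric_score);      // "low"
const justifications = attemptEvents.map(
  (ev) => `${ev.type} (${ev.severity})`
);                                                        // ["tab_exit (medium)", ...]
```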
How to interpret the severity indicator
Attempts with category none usually indicate normal behavior, with no relevant signs of risk.
low and medium categories indicate events that deserve attention, but do not always represent fraud.
Attempts with high severity should be prioritized for manual review, as they concentrate a higher number and intensity of risky events.
The integrity system is designed as a decision-support tool for reviewers, not as an automatic decision engine.
Not every suspicious signal is actually fraud, so it is essential to carefully review the history before making decisions about disqualification or rejection.
Timeline and video review
The integrity history logs all actions taken by the candidate during the test, from the beginning to the end of the assessment.
You can:
Watch the recorded navigation video of the assessment session.
Review the text-based history, displayed on the side of the page when you click the “Timeline” button.
When you click an icon in the history, the video jumps to the exact moment when that action occurred.
You can also adjust the playback speed according to your preference.
When copy-and-paste behavior is detected, the system allows you to see which content was copied, helping you decide whether that action compromised the integrity of the test or not.
