Processors
Processors run as separate threads in the pipeline, linked by input-output queues.
Processors define a process(snapshot) method that receives a Snap object
containing both the input image(s) and the results of previous processors,
performs actions on this data, and returns either the same snap or a modified
version of it to be processed by later processors in the pipeline.
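For illustration, the contract looks roughly like the sketch below. The class and attribute names used here (the lack of an explicit base class, snap.boxes, box.tags) are assumptions for the example, not the framework's documented API:

```python
# Minimal sketch of the process(snap) contract described above.
# Snap/box attribute names are illustrative assumptions.
class ExampleTaggingProcessor:
    def process(self, snap):
        # snap carries the input image(s) plus results left by earlier processors
        for box in getattr(snap, "boxes", []):
            # perform some action on the data gathered so far...
            box.tags = getattr(box, "tags", []) + ["seen"]
        # ...and return the same (or a modified) snap for later processors
        return snap
```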
Processor shared state
In general, processors don't share state, to avoid inter-thread locking.
However, a shared database of gathered information is available on all of them as a global_scope property.
This GlobalScope object contains:
- A shared stage with Actors detected in previous frames.
- A list of cameras, with calibration info.
- A media_timeline with information about playing media, to keep track of views and audience.
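Inside a processor's process() method, that shared data could be read as in this sketch (attribute names follow the list above; the exact types are not documented here):

```python
class AudienceAwareProcessor:
    def process(self, snap):
        scope = self.global_scope        # shared GlobalScope, available on every processor
        actors = scope.stage             # Actors detected in previous frames
        cameras = scope.cameras          # cameras, with calibration info
        timeline = scope.media_timeline  # playing media, to track views and audience
        # ...use the shared data; processors otherwise avoid sharing state...
        return snap
```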
Available Processors
Module base
Common Processor Parameters
- display (bool): Show processor on a system window. Default: False.
- display_extra (bool): Show processor DEBUG output in a window. Default: False.
- enabled (bool): Enable processor. Default: True.
- max_fps (float): If processing more than this fps, bypass processor (zero or less disables). Default: -1.0.
- stream (bool): Show in browser. Default: False.
- tags (str): Apply this processor only to boxes with the given tags. Default: None.
BoxParallelProcessor
Superclass for a thread-pool parallelizing processor with minimum-interval checking
Parameters
- number_of_workers (int): Number of worker threads. Default: 2 (1-64).
- recheck_interval (float): Period of re-measurement (seconds). Default: 1.0 (0.0-60.0).
- data_updates (bool): Should we send data updates for each new value. Default: False.
BypassProcessor
Bypass processor for streaming/display
Parameters
SampleParametersProcessor
Demo processor for all kinds of parameters. Does nothing.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- float01 (float): Floating point value from 0 to 1. Default: 1.0 (0.0-1.0).
- float0inf (float): Floating point value from 0 to some large value. Default: 1.0 (0.0-9.9e+99).
- int0_100 (int): Int from 0 to 100. Default: 10 (0-100).
- line (line): Line selection. Default: [[0, 50], [100, 50]].
- polygon (polygon): Polygon selection. Default: [[0, 0], [0, 100], [100, 100], [100, 0]].
- polyline (polyline): Polyline selection. Default: [[0, 80], [100, 80], [80, 60]].
- selector (value1|value2|value3): Selector. Default: value1.
- square (square): Square selection. Default: [[0, 0], [0, 100], [100, 100], [100, 0]].
- string (str): A string value. Default: Some value.
- toggle (bool): Toggle parameter. Default: False.
- vector (vector): Vector selection. Default: [[0, 50], [100, 50]].
Module boxfilters
BoxAreaFilterProcessor
Filter boxes between the given min/max areas
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- maxarea (area): Maximum area, in pixels or percent of the image, of the box to be detected (= width x height). Default: 80000 (0-4194304).
- minarea (area): Minimum area, in pixels or percent of the image, of the box to be detected (= width x height). Default: 3000 (0-160000).
BoxCameraZonesFilterProcessor
Filter person, face, and upper_body boxes that fall outside of camera-defined zones.
Person boxes are filtered based on feet position; face and upper_body boxes are filtered as well.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will filter. Default: person.
- project_head_down (float): Project N heads downwards to guess feet position to filter out heads. Default: 7.0 (0.0-10.0).
BoxContourFilterProcessor
Discard boxes whose contact point (center, top-center, bottom-center) is outside the given contour
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will filter. Default: person.
- contour (polygon): Polygon where to detect collision. Default: [].
- contact_point (top|center|bottom): Point to check for presence inside/outside the contour. Default: bottom.
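As an illustration of this kind of contact-point test (not the processor's actual implementation), OpenCV's pointPolygonTest can decide whether a box's chosen point falls inside a contour:

```python
import numpy as np
import cv2

def point_inside_contour(point_xy, contour_points):
    """Return True if the (x, y) point lies inside (or on) the polygon contour."""
    contour = np.asarray(contour_points, dtype=np.float32).reshape(-1, 1, 2)
    # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside (measureDist=False)
    return cv2.pointPolygonTest(contour, (float(point_xy[0]), float(point_xy[1])), False) >= 0

# Bottom-center contact point of a box given as (x, y, w, h)
x, y, w, h = 120, 40, 60, 180
print(point_inside_contour((x + w / 2, y + h), [[0, 0], [640, 0], [640, 480], [0, 480]]))
```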
BoxMergeProcessor
Merges boxes into ComboBox clusters based on their containment.
Parameters
- box_order (face&person&upper_body&blob&hand&head&skeleton&placeholder&combo): Type of boxes in order of containment for the clusters. Default: person,skeleton,upper_body,head,face.
BoxRatioFilterProcessor
Discard boxes whose size ratio (height/width) is below the set value
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will filter. Default: person.
- minratio (float): Minimum ratio (height/width) for the box to be detected (e.g. in a face, height > width). Default: 1.2 (0.0-10.0).
FocalBoxFilterProcessor
Filter person/head bounding boxes by estimated focal-point distance. Assuming an average height for a person (~1.75 m) and for a head (~30 cm), boxes whose estimated distance exceeds the threshold (i.e. boxes that appear smaller than those averages would allow) are discarded.
Parameters
- exp_head_height (float): Expected average height of a head in meters. Default: 0.2 (0.0-0.5).
- exp_person_height (float): Expected average height of a person in meters. Default: 1.8 (1.0-3.0).
- threshold (float): Threshold distance to discard boxes (m). Default: 5.0 (0.0-1000.0).
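The distance estimate behind this filter can be sketched with the standard pinhole relation (a simplified illustration assuming the focal length in pixels is known; the processor's exact formula is not documented here):

```python
def estimated_distance_m(box_height_px, expected_height_m, focal_length_px):
    """Pinhole approximation: distance ~= focal_length * real_height / pixel_height."""
    return focal_length_px * expected_height_m / max(box_height_px, 1)

# A person box 180 px tall seen with a focal length of ~900 px:
distance = estimated_distance_m(180, 1.8, 900.0)   # ~9.0 m
keep = distance <= 5.0                              # discarded with threshold = 5.0
print(distance, keep)
```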
SkeletonsToBoxesProcessor
Convert skeletons to face+body boxes, discarding the skeletons
Parameters
- use_face (bool): Use face points to generate a face box. Default: False.
Module camera
CameraFloorDrawingProcessor
Draws a projection of the xy camera plane onto the current snapshot.
Parameters
CameraIntrinsicCalibrationProcessor
Calibrate a camera's intrinsic parameters using a checkerboard pattern.
Parameters
- grid_x (int): Grid x size. Default: 10 (1-100).
- grid_y (int): Grid y size. Default: 7 (1-100).
- max_rms_error (float): Max. RMS error in calibration. Default: 2.0 (0.0-200.0).
- min_points (int): Minimum number of captured points. Default: 50 (1-500).
- recalibrate (bool): Recalibrate camera (triggers a new calibration). Default: False.
- square_size_mm (float): Square size in mm. Default: 30 (0.0-200.0).
CameraParametersProcessor
Adjust the camera extrinsic parameters, either from a 1 m square on the ground or manually.
Parameters
- calculate_square (trigger): Adjust translation and rotation based on the floor square. Default: False.
- camera_h (float): Height from the camera to the ground where the square lies. Default: 2.0 (-50.0-50.0).
- meter_square (polygon): A 1 m square sitting at 3 m. Default: [].
- position_x (float): Camera X position. Default: 0.0 (-1000.0-1000.0).
- position_y (float): Camera Y position. Default: 0.0 (-1000.0-1000.0).
- position_z (float): Camera Z position. Default: 0.0 (-1000.0-1000.0).
- rotation_x (float): Camera X angle in degrees. Default: 0.0 (-360.0-360.0).
- rotation_y (float): Camera Y angle in degrees. Default: 0.0 (-360.0-360.0).
- rotation_z (float): Camera Z angle in degrees. Default: 0.0 (-360.0-360.0).
- save_calibration (trigger): Save current calibration parameters. Default: False.
- save_calibration_file (str): File where to save calibration parameters. Default: calibration.json.
- square_d (float): Distance to the calibration square. Default: 3.0 (0.1-20.0).
- square_l (float): Length in meters of the side of the calibration square. Default: 1.0 (0.1-20.0).
CameraZonesAnnotatingProcessor
Draws contours and gates on the picture.
Parameters
Module comm
BoxToSimulatedMQTTProcessor
Publisher processor that outputs different box events to different MQTT paths
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: blob.
- send_falloff (float): Seconds of falloff after sending event. Default: 2 (0-100).
- trigger_class (str): Class of the trigger. Default: sensor.
- trigger_id (str): Id of the trigger. Default: 1.
MQTTPublishingProcessor
Publisher processor that outputs different box events to different MQTT paths
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- minimal (bool): Only send box and hands, reducing packet size. Default: False.
- send_biggest_n (int): Only send N biggest boxes. Default: 0 (0-100).
- skel_send (str): Comma separated parts to send (hands,feet,chest) or 'none'. Default: hands.
MessagingProcessor
Send (MQTT) messages for all entering and exiting Boxes. Message payload is of the form:
{
"sent-by": "messaging",
"class": "box",
"action": "in|out",
"boxtype": "face|person|blob|head",
"id": "abcdefg",
"tags": ["tag1", "tag2", "tag3"],
"camera": "camera name"
}
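As a consumer-side sketch, these payloads can be read with the paho-mqtt helper below; the broker host and topic are assumptions (only the payload format is documented above):

```python
import json
import paho.mqtt.subscribe as subscribe

# Host and topic are assumptions for the example, not documented defaults.
message = subscribe.simple("/vision/#", hostname="localhost")
event = json.loads(message.payload)
if event.get("class") == "box":
    print(event["boxtype"], event["id"], event["action"], "on", event["camera"])
```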
Parameters
OSCPublishingProcessor
Publisher processor that outputs different box events to different OSC messages
Parameters
- box_state (any|entered): Only emit boxes in this stage state. Default: any.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- target_host (str): Target host for OSC messages. Default: 127.0.0.1.
- target_port (int): Target port of OSC messages. Default: 15000.
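A minimal listener for these messages, using the python-osc package; since the OSC address patterns are not documented here, a catch-all default handler is used:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_any(address, *args):
    # Print every incoming OSC message and its arguments
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(handle_any)
server = BlockingOSCUDPServer(("127.0.0.1", 15000), dispatcher)  # matches the defaults above
server.serve_forever()
```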
SingleTargetMQTTPublishingProcessor
Publisher processor that outputs different box events to different MQTT paths
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- minimal (bool): Only send box and hands, reducing packet size. Default: False.
- send_criteria (center|biggest): Only send center/biggest box. Default: biggest.
- skel_send (str): Comma separated parts to send (hands,feet,chest) or 'none'. Default: hands.
StageOSCPublishingProcessor
Publisher processor that outputs different box events to different OSC messages
Parameters
- box_state (any|entered|entering|exiting|exited): Only emit boxes in this stage state. Default: entered.
- target_host (str): Target host for OSC messages. Default: 127.0.0.1.
- target_port (int): Target port of OSC messages. Default: 15000.
StageTUIOPublishingProcessor
Publisher processor that outputs different box events to different TUIO messages
Parameters
- box_state (any|entered|entering|exiting|exited): Only emit boxes in this stage state. Default: entered.
- max_skeleton_speed (float): Maximum speed accepted to send skeleton hands. Default: 0.2 (0.0-1000.0).
- skel_send (hands|chest|feet|full): Blob(s) to send for skeletons. Default: hands.
- target_host (str): Target host for TUIO messages. Default: 127.0.0.1.
- target_port (int): Target port of TUIO messages. Default: 15000.
TUIOPublishingProcessor
Publisher processor that outputs different box events to different TUIO messages
Parameters
- box_state (any|entered): Only emit boxes in this stage state. Default: any.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- image_path (str): Path where to dump snap images as FSEQ.jpg files (if set). Default: .
- max_skeleton_speed (float): Maximum speed accepted to send skeleton hands. Default: 0.2 (0.0-1000.0).
- skel_send (hands|chest|feet|full): Blob(s) to send for skeletons. Default: hands.
- target_host (str): Target host for TUIO messages. Default: 127.0.0.1.
- target_port (int): Target port of TUIO messages. Default: 15000.
VirtualCamProcessor
Outputs video frames to a virtual camera device
Parameters
- annotate (bool): Annotate the image with detection results. Default: False.
- clip_h (int): Crop height from the center of the image (zero disables). Default: 0 (0-32000).
- clip_w (int): Crop width from the center of the image (zero disables). Default: 0 (0-32000).
- device (str): Device parameter for pyvirtualcam (default automatic). Default: None.
- draw_floor (bool): Draw camera calibration floor on the image. Default: False.
- virtualcam_driver (auto|v4l2loopback|obs|unitycapture): Force driver for the camera. Default: auto.
- target_fps (int): Target FPS rate of the Virtual Camera. Default: 15 (5-60).
ZContoursProcessor
From the Z buffer, generate a mask between distance ranges. Also, calculate normals at every point and send the resulting mask to virtual camera.
Parameters
- virtualcam_device (str): Device path (e.g. /dev/video12) for the camera. Default: None.
- virtualcam_driver (auto|v4l2loopback|obs|unitycapture): Force driver for the camera. Default: auto.
- max_z_threshold (int): Maximum z if using depth plane (mm). Default: 3000 (0-32768).
- min_z_threshold (int): Minimum z if using depth plane (mm). Default: 1000 (0-32768).
- target_fps (int): Target FPS rate of the Virtual Camera. Default: 15 (5-60).
Module detect
BlobsDetectorProcessor
Subtract the background and detect blobs in the image
Parameters
- closing_kernel_size (int): Pixel radius for the closing used to reduce small dots (0 disables). Default: 6 (0-100).
- max_box_area (float): Maximum area of a detected blob. Default: 1000000 (0.0-9.99e+100).
- max_box_ratio (float): Maximum width/height ratio of a detected blob. Default: 4.0 (0.0-9.99e+100).
- max_z_threshold (int): Maximum z if using depth plane. Default: 5000 (0-32768).
- min_box_area (float): Minimum area of a detected blob. Default: 100 (0.0-9.99e+100).
- min_box_ratio (float): Minimum width/height ratio of a detected blob. Default: 0.0 (0.0-10).
- min_z_threshold (int): Minimum z if using depth plane. Default: 0 (0-32768).
- opening_kernel_size (int): Pixel radius for the opening used to reduce small dots (0 disables). Default: 6 (0-100).
- plane (rgb|z): Color plane to use. Default: z.
BoxCounterProcessor
No description
Parameters
- area_name (str): Name of the counter area. Default: Count Area.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- count_chunk_size (int): Number of samples to accumulate before emitting a count event. Default: 100.
- outlier_filter_percentile (float): Percentile to filter outliers out (zero disables). Default: 0.1 (0.0-1.0).
- report_post_url (str): URL where to POST counter updates. Default: None.
BoxFinderProcessor
Detect bounding boxes for a single class of objects, with optional filtering by area and ratio.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- contour (square): Area of the image where to detect. Default: [].
- filter_out_iou (float): Filter out boxes overlapping intersection-over-union more than the value (zero disables). Default: 0.4 (0.0-1.0).
- maxarea (area): Maximum area in pixels of the box to be detected (= width x height). Default: 80000 (0-4194304).
- minarea (area): Minimum area in pixels of the box to be detected (= width x height). Default: 3000 (0-160000).
- minratio (float): Minimum ratio (height/width) for the box to be detected (e.g. in a face, height > width). Default: 1.2 (0.0-10.0).
- threshold (float): Threshold of acceptance for the network detection. Default: 0.5 (0.0-1.0).
Networks
- Network box_finder, defaults to FaceDetectionNetwork.
ClassDetectionProcessor
Run the given multiclass detector on the image, producing boxes filtered by minimum confidence.
Parameters
- bottom_crop (int): Bottom crop area for detection. Default: 0 (0-4096).
- classes (str): Classes to accept, comma separated (empty = all). Default: .
- classes_set (str): Name of the class set (coco, peron-face-hand, yolov8...) or list of comma-separated classes. Default: coco.
- maxarea (area): Maximum area, in percent of the image or pixel area, of the box to be detected (= width x height). Default: -1.0 (-1.0-4194304).
- minarea (area): Minimum area, in percent of the image or pixel area, of the box to be detected (= width x height). Default: -1.0 (-1.0-160000).
- policy (nooverlaps|bestofclass|none): Apply a reduction policy of keeping a single item per class, with best score. Default: none.
- threshold (float): Threshold of acceptance for the network detection. Default: 0.25 (0.0-1.0).
Networks
- Network class_finder, defaults to SSDLiteObjectDetectionNetwork.
CounterProcessor
No description
Parameters
- area_name (str): Name of the counter area. Default: Count Area.
- count_chunk_size (int): Number of samples to accumulate before emitting a count event. Default: 100.
- outlier_filter_percentile (float): Percentile to filter outliers out (zero disables). Default: 0.1 (0.0-1.0).
- report_post_url (str): URL where to POST counter updates. Default: None.
- split_mode (none|full): Whether to use a sliding window to split the input image in chunks. Default: none.
Networks
- Network counter_network, defaults to CLIPCounterNetwork.
DepthBlobDetectorProcessor
Detect blobs in depth image.
Parameters
- blob_area_max (int): Maximum blob area to accept. Default: 2097152 (0-16777216).
- blob_area_min (int): Minimum blob area to accept. Default: 10 (0-16777216).
- min_distance_between_blobs (int): Minimum distance between blobs. Default: 1 (0-2048).
- morph_open_size (int): Size of the opening kernel to filter the depth image. Default: 10 (1-100).
- threshold_max (int): Maximum distance threshold. Default: 30 (0-255).
- threshold_min (int): Minimum distance threshold. Default: 10 (0-255).
SkeletonProcessor
Find persons and skeletal keypoints in an image.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: skeleton.
- identify (bool): Identify (add id) to the skeletons. Default: True.
- invert_lr (bool): Invert left and right. Default: True.
- maxarea (area): Maximum area, as percentage of the image or pixel area, of the box to be detected (= width x height). Default: 4194304 (-1-4194304).
- minarea (area): Minimum area, as percentage of the image or pixel area, of the box to be detected (= width x height). Default: 2500 (-1-160000).
- minpercent (float): Minimum percent (0-1) of valid keypoints to accept a skeleton. Default: 0.4 (0.0-1.0).
- short_raise (bool): Consider a wrist higher than the elbow a raised hand, else a wrist higher than the neck. Default: True.
Networks
- Network skeleton_finder, defaults to YOLOV8PoseNetwork.
SlidingWindowClassDetectionProcessor
Find classes in the image, as blobs
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will emit. Default: face.
- classes (str): Classes to accept, comma separated (empty = all). Default: .
- classes_set (str): Name of the class set (coco, peron-face-hand, yolov8...) or list of comma-separated classes. Default: coco.
- threshold (float): Threshold of acceptance for the network detection. Default: 0.25 (0.0-1.0).
Networks
- Network class_finder, defaults to SSDLiteObjectDetectionNetwork.
TopDepthBlobProcessor
Scan depth images for blobs between given distances. Used for person detection from a top-mounted 3D camera looking straight down.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- calibrated (bool): Check for recalibration. Default: False.
- kernelsize (int): Kernel size for opening the detected image. Default: 5 (1-255).
- maxarea (float): Maximum area in pixels of the box to be detected (= width x height). Default: 10000 (0-10000000).
- max_threshold (float): Maximum threshold for blob detection. Default: 256.0 (0.0-255.0).
- metric_unit (float): Divider to convert camera values to meters. Default: 1000.
- minarea (float): Minimum area in pixels of the box to be detected (= width x height). Default: 20 (0-10000000).
- min_threshold (float): Minimum threshold for blob detection. Default: 10.0 (0.0-255.0).
Networks
- Network box_finder, defaults to TopDepthPersonDetectionNetwork.
Module features
EncoderProcessor
Assigns and recognizes box ids based on vector encodings
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- maximum_overlap_time (float): Maximum time during which we will assign an id because of an IOU overlap. Default: 2.0 (0.0-1.0).
- minimum_distance_weight (float): Minimum weight to consider a cosine distance valid. Default: 0.5 (0.0-1.0).
- minimum_overlap_percent (float): Minimum IOU overlap percent to consider two boxes the same. Default: 0.4 (0.0-1.0).
- number_of_workers (int): Number of worker threads. Default: 2 (1-64).
- purge_less_than_frames (int): Purge identifications only seen less than that number of frames during purge_older_than_secs. Default: 20 (0-200).
- purge_older_than_secs (float): Purge identifications older than this and seen fewer than purge_less_than_frames times. Default: 8.0 (0.0-60.0).
FaceAgeGenderProcessor
Find age and gender for the snap face boxes, using separate networks for age and gender
Parameters
- number_of_workers (int): Number of worker threads. Default: 2 (1-64).
- recheck_interval (float): Period of re-measurement (seconds). Default: 1.0 (0.0-60.0).
- data_updates (bool): Should we send data updates for each new value. Default: False.
Networks
- Network age, defaults to FaceAgeNetwork.
- Network gender, defaults to FaceGenderNetwork.
FaceAttentionProcessor
Calculate face attention as a percentage of total time spent looking at the camera surroundings.
Parameters
- number_of_workers (int): Number of worker threads. Default: 2 (1-64).
- recheck_interval (float): Period of re-measurement (seconds). Default: 1.0 (0.0-60.0).
- data_updates (bool): Should we send data updates for each new value. Default: False.
- straight_angle_limit (float): Maximum view angle that is still considered looking straight at the camera. Default: 25.0 (0.0-180.0).
Networks
- Network gaze, defaults to FaceGazeNetwork.
- Network pose, defaults to FacePoseNetwork.
FaceEmotionProcessor
Calculate dominant perceived emotional state
Parameters
- number_of_workers (int): Number of worker threads. Default: 2 (1-64).
- recheck_interval (float): Period of re-measurement (seconds). Default: 1.0 (0.0-60.0).
- data_updates (bool): Should we send data updates for each new value. Default: False.
Networks
- Network emotion, defaults to FaceEmotionNetwork.
FaceEncoderProcessor
Assigns and recognizes box ids based on rotated face encodings
Parameters
Networks
- Network landmarks, defaults to FaceLandmarksNetwork.
- Network identifier, defaults to FaceReidentificationNetwork.
FaceLandmarksProcessor
Retrieve landmarks and rotated face from the given face boxes in the snap.
Parameters
- number_of_workers (int): Number of worker threads. Default: 2 (1-64).
- recheck_interval (float): Period of re-measurement (seconds). Default: 1.0 (0.0-60.0).
- save_rotated (bool): Save the rotated, warped face for later processing. Default: True.
- data_updates (bool): Should we send data updates for each new value. Default: False.
Networks
- Network landmarks, defaults to FaceLandmarksNetwork.
FeatureMergeProcessor
Assigns feature (class) blobs to the nearest overlapping person.
Parameters
- min_iou (float): Minimum overlap needed to assign a class to a box. Default: 0.1 (0.0-1.0).
- target_box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes the classes will be assigned to. Default: person.
FixedEmbedderProcessor
No description
Parameters
FrontFacingCounterProcessor
Approximate attention based on whether the face for a given person is detected or not.
Parameters
PersonAgeGenderProcessor
Find age and gender for the given face boxes
Parameters
Networks
- Network age_gender, defaults to FaceAgeGenderVoloNetwork.
ScreenAttentionProcessor
Calculate face attention as a percentage of total time spent looking at the camera surroundings.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: head.
- fallback_angle (float): Angle at which an unconfigured screen is considered to be looked at. Default: 25.0 (0.0-180.0).
Networks
- Network pose, defaults to FacePoseNetwork.
SelectiveAgeGenderProcessor
Detect a single candidate age and gender, picking one per frame from the global stage in order of descending face box area.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes to process. Default: face.
- max_ttl (float): Maximum time to live for a box before it is discarded. Default: 10.0 (0.0-360.0).
- use_body (bool): Use face and body picture with the network. Default: True.
Networks
- Network age_gender_network, defaults to FaceAgeGenderVoloNetwork.
SelectiveInferenceProcessor
Superclass for processors acting on the global stage. Override and implement a metric to define order of box processing. One box will be processed per frame.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes to process. Default: face.
- max_ttl (float): Maximum time to live for a box before it is discarded. Default: 10.0 (0.0-360.0).
Module globalstage
ActorAgeGenderProcessor
Detect a single candidate age and gender, picking one per frame from the global stage in order of descending face box area.
Parameters
- use_body (bool): Use face and body picture with the network. Default: True.
Networks
- Network age_gender_network, defaults to FaceAgeGenderVoloNetwork.
ActorInZoneProcessor
Track boxes moving into/out of a certain contour
Parameters
- box_position (center|topcenter|bottomcenter): Position on the box to locate inside/outside the area. Default: bottomcenter.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will use for the detection. Default: person.
- contour (polygon): Contour polygon where to detect entrance. Default: [].
- contour_name (str): Name of the detection contour. Default: contour.
ActorMultiZoneProcessor
Track boxes moving into/out of several contours. Contours can be dynamically configured by setting parameters named contour#contour_name#, where # is a number from 0 to 9. Contours can also be defined per-camera in the visionnode.ini [camera] section.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will use for the detection. Default: combo.
StageLoaderProcessor
Load IDENTIFIED person and face boxes into actors
Parameters
- actor_box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of actor boxes to use as actor basis. Default: person.
- debug_boxes (bool): Generate debug boxes for annotation. Default: True.
- exit_time_interval (float): Time to wait to consider an actor out of stage. Default: 6.0 (0.0-60).
- ingress_time_interval (float): Time to wait to consider an actor in stage. Default: 3.0 (0.0-60).
- lost_time_interval (float): Time to wait to consider an actor lost. Default: 2.0 (0.0-60).
- purge_maxframes (int): Maximum frames boxes can disappear before we stop tracking them. Default: 150.
- purge_maxtime (float): Maximum time boxes can disappear before we stop tracking them. Default: 4.0.
- track_face_boxes (bool): Add identified faces to actors. Default: False.
StageReadingProcessor
Superclass for processors acting on the global stage. Override and implement a metric to define order of box processing. One box will be processed per frame.
Parameters
StageReportProcessor
Report stage status and events like actor entrances and exits.
Parameters
- debug_boxes (bool): Generate debug boxes for annotation. Default: True.
- enable_http (bool): Enable HTTP POSTing of stage to send_status_post_url. Default: False.
- enable_mqtt (bool): Enable MQTT sending to user-defined channel send_status_mqtt_channel. Default: False.
- is_view_time_interval (float): Minimum attention time to consider presence a view event. Default: 2.0 (0.0-60).
- send_status_mqtt_channel (str): Use this MQTT channel instead of the default one for status messages. Default: /vision.
- send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval. Default: http://localhost:8085.
- send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval. Zero disables. Default: 1.0 (0.0-60.0).
Module image
ImageCircleCropProcessor
Clip an image region and black out everything outside it. Useful as a preprocessing step to mask out external interference.
Parameters
- dx (int): Pixels of center x offset. Default: 0 (-10000-10000).
- dy (int): Pixels of center y offset. Default: 0 (-10000-10000).
- radius (int): Radius of the circle. Default: 300.
- plane (rgb|z): Color plane to use. Default: z.
ImageRegionProcessor
Clip an image region and black out everything outside it. Useful as a preprocessing step to mask out external interference.
Parameters
- contour (polygon): Polygon area of the image to clip. Default: [].
- crop (bool): Crop the boundaries of the contour. Default: True.
ZDepthFilterProcessor
Mask the RGB image of the snapshot using the Z (depth) image, clipping between two intervals.
Parameters
- threshold_max (float): Maximum distance threshold. Default: 3000 (0-4294967296).
- threshold_min (float): Minimum distance threshold. Default: 1000 (0-4294967296).
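Conceptually the masking is equivalent to this NumPy sketch (illustrative only; array names and shapes are assumptions):

```python
import numpy as np

def mask_rgb_by_depth(rgb, z, z_min=1000, z_max=3000):
    """Keep RGB pixels whose depth value falls inside [z_min, z_max]; zero out the rest."""
    keep = (z >= z_min) & (z <= z_max)       # boolean mask, shape (H, W)
    return rgb * keep[..., np.newaxis]

rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
z = np.random.randint(0, 8000, (480, 640), dtype=np.uint16)
masked = mask_rgb_by_depth(rgb, z)
```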
Module tag
AreaEnterExitProcessor
Track boxes entering/exiting a given area limiting boundary.
Parameters
- area_name (str): Name of the area. Default: Area1.
- box_position (center|topcenter|bottomcenter): Position on the box to locate inside/outside the area. Default: center.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will use for the detection. Default: head.
- report_post_period (float): Period of POST updates, in seconds. Default: 30.0 (0.0-1000.0).
- report_post_url (str): URL where to POST counter updates. Default: None.
AreaHitProcessor
Track boxes moving into/out of a certain contour
Parameters
- box_name (str): Name of the hit box. Default: box1.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- contour (polygon): Polygon where to detect collision. Default: [].
- remove_timeout (float): Timeout for known boxes to be removed. Default: 30.
AreaTaggerProcessor
Add a tag to any box whose center point (or top center or bottom center) enters the area of interest.
Parameters
- area (polygon): Detection area for tag. Default: None.
- box_position (center|topcenter|bottomcenter): Position on the box to locate inside/outside the area. Default: bottomcenter.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- tag (str): Tag to mark on area hit. Default: tag1.
DoorCounterProcessor
Track boxes passing a door and inside a given contour.
Parameters
- area_limit (int): Maximum number of persons this area allows. Default: 10.
- area_name (str): Name of the area this door enters or exits. Default: area1.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- door (polygon): Polygon representing the entry door. Default: [[0, 240], [640, 240]].
- door_name (str): Name of the door. Default: door1.
- inside (square): Square to mark the inside area (centroid will be used). Default: [[320, 400], [330, 400], [330, 410], [320, 410]].
- remove_timeout (float): Timeout (seconds) for stale boxes to be removed. Default: 10.
- track_bottom (bool): Count crossings at the feet, instead of the center of the person box. Default: True.
GateProcessor
Tracks entrances and exits of gates and zones. Can send events when a box enters/exits a gate or zone.
Parameters
- bounce_band (int): Bounce band width where no count is made. Default: 28.
- box_target (head|feet|center): Target point of the box to track. Default: feet.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo|any): Type of boxes this processor will filter. Default: combo.
- counter_type (contour|gate): Type of counter to use. Default: gate.
- distance_threshold (int): Distance threshold to consider a box as entering/exiting. Default: 70.
- enable_events (bool): Enable events to be sent to the MQTT broker. Default: True.
- hysteresis_frames (int): Number of frames to wait before counting a double crossing. Default: 8.
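The gate-style counting can be pictured as a sign change of the tracked point relative to the gate segment between consecutive frames, as in this simplified sketch (the processor additionally applies the bounce band and hysteresis parameters listed above):

```python
def side_of_gate(p, a, b):
    """Sign of the cross product: which side of segment a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

gate_a, gate_b = (0, 240), (640, 240)        # a horizontal gate line
prev_pt, curr_pt = (320, 230), (322, 255)    # tracked feet point in two frames
if side_of_gate(prev_pt, gate_a, gate_b) * side_of_gate(curr_pt, gate_a, gate_b) < 0:
    print("gate crossed")
```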
TryOnProcessor
Virtual Try-On Processor
Sets up a table contour for pick and place actions
Parameters
- bistate_frames (int): Number of frames of delay for pick/place trigger. Default: 10.
- box_try_on_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of box where the pickables should be inside of to be considered a try-on. Default: face.
- pickables (str): Comma separated list of pickable classes. Default: glasses1,glasses2,glasses3.
- table_contour (polygon): Polygon of the pick/place table. Default: [[0.0, 0.5], [1.0, 0.5], [1.0, 1.0], [0.0, 1.0]].
- use_global_stage (bool): Use boxes from the global stage. Default: True.
Module tracking
AreaKalmanTrackingProcessor
Kalman based box tracking with delimited areas of tracking and areas where no trackers are expected to be "born" (e.g. you don't expect someone to appear from nowhere in the middle of the frame).
Parameters
- association_iou (float): Minimum IOU overlap percent for a track to match a tracker. Default: 0.3 (0.0-1.0).
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- max_age (int): Maximum age in FRAMES for a tracklet to survive. Default: 50 (0-9999).
- min_hits (int): Minimum number of hits for a tracklet to survive. Default: 3 (0-1000).
- no_birth_area (polygon): Area of the image where no trackers should appear. Default: [].
- re_association_distance (float): Re-association distance to re-match an unmatched detection. Default: 50 (0.0-200).
- tracking_area (polygon): Area of the image where to track. Default: [].
BoxOverlapIdentifierProcessor
Assigns and recognizes box ids based on simple IOU overlap
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- discard_older_than (float): Discard seen boxes after this many seconds have passed. Default: 3 (0-9999).
- update_iou (float): Minimum IOU between frames to preserve stable coordinates. Default: 0.8 (0.0-1.0).
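For reference, the IOU referenced by these overlap thresholds is the standard intersection-over-union of two axis-aligned boxes, e.g.:

```python
def iou(box_a, box_b):
    """IOU of two boxes given as (x1, y1, x2, y2) corners."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # ~0.143
```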
FaceInsidePersonIdentifierProcessor
Assigns and recognizes faces inside person boxes and matches their IDs if possible.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes for the person container (person, topbody, etc). Default: person.
KalmanTrackingProcessor
Basic Kalman-based Tracking Processor
Parameters
- association_iou (float): Minimum IOU overlap percent for a track to match a tracker. Default: 0.3 (0.0-1.0).
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- max_age (float): Maximum age in FRAMES for a tracklet to survive. Default: 3.0 (0-999).
- min_hits (int): Minimum number of hits for a tracklet to survive. Default: 3 (0-1000).
- re_association_distance (float): Re-association distance to re-match an unmatched detection. Default: 50 (0.0-20000).
ReidentifierProcessor
DNN-embedding based reidentification processor @deprecated
Parameters
TaggedDoorCounterProcessor
Track boxes moving into/out of a certain contour
Parameters
- area_name (str): Name of the area this door enters or exits. Default: area1.
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- remove_timeout (float): Timeout (seconds) for stale boxes to be removed. Default: 15.
TrackingProcessor
Tracking processor with configurable tracker.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: person.
- debug_boxes (bool): Generate debug boxes for annotation. Default: True.
- default_tracker (BASIC|UCMC|BOTSORT|NEOTRACKER|HEADTRACKER): Default tracker to use. Default: UCMC.
- fallback_tracker (BASIC|UCMC|BOTSORT|NEOTRACKER|HEADTRACKER): Fallback tracker to use if the default tracker cannot be used. Default: BASIC.
- max_age (float): Maximum life in seconds for a lost tracklet to be removed. Default: 5.0 (1.0-20.0).
- preserve_skeletons (bool): Whether or not to preserve skeleton frames. Default: False.
- tracker_params (dict): Extra tracker parameters, passed to the constructor. Default: {}.
- update_feet (bool): Update feet position for each tracked box. Default: True.
Module transform
SkeletonToUpperBodyBoxProcessor
Processor that converts skeleton keypoints to upper body bounding boxes.
Parameters
- replace_skeletons (bool): Replace skeletons with upper body boxes. Default: False.
Module retired (deprecated)
APIReportingProcessor
Generate HTTP(s) calls on box enter/exit states. @deprecated
Parameters
- api_url (string): Reporting API endpoint. Default: http://api.end.pt.
GridBasedCalibrationProcessor
Grid based calibration for Broox Media Player tactile detection. Deprecated product. Do not use.
Parameters
- calibration (showcam|grid|bypass|calibrated): Fire calibration manually. Default: bypass.
- max_frame_count (int): Number of frames for calibration. Default: 300 (1-10000).
- stream_outfile (string): Full path of the wall.jpg file. Default: /tmp/wall.jpg.
LightTrackingProcessor
DEPRECATED: Used to control an array of lights to react to people passing.
Parameters
MappingProcessor
Processor that maps coordinates onto a 0-1 line along the defined contour
Parameters
PersonFaceStageProcessor
Keep track of ingressing person and face boxes in a virtual "stage", setting their box.state (entering, entered, exiting, exited...). Specialized version of StageProcessor.
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- exit_time_interval (float): Time to wait to consider a box out of stage. Default: 3.0 (0.0-60).
- ingress_time_interval (float): Time to wait to consider a box in stage. Default: 1.0 (0.0-60).
- is_view_time_interval (float): Minimum attention time to consider presence a view event. Default: 2.0 (0.0-60).
- send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval. Default: None.
- send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval. Zero disables. Default: 1.0 (0.0-60.0).
StageProcessor
Keep track of ingressing boxes in a virtual "stage", setting their box.state (entering, entered, exiting, exited...).
Parameters
- box_type (face|person|upper_body|blob|hand|head|skeleton|placeholder|combo): Type of boxes this processor will handle. Default: face.
- exit_time_interval (float): Time to wait to consider a box out of stage. Default: 3.0 (0.0-60).
- ingress_time_interval (float): Time to wait to consider a box in stage. Default: 1.0 (0.0-60).
- is_view_time_interval (float): Minimum attention time to consider presence a view event. Default: 2.0 (0.0-60).
- send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval. Default: None.
- send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval. Zero disables. Default: 1.0 (0.0-60.0).