Processors

Processors run as separate threads in the pipeline, linked by input-output queues.

Processors define a process(snapshot) method that receives a Snap object containing the input image(s) and the results of previous processors. The method acts on this data and returns either the same snap or a modified version of it, which is then handed to later processors in the pipeline.
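For illustration, a minimal custom processor could look like the sketch below. Only the process(snapshot) contract and the Snap object are documented above; the import path and the attribute names used on the snap are assumptions.

# Hedged sketch of a custom processor; the base-class import path and the
# snap attributes used here are assumptions, not the documented API.
from vision.processors.base import Processor  # assumed module path

class BoxCountProcessor(Processor):
    """Stores the number of boxes found by earlier processors."""

    def process(self, snap):
        # 'snap' carries the input image(s) plus the results of previous processors.
        boxes = getattr(snap, "boxes", [])      # assumed attribute name
        snap.data = getattr(snap, "data", {})   # assumed result container
        snap.data["box_count"] = len(boxes)
        # Return the (possibly modified) snap so later processors receive it.
        return snap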

Processor shared state

In general, processors do not share state, to avoid inter-thread locking. However, a shared database of gathered information is available to all of them through their global_scope property.

This GlobalScope object contains:

  • A shared stage with the Actors detected in previous frames.
  • A list of cameras, with their calibration info.
  • A media_timeline with information about the media currently playing, used to keep track of views and audience.
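Inside process(), that shared information can be read through the processor's global_scope property. A minimal sketch (only global_scope, stage, cameras and media_timeline are documented; the collection and attribute names iterated over here are assumptions):

# Hedged sketch: reading the shared GlobalScope from inside process().
# The 'actors' collection name and the printed values are assumptions.
def process(self, snap):
    scope = self.global_scope

    # Actors detected in previous frames live on the shared stage.
    for actor in scope.stage.actors:
        print("actor on stage:", actor)

    # Per-camera calibration info.
    for camera in scope.cameras:
        print("camera:", camera)

    # Media playback information used to attribute views and audience.
    print("timeline:", scope.media_timeline)

    return snap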

Available Processors

Module base

Common Processor Parameters

  • display (bool): Show processor on a system window.

    • default: False.
  • display_extra (bool): Show processor DEBUG output in a window.

    • default: False.
  • enabled (bool): Enable processor.

    • default: True.
  • max_fps (float): If processing at more than this FPS, bypass the processor (zero or less disables).

    • default: -1.0.
  • stream (bool): Show in browser.

    • default: False.
  • tags (str): Apply this processor only to boxes with the given tags.

    • default: None.

BoxParallelProcessor

Superclass for thread-pool parallelizing processors with minimum-interval checking

Parameters
  • number_of_workers (int): Number of worker threads.

    • default: 2 (1-64).
  • recheck_interval (float): Period of re-measurement (seconds).

    • default: 1.0 (0.0-60.0).
  • data_updates (bool): Should we send data updates for each new value.

    • default: False.

BypassProcessor

Bypass processor for streaming/display

Parameters

SampleParametersProcessor

Demo processor for all kinds of parameters. Does nothing.

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • float01 (float): Floating point value from 0 to 1.

    • default: 1.0 (0.0-1.0).
  • float0inf (float): Floating point value from 0 to some large value.

    • default: 1.0 (0.0-9.9e+99).
  • int0_100 (int): Integer from 0 to 100.

    • default: 10 (0-100).
  • line (line): Line selection.

    • default: [[0, 50], [100, 50]].
  • polygon (polygon): Polygon selection.

    • default: [[0, 0], [0, 100], [100, 100], [100, 0]].
  • polyline (polyline): Polyline selection.

    • default: [[0, 80], [100, 80], [80, 60]].
  • selector (value1|value2|value3): Selector.

    • default: value1.
  • square (square): Square selection.

    • default: [[0, 0], [0, 100], [100, 100], [100, 0]].
  • string (str): A string value.

    • default: Some value.
  • toggle (bool): Toggle Parameter.

    • default: False.
  • vector (vector): Vector selection.

    • default: [[0, 50], [100, 50]].

Module boxfilters

BoxAreaFilterProcessor

Filter boxes between the given min/max areas

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • maxarea (area): Maximum area in pixels of the box to be detected (=width x height).

    • default: 80000 (0-4194304).
  • minarea (area): Minimum area in pixels of the box to be detected (=width x height).

    • default: 3000 (0-160000).

BoxCameraZonesFilterProcessor

Filter person and face boxes outside of camera-defined zones. Person boxes are filtered based on feet position; face boxes are filtered by projecting downwards.

Parameters
  • project_head_down (float): Project head boxes downwards by N head-heights to estimate the feet position used to filter out heads.
    • default: 7.0 (0.0-10.0).

BoxContourFilterProcessor

Discard boxes whose contact point (center, top-center, bottom-center) is outside the given contour

Parameters
  • box_type (face|person|blob|head|skeleton|any): Type of boxes this processor will filter.

    • default: person.
  • contour (polygon): Polygon where to detect collision.

    • default: [].
  • contact_point (top|center|bottom): Point to check for presence inside/outside the contour.

    • default: bottom.

BoxRatioFilterProcessor

Discard boxes whose size ratio (height/width) is below the set value

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will filter.

    • default: person.
  • minratio (float): Minimum ratio (height/width) for the box to be detected (e.g. for a face, height > width).

    • default: 1.2 (0.0-10.0).

FocalBoxFilterProcessor

Filter person/head bounding boxes by estimated focal-point distance. Assuming an average height for a person (~1.75 m) and for a head (~30 cm), discard boxes that are smaller than the average.

Parameters
  • exp_head_height (float): Expected average height of a head in meters.

    • default: 0.2 (0.0-0.5).
  • exp_person_height (float): Expected average height of a person in meters.

    • default: 1.8 (1.0-3.0).
  • threshold (float): Threshold distance to discard boxes (m).

    • default: 5.0 (0.0-1000.0).
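The estimate behind this filter is the standard pinhole relation: a box of real-world height h seen at distance d projects to roughly f * h / d pixels, where f is the focal length in pixels. A minimal sketch of that calculation (the focal length and pixel height below are illustrative numbers, not processor defaults):

# Hedged sketch of the pinhole distance estimate this filter relies on:
#   pixel_height ≈ f_px * real_height / distance
#   distance     ≈ f_px * real_height / pixel_height
def estimated_distance_m(box_height_px, real_height_m, focal_length_px):
    return focal_length_px * real_height_m / box_height_px

# Example: a person box 180 px tall, expected height 1.8 m, focal length 900 px.
d = estimated_distance_m(180, 1.8, 900.0)   # 9.0 m
# With threshold=5.0 such a box would be discarded.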

SkeletonsToBoxesProcessor

Convert skeletons to face and body boxes, then discard the skeletons

Parameters
  • use_face (bool): Use face points to generate a face box.
    • default: False.

Module camera

CameraFloorDrawingProcessor

Draws a projection of the xy camera plane onto the current snapshot.

Parameters

CameraIntrinsicCalibrationProcessor

Calibrate a camera's intrinsic parameters using a checkerboard pattern.

Parameters
  • grid_x (int): Grid x size.

    • default: 10 (1-100).
  • grid_y (int): Grid y size.

    • default: 7 (1-100).
  • max_rms_error (float): Max. RMS error in calibration.

    • default: 2.0 (0.0-200.0).
  • min_points (int): Minimum number of captured points.

    • default: 50 (1-500).
  • recalibrate (bool): Trigger a camera recalibration.

    • default: False.
  • square_size_mm (float): Square size in mm.

    • default: 30 (0.0-200.0).
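The processor's own code is not shown here, but its parameters map directly onto the standard OpenCV checkerboard routine; the following is a hedged sketch of that routine for reference (captured_frames is a placeholder for frames in which the board is visible):

# Hedged sketch of the standard OpenCV checkerboard calibration that
# grid_x, grid_y, square_size_mm and max_rms_error correspond to.
import cv2
import numpy as np

grid_x, grid_y, square_size_mm = 10, 7, 30.0
captured_frames = []   # placeholder: BGR frames showing the checkerboard

# Real-world coordinates of the inner corners on the board plane (z = 0), in mm.
objp = np.zeros((grid_x * grid_y, 3), np.float32)
objp[:, :2] = np.mgrid[0:grid_x, 0:grid_y].T.reshape(-1, 2) * square_size_mm

objpoints, imgpoints, image_size = [], [], None
for frame in captured_frames:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (grid_x, grid_y))
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

if objpoints:
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints, image_size, None, None)
    if rms > 2.0:   # cf. max_rms_error
        print("calibration rejected, RMS error too high:", rms)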

CameraParametersProcessor

Adjust the camera's extrinsic parameters, either from a 1 m square on the ground or manually.

Parameters
  • calculate_square (trigger): Adjust translation and rotation based on the floor square.

    • default: False.
  • camera_h (float): Height from the camera to the ground where the square lies.

    • default: 2.0 (-50.0-50.0).
  • meter_square (polygon): A 1 m square sitting at 3 m from the camera.

    • default: [].
  • position_x (float): Camera X position.

    • default: 0.0 (-1000.0-1000.0).
  • position_y (float): Camera Y position.

    • default: 0.0 (-1000.0-1000.0).
  • position_z (float): Camera Z position.

    • default: 0.0 (-1000.0-1000.0).
  • rotation_x (float): Camera X angle in degrees.

    • default: 0.0 (-360.0-360.0).
  • rotation_y (float): Camera Y angle in degrees.

    • default: 0.0 (-360.0-360.0).
  • rotation_z (float): Camera Z angle in degrees.

    • default: 0.0 (-360.0-360.0).
  • save_calibration (trigger): Save current calibration parameters.

    • default: False.
  • save_calibration_file (str): File where to save calibration parameters.

    • default: calibration.json.
  • square_d (float): Distance to the calibration square.

    • default: 3.0 (0.1-20.0).
  • square_l (float): Length in meters of the side of the calibration square.

    • default: 1.0 (0.1-20.0).
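For the square-based adjustment, the usual way to recover rotation and translation from four known corners is a PnP solve; the sketch below uses OpenCV. The world-coordinate convention, the pixel values and the intrinsic matrix are illustrative assumptions, not the processor's implementation.

# Hedged sketch: extrinsics from the meter_square corners via cv2.solvePnP.
import cv2
import numpy as np

square_l, square_d = 1.0, 3.0   # side length and distance in meters (cf. defaults)

# Assumed world frame: x to the right, y away from the camera, square on the ground (z = 0).
object_points = np.array([
    [-square_l / 2, square_d,            0.0],
    [ square_l / 2, square_d,            0.0],
    [ square_l / 2, square_d + square_l, 0.0],
    [-square_l / 2, square_d + square_l, 0.0],
], dtype=np.float32)

# The same corners as selected in the image (meter_square), in pixels (illustrative).
image_points = np.array([[300, 400], [340, 400], [345, 360], [295, 360]], dtype=np.float32)

# Intrinsics from the intrinsic calibration step (illustrative values).
camera_matrix = np.array([[900.0, 0.0, 320.0],
                          [0.0, 900.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
# rvec/tvec describe the camera pose relative to the square; position_x/y/z and
# rotation_x/y/z can then be derived from them.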

Module comm

MQTTPublishingProcessor

Publisher processor that outputs different box events to different MQTT paths

Parameters
  • box_type (face|person|blob|head|none): Type of boxes this processor will handle.

    • default: person.
  • minimal (bool): Only send box and hands, reducing packet size.

    • default: False.
  • send_biggest_n (int): Only send N biggest boxes.

    • default: 0 (0-100).
  • skel_send (str): Comma separated parts to send (hands,feet,chest) or 'none'.

    • default: hands.

MessagingProcessor

Send (MQTT) messages for all entering and exiting Boxes. Message payload is of the form:

{
"sent-by": "messaging",
"class": "box",
"action": "in|out",
"boxtype": "face|person|blob|head",
"id": "abcdefg",
"tags": ["tag1", "tag2", "tag3"],
"camera": "camera name"
}
Parameters
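A minimal consumer of these messages might look like the following sketch using paho-mqtt. The broker host and the topic path are deployment-specific and therefore assumptions here; the wildcard '#' simply subscribes to everything.

# Hedged sketch of a subscriber for the payload shown above (paho-mqtt 2.x;
# with paho-mqtt 1.x use plain mqtt.Client()).
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    if event.get("class") == "box":
        print(f'{event["boxtype"]} {event["id"]} went {event["action"]} '
              f'on camera {event["camera"]} (tags: {event["tags"]})')

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)   # broker address is deployment-specific
client.subscribe("#")               # actual topic path is deployment-specific
client.loop_forever()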

OSCPublishingProcessor

Publisher processor that outputs different box events to different OSC messages

Parameters
  • box_state (any|entered): Only emit boxes in this stage state.

    • default: any.
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • target_host (str): Target host for OSC messages.

    • default: 127.0.0.1.
  • target_port (int): Target port of OSC messages.

    • default: 15000.
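On the receiving side, a minimal OSC listener can be sketched with python-osc. The OSC address patterns emitted by this processor are not documented here, so a catch-all handler is used; host and port match the processor defaults.

# Hedged sketch of an OSC receiver matching the default target_host/target_port.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_any(address, *args):
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(handle_any)   # address patterns are not documented here

server = BlockingOSCUDPServer(("127.0.0.1", 15000), dispatcher)
server.serve_forever()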

SingleTargetMQTTPublishingProcessor

Publisher processor that outputs different box events to different MQTT paths

Parameters
  • box_type (face|person|blob|head|none): Type of boxes this processor will handle.

    • default: person.
  • minimal (bool): Only send box and hands, reducing packet size.

    • default: False.
  • send_criteria (center|biggest): Only send center/biggest box.

    • default: biggest.
  • skel_send (str): Comma separated parts to send (hands,feet,chest) or 'none'.

    • default: hands.

StageOSCPublishingProcessor

Publisher processor that outputs different box events to different OSC messages

Parameters
  • box_state (any|entered|entering|exiting|exited): Only emit boxes in this stage state.

    • default: entered.
  • target_host (str): Target host for OSC messages.

    • default: 127.0.0.1.
  • target_port (int): Target port of OSC messages.

    • default: 15000.

StageTUIOPublishingProcessor

Publisher processor that outputs different box events to different TUIO messages

Parameters
  • box_state (any|entered|entering|exiting|exited): Only emit boxes in this stage state.

    • default: entered.
  • max_skeleton_speed (float): Maximum speed accepted to send skeleton hands.

    • default: 0.2 (0.0-1000.0).
  • skel_send (hands|chest|feet|full): Blob(s) to send for skeletons.

    • default: hands.
  • target_host (str): Target host for TUIO messages.

    • default: 127.0.0.1.
  • target_port (int): Target port of TUIO messages.

    • default: 15000.

TUIOPublishingProcessor

Publisher processor that outputs different box events to different TUIO messages

Parameters
  • box_state (any|entered): Only emit boxes in this stage state.

    • default: any.
  • box_type (face|person|blob|head|skeleton|other): Type of boxes this processor will handle.

    • default: face.
  • image_path (str): Path where to dump snap images as FSEQ.jpg files (if set).

    • default: .
  • max_skeleton_speed (float): Maximum speed accepted to send skeleton hands.

    • default: 0.2 (0.0-1000.0).
  • skel_send (hands|chest|feet|full): Blob(s) to send for skeletons.

    • default: hands.
  • target_host (str): Target host for TUIO messages.

    • default: 127.0.0.1.
  • target_port (int): Target port of TUIO messages.

    • default: 15000.

VirtualCamProcessor

Outputs video frames to a virtual camera device

Parameters
  • clip_h (int): Crop height from the center of the image (zero disables).

    • default: 0 (0-32000).
  • clip_w (int): Crop width from the center of the image (zero disables).

    • default: 0 (0-32000).
  • device (str): Device parameter for pyvirtualcam (default automatic).

    • default: None.
  • draw_floor (bool): Draw camera calibration floor on the image.

    • default: False.
  • virtualcam_driver (auto|v4l2loopback|obs|unitycapture): Force driver for the camera.

    • default: auto.
  • target_fps (int): Target FPS rate of the Virtual Camera.

    • default: 15 (5-60).
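Downstream applications can read the virtual device like any other camera, for example with OpenCV. The device path below is an assumption; on Linux it is whichever /dev/videoN the v4l2loopback module created, and on Windows/macOS an integer index is used instead.

# Hedged sketch: consuming the virtual camera output with OpenCV.
import cv2

cap = cv2.VideoCapture("/dev/video10")   # assumed v4l2loopback device path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("virtualcam", frame)
    if cv2.waitKey(1) == 27:             # Esc quits
        break
cap.release()
cv2.destroyAllWindows()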

ZContoursProcessor

From the Z buffer, generate a mask between the given distance thresholds. Also calculate normals at every point and send the resulting mask to a virtual camera.

Parameters
  • virtualcam_device (str): Device path (e.g. /dev/video12) for the camera.

    • default: None.
  • virtualcam_driver (auto|v4l2loopback|obs|unitycapture): Force driver for the camera.

    • default: auto.
  • max_z_threshold (int): Maximum z if using depth plane (mm).

    • default: 3000 (0-32768).
  • min_z_threshold (int): Minimum z if using depth plane (mm).

    • default: 1000 (0-32768).
  • target_fps (int): Target FPS rate of the Virtual Camera.

    • default: 15 (5-60).

Module detect

BlobsDetectorProcessor

Subtract the background and detect blobs in the image

Parameters
  • closing_kernel_size (int): Pixel radius for the closing used to reduce small dots.

    • default: 6 (1-100).
  • max_box_area (float): Maximum area of a detected blob.

    • default: 1000000 (0.0-9.99e+100).
  • max_box_ratio (float): Maximum width/height ratio of a detected blob.

    • default: 4.0 (0.0-9.99e+100).
  • max_z_threshold (int): Maximum z if using depth plane.

    • default: 5000 (0-32768).
  • min_box_area (float): Minimum area of a detected blob.

    • default: 100 (0.0-9.99e+100).
  • min_box_ratio (float): Minimum width/height ratio of a detected blob.

    • default: 0.0 (0.0-10).
  • min_z_threshold (int): Minimum z if using depth plane.

    • default: 0 (0-32768).
  • opening_kernel_size (int): Pixel radius for the opening used to reduce small dots.

    • default: 6 (1-100).
  • plane (rgb|z): Color plane to use.

    • default: z.

BoxFinderProcessor

Find boxes using the configured DNN

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • contour (square): Area of the image where to detect.

    • default: [].
  • filter_out_iou (float): Filter out boxes whose intersection-over-union overlap exceeds this value (zero disables); see the IOU sketch below.

    • default: 0.4 (0.0-1.0).
  • maxarea (area): Maximum area in pixels of the box to be detected (=width x height).

    • default: 80000 (0-4194304).
  • minarea (area): Minimum area in pixels of the box to be detected (=width x height).

    • default: 3000 (0-160000).
  • minratio (float): Minimum ratio (height/width) for the box to be detected (e.g. for a face, height > width).

    • default: 1.2 (0.0-10.0).
  • threshold (float): Threshold of acceptance for the network detection.

    • default: 0.5 (0.0-1.0).
Networks
  • Network box_finder, defaults to FaceDetectionNetwork.
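filter_out_iou, and the IOU-based parameters of later processors (association_iou, update_iou, min_iou, minimum_overlap_percent), all refer to the usual intersection-over-union measure. A minimal reference implementation, assuming boxes given as (x1, y1, x2, y2):

# Hedged reference for intersection-over-union; the (x1, y1, x2, y2) box format is assumed.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 100, 100), (50, 0, 150, 100)))   # 0.333..., below the 0.4 default, so neither box is dropped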

ClassDetectionProcessor

Find classes in the image, as blobs

Parameters
  • bottom_crop (int): Bottom crop area for detection.

    • default: 0 (0-4096).
  • classes (str): Classes to accept, comma separated (empty = all).

    • default: .
  • classes_set (str): Name of the class set (coco, person-face-hand, yolov8...) or a list of comma-separated classes.

    • default: coco.
  • maxarea (area): Maximum area in percent of the image or pixel area of the box to be detected (=width x height).

    • default: -1.0 (-1.0-4194304).
  • minarea (area): Minimum area in percent of the image or pixel area of the box to be detected (=width x height).

    • default: -1.0 (-1.0-160000).
  • policy (bestofclass|none): Reduction policy that keeps only the single best-scoring item per class.

    • default: none.
  • threshold (float): Threshold of acceptance for the network detection.

    • default: 0.25 (0.0-1.0).
Networks
  • Network class_finder, defaults to SSDLiteObjectDetectionNetwork.

DepthBlobDetectorProcessor

Detect blobs in depth image.

Parameters
  • blob_area_max (int): Maximum blob area to accept.

    • default: 2097152 (0-16777216).
  • blob_area_min (int): Minimum blob area to accept.

    • default: 10 (0-16777216).
  • min_distance_between_blobs (int): Minimum distance between blobs.

    • default: 1 (0-2048).
  • morph_open_size (int): Size of the opening kernel to filter the depth image.

    • default: 10 (1-100).
  • threshold_max (int): Maximum distance threshold.

    • default: 30 (0-255).
  • threshold_min (int): Minimum distance threshold.

    • default: 10 (0-255).

SkeletonProcessor

Find persons and skeletal keypoints in an image.

Parameters
  • box_type (face|person|blob|head|skeleton): Type of boxes this processor will handle.

    • default: skeleton.
  • identify (bool): Identify the skeletons (add an id).

    • default: True.
  • invert_lr (bool): Invert left and right.

    • default: True.
  • maxarea (area): Maximum area as percentage of the image or pixel area of the box to be detected (=width x height).

    • default: 280000 (0-4194304).
  • minarea (area): Minimum area as percentage of the image or pixel area of the box to be detected (=width x height).

    • default: 300 (0-160000).
  • minpercent (float): Minimum percentage (0-1) of valid keypoints required to accept a skeleton.

    • default: 0.4 (0.0-1.0).
  • short_raise (bool): Consider a wrist higher than the elbow a raised hand; otherwise, require the wrist to be higher than the neck.

    • default: True.
Networks
  • Network skeleton_finder, defaults to SkeletonNetwork.

TopDepthBlobProcessor

Scan depth images for blobs between the given distances. Used for person detection with a top-mounted 3D camera looking straight down.

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • calibrated (bool): Check for recalibration.

    • default: False.
  • kernelsize (int): Kernel size for opening the detected image.

    • default: 5 (1-255).
  • maxarea (float): Maximum area in pixels of the box to be detected (=width x height).

    • default: 10000 (0-10000000).
  • max_threshold (float): Maximum threshold for blob detection.

    • default: 256.0 (0.0-255.0).
  • metric_unit (float): Divisor to convert camera values to meters.

    • default: 1000.
  • minarea (float): Minimum area in pixels of the box to be detected (=width x height).

    • default: 20 (0-10000000).
  • min_threshold (float): Minimum threshold for blob detection.

    • default: 10.0 (0.0-255.0).
Networks
  • Network box_finder, defaults to TopDepthPersonDetectionNetwork.

Module features

EncoderProcessor

Assigns and recognizes box ids based on vector encodings

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • maximum_overlap_time (float): Maximum time during which an id will be assigned because of an IOU overlap.

    • default: 2.0 (0.0-1.0).
  • minimum_distance_weight (float): Minimum weight to consider a cosine distance valid.

    • default: 0.5 (0.0-1.0).
  • minimum_overlap_percent (float): Minimum IOU overlap percent to consider two boxes the same.

    • default: 0.4 (0.0-1.0).
  • number_of_workers (int): Number of worker threads.

    • default: 2 (1-64).
  • purge_less_than_frames (int): Purge identifications only seen less than that number of frames during purge_older_than_secs.

    • default: 20 (0-200).
  • purge_older_than_secs (float): Purge identifications older than this and seen purge_less_than_frames.

    • default: 8.0 (0.0-60.0).

FaceAgeGenderProcessor

Find age and gender for the snap face boxes, using separate networks for age and gender

Parameters
  • number_of_workers (int): Number of worker threads.

    • default: 2 (1-64).
  • recheck_interval (float): Period of re-measurement (seconds).

    • default: 1.0 (0.0-60.0).
  • data_updates (bool): Should we send data updates for each new value.

    • default: False.
Networks
  • Network age, defaults to FaceAgeNetwork.
  • Network gender, defaults to FaceGenderNetwork.

FaceAttentionProcessor

Calculate face attention as a percentage of the total time spent looking toward the camera surroundings.

Parameters
  • number_of_workers (int): Number of worker threads.

    • default: 2 (1-64).
  • recheck_interval (float): Period of re-measurement (seconds).

    • default: 1.0 (0.0-60.0).
  • data_updates (bool): Should we send data updates for each new value.

    • default: False.
  • straight_angle_limit (float): Maximum view angle still considered as looking straight at the camera.

    • default: 25.0 (0.0-180.0).
Networks
  • Network gaze, defaults to FaceGazeNetwork.
  • Network pose, defaults to FacePoseNetwork.

FaceEmotionProcessor

Calculate dominant perceived emotional state

Parameters
  • number_of_workers (int): Number of worker threads.

    • default: 2 (1-64).
  • recheck_interval (float): Period of re-measurement (seconds).

    • default: 1.0 (0.0-60.0).
  • data_updates (bool): Should we send data updates for each new value.

    • default: False.
Networks
  • Network emotion, defaults to FaceEmotionNetwork.

FaceEncoderProcessor

Assigns and recognizes box ids based on rotated face encodings

Parameters
Networks
  • Network landmarks, defaults to FaceLandmarksNetwork.
  • Network identifier, defaults to FaceReidentificationNetwork.

FaceLandmarksProcessor

Retrieve landmarks and rotated face from the given face boxes in the snap.

Parameters
  • number_of_workers (int): Number of worker threads.

    • default: 2 (1-64).
  • recheck_interval (float): Period of re-measurement (seconds).

    • default: 1.0 (0.0-60.0).
  • save_rotated (bool): Save the rotated, warped face for later processing.

    • default: True.
  • data_updates (bool): Should we send data updates for each new value.

    • default: False.
Networks
  • Network landmarks, defaults to FaceLandmarksNetwork.

FeatureMergeProcessor

Assigns feature (class) blobs to the nearest overlapping person.

Parameters
  • min_iou (float): Minimum overlap needed to assign a class to a box.

    • default: 0.1 (0.0-1.0).
  • target_box_type (face|person|blob|head|other): Type of boxes where the classes will be assigned to.

    • default: person.

FrontFacingCounterProcessor

Approximate attention based on whether the face for a given person is detected or not.

Parameters

PersonAgeGenderProcessor

Find age and gender for the given face boxes

Parameters
Networks
  • Network age_gender, defaults to FaceAgeGenderVoloNetwork.

Module globalstage

ActorAgeGenderProcessor

Detect a single candidate age and gender, picking one per frame from the global stage in order of descending face box area.

Parameters
  • use_body (bool): Use face and body picture with the network.
    • default: True.
Networks
  • Network age_gender_network, defaults to FaceAgeGenderVoloNetwork.

ActorInZoneProcessor

Track boxes moving into/out of a given contour

Parameters
  • box_position (center|topcenter|bottomcenter): Position on the box to locate inside/outside the area.

    • default: bottomcenter.
  • box_type (face|person|blob|head|other): Type of boxes this processor will use for the detection.

    • default: person.
  • contour (polygon): Contour Polygon where to detect entrance.

    • default: [].
  • contour_name (str): Name of the detection contour.

    • default: contour.

ActorMultiZoneProcessor

Track boxes moving into/out of several contours. Contours can be dynamically configured by setting the parameters:

  • contour#
  • contour_name#

where # is a number from 0 to 9

Contours can also be defined per camera in the visionnode.ini [camera] section

Parameters
  • box_position (center|topcenter|bottomcenter): Position on the box to locate inside/outside the area.

    • default: bottomcenter.
  • box_type (face|person|blob|head|other): Type of boxes this processor will use for the detection.

    • default: person.

ActorStageProcessor

Keep track of actor ingress into a virtual "stage", setting their state (entering, entered, exiting, exited...). Specialized for face and body ("person") boxes. @deprecated use StageUpdateProcessor/StageReportProcessor

Parameters
  • exit_time_interval (float): Time to wait to consider a box out of stage.

    • default: 3.0 (0.0-60).
  • ingress_time_interval (float): Time to wait to consider a box in stage.

    • default: 1.0 (0.0-60).
  • is_view_time_interval (float): Minimum attention time to consider presence a view event.

    • default: 2.0 (0.0-60).
  • send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval.

    • default: None.
  • send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval (zero disables).

    • default: 1.0 (0.0-60.0).

StageIdentifierProcessor

Matches incoming boxes to existing stage ones. Later, the stage itself will control actor lifetimes.

Parameters

StageLoaderProcessor

Load IDENTIFIED person and face boxes into actors

Parameters
  • debug_boxes (bool): Generate debug boxes for annotation.

    • default: True.
  • exit_time_interval (float): Time to wait to consider an actor out of stage.

    • default: 6.0 (0.0-60).
  • ingress_time_interval (float): Time to wait to consider an actor in stage.

    • default: 3.0 (0.0-60).
  • lost_time_interval (float): Time to wait to consider an actor lost.

    • default: 2.0 (0.0-60).
  • purge_maxframes (int): Maximum frames boxes can disappear before we stop tracking them.

    • default: 150.
  • purge_maxtime (float): Maximum time boxes can disappear before we stop tracking them.

    • default: 4.0.

StageReadingProcessor

Superclass for processors acting on the global stage. Override and implement a metric to define order of box processing. One box will be processed per frame.

Parameters

StageReportProcessor

Keep track of ingressing boxes in a virtual “stage”, setting their box state (entering, entered, exiting, exited).

Parameters
  • debug_boxes (bool): Generate debug boxes for annotation.

    • default: True.
  • enable_http (bool): Enable HTTP POSTing of stage to send_status_post_url.

    • default: False.
  • enable_mqtt (bool): Enable MQTT sending to user-defined channel send_status_mqtt_channel.

    • default: False.
  • is_view_time_interval (float): Minimum attention time to consider presence a view event.

    • default: 2.0 (0.0-60).
  • send_status_mqtt_channel (str): Use this MQTT channel instead of the default one for status messages.

    • default: /vision.
  • send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval.

    • default: http://localhost:8085.
  • send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval (zero disables).

    • default: 1.0 (0.0-60.0).
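A minimal endpoint for send_status_post_url can be sketched with the Python standard library; the structure of the posted stage JSON beyond "whole stage info" is not documented here, so it is simply printed.

# Hedged sketch of an HTTP endpoint matching the default send_status_post_url
# (http://localhost:8085), standard library only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        stage = json.loads(self.rfile.read(length) or b"{}")
        print("stage update:", stage)
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8085), StageHandler).serve_forever()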

StageUpdateProcessor

Track boxes and update the state of the global stage

Parameters
  • debug_boxes (bool): Generate debug boxes for annotation.

    • default: True.
  • exit_time_interval (float): Time to wait to consider a box out of stage.

    • default: 3.0 (0.0-60).
  • ingress_time_interval (float): Time to wait to consider a box in stage.

    • default: 3.0 (0.0-60).
  • purge_maxframes (int): Maximum frames boxes can disappear before we stop tracking them.

    • default: 150.
  • purge_maxtime (float): Maximum time boxes can disappear before we stop tracking them.

    • default: 4.0.
  • tracker_args_face (dict): Tracking strategy arguments for face tracker.

    • default: {}.
  • tracker_args_person (dict): Tracking strategy arguments for person tracker.

    • default: {}.
  • tracker_face (BYTE|SORT|OCSORT|UCMC|none): Tracking strategy for faces.

    • default: BYTE.
  • tracker_person (BYTE|SORT|OCSORT|UCMC|none): Tracking strategy for persons.

    • default: BYTE.
  • tracker_priority (face|person|both): Prioritize creating new trackers from unmatched boxes of this type.

    • default: person.

Module image

ImageRegionProcessor

Clip an image region and black out everything outside it. Useful as a preprocessing step to remove interference from outside the region of interest.

Parameters
  • contour (polygon): Polygon area of the image to clip.

    • default: [].
  • crop (bool): Crop the boundaries of the contour.

    • default: True.

ZDepthFilterProcessor

Mask the RGB image of the snapshot using the Z (depth) image, clipping between the two thresholds.

Parameters
  • threshold_max (float): Maximum distance threshold.

    • default: 3000 (0-4294967296).
  • threshold_min (float): Minimum distance threshold.

    • default: 1000 (0-4294967296).

Module tag

AreaHitProcessor

Track boxes moving into/out of a given contour

Parameters
  • box_name (str): Name of the hit box.

    • default: box1.
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • contour (polygon): Polygon where to detect collision.

    • default: [].
  • remove_timeout (float): Timeout for known boxes to be removed.

    • default: 30.

AreaTaggerProcessor

Add a tag to any box whose center point (or top center or bottom center) enters the area of interest.

Parameters
  • area (polygon): Detection area for tag.

    • default: None.
  • box_position (center|topcenter|bottomcenter): Position on the box to locate inside/outside the area.

    • default: bottomcenter.
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • tag (str): Tag to mark on area hit.

    • default: tag1.

DoorCounterProcessor

Track boxes passing through a door, into or out of a given inside area.

Parameters
  • area_limit (int): Maximum number of persons this area allows.

    • default: 10.
  • area_name (str): Name of the area this door enters or exits.

    • default: area1.
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • door (polygon): Polygon representing the entry door.

    • default: [[0, 240], [640, 240]].
  • door_name (str): Name of the door.

    • default: door1.
  • inside (square): Square to mark the inside area (centroid will be used).

    • default: [[320, 400], [330, 400], [330, 410], [320, 410]].
  • remove_timeout (float): Timeout (seconds) for stale boxes to be removed.

    • default: 10.
  • track_bottom (bool): Count crossings at the feet, instead of the center of the person box.

    • default: True.
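The crossing test underlying this kind of counter can be expressed as a sign change of the tracked point's side relative to the door segment between consecutive frames; the sketch below shows just that test (the actual processor additionally uses the inside square, box ids and remove_timeout, which are omitted here):

# Hedged sketch of a door-crossing test: a crossing is a sign change of the
# tracked point's side with respect to the door segment between two frames.
def side(door_a, door_b, p):
    """Signed value: positive on one side of the door segment, negative on the other."""
    return ((door_b[0] - door_a[0]) * (p[1] - door_a[1])
            - (door_b[1] - door_a[1]) * (p[0] - door_a[0]))

def crossed(door_a, door_b, prev_point, curr_point):
    return side(door_a, door_b, prev_point) * side(door_a, door_b, curr_point) < 0

# Door from the default parameter [[0, 240], [640, 240]]; feet moved from y=250 to y=230.
print(crossed((0, 240), (640, 240), (320, 250), (320, 230)))   # True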

TryOnProcessor

Virtual Try-On Processor. Sets up a table contour for pick-and-place actions

Parameters
  • box_try_on_type (face|person|blob|head|other): Type of box the pickables should be inside of to be considered a try-on.

    • default: face.
  • pickables (str): Comma separated list of pickable classes.

    • default: glasses1,glasses2,glasses3.
  • table_contour (polygon): Polygon of the pick/place table.

    • default: [[0.0, 0.85], [1.0, 0.85], [1.0, 1.0], [0.0, 1.0]].
  • use_global_stage (bool): Use boxes from the global stage.

    • default: True.

Module tracking

AreaKalmanTrackingProcessor

Kalman-based box tracking with delimited tracking areas and areas where no trackers are expected to be "born" (e.g. you don't expect someone to appear out of nowhere in the middle of the frame).

Parameters
  • association_iou (float): Minimum IOU overlap percent for a track to match a tracker.

    • default: 0.3 (0.0-1.0).
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • max_age (int): Maximum age in FRAMES for a tracklet to survive.

    • default: 50 (0-9999).
  • min_hits (int): Minimum number of hits for a tracklet to survive.

    • default: 3 (0-1000).
  • no_birth_area (polygon): Area of the image where no trackers should appear.

    • default: [].
  • re_association_distance (float): Re-association distance to re-match an unmatched detection.

    • default: 50 (0.0-200).
  • tracking_area (polygon): Area of the image where to track.

    • default: [].

BoxOverlapIdentifierProcessor

Assigns and recognizes box ids based on simple IOU overlap

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • discard_older_than (float): Discard previously seen boxes after more than this many seconds have passed.

    • default: 3 (0-9999).
  • update_iou (float): Minimum IOU between frames to preserve stable coordinates.

    • default: 0.8 (0.0-1.0).

FaceInsidePersonIdentifierProcessor

Assigns and recognizes faces inside person boxes and matches their IDs if possible.

Parameters

KalmanTrackingProcessor

Basic Kalman-based Tracking Processor

Parameters
  • association_iou (float): Minimum IOU overlap percent for a track to match a tracker.

    • default: 0.3 (0.0-1.0).
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • max_age (float): Maximum age in FRAMES for a tracklet to survive.

    • default: 3.0 (0-999).
  • min_hits (int): Minimum number of hits for a tracklet to survive.

    • default: 3 (0-1000).
  • re_association_distance (float): Re-association distance to re-match an unmatched detection.

    • default: 50 (0.0-20000).

ReidentifierProcessor

DNN-embedding based reidentification processor @deprecated

Parameters

TaggedDoorCounterProcessor

Track boxes moving into/out of a given contour

Parameters
  • area_name (str): Name of the area this door enters or exits.

    • default: area1.
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • remove_timeout (float): Timeout (seconds) for stale boxes to be removed.

    • default: 15.

TrackingProcessor

UCMC Tracker Based Processor

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: person.
  • max_age (float): Maximum lifetime in seconds before a lost tracklet is removed.

    • default: 5.0 (1.0-20.0).
  • preserve_skeletons (bool): Whether or not to preserve skeleton frames.

    • default: False.

Module retired (deprecated)

APIReportingProcessor

Generate HTTP(s) calls on box enter/exit states. @deprecated

Parameters
  • api_url (string): Reporting API endpoint.
    • default: http://api.end.pt.

GridBasedCalibrationProcessor

Grid based calibration for Broox Media Player tactile detection. Deprecated product. Do not use.

Parameters
  • calibration (showcam|grid|bypass|calibrated): Fire calibration manually.

    • default: bypass.
  • max_frame_count (int): Number of frames for calibration.

    • default: 300 (1-10000).
  • stream_outfile (string): Full path of the wall.jpg file.

    • default: /tmp/wall.jpg.

LightTrackingProcessor

DEPRECATED: Used to control an array of lights to react to people passing.

Parameters

MappingProcessor

Processor that maps coordinates onto a 0-1 line along the defined contour

Parameters

PersonFaceStageProcessor

Keep track of ingressing person and face boxes in a virtual "stage", setting their box.state (entering, entered, exiting, exited...). Specialized version of StageProcessor

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • exit_time_interval (float): Time to wait to consider a box out of stage.

    • default: 3.0 (0.0-60).
  • ingress_time_interval (float): Time to wait to consider a box in stage.

    • default: 1.0 (0.0-60).
  • is_view_time_interval (float): Minimum attention time to consider presence a view event.

    • default: 2.0 (0.0-60).
  • send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval.

    • default: None.
  • send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval (zero disables).

    • default: 1.0 (0.0-60.0).

StageProcessor

Keep track of ingressing boxes in a virtual "stage", setting their box.state (entering, entered, exiting, exited...)

Parameters
  • box_type (face|person|blob|head|other): Type of boxes this processor will handle.

    • default: face.
  • exit_time_interval (float): Time to wait to consider a box out of stage.

    • default: 3.0 (0.0-60).
  • ingress_time_interval (float): Time to wait to consider a box in stage.

    • default: 1.0 (0.0-60).
  • is_view_time_interval (float): Minimum attention time to consider presence a view event.

    • default: 2.0 (0.0-60).
  • send_status_post_url (str): Send an HTTP POST (JSON) request with the whole stage info every send_status_time_interval.

    • default: None.
  • send_status_time_interval (float): Send an MQTT status packet with the whole stage info every time interval (zero disables).

    • default: 1.0 (0.0-60.0).