Installation
This guide walks you through the complete installation and setup process for the Broox Platform. From preparing your hardware, such as connecting and testing a USB Webcam, to configuring essential software components like Broox Studio, Controller, and Vision Node, you'll learn how to establish and manage a fully functional system. Follow these steps to ensure seamless integration, configure analytics pipelines, and access real-time data on your dashboard.
Verify your device meets the minimum requirements
Ensure your device meets the minimum requirements to run the desired solution effectively. Use the Broox Benchmark Tool to evaluate your system's performance and compatibility, following the guidelines here: Broox Benchmark Tool.
Setting up the hardware
- Prepare your target hardware. In our example, a PC and a USB webcam.
- Connect the webcam to a USB port on the PC.
- Check that your camera is working (a command-line alternative is sketched after this list):
  - If it is a USB camera, install and launch qv4l2:
    sudo apt install qv4l2
    qv4l2
  - Press Play. If the image appears, the camera is working. If not, check your cabling.
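If you prefer the command line, v4l2-ctl (from the v4l-utils package) can also confirm that the webcam is detected. This is a minimal sketch; the /dev/video0 device path is an example and may differ on your system.

    sudo apt install v4l-utils
    v4l2-ctl --list-devices                      # list detected cameras and their /dev/video* nodes
    v4l2-ctl -d /dev/video0 --list-formats-ext   # show supported resolutions and formats for one camera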
Installing and configuring the software
Required Software:
- Broox Studio (Via web browser)
- Broox Controller
- Broox Vision Node
Creating the Location
- Start by logging into Broox Studio.
- Navigate to Installation/Locations.
- Click Add Location.
- Fill in the location information: name, location, time zone, etc.
- Optionally, set an email address to receive alarms from the location installation and a time window during which alarms are ignored.
- Click the created location.
- A default Controller is created for the location.
- Download the License File (A-LOCATION-NAME.license) by clicking the License button.
- Copy the License File to the PC (one way to do this is sketched after this list).
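If the PC that will run the Controller is not the machine you used to download the file, scp is one way to copy it over. This is a sketch that assumes the target PC is reachable over SSH; the user name, host name, and download path are placeholders for your own values.

    scp ~/Downloads/A-LOCATION-NAME.license user@target-pc:~/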
Installing the Broox Platform
- Download the required .deb packages (V.V.V is the version):
  - broox-controller_V.V.V_amd64.deb
  - broox-vision-node_V.V.V_amd64.deb
- Open the Terminal (Ctrl+Alt+T).
- cd to the folder where you downloaded the .deb packages.
- Run sudo apt install ./<package.deb> for each package (see the sketch after this list).
Note: On certain distributions it might be necessary to install libv4l and python-tk from the distribution repositories.
- Reply Y. The packages will install.
- You can start the applications from the Applications screen.
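Put together, the install sequence looks roughly like this. It is a sketch that assumes the packages were downloaded to ~/Downloads; replace V.V.V with the actual version, and note that the extra dependency package names vary by distribution.

    cd ~/Downloads
    sudo apt install ./broox-controller_V.V.V_amd64.deb
    sudo apt install ./broox-vision-node_V.V.V_amd64.deb
    # Only if your distribution needs them (package names may vary, see the note above):
    sudo apt install libv4l-0 python3-tk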
Configuring Broox Controller
- Start Broox Controller.
- Expand the Configuration section.
- Click Read .license file.
- Browse for the CONTROLLER_NAME.license file downloaded from the CMS.
- Press the Start button to start the controller.
- In Broox Studio, you should see the controller go through its reload process (red → yellow → green).
Setting Up the Vision Node
- Start Broox Vision Node.
- Expand the Configuration section.
- Check that the Controller is detected and selected.
- Select the desired pipeline, for example traffic+age+gender+attention.
- Configure the camera type and resolution.
  - USB webcam
    - If you need to unplug and reconnect the camera, press "Rescan".
  - Network (RTSP/HTTP) camera
    - Create an RTSP stream on your camera and copy the URL.
      - Access your camera settings and look for the option to enable RTSP streaming (Real-Time Streaming Protocol). Ensure that your camera supports RTSP.
      - Configure stream parameters such as resolution and video format according to your needs.
      - Once configured, you will receive an RTSP URL. This is the URL you'll use to connect to the stream from other devices or applications. Be sure to copy the full URL.
    - Verify the URL works in VLC (a command-line alternative is sketched after this list).
      - Open VLC Media Player on your device.
      - In the main menu, select "Media" and then "Open Network Stream."
      - A window will appear with a text field where you can input the stream URL.
      - Paste the RTSP URL you copied in the previous step into the text field under the "Network" section.
      - Click "Play" to start streaming. If the URL is valid and your camera is properly configured, you should see the live stream in VLC.
      - If the stream doesn't start, check the camera configuration and the network, and make sure the URL is entered correctly.
    - Select the 'Network (RTSP/HTTP)' option and paste the working URL.
      - In the Vision Node, select the "Network (RTSP/HTTP)" camera type.
      - A text field will appear where you enter the stream URL.
      - Paste the URL you copied from the camera into this field, making sure there are no extra spaces or mistakes.
- To test, check the "Display previews..." checkbox at the bottom.
- Press Play. In a moment, a window showing the preview should appear (if not, check your camera and pipeline settings).
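As an alternative to the VLC check, you can probe the RTSP stream from the command line with ffprobe (part of the ffmpeg package). This is a sketch; the URL is a placeholder for your camera's actual RTSP URL.

    sudo apt install ffmpeg
    ffprobe -rtsp_transport tcp rtsp://<camera-ip>:554/<stream-path>   # prints stream and codec details if the URL is reachable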
Advanced Settings
Introduction
AST (Advanced Settings Tool) is a robust and intuitive tool designed to provide users with advanced configuration options for the Vision Node (VN). It offers the flexibility to define areas of interest, fine-tune the camera calibration, and configure all parameters of the selected pipeline, ensuring that the Vision Node operates optimally for specific use cases.
Access
To access the Advanced Settings (AST), launch the Vision Node application on your system. Once the application is up and running, navigate to the bottom-right corner of the window, where you’ll find the 'Open Advanced Settings' button. Click on this button to open the settings interface and begin configuring the tool.
Camera Parameters Section
Introduction
This section provides essential details about the camera, such as its name, resolution, frames per second (FPS), rotation, and whether the image is flipped. Additionally, users can input necessary calibration values for the camera. This section ensures that both intrinsic and extrinsic camera parameters are properly configured, optimizing the camera’s performance.
How to Indicate Camera Preferences
To make the configuration process easier, two dropdown menus are provided: Manufacturer and Model. Once a manufacturer and model are selected, the Sensor, Focal Length, and Aspect Ratio fields will automatically populate. If the camera is not listed in the dropdown, select the "Custom" option under Manufacturer and manually input the camera's name (Model), sensor details, and focal length.
Calibration
Introduction
The Calibration section allows you to adjust both the extrinsic and intrinsic parameters of the camera. Here, users can configure the Sensor and Focal Length, while also marking a square on the image that aligns with real-world objects. Additionally, you can input the camera's distance, height, and the square's side length.
Calibrate the Camera
Start by activating the Calibration panel by toggling it on.
If the Manufacturer and Model fields are pre-filled, the Sensor and Focal Length will automatically be populated. If not, manually enter the required values.
To create a Marker Area, click the '+' button. A square will appear on the display, and you should position its vertices to match the real-world scene. For optimal accuracy, it’s recommended to physically outline a square in the environment and then replicate it on the display using the AST tool.
Once the Marker Area is aligned correctly, complete the remaining fields by entering the Distance (from the camera to the square), Camera Height, and Side Length (of the square).
Areas of Interest Section
Introduction
This section allows users to define areas of interest within the environment. A common application is setting up detection zones for specific regions, such as the entrance of a building, to track the number of people passing through.
Define Areas of Interest
To add a new detection area, click the "Add" button. The area will appear in editing mode, allowing you to adjust its shape and position. To enter editing mode, simply select the detection area from the list. You can rename the area by clicking the pencil icon or double-clicking the name field. If needed, you can delete the area by clicking the trash can icon.
Pipeline Configuration
Introduction
The Pipeline Configuration section enables the customization of processors for the selected pipeline. Each processor serves a unique function, and based on the requirements, users can enable or disable them to optimize the system’s processing capacity.
Pipeline Parameters
- Traffic Processor: This processor is responsible for locating and identifying actors in the image, assigning them a unique identifier. This processor is integral to the system and cannot be disabled. The modifiable parameters for this processor are:
- Threshold: Controls the confidence required for confirming a detection. Higher values mean greater confidence is needed for detection.
- Precision: Determines the quality of the detection. A higher value ensures better accuracy. The Traffic processor visualizes detections by drawing a rectangle around the actor and displaying information like the actor's ID, current zone, last zone, and state.
- Emotion Processor: This processor analyzes the actor’s facial expressions, categorizing them into neutral, happy, sad, angry, or surprised. The adjustable parameter is:
- Precision: Determines the accuracy of the emotion detection. The emotion label will be displayed on the actor’s image.
- Age & Gender Processor: This processor estimates the actor’s age and gender. The adjustable parameter is:
- Precision: Impacts the accuracy of age and gender predictions. The labels "Age" and "Gender" will be displayed on the actor's image.
- Attention Processor: This processor measures the actor’s attention level by calculating the tilt of the face in relation to the camera. The configurable parameters are:
- Angle: Adjusts the attention point.
- Precision: Controls the accuracy of the attention level. The "% Attention" label will appear on the actor’s image to indicate their attention level.
- Pose Processor: The Pose processor detects the actor’s joints and creates a virtual skeleton to predict their posture. The adjustable parameter is:
- Precision: Controls the accuracy of the posture prediction. A skeleton representation will be displayed on the actor’s image.
- API Output Processor: This processor sends the collected data to external systems. Similar to the Traffic processor, it is essential and cannot be disabled. The API Output allows for the use of three communication protocols:
- MQTT: Requires a communication channel.
- HTTP: Needs a POST endpoint and timeout configuration.
- OSC: Requires an IP address and port.
Multiple protocols can be enabled at once; a quick way to inspect MQTT output is sketched below.
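To sanity-check the API Output, you can listen on the configured MQTT channel from another terminal. This sketch assumes the MQTT protocol is enabled, that a broker is reachable on localhost, and that broox/output stands in for whatever communication channel you configured.

    sudo apt install mosquitto-clients
    mosquitto_sub -h localhost -t 'broox/output' -v   # print every message published on the configured channel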
Save and Exit
Once all settings have been configured, click the "Save" button to apply your changes. If any mandatory fields are left incomplete, a notification will prompt you to fill in the missing information before proceeding.
In the event that the AST is unexpectedly closed, it retains the most recent configuration in memory. This provides you with the option to either continue with the latest modifications (Keep current Vision Node settings) or revert to the previously saved configuration (Restore last saved settings), ensuring a seamless experience and minimal disruption.
Usage Example
Connect the Camera
Start by connecting the camera to your computer.
Launch the Vision Node
Open the Vision Node application, select the 'Controller' running the pipeline, and choose the 'audience-analytics-low' pipeline.
Configure Video Input
In the "Video Input" section, select 'USB Webcam' as the Camera Type, and choose the camera you connected earlier. Ensure the two checkboxes in the bottom-left corner are checked, then click "Start."
Access AST
With the Vision Node running, click the 'Open Advanced Settings' button in the bottom-right corner to access the AST.
Configure Camera Parameters
Select your camera's manufacturer and model from the Manufacturer and Model dropdowns. If it is not listed, select "Custom" and manually input the camera's name, sensor details, and focal length.
Create Marker Area
Click the '+' button to create a marker area. Position the vertices in edit mode, and fill in the fields for Distance, Camera Height, and Side Length.
Define Detection Zones
Add detection zones by selecting the areas of interest and configuring their size and placement.
Adjust Detection Presets
Enable or disable specific processors and tweak their settings according to your needs.
Detection example
This is an example of detection with the active Age & Gender processor and the sample zones we saw earlier.
Save Settings
After configuring all parameters, click "Save" to finalize your settings.
Adding the Installation
- In Broox Studio, select Installations.
- Click Add Installation.
- Name the Installation.
- Select the Nodes to use:
  - For an analytics-only installation, select only the Vision Node.
  - For an interactive one, select the Vision Node and the Mediaplayer.
- Press Next.
- Finally, click Create on the summary page.
- If you are going to display media, you can add a Canvas to the installation by editing it.
  - A canvas is a region of the screen that shows a campaign in the Media Player.
  - Since no Campaign has been created yet, skip this step for now.
Analytics Dashboard
After some time, the Analytics Dashboard in Broox Studio will start processing and presenting data about the detections performed. In Studio, select the Location/Installation you want to view.
Note: Data might take some time to appear; check the time range and try refreshing.
Service setup for startup on boot
All Broox Platform components include sample systemd .service files, installed at /lib/systemd/system/.
Configuration files should be installed at /etc/broox. The usual Linux conventions are followed at runtime:
- Logs go to /var/log/<daemon>.log
- PID files go to /var/run/<daemon>.pid
To set up startup on boot:
- Copy the preconfigured controller.ini and visionnode.ini files to /etc/broox.
- Enable the broox-controller and broox-vision-node services:
  sudo systemctl enable broox-controller
  sudo systemctl enable broox-vision-node
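To confirm the services come up correctly, you can start them right away and check their status and logs. These are standard systemctl/journalctl commands; the log file name is assumed to follow the /var/log/<daemon>.log convention noted above.

    sudo systemctl start broox-controller broox-vision-node
    systemctl status broox-controller broox-vision-node   # confirm both services are active
    journalctl -u broox-controller -f                     # follow the controller's journal output
    tail -f /var/log/broox-controller.log                 # or follow the log file (name assumed from the convention above)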
For a deeper look, read the Platform Boot on Linux Technical Note.