# Roverp CLI
Usage
The CLI tools use tyro:

- Positional arguments ("required") are passed as positional command-line arguments.
- Named arguments are passed as flagged command-line arguments (e.g. `--out`).
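For example, a `roverp sensorpose` call might look like the following; the dataset path `data/example` is a placeholder:

```sh
# data/example is a hypothetical dataset path; substitute your own.
# The dataset path is positional ("required"); --sensor is a named flag.
roverp sensorpose data/example --sensor radar
```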
## roverp anonymize
Anonymize video by blurring faces.
Warning
Requires the anonymize extra (retina-face, tf-keras).
Expected Inputs and Outputs
Inputs: camera/video.avi
Outputs: _camera/video.avi
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the dataset. | required |
| `out` | `str \| None` | Output path; defaults to the same as `path`. | `None` |
Source code in processing/src/roverp/_cli/anonymize.py
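A minimal sketch of an invocation (both paths are placeholders):

```sh
# Blurs faces in camera/video.avi under the dataset; without --out,
# the anonymized video is written back into the dataset.
roverp anonymize data/example --out data/example-anon
```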
## roverp sensorpose
Get interpolated poses for a specific sensor.
Expected Inputs and Outputs
Inputs: _slam/trajectory.csv
Outputs: _{sensor}/pose.npz depending on the specified --sensor, with keys:
- `mask`: binary mask, applied to the raw sensor data along the time axis, which denotes valid samples for the available poses.
- `smoothing`, `start_threshold`, `filter_size`: parameters used for pose interpolation.
- `t`, `pos`, `vel`, `acc`, `rot`: pose parameters; see Poses.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the dataset. | required |
| `sensor` | `str` | Sensor timestamps to interpolate for. | `'radar'` |
| `smoothing` | `float` | Smoothing coefficient; higher = smoother. | `500.0` |
| `threshold` | `float` | Exclude data points close to the starting point (in meters). | `1.0` |
Source code in processing/src/roverp/_cli/sensorpose.py
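For example, interpolating poses onto the radar timestamps with heavier smoothing; the dataset path and parameter values are illustrative:

```sh
# Writes _radar/pose.npz; the smoothing/threshold values here are
# illustrative, not recommendations.
roverp sensorpose data/example --sensor radar --smoothing 1000.0 --threshold 2.0
```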
## roverp report
Generate speed report.
Warning
The cartographer SLAM pipeline (`make trajectory` or `make lidar`) must
be run beforehand.
Expected Inputs and Outputs
Inputs: _radar/pose.npz, _slam/trajectory.csv
Outputs: _report/speed.pdf
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the dataset. | required |
| `out` | `str \| None` | Output path; defaults to `_report/speed.pdf` inside the dataset. | `None` |
| `width` | `float` | Time series plot row width, in seconds. | `30.0` |
Source code in processing/src/roverp/_cli/report.py
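For example (assuming the SLAM pipeline and `roverp sensorpose` have already produced the inputs for this placeholder dataset):

```sh
# Produces _report/speed.pdf; --width 60.0 gives 60-second plot rows.
roverp report data/example --width 60.0
```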
## roverp segment
Run semantic segmentation on collected video data.
Warning
Requires the semseg extra (torch, transformers).
By default, we use segformer-b5-finetuned-ade-640-640, which can be found
on Hugging Face;
the pytorch_model.bin weights should be downloaded and placed in
processing/models/segformer-b5-finetuned-ade-640-640.
Class Definitions
The original class definitions
have been reduced to 8 classes (indexed in alphabetical order), as
specified in models/segformer-b5-finetuned-ade-640-640/classes.yaml:

- `0=ceiling`: ceilings, roofs, etc. viewed from underneath.
- `1=flat`: flat ground such as roads, paths, floors, etc.
- `2=nature`: plants of all kinds, such as trees, grass, and bushes.
- `3=object`: any free-standing objects other than plants which are not building-scale, such as furniture, signs, and poles.
- `4=person`: any person who is not inside a vehicle.
- `5=sky`: the sky or other background.
- `6=structure`: buildings, fences, and other structures.
- `7=vehicle`: cars, buses, vans, trucks, etc.
Expected Inputs and Outputs
Inputs: camera/video.avi
Outputs: _camera/segment
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the recording. | required |
| `batch` | `int` | Batch size for processing; may need tuning, depending on your exact GPU. | `16` |
| `model` | `str` | Path to model weights; if running in the `processing` directory, the default should work. | `'./models/segformer-b5-finetuned-ade-640-640'` |
Source code in processing/src/roverp/_cli/segment.py
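For example, running from the `processing` directory so the default `--model` path resolves; the dataset path and batch size are illustrative:

```sh
# A smaller --batch may be needed on GPUs with less memory.
roverp segment data/example --batch 8
```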