Commit 213aff45 authored by Paul-Edouard Sarlin

Fix in KITTI sequential

parent 8c81aa4b
@@ -113,16 +113,16 @@ python -m maploc.evaluation.mapillary [...] --output_dir ./viz_MGL/ --num 100
 To run the evaluation in sequential mode:
 ```bash
-python -m maploc.evaluation.mapillary --experiment OrienterNet_MGL --sequential
+python -m maploc.evaluation.mapillary --experiment OrienterNet_MGL --sequential model.num_rotations=256
 ```
 The results should be close to the following:
 ```
 Recall xy_seq_error: [29.73, 73.25, 91.17] at (1, 3, 5) m/°
 Recall yaw_seq_error: [46.55, 88.3, 96.45] at (1, 3, 5) m/°
 ```
-The sequential evaluation uses 10 frames by default. To increase this number:
+The sequential evaluation uses 10 frames by default. To increase this number, add:
 ```bash
-python -m maploc.evaluation.mapillary [...] --sequential chunking.max_length=20
+python -m maploc.evaluation.mapillary [...] chunking.max_length=20
 ```
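Since `model.num_rotations` and `chunking.max_length` are both plain config overrides, they can presumably be combined in a single sequential run; a minimal sketch, assuming the two overrides compose (the combined command is not shown in the README itself):

```bash
# Hypothetical combined invocation: sequential evaluation with the coarser
# rotation grid and 20-frame chunks; both overrides appear separately above.
python -m maploc.evaluation.mapillary --experiment OrienterNet_MGL --sequential \
    model.num_rotations=256 chunking.max_length=20
```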
@@ -142,7 +142,7 @@ python -m maploc.data.kitti.prepare
 2. Run the evaluation with the model trained on MGL:
 ```bash
-python -m maploc.evaluation.kitti --experiment OrienterNet_MGL
+python -m maploc.evaluation.kitti --experiment OrienterNet_MGL model.num_rotations=256
 ```
 You should expect the following results:
@@ -155,9 +155,18 @@ Recall yaw_max_error: [29.22, 68.2, 84.49] at (1, 3, 5) m/°
 You can similarly export some visual examples:
 ```bash
 python -m maploc.evaluation.kitti [...] --output_dir ./viz_KITTI/ --num 100
+```
+To run in sequential mode:
+```bash
+python -m maploc.evaluation.kitti --experiment OrienterNet_MGL --sequential model.num_rotations=256
+```
+with results:
+```
+Recall directional_seq_error: [[81.94, 97.35, 98.67], [52.57, 95.6, 97.35]] at (1, 3, 5) m/°
+Recall yaw_seq_error: [82.7, 98.63, 99.06] at (1, 3, 5) m/°
 ```
-To run in sequential mode, similarly add the `--sequential` flag.
 </details>
@@ -15,7 +15,9 @@ def chunk_sequence(
     max_inter_dist=None,
     max_total_dist=None,
 ):
-    sort_array = data.get("capture_time", data.get("index", names or indices))
+    sort_array = data.get("capture_time", data.get("index"))
+    if sort_array is None:
+        sort_array = indices if names is None else names
     indices = sorted(indices, key=lambda i: sort_array[i].tolist())
     centers = torch.stack([data["t_c2w"][i][:2] for i in indices]).numpy()
     dists = np.linalg.norm(np.diff(centers, axis=0), axis=-1)
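The motivation for splitting the fallback is not spelled out in the commit, but presumably the old expression breaks when `names` is a non-empty NumPy array and no capture time is stored, since `names or indices` asks the array for its truth value. A minimal sketch of the difference, using hypothetical frame names and indices:

```python
import numpy as np

# Hypothetical stand-ins for one sequence chunk: no "capture_time" or
# "index" stored, frame names kept as a NumPy array of strings.
data = {}
names = np.array(["000002", "000000", "000001"])
indices = [0, 1, 2]

# Old fallback: `names or indices` needs the truth value of the array,
# which NumPy refuses for arrays with more than one element.
try:
    sort_array = data.get("capture_time", data.get("index", names or indices))
except ValueError as err:
    print("old fallback fails:", err)

# New fallback: only applied when neither key is present, choosing between
# names and indices with an explicit None check instead of truthiness.
sort_array = data.get("capture_time", data.get("index"))
if sort_array is None:
    sort_array = indices if names is None else names
print(sorted(indices, key=lambda i: sort_array[i]))  # -> [1, 2, 0]
```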
@@ -109,7 +109,7 @@ def evaluate_sequential(
     progress: bool = True,
     num_rotations: int = 512,
     mask_index: Optional[Tuple[int]] = None,
-    has_gps: bool = True,
+    has_gps: bool = False,
 ):
     chunk_keys = list(chunk2idx)
     if shuffle: