Trade Study Evaluations¶
Now that we know how to build algorithms and assemble partial-stack AVs, we can leverage the simplicity of AVstack to run trade studies. In a trade study, we compare and contrast different AV designs against objective performance metrics over a set of scenes.
We first set up the scene managers. Here, we use the KITTI and nuScenes scene managers; we could just as easily add the Carla scene manager if we have downloaded a Carla dataset suitable for AVstack.
import os
import avstack
import avapi
from tqdm import tqdm
%load_ext autoreload
%autoreload 2
data_base = '../../lib-avstack-api/data/'
obj_data_dir_k = os.path.join(data_base, 'KITTI/object')
raw_data_dir_k = os.path.join(data_base, 'KITTI/raw')
obj_data_dir_n = os.path.join(data_base, 'nuScenes')
KSM = avapi.kitti.KittiScenesManager(obj_data_dir_k, raw_data_dir_k, convert_raw=False)
NSM = avapi.nuscenes.nuScenesManager(obj_data_dir_n)
SMs = [KSM, NSM]
Cannot import rss library -- don't worry about this unless you need 'safety' evals
Jupyter environment detected. Enabling Open3D WebVisualizer.
[Open3D INFO] WebRTC GUI backend enabled.
[Open3D INFO] WebRTCWindowSystem: HTTP handshake server disabled.
The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload
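If the scene managers fail to construct, the most common culprit is a missing or mis-pointed dataset directory. As an optional, purely illustrative check (plain Python over the paths defined above, no AVstack calls), we can confirm the dataset layout:
# Optional sanity check on the dataset folders referenced above
for path in (obj_data_dir_k, raw_data_dir_k, obj_data_dir_n):
    status = "found" if os.path.isdir(path) else "MISSING"
    print(f"{status}: {path}")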
Set up AV Models¶
# lidar perception algorithm (3D)
li_perception = {0: 'pointpillars',
                 1: 'ssn',
                 2: 'pointpillars',
                 3: 'ssn',
                 4: 'pointpillars',
                 5: 'ssn'}

# camera perception algorithm (2D)
ca_perception = {0: None,
                 1: None,
                 2: 'fasterrcnn',
                 3: 'fasterrcnn',
                 4: 'cascade_mask_rcnn',
                 5: 'cascade_mask_rcnn'}

# tracking/fusion algorithm
tracking = {0: 'basic-box-tracker',
            1: 'basic-box-tracker',
            2: 'basic-box-tracker-fusion-3stage',
            3: 'basic-box-tracker-fusion-3stage',
            4: 'basic-box-tracker-fusion-3stage',
            5: 'basic-box-tracker-fusion-3stage'}

# which sensor to use to evaluate performance
sensor_eval = {0: 'main_lidar',
               1: 'main_lidar',
               2: 'main_camera',
               3: 'main_camera',
               4: 'main_camera',
               5: 'main_camera'}

# whether we only care about the front half of lidar data
filter_front = {0: False,
                1: False,
                2: True,
                3: True,
                4: True,
                5: True}

# The base ego classes we will use for each case (see the source code for details)
vs = avstack.ego.vehicle
AVs = {0: vs.LidarPerceptionAndTrackingVehicle,
       1: vs.LidarPerceptionAndTrackingVehicle,
       2: vs.LidarCameraPerceptionAndTrackingVehicle,
       3: vs.LidarCameraPerceptionAndTrackingVehicle,
       4: vs.LidarCameraPerceptionAndTrackingVehicle,
       5: vs.LidarCameraPerceptionAndTrackingVehicle}
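As an optional sanity check before running anything, we can print the six configured cases side by side. The snippet below uses only the dictionaries defined above (no AVstack calls), so it is cheap to run:
# Summarize the configured cases using only the dictionaries above
for i in sorted(AVs):
    print(f"case {i}: ego={AVs[i].__name__}, lidar={li_perception[i]}, "
          f"camera={ca_perception[i]}, tracker={tracking[i]}, "
          f"eval sensor={sensor_eval[i]}, front-only={filter_front[i]}")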
Run Trade Studies¶
The avapi package comes with a trade-study evaluation tool. It exposes many configuration options, far more than we can enumerate here, so below we use a selection that is easy to understand.
# %%capture
# use ^^^ to suppress output
frame_res_all, seq_res_all = avapi.evaluation.run_trades(
    SMs=SMs,                                # scene managers
    AVs=AVs,                                # AV models
    li_perception=li_perception,            # lidar perception
    ca_perception=ca_perception,            # camera perception
    tracking=tracking,                      # tracking
    sensor_eval=sensor_eval,                # which sensor to use for ground-truth evaluations
    sensor_eval_super=None,                 # if we need to use a larger field-of-view sensor to filter FPs
    trade_type='standard',                  # only 'standard' is available at the moment
    filter_front=filter_front,              # whether to filter lidar data to the front-view only
    n_trials_max=3,                         # number of scenes to evaluate
    max_dist=100,                           # max distance of objects we care about
    n_cases_max=5,                          # how many of the specified cases to run (from the dictionaries above)
    max_frames=150,                         # max possible frames per scene
    frame_start=1,                          # which starting frame to use
    save_result=True,
    save_file_base='study-1-{}-seq-res.p',
    trial_indices=None)
/home/spencer/Documents/Projects/AVstack/avstack-docs/lib-avstack-core/third_party/mmdetection3d/mmdet3d/models/dense_heads/anchor3d_head.py:92: UserWarning: dir_offset and dir_limit_offset will be depressed and be incorporated into box coder in the future
  warnings.warn(
Running dataset Kitti over 3 trials
  Running trial 0, using index 0
Loads checkpoint by local backend from path: /home/spencer/Documents/Projects/AVstack/avstack-docs/lib-avstack-core/third_party/mmdetection3d/checkpoints/kitti/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth
  Running dataset: KITTI, case 0
 19%|████████████▋ | 20/107 [01:40<07:17, 5.03s/it]
---------------------------------------------------------------------------
KeyboardInterrupt (the run was stopped manually at this point; traceback omitted)
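A full study like the one above can take a long time per case. As a hedged sketch (the same call and arguments as above, just with smaller limits), a quick smoke test might restrict the study to one KITTI scene, two cases, and a handful of frames:
# Sketch of a quick smoke test: identical call to the one above, with
# smaller limits so it finishes much faster; results are not saved.
frame_res_quick, seq_res_quick = avapi.evaluation.run_trades(
    SMs=[KSM], AVs=AVs,
    li_perception=li_perception, ca_perception=ca_perception, tracking=tracking,
    sensor_eval=sensor_eval, sensor_eval_super=None,
    trade_type='standard', filter_front=filter_front,
    n_trials_max=1,          # a single scene
    n_cases_max=2,           # only the first two cases
    max_frames=20,           # a handful of frames per scene
    frame_start=1, max_dist=100,
    save_result=False, save_file_base='smoke-{}-seq-res.p',
    trial_indices=None)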
Make Results Tables¶
We can use the LaTeX functionality of pandas to make tables that can go directly into a LaTeX document.
Load the data¶
Each row is a specific case. Each column is either a descriptor or an output metric. A metric cell may contain either an aggregate per-scene value or a list of per-frame values whose length equals the number of frames (see the short check after the table below).
import pickle
import numpy as np
import pandas as pd

# load raw data
ds = ['kitti', 'nuscenes']  # need to update this based on the datasets used
data = []
for d in ds:
    tab_file = 'study-1-{}-seq-res.p'.format(d)  # this must match the save_file_base above
    with open(tab_file, 'rb') as f:
        data.append(pickle.load(f))

# convert to dataframe
df = pd.concat(pd.DataFrame.from_dict(dat) for dat in data)
print(df.shape)
df.head(7)
(25, 71)
Case | Dataset | Trial | Metrics_perception_object_3d_tot_TP | Metrics_perception_object_3d_tot_FP | Metrics_perception_object_3d_tot_FN | Metrics_perception_object_3d_tot_T | Metrics_perception_object_3d_mean_precision | Metrics_perception_object_3d_mean_recall | Metrics_tracking_HOTA_HOTA | ... | Metrics_prediction_std_ADE | Metrics_prediction_std_FDE | Metrics_prediction_n_with_truth | Metrics_prediction_n_objects | Metrics_perception_object_2d_tot_TP | Metrics_perception_object_2d_tot_FP | Metrics_perception_object_2d_tot_FN | Metrics_perception_object_2d_tot_T | Metrics_perception_object_2d_mean_precision | Metrics_perception_object_2d_mean_recall | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | KITTI | 0 | 402 | 134 | 44 | 446 | 0.762997 | 0.897040 | [0.6115882159203753, 0.6115882159203753, 0.611... | ... | 1.037896 | 2.196316 | 437 | 442 | NaN | NaN | NaN | NaN | NaN | NaN |
1 | 1 | KITTI | 0 | 194 | 99 | 252 | 446 | 0.500200 | 0.415999 | [0.40064066411277033, 0.40064066411277033, 0.4... | ... | 1.199937 | 1.862107 | 286 | 286 | NaN | NaN | NaN | NaN | NaN | NaN |
2 | 2 | KITTI | 0 | 402 | 134 | 44 | 446 | 0.762997 | 0.897040 | [0.7123721722473954, 0.7123721722473954, 0.712... | ... | 0.643589 | 1.676179 | 328 | 328 | 347.0 | 224.0 | 99.0 | 446.0 | 0.62293 | 0.73162 |
3 | 3 | KITTI | 0 | 194 | 99 | 252 | 446 | 0.500200 | 0.415999 | [0.48204490336550637, 0.48204490336550637, 0.4... | ... | 0.609636 | 1.207461 | 190 | 190 | 347.0 | 224.0 | 99.0 | 446.0 | 0.62293 | 0.73162 |
4 | 4 | KITTI | 0 | 402 | 134 | 44 | 446 | 0.762997 | 0.897040 | [0.7003655971382674, 0.7003655971382674, 0.700... | ... | 0.860504 | 1.507013 | 298 | 298 | 0.0 | 0.0 | 446.0 | 446.0 | 0.00000 | 0.00000 |
5 | 0 | KITTI | 1 | 349 | 466 | 25 | 374 | 0.371507 | 0.903611 | [0.5243092884836676, 0.5243092884836676, 0.524... | ... | 1.319843 | 3.869034 | 804 | 804 | NaN | NaN | NaN | NaN | NaN | NaN |
6 | 1 | KITTI | 1 | 211 | 299 | 163 | 374 | 0.245336 | 0.390823 | [0.4203230160938515, 0.4203230160938515, 0.420... | ... | 1.897539 | 4.178149 | 711 | 711 | NaN | NaN | NaN | NaN | NaN | NaN |
7 rows × 71 columns
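As a quick illustration (not part of the study itself), we can confirm which metric cells hold per-scene scalars and which hold per-frame lists by inspecting one row:
# Illustrative check: the perception precision column holds a per-scene scalar,
# while the tracking HOTA column holds a per-frame list.
row = df.iloc[0]
for col in ['Metrics_perception_object_3d_mean_precision', 'Metrics_tracking_HOTA_HOTA']:
    val = row[col]
    if isinstance(val, (list, tuple, np.ndarray)):
        print(f"{col}: list of {len(val)} per-frame values")
    else:
        print(f"{col}: per-scene scalar ({val:.3f})")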
Extract Interesting Results¶
To make the tables, we must define which metrics we are interested in, how to compute an "aggregate" value when a cell holds a list of per-frame metrics, and how to decide which dataset performs "best" on each metric if we want to underline the best entry in the table.
The LaTeX table generation relies on a couple of custom commands. These stack multiple sub-rows within a single table cell. Include the following in your LaTeX preamble for this to work.
\newcommand{\tworowsubtablecenter}[2]{\begin{tabular}{@{}c@{}} #1 \\ #2 \end{tabular}}
\newcommand{\tworowsubtableleft}[2]{\begin{tabular}{@{}l@{}} #1 \\ #2 \end{tabular}}
# key: column name for metric we are interested in
# value: our short-name we would like to call this
metrics_of_interest = {'Metrics_perception_object_3d_mean_precision': 'Per: 3D Prec.',
                       'Metrics_perception_object_3d_mean_recall': 'Per: 3D Rec.',
                       'Metrics_perception_object_2d_mean_precision': 'Per: 2D Prec.',
                       'Metrics_perception_object_2d_mean_recall': 'Per: 2D Rec.',
                       'Metrics_tracking_HOTA_HOTA': 'Trk: HOTA',
                       'Metrics_tracking_CLEAR_MOTA': 'Trk: MOTA',
                       'Metrics_tracking_CLEAR_MOTP': 'Trk: MOTP',
                       'Metrics_prediction_std_ADE': 'Pred: ADE',
                       'Metrics_prediction_std_FDE': 'Pred: FDE'}
# If not None, the cell is assumed to hold a list of metrics that must be reduced
# (e.g., per-frame values, or by-threshold values as in a P/R curve)
expansion_types = {'Metrics_perception_object_3d_mean_precision': None,
                   'Metrics_perception_object_3d_mean_recall': None,
                   'Metrics_perception_object_2d_mean_precision': None,
                   'Metrics_perception_object_2d_mean_recall': None,
                   'Metrics_tracking_HOTA_HOTA': 'value-at-middle',
                   'Metrics_tracking_CLEAR_MOTA': 'value-at-middle',
                   'Metrics_tracking_CLEAR_MOTP': 'value-at-middle',
                   'Metrics_prediction_std_ADE': None,
                   'Metrics_prediction_std_FDE': None}
# how to evaluate the "goodness" of a case compared to another
metric_best_evaluator = {'Metrics_perception_object_3d_mean_precision': np.nanargmax,
                         'Metrics_perception_object_3d_mean_recall': np.nanargmax,
                         'Metrics_perception_object_2d_mean_precision': np.nanargmax,
                         'Metrics_perception_object_2d_mean_recall': np.nanargmax,
                         'Metrics_tracking_HOTA_HOTA': np.nanargmax,
                         'Metrics_tracking_CLEAR_MOTA': np.nanargmax,
                         'Metrics_tracking_CLEAR_MOTP': np.nanargmax,
                         'Metrics_prediction_std_ADE': np.nanargmin,
                         'Metrics_prediction_std_FDE': np.nanargmin}
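Since these three dictionaries must describe exactly the same set of metric columns, a lightweight check (an optional addition, using only the dictionaries above) can catch one that has drifted out of sync:
# The three metric-configuration dictionaries above should share the same keys
assert set(metrics_of_interest) == set(expansion_types) == set(metric_best_evaluator), \
    "metric configuration dictionaries are out of sync"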
# Convert to table format in double/triple slash format
# note: raw strings keep the leading backslash literal (otherwise '\t' is a tab escape)
mark_best_in_cell = True
mask_best_in_col = True
single_subrow_formatter = '{}'
double_subrow_formatter = r'\tworowsubtablecenter{{{}}}{{{}}}'
triple_subrow_formatter = r'\tworowsubtablecenter{{{}}}{{\tworowsubtablecenter{{{}}}{{{}}}}}'
formatters = {1: single_subrow_formatter, 2: double_subrow_formatter, 3: triple_subrow_formatter}
dses = df["Dataset"].unique()
subrow_formatter = formatters[len(dses)]
print(f'Slash dataset ordering is: {dses}')
res_slash_agg = []
for i_case in df['Case'].unique():
    res_slash = {'Case': i_case}
    res_slash.update({'Data': subrow_formatter.format(*[d[0].upper() for d in dses])})
    for met_k, met_v in metrics_of_interest.items():
        # -- aggregate each dataset's metric into a (median, std) pair
        res_met_slash = []
        for dataset in df['Dataset'].unique():
            met_res = df[(df['Case'] == i_case) & (df['Dataset'] == dataset)][met_k]
            if expansion_types[met_k] == 'max-over-dict':
                met_vals = [np.max(list(met.values())) for met in met_res]
            elif expansion_types[met_k] == 'max-over-list':
                met_vals = [np.max(met) for met in met_res]
            elif expansion_types[met_k] == 'value-at-middle':
                met_vals = [np.median(met) for met in met_res]  # the median picks the middle value
            elif expansion_types[met_k] is None:
                met_vals = met_res
            else:
                raise NotImplementedError('Expansion {} not implemented'.format(expansion_types[met_k]))
            mn, std = np.nanmedian(met_vals), np.nanstd(met_vals)
            res_met_slash.append((mn, std))
        # -- slash format: underline the best dataset within the cell
        if mark_best_in_cell and (not all([np.isnan(mn) for mn, _ in res_met_slash])):
            best_idx = metric_best_evaluator[met_k]([mn for mn, _ in res_met_slash])
        else:
            best_idx = None
        res_met_slash_new = []
        for i, (mn, std) in enumerate(res_met_slash):
            if np.isnan(mn):
                wstr = 'N/A'
            else:
                if (best_idx is not None) and (i == best_idx):
                    wstr = f'\\underline{{{mn:4.2f} +/- {std:4.2f}}}'
                else:
                    wstr = f'{mn:4.2f} +/- {std:4.2f}'
            res_met_slash_new.append(wstr)
        # -- format the whole slash cell
        res_sla = subrow_formatter.format(*res_met_slash_new)
        res_slash.update({met_v: res_sla})
    res_slash_agg.append(res_slash)
Slash dataset ordering is: ['KITTI' 'nuScenes']
/home/spencer/.cache/pypoetry/virtualenvs/avstack-docs-l0eE3ZqO-py3.8/lib/python3.8/site-packages/numpy/lib/nanfunctions.py:1217: RuntimeWarning: All-NaN slice encountered
  r, k = function_base._ureduce(a, func=_nanmedian, axis=axis, out=out,
/home/spencer/.cache/pypoetry/virtualenvs/avstack-docs-l0eE3ZqO-py3.8/lib/python3.8/site-packages/numpy/lib/nanfunctions.py:1878: RuntimeWarning: Degrees of freedom <= 0 for slice.
  var = nanvar(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
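To see what the slash formatter produces for a single cell, here is a tiny demonstration (the two values are example strings, not real metrics):
# Demonstrate the two-row cell formatter defined above on placeholder values
print(double_subrow_formatter.format('0.57 +/- 0.20', r'\underline{0.99 +/- 0.01}'))
# -> \tworowsubtablecenter{0.57 +/- 0.20}{\underline{0.99 +/- 0.01}}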
pd.set_option('display.max_colwidth',1000)
df_slash = pd.DataFrame(res_slash_agg)
lat_str = df_slash.to_latex(index=False, multirow=True, escape=False).replace('\\\\\n', '\\\\ \\midrule\n')
print(lat_str)
\begin{tabular}{rllllllllll}
\toprule
Case & Data & Per: 3D Prec. & Per: 3D Rec. & Per: 2D Prec. & Per: 2D Rec. & Trk: HOTA & Trk: MOTA & Trk: MOTP & Pred: ADE & Pred: FDE \\ \midrule
\midrule
0 & \tworowsubtablecenter{K}{N} & \tworowsubtablecenter{0.57 +/- 0.20}{\underline{0.99 +/- 0.01}} & \tworowsubtablecenter{\underline{0.90 +/- 0.00}}{0.25 +/- 0.07} & \tworowsubtablecenter{N/A}{N/A} & \tworowsubtablecenter{N/A}{N/A} & \tworowsubtablecenter{\underline{0.57 +/- 0.04}}{0.11 +/- 0.04} & \tworowsubtablecenter{\underline{0.20 +/- 0.23}}{0.20 +/- 0.07} & \tworowsubtablecenter{\underline{3.66 +/- 0.12}}{2.68 +/- 0.16} & \tworowsubtablecenter{1.18 +/- 0.14}{\underline{0.26 +/- 0.08}} & \tworowsubtablecenter{3.03 +/- 0.84}{\underline{0.26 +/- 0.08}} \\ \midrule
1 & \tworowsubtablecenter{K}{N} & \tworowsubtablecenter{0.37 +/- 0.13}{\underline{1.00 +/- 0.02}} & \tworowsubtablecenter{\underline{0.40 +/- 0.01}}{0.19 +/- 0.06} & \tworowsubtablecenter{N/A}{N/A} & \tworowsubtablecenter{N/A}{N/A} & \tworowsubtablecenter{\underline{0.41 +/- 0.01}}{0.09 +/- 0.04} & \tworowsubtablecenter{0.00 +/- 0.15}{\underline{0.12 +/- 0.05}} & \tworowsubtablecenter{\underline{3.10 +/- 0.02}}{2.75 +/- 0.13} & \tworowsubtablecenter{1.55 +/- 0.35}{\underline{0.31 +/- 0.06}} & \tworowsubtablecenter{3.02 +/- 1.16}{\underline{0.31 +/- 0.06}} \\ \midrule
2 & \tworowsubtablecenter{K}{N} & \tworowsubtablecenter{0.57 +/- 0.20}{\underline{0.69 +/- 0.18}} & \tworowsubtablecenter{\underline{0.90 +/- 0.00}}{0.32 +/- 0.02} & \tworowsubtablecenter{0.46 +/- 0.16}{\underline{0.90 +/- 0.04}} & \tworowsubtablecenter{\underline{0.85 +/- 0.12}}{0.52 +/- 0.11} & \tworowsubtablecenter{\underline{0.63 +/- 0.09}}{0.11 +/- 0.04} & \tworowsubtablecenter{\underline{0.46 +/- 0.14}}{0.11 +/- 0.07} & \tworowsubtablecenter{\underline{3.62 +/- 0.18}}{2.89 +/- 0.30} & \tworowsubtablecenter{1.26 +/- 0.61}{\underline{1.05 +/- 0.43}} & \tworowsubtablecenter{3.33 +/- 1.65}{\underline{1.05 +/- 0.43}} \\ \midrule
3 & \tworowsubtablecenter{K}{N} & \tworowsubtablecenter{0.37 +/- 0.13}{\underline{0.67 +/- 0.19}} & \tworowsubtablecenter{\underline{0.40 +/- 0.01}}{0.24 +/- 0.03} & \tworowsubtablecenter{0.46 +/- 0.16}{\underline{0.90 +/- 0.04}} & \tworowsubtablecenter{\underline{0.85 +/- 0.12}}{0.52 +/- 0.11} & \tworowsubtablecenter{\underline{0.45 +/- 0.03}}{0.09 +/- 0.03} & \tworowsubtablecenter{\underline{0.33 +/- 0.02}}{0.07 +/- 0.03} & \tworowsubtablecenter{\underline{3.11 +/- 0.01}}{2.93 +/- 0.29} & \tworowsubtablecenter{1.33 +/- 0.72}{\underline{0.63 +/- 0.59}} & \tworowsubtablecenter{2.82 +/- 1.61}{\underline{0.63 +/- 0.59}} \\ \midrule
4 & \tworowsubtablecenter{K}{N} & \tworowsubtablecenter{0.57 +/- 0.20}{\underline{0.69 +/- 0.18}} & \tworowsubtablecenter{\underline{0.90 +/- 0.00}}{0.32 +/- 0.02} & \tworowsubtablecenter{\underline{0.00 +/- 0.00}}{0.00 +/- 0.00} & \tworowsubtablecenter{\underline{0.00 +/- 0.00}}{0.00 +/- 0.00} & \tworowsubtablecenter{\underline{0.64 +/- 0.06}}{0.12 +/- 0.03} & \tworowsubtablecenter{\underline{0.51 +/- 0.06}}{0.10 +/- 0.08} & \tworowsubtablecenter{\underline{3.68 +/- 0.13}}{2.89 +/- 0.10} & \tworowsubtablecenter{1.33 +/- 0.47}{\underline{1.06 +/- 0.33}} & \tworowsubtablecenter{2.96 +/- 1.45}{\underline{1.06 +/- 0.33}} \\ \midrule
\bottomrule
\end{tabular}
/tmp/ipykernel_134445/1312111871.py:3: FutureWarning: In future versions `DataFrame.to_latex` is expected to utilise the base implementation of `Styler.to_latex` for formatting and rendering. The arguments signature may therefore change. It is recommended instead to use `DataFrame.style.to_latex` which also contains additional functionality.
  lat_str = df_slash.to_latex(index=False, multirow=True, escape=False).replace('\\\\\n', '\\\\ \\midrule\n')
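The FutureWarning above points at the newer Styler-based LaTeX export. A hedged sketch of a roughly equivalent call is below; it assumes pandas >= 1.4 (for `Styler.hide(axis='index')` and the `hrules` argument) and relies on the fact that Styler does not escape cell contents by default, so the \tworowsubtablecenter and \underline commands pass through unchanged.
# Sketch only: the Styler-based export suggested by the FutureWarning (pandas >= 1.4 assumed)
lat_str_alt = (
    df_slash.style
    .hide(axis='index')                       # drop the integer index column
    .to_latex(hrules=True)                    # adds \toprule / \midrule / \bottomrule
    .replace('\\\\\n', '\\\\ \\midrule\n')    # same row-separator trick as above
)
print(lat_str_alt)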