Metrics
This module provides a function to score how well agents follow the externally provided cue over time.
------------------------------------

compute_performance(experiment, simulation_results, detailed=False)
Compute the performance of the agents in the experiment from the simulation results.
The metric is the dot product between the sign of the cue vector and the simulation results,
which gives each agent's score: activity on the correctly cued task counts positively,
while activity on the wrong task counts negatively. The group performance is simply
the sum of the scores of all agents.
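The sign-weighted scoring can be illustrated with a toy computation (the numbers below are hypothetical, not from the module):

```python
import numpy as np

# Hypothetical toy numbers: one agent, two tasks, a single time step.
cue = np.array([2.0, -1.0])      # positive cue on task 0, negative on task 1
activity = np.array([3.0, 1.0])  # agent mostly works on task 0

# Score = sign(cue) . activity: effort on the cued task counts positively,
# effort against the cue counts negatively.
score = np.dot(np.sign(cue), activity)
print(score)  # 3.0 - 1.0 = 2.0
```

An agent that concentrated all its activity on the wrongly cued task would instead receive a negative score.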
Parameters
----------
experiment : Experiment
    The experiment object that generated the simulation_results.
simulation_results : 2D numpy array
    The simulation results used to compute the performance. The shape should be (number_of_agents * number_of_tasks, total_time).
detailed : bool, optional
    Whether to also return detailed information about the performance (performance values at each time step). The default is False.
Returns
-------
score : 1D numpy array
    The performance of each agent. The shape is (number_of_agents,).
group_performance : float
    The performance of the group.
detailed_score : 2D numpy array
    The performance of each agent, at each time step, on each task. The shape is (total_time, number_of_agents * number_of_tasks). This is returned only if detailed=True.
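A minimal sketch of the computation described above. The real function takes an Experiment object; here the cue vector and the problem sizes are passed in directly so the example is self-contained (the signature and the agent-major row layout are assumptions, not the module's actual API):

```python
import numpy as np

def compute_performance_sketch(cue, simulation_results, n_agents, n_tasks,
                               detailed=False):
    """Hypothetical stand-in for compute_performance.

    cue                : 1D array, shape (n_agents * n_tasks,)
    simulation_results : 2D array, shape (n_agents * n_tasks, total_time)
    """
    # (total_time, n_agents * n_tasks): sign-weighted score at each
    # time step, positive where the agent follows the cue.
    detailed_score = simulation_results.T * np.sign(cue)
    # Sum over time, then over each agent's tasks
    # (rows assumed grouped agent by agent).
    per_row = detailed_score.sum(axis=0)
    score = per_row.reshape(n_agents, n_tasks).sum(axis=1)
    group_performance = float(score.sum())
    if detailed:
        return score, group_performance, detailed_score
    return score, group_performance
```

For example, with two agents and two tasks, an agent whose activity is split evenly between a positively and a negatively cued task scores zero, and the group performance is the sum of the per-agent scores.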