rtgym.agent.sensory package

Submodules

rtgym.agent.sensory.sensory module

The sensory module contains the Sensory class, which is responsible for creating and managing the spatially modulated and movement-modulated sensory cells of the agent.

class Sensory(gym)[source]

Bases: object

The class that manages the sensory system of the agent. More broadly, it handles all simulated neuronal responses of the agent.

When the gym is initialized, an Agent object is automatically created, which in turn creates a Sensory object. The Sensory object is initially just a placeholder and must be initialized with a sensory profile (a dictionary) that defines the simulated neuronal groups and their parameters.

During spatial traversal, RatatouGym separates the concerns of trajectory generation and neuronal response computation. Once a trajectory is generated, RatatouGym calls the get_response method of the Sensory object. This method takes the trajectory as input and computes the corresponding neuronal responses using the defined tuning curves.

This class should not be initialized directly. The RatatouGym class will automatically manage it.

Parameters:

gym (RatatouGym) – Parent RatatouGym object.
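
A minimal sketch of the intended workflow; the attribute path gym.agent.sensory and the profile variable below are assumptions, since only the automatic creation of the Sensory object is documented here:

>>> gym = RatatouGym()                     # constructor arguments omitted for brevity
>>> sensory = gym.agent.sensory            # assumed attribute path to the managed Sensory
>>> sensory.init_from_profile(my_profile)  # initialize the placeholder with a profile dict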

add_sensory(sensory_profile: Dict[str, Any])[source]

Add a sensory cell to the sensory system.

Parameters:

sensory_profile – Dictionary containing the sensory profile.
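
The profile schema is not documented on this page, so the keys and values below are purely illustrative assumptions; consult the rtgym source for the real schema:

>>> sensory_profile = {
...     "place": {            # group name (assumed)
...         "type": "place",  # modality type (assumed key and value)
...         "n_cells": 100,   # number of simulated cells (assumed key)
...     },
... }
>>> sensory.add_sensory(sensory_profile)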

aggregate_res_maps(keys=None, str_filter=None, type_filter=None)[source]

Aggregate sensory response maps from spatial modalities.

Combines response maps from multiple spatial sensory modalities into a single array for analysis or decoding purposes.

Parameters:
  • keys (list, optional) – Specific sensory keys to include.

  • str_filter (str, optional) – String filter for sensory names.

  • type_filter (str, optional) – Type filter for sensory modalities.

Returns:

Aggregated sensory response maps of shape (n_cells, H, W).

Return type:

np.ndarray

Raises:

AssertionError – If sensory cells that are not spatially modulated are included.
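
A short usage sketch, assuming a spatially modulated modality with type “place” exists in the current profile:

>>> res_maps = sensory.aggregate_res_maps(type_filter="place")
>>> res_maps.shape  # (n_cells, H, W), as documented above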

property arena
property common_params
compute_res()[source]
decode_response(response: numpy.ndarray, res_maps=None, keys=None, str_filter=None, type_filter=None, use_torch=True, device=None, method='euclidean', **kwargs)[source]

Decode sensory response into spatial coordinates using various optimization methods.

This method converts high-dimensional sensory responses (e.g., from place cells, grid cells) back to spatial coordinates. Multiple algorithms are available, ranging from exact brute-force search to fast approximate methods.

Parameters:
  • response (np.ndarray) – Sensory response array of shape (B, T, D) for trajectory decoding or (B, D) for single state decoding, where B = batch size, T = time steps, and D = feature dimensions.

  • res_maps (np.ndarray, optional) – Precomputed response template maps. If None, computed from filtered sensory modalities.

  • keys (list, optional) – Specific sensory keys to include in decoding.

  • str_filter (str, optional) – String filter for sensory names.

  • type_filter (str, optional) – Type filter for sensory modalities.

  • use_torch (bool, optional) – Enable PyTorch acceleration (default: True).

  • device (str or torch.device, optional) – Computation device for PyTorch.

  • method (str, optional) – Decoding algorithm to use:

    - “euclidean”: brute-force exact search (default)

    - “torch_euclidean”: GPU-accelerated exact search

    - “kdtree”: k-d tree for fast exact search

    - “faiss”: FAISS library for very fast approximate search

    - “interpolation”: spatial interpolation with anchor points

  • **kwargs – Additional parameters passed to specific methods.

Returns:

Decoded coordinates wrapped in the appropriate dataclass. Shape matches input: (B, T, 2) for trajectories, (B, 2) for states.

Return type:

Union[Trajectory, AgentState]

Examples

>>> # Decode place cell responses to trajectory
>>> trajectory = sensory.decode_response(responses, method="kdtree")
>>>
>>> # Fast approximate decoding with FAISS
>>> trajectory = sensory.decode_response(responses, method="faiss", n_clusters=50)

filter_sensories(keys=None, str_filter=None, type_filter=None)[source]

Find the keys of the sensory cells that match the given criteria.

When multiple filters are given, the most specific one takes priority. From most to least specific:

keys > str_filter > type_filter
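
A sketch of the precedence rule, assuming the method returns a list of matching keys and that a cell named “place_0” exists:

>>> sensory.filter_sensories(keys=["place_0"], str_filter="grid")
['place_0']  # keys wins over str_filter when both are given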

get_response(agent_data: AgentState | Trajectory, return_format='dict', keys=None, str_filter=None, type_filter=None)[source]

Get sensory responses for the given agent state or trajectory.

Parameters:
  • agent_data – rtgym.dataclass.AgentState or rtgym.dataclass.Trajectory object.

  • return_format – Format of the returned responses. Can be ‘dict’ or ‘array’.

  • keys – List of sensory keys to get responses. If None, get responses for all.

  • str_filter – Filter the sensory keys by the given string.

  • type_filter – Filter the sensory keys by the given type.

Returns:

Sensory responses. The full response maps have shape (n_cells, *arena_dimensions); after indexing with the trajectory, the responses have shape (n_cells, n_batch). When return_format is ‘dict’, the responses are returned as a dictionary keyed by sensory name; when return_format is ‘array’, they are returned as a single numpy array.

Return type:

dict or np.ndarray
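
A usage sketch, assuming trajectory is a previously generated rtgym.dataclass.Trajectory and that the str_filter value “place” matches some sensory names:

>>> responses = sensory.get_response(trajectory, return_format="array")
>>> place_only = sensory.get_response(trajectory, return_format="dict", str_filter="place")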

init_from_profile(sensory_profile)[source]
list_all()[source]

List all the sensory cells.

load(file_path)[source]

Load the sensory cells from a file.

Parameters:

file_path – Path to the file where the sensory cells are saved.

load_from_state_dict(state_dict, append=True)[source]

Load the sensory cells from a state dictionary.

Parameters:
  • state_dict – State dictionary of the sensory cells.

  • append – If True, append the sensory cells to the existing sensory cells. If False, replace the existing sensory cells.
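
A sketch of the two modes; where the state_dict comes from is not documented here, so it is left abstract:

>>> sensory.load_from_state_dict(state_dict, append=False)  # replace the existing cells
>>> sensory.load_from_state_dict(state_dict)                # append to them (default)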

num_sensories(keys=None, str_filter=None, type_filter=None)[source]
property s_res
save(file_path)[source]

Save the sensory cells to a file.

Parameters:

file_path – Path to the file where the sensory cells will be saved.
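
A save/load round trip; the file name and extension are assumptions, since the on-disk format is not documented here:

>>> sensory.save("sensory_cells.pkl")
>>> sensory.load("sensory_cells.pkl")  # restores the saved cells into this Sensory object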

property t_res

Module contents