add_to_graph

This module provides a way to add custom functions to the graph.

To register a function in your graph, instead of doing:

  • x = function(*args, **kwds)

do this:

  • x = add_to_graph(function, *args, **kwds).

For this to work, at least one Symbolic Data object must be present among *args, **kwds; other data types may be mixed in as well.

Example for using torch.concat:

from pytorch_symbolic import Input
from pytorch_symbolic.functions_utility import add_to_graph
import torch

v1 = Input((10,))
v2 = Input((20,))
output = add_to_graph(torch.concat, tensors=(v1, v2), dim=1)
output
<SymbolicTensor at 0x7ffb77ba87f0; 2 parents; 0 children>

This will work for most user-defined functions, even when Symbolic Tensors are hidden in nested tuples, lists, or dicts. Be aware that every function registered this way incurs a small __call__ overhead at runtime. For large models running on a GPU this overhead is usually hidden, because the CPU finishes its bookkeeping before the GPU completes the previous kernel's computation.

The recommended, overhead-free way to use a custom function is to write an nn.Module that does the same thing as the function of choice. You can then use the model without sacrificing performance.
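As a minimal sketch of this approach, the `torch.concat` example above can be wrapped in a small module (the class name `Concat` is our own choice, not part of the library):

```python
import torch
from torch import nn


class Concat(nn.Module):
    """Concatenate inputs along a fixed dimension."""

    def __init__(self, dim: int = 1):
        super().__init__()
        self.dim = dim

    def forward(self, *tensors):
        return torch.cat(tensors, dim=self.dim)


# Behaves like torch.concat, but as an nn.Module it can be called
# on Symbolic Data directly, without the add_to_graph wrapper.
cat = Concat(dim=1)
out = cat(torch.zeros(4, 10), torch.zeros(4, 20))
print(out.shape)  # torch.Size([4, 30])
```

Because `Concat` is a plain `nn.Module`, calling it on Symbolic Data registers it in the graph without the argument-parsing wrapper that `add_to_graph` adds.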

pytorch_symbolic.add_to_graph

pytorch_symbolic.add_to_graph(func: Callable | nn.Module, *args, custom_name: str | None = None, **kwds)

Register a custom func or a module in the computation graph.

This works with arbitrary functions and modules if and only if at least one Symbolic Data is among *args, **kwds.

This way of registering is flexible, but might add a small slowdown to each call, because it adds a wrapper that parses arguments. If this is unacceptable, please create a torch.nn.Module that takes only Symbolic Data arguments.

All arguments, including Symbolic Data, should be passed after the func argument. The arguments can be mixed and matched, and even nested in lists, tuples, and dictionaries.

Convolution func example::

import torch.nn.functional as F
from pytorch_symbolic import Input
from pytorch_symbolic.functions_utility import add_to_graph

inputs = Input(shape=(3, 32, 32))
kernel = Input(batch_shape=(16, 3, 3, 3))
bias = Input(batch_shape=(16,))
output = add_to_graph(F.conv2d, input=inputs, weight=kernel, bias=bias, padding=1)
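To sanity-check the shapes involved, here is the eager-mode equivalent of the symbolic example above (the sizes are our own illustrative choices):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 3, 32, 32)      # batch of 8 RGB images, 32x32
weight = torch.randn(16, 3, 3, 3)  # 16 output channels, 3x3 kernels
bias = torch.randn(16)

# With padding=1 and a 3x3 kernel, spatial dimensions are preserved.
y = F.conv2d(x, weight, bias, padding=1)
print(y.shape)  # torch.Size([8, 16, 32, 32])
```

The symbolic version records exactly this computation in the graph, producing an output node with channel dimension 16 and unchanged spatial size.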