miniosl.inference
inference modules
- class miniosl.inference.InferenceModel(device)
interface for inference using trained models
- eval(input: ndarray, *, take_softmax: bool = False) → Tuple[ndarray, float, ndarray]
return a (move, value, aux) tuple; the move output passes through softmax only when take_softmax is True
- miniosl.inference.load(path: str, device: str = '', torch_cfg: dict = {}, *, compiled: bool = False, strict: bool = True, remove_aux_head: bool = False) → InferenceModel
factory method to load a model from a file
- Parameters:
path – file path
device – torch device such as ‘cuda’ or ‘cpu’
torch_cfg – network specification needed for TorchInfer
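A minimal usage sketch of the API above. The miniosl calls are shown as comments because they require a trained model file (the filename model.pt here is hypothetical); the runnable part only illustrates the softmax post-processing that eval applies when take_softmax is True, implemented with numpy:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical usage, assuming miniosl is installed and a model file exists:
#
#   import miniosl
#   model = miniosl.inference.load("model.pt", device="cpu")
#   move, value, aux = model.eval(features)          # raw logits by default
#   probs = softmax(move)                            # same as take_softmax=True

# Demonstration on dummy logits:
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
# probs is a valid probability distribution over the three entries
```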