NAME

AI::TensorFlow::Libtensorflow::Manual::Quickstart - Start here for an overview of the library

DESCRIPTION

This document provides a tour of libtensorflow to help you get started with the library.

CONVENTIONS

The library uses the UpperCamelCase naming convention for method names in order to match the underlying C library (which helps track future API changes) and to make translating code from C easier, as this is a low-level API.

As such, constructors for objects that correspond to libtensorflow data structures are typically called New. For example, a new AI::TensorFlow::Libtensorflow::Status object can be created as follows:

use AI::TensorFlow::Libtensorflow::Status;
my $status = AI::TensorFlow::Libtensorflow::Status->New;

# `ok` is provided by a test module such as Test::More.
ok defined $status, 'Created new Status';

These libtensorflow data structures use destructors where necessary.

OBJECT TYPES

AI::TensorFlow::Libtensorflow::Status

Used for error handling. Many methods take a TFStatus as their final argument, which is then checked after the method call to ensure that the call completed successfully.
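For example, a common pattern is to check the status after each call and die with the message on failure. Below is a minimal sketch of such a helper; the GetCode and Message accessors and the OK constant mirror the C library's TF_GetCode, TF_Message, and TF_OK per the conventions above.

use AI::TensorFlow::Libtensorflow;

# Die with the status message unless the preceding call succeeded.
sub AssertOK {
    my ($status) = @_;
    if( $status->GetCode != AI::TensorFlow::Libtensorflow::Status::OK ) {
        die sprintf "%s: %s", $status->GetCode, $status->Message;
    }
    return;
}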

AI::TensorFlow::Libtensorflow::Tensor, AI::TensorFlow::Libtensorflow::DataType

A TFTensor is a multi-dimensional data structure that stores the data for inputs and outputs. Every element has the same data type, which is defined by AI::TensorFlow::Libtensorflow::DataType; thus a TFTensor is considered a "homogeneous data structure". See Introduction to Tensors for more.
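For example, a TFTensor can wrap data held in a PDL ndarray. The following is a minimal sketch (note that the dimensions are reversed because PDL stores data in column-major order while TensorFlow uses row-major order):

use AI::TensorFlow::Libtensorflow;
use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT);
use PDL;

# A PDL ndarray of single-precision floats with dimensions (2, 3).
my $p = sequence(float, 2, 3);

# Wrap the PDL data in a TFTensor; the deallocator closure keeps
# the ndarray alive for the lifetime of the TFTensor.
my $t = AI::TensorFlow::Libtensorflow::Tensor->New(
    FLOAT, [ reverse $p->dims ], $p->get_dataref, sub { undef $p }
);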

AI::TensorFlow::Libtensorflow::OperationDescription, AI::TensorFlow::Libtensorflow::Operation

An operation is a function that has inputs and outputs. It has a user-defined name (such as MyAdder) and a library-defined type (such as AddN). AI::TensorFlow::Libtensorflow::OperationDescription is used to build an operation that is then added to a graph of other operations; those other operations can set the operation's inputs and get the operation's outputs. These inputs and outputs have types and dimension specifications, so that the operations only accept and emit certain TFTensors.
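As a minimal sketch of this flow (the MyInput name is illustrative; the NewOperation, SetAttrType, and FinishOperation method names mirror the C library's TF_NewOperation, TF_SetAttrType, and TF_FinishOperation per the conventions above):

use AI::TensorFlow::Libtensorflow;
use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT);

my $graph  = AI::TensorFlow::Libtensorflow::Graph->New;
my $status = AI::TensorFlow::Libtensorflow::Status->New;

# Describe a Placeholder operation named 'MyInput' that will accept
# FLOAT TFTensors when the graph is run.
my $desc = $graph->NewOperation('Placeholder', 'MyInput');
$desc->SetAttrType('dtype', FLOAT);

# Finishing the description adds the operation to the graph.
my $op = $desc->FinishOperation($status);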

AI::TensorFlow::Libtensorflow::Graph

A set of operations with inputs and outputs linked together. This computation can be serialized along with parameters as part of a SavedModel.

AI::TensorFlow::Libtensorflow::Session, AI::TensorFlow::Libtensorflow::SessionOptions

A session drives the execution of an AI::TensorFlow::Libtensorflow::Graph. Specifics of how the session executes can be set via AI::TensorFlow::Libtensorflow::SessionOptions.
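A minimal sketch of creating a session for a graph (Session->New mirrors the C library's TF_NewSession per the conventions above):

use AI::TensorFlow::Libtensorflow;

my $graph  = AI::TensorFlow::Libtensorflow::Graph->New;
my $status = AI::TensorFlow::Libtensorflow::Status->New;

# SessionOptions control execution specifics such as device placement.
my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;

my $session = AI::TensorFlow::Libtensorflow::Session->New($graph, $opt, $status);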

TUTORIALS

The object types in "OBJECT TYPES" are used in the following tutorials:

InferenceUsingTFHubMobileNetV2Model: image classification tutorial

This tutorial demonstrates using a pre-trained SavedModel and creating an AI::TensorFlow::Libtensorflow::Session with the LoadFromSavedModel method, as sketched below. It also demonstrates how to prepare image data for use as an input TFTensor.
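The call looks roughly like the following sketch, where $saved_model_dir is a hypothetical path to an extracted SavedModel and 'serve' is the tag set to load:

use AI::TensorFlow::Libtensorflow;

my $opt    = AI::TensorFlow::Libtensorflow::SessionOptions->New;
my $graph  = AI::TensorFlow::Libtensorflow::Graph->New;
my $status = AI::TensorFlow::Libtensorflow::Status->New;

# Hypothetical path to a SavedModel directory.
my $saved_model_dir = 'path/to/saved_model';

# Load the SavedModel's graph and create a session that can run it.
my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
    $opt, undef, $saved_model_dir, [ 'serve' ], $graph, undef, $status
);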

InferenceUsingTFHubEnformerGeneExprPredModel: gene expression prediction tutorial

This tutorial builds on InferenceUsingTFHubMobileNetV2Model. It shows how to convert a pre-trained SavedModel from one that does not have a usable signature to a new model that does. It also demonstrates how to prepare genomic data for use as an input TFTensor.

DOCKER IMAGES

Docker (or equivalent runtime) images for the library, along with all the dependencies needed to run the above tutorials, are available at Quay.io under various tags and can be run as follows:

docker run --rm -it -p 8888:8888 quay.io/entropyorg/perl-ai-tensorflow-libtensorflow:latest-nb-omnibus

When the links come up in the terminal, click the link to http://127.0.0.1:8888/ to connect to the Jupyter Notebook interface via your web browser. In the browser, click on the notebook folder to access the notebooks.

GPU Docker support

If using the GPU Docker image for NVIDIA support, make sure that the TensorFlow Docker requirements are met and that the correct flags are passed to docker run, for example

docker run --rm --gpus all [...]

More information about NVIDIA Docker containers can be found in the NVIDIA Container Toolkit Installation Guide (specifically Setting up NVIDIA Container Toolkit) and User Guide.

Diagnostics

When using the Docker GPU image, you may come across the error

nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.

Make sure that you have installed the NVIDIA Container Toolkit correctly via the Installation Guide. Also make sure that you only have one Docker daemon installed. The recommended approach is to install via the official Docker releases at https://docs.docker.com/engine/install/. Note that in some cases, you may have other unofficial Docker installations such as the docker.io package or the Snap docker package, which may conflict with the official vendor-provided NVIDIA Container Runtime.

Docker Tags

latest: base image with only libtensorflow installed.

latest-nb-image-class: image containing dependencies needed to run InferenceUsingTFHubMobileNetV2Model.

latest-nb-gene-expr-pred: image containing dependencies needed to run InferenceUsingTFHubEnformerGeneExprPredModel.

latest-nb-obj-detect: image containing dependencies needed to run InferenceUsingTFHubCenterNetObjDetect.

latest-nb-omnibus: image containing dependencies for all of the above notebooks.

latest-gpu-nb-omnibus: same as latest-nb-omnibus but with NVIDIA GPU support.

SEE ALSO

TensorFlow home page

AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

The Apache License, Version 2.0, January 2004