NAME
AI::TensorFlow::Libtensorflow::Manual::GPU - GPU-specific installation and usage information.
DESCRIPTION
This guide provides information about using the GPU version of libtensorflow. This is currently specific to NVIDIA GPUs as they provide the CUDA API that libtensorflow targets for GPU devices.
INSTALLATION
In order to use a GPU with libtensorflow, you will need to check that both the hardware and software requirements are met. Please refer to the official documentation for the specific hardware capabilities and software versions.
An alternative to installing all the software listed on the "bare metal" host machine is to use libtensorflow via a Docker container and the NVIDIA Container Toolkit. See "DOCKER IMAGES" in AI::TensorFlow::Libtensorflow::Manual::Quickstart for more information.
RUNTIME
When running libtensorflow, your program will attempt to acquire quite a bit of GPU VRAM. You can check whether you have enough free VRAM by using the nvidia-smi command, which displays resource information as well as which processes are currently using the GPU. If libtensorflow is not able to allocate enough memory, it will crash with an out-of-memory (OOM) error. This is typical when running multiple programs that each use the GPU.
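For example, the following is a minimal sketch of such a check using nvidia-smi's query interface (the query flags are standard nvidia-smi options; the 4096 MiB threshold is only a placeholder, so substitute the amount your model actually needs):

  use strict;
  use warnings;

  # Ask nvidia-smi for the free VRAM (in MiB) of each GPU,
  # one value per line.
  my @free_mib = `nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits`;
  chomp @free_mib;

  # Placeholder requirement: warn about any GPU with less
  # than 4096 MiB free.
  my $required_mib = 4096;
  for my $idx ( 0 .. $#free_mib ) {
      warn "GPU $idx: only $free_mib[$idx] MiB free (< $required_mib MiB)\n"
          if $free_mib[$idx] < $required_mib;
  }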
If you have multiple GPUs, you can control which GPUs your program can access by using the CUDA_VISIBLE_DEVICES environment variable provided by the underlying CUDA library. This is typically done by setting the variable in a BEGIN block before loading AI::TensorFlow::Libtensorflow:
  BEGIN {
      # Set the specific GPU device that is available
      # to this program to GPU index 0, which is the
      # first GPU as listed in the output of `nvidia-smi`.
      $ENV{CUDA_VISIBLE_DEVICES} = '0';

      require AI::TensorFlow::Libtensorflow;
  }
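The variable also accepts a comma-separated list of indices (for example, '0,2') to expose several specific GPUs. Conversely, setting it to an empty string (or '-1') hides all GPUs from the CUDA runtime, which is one way to force CPU-only execution when the GPU is busy or when debugging OOM errors. A sketch of that approach:

  BEGIN {
      # Hide all CUDA devices so that the program falls
      # back to running on the CPU only.
      $ENV{CUDA_VISIBLE_DEVICES} = '';

      require AI::TensorFlow::Libtensorflow;
  }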
AUTHOR
Zakariyya Mughal <zmughal@cpan.org>
COPYRIGHT AND LICENSE
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
This is free software, licensed under:
The Apache License, Version 2.0, January 2004