
TensorFlow Framework

Explore the TensorFlow framework to understand tensors, computational graphs, and its comprehensive APIs. Learn how TensorFlow supports deep learning with efficient computation across CPUs, GPUs, and TPUs. This lesson also covers TensorBoard visualization, model deployment methods including TensorFlow Lite, and the framework's adaptability across platforms, preparing you to use TensorFlow for training and deploying deep learning models.

What is a tensor?

A tensor is a mathematical object that extends an array to multiple dimensions. The following figure illustrates tensors of various dimensions.

Tensors are data structures used extensively in the TensorFlow (TF) framework to store and process multidimensional data. An n-dimensional tensor is a collection of (n−1)-dimensional tensors. For instance, a 3D tensor can have multiple 2D tensors as its elements.

The rank of a tensor is the number of dimensions (axes) it has. Since a scalar is a single number, it has a rank of 0. A vector is an array of scalars; therefore, it has a rank of 1.
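As a quick illustration (assuming TensorFlow 2 is installed), we can create tensors of different ranks with `tf.constant` and inspect their rank and shape:

```python
import tensorflow as tf

scalar = tf.constant(3.0)                  # rank 0: a single number
vector = tf.constant([1.0, 2.0, 3.0])      # rank 1: an array of scalars
matrix = tf.constant([[1, 2], [3, 4]])     # rank 2: an array of vectors
cube = tf.constant([[[1, 2], [3, 4]],
                    [[5, 6], [7, 8]]])     # rank 3: an array of matrices

print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)  # 0 1 2 3
print(cube.shape)  # (2, 2, 2)
```

Note how the rank-3 tensor above is built out of two rank-2 tensors, matching the definition given earlier.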

The TF framework

TF is an open-source framework for large-scale numerical computation. It’s well suited to training deep neural networks with millions of parameters. Google uses it in numerous services, including:

  • Gmail

  • Google Cloud

  • Google Maps

  • Google Search

There are two major versions of TF: TF1 and TF2. The two versions differ in several ways (more on this later). In this course, we work with TF2.

The word TensorFlow comprises two words: Tensor and Flow. The former represents a multidimensional tensor, whereas the latter defines a computational graph over which data stored in tensors can flow.

Graphs are data structures that have a set of:

  • tf.Operation objects for computations.

  • tf.Tensor objects for the data which flows between operations.

Note: Graphs can be saved, run, and restored without the original Python code, which makes TF models portable across different platforms.
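As a minimal sketch of this idea, decorating a Python function with `tf.function` traces it into a graph whose nodes are `tf.Operation` objects and whose edges are `tf.Tensor` objects:

```python
import tensorflow as tf

# Tracing a Python function with tf.function builds a computational graph
# of tf.Operation nodes connected by tf.Tensor edges.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
b = tf.constant([[5.0]])
print(affine(x, w, b).numpy())  # [[16.]]

# Inspect the operations in the traced graph.
graph = affine.get_concrete_function(x, w, b).graph
print([op.type for op in graph.get_operations()])
```

The second `print` lists the graph's operation types (placeholders for the inputs, a matrix multiplication, an addition, and so on).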

Therefore, TF works by constructing a graph and performing calculations on it using multidimensional arrays called tensors. In a computational graph, the edges correspond to tensors, whereas the nodes represent mathematical operations on these tensors. The TF framework:

  • Works on N-dimensional tensors, similar to NumPy arrays.

  • Provides a Just-In-Time (JIT) compiler that optimizes complex computations for memory and speed.

  • Includes support for graphics processing units (GPUs) and distributed computing across multiple devices.

  • Allows us to export models trained in one environment to other environments, e.g., Linux to Android.

  • Offers excellent optimization methods to minimize numerous types of objective functions.
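As one small, hedged example of the last point, `tf.GradientTape` together with a built-in optimizer can minimize an arbitrary objective function; here we minimize (x − 3)² with plain gradient descent:

```python
import tensorflow as tf

# Minimize the objective f(x) = (x - 3)^2; its minimum is at x = 3.
x = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = (x - 3.0) ** 2
    grads = tape.gradient(loss, [x])   # automatic differentiation
    opt.apply_gradients(zip(grads, [x]))

print(x.numpy())  # converges toward 3.0
```

The same tape-and-optimizer loop is what drives neural network training, with the loss computed from model predictions instead of a toy quadratic.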

TF architecture

The following figure shows the basic architecture of the TF framework:

Our code interacts with both low-level and high-level APIs. The execution engine manages the efficient running of TF operations; it implements a distribution strategy for operations to be performed across multiple CPUs, GPUs, or Tensor Processing Units (TPUs).

TF APIs and platforms

TF provides several APIs to build, train, and deploy deep learning (DL) models. The Python code we write in TF can interact with:

  • High-level APIs, such as tf.estimator and Keras, for training neural networks.

  • Low-level APIs such as tf.nn, tf.losses, tf.metrics, and tf.optimizers.

  • Data input/output and processing APIs, tf.data, tf.image, and tf.io, which facilitate preprocessing steps for data preparation in a format acceptable for DL models.

  • APIs for mathematical functions: tf.math and tf.signal.

  • APIs for model deployment and optimization: tf.saved_model, tf.lite, tf.tpu, tf.quantization, and tf.distribute.

  • APIs to visualize models in TensorBoard: tf.summary.
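A minimal sketch of the high-level Keras API in action; the layer sizes and the random placeholder data below are illustrative, not from the lesson:

```python
import numpy as np
import tensorflow as tf

# Build a tiny classifier with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data stands in for a real dataset.
X = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))
model.fit(X, y, epochs=2, verbose=0)

print(model.predict(X[:1], verbose=0).shape)  # (1, 3)
```

Under the hood, `compile` and `fit` use the lower-level losses, metrics, and optimizers listed above.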

TensorFlow Extended (TFX) is an end-to-end platform that includes libraries and tools for data preprocessing, validation, and analysis, as well as for analyzing and serving TF models.

Furthermore, pretrained NN models and architectures are freely available in TensorFlow Hub and the TensorFlow Model Garden. We can employ transfer learning to reuse these free resources.

TF kernels

TF operations (or ops) are implemented in efficient C++ code. Many ops have multiple implementations, called kernels, each dedicated to a particular device: CPU, GPU, or TPU.

GPUs can perform efficient computations on digital signals and images because they split computations into parts that run in parallel. TPUs are application-specific integrated circuits (ASICs) designed for DL operations, and they can outperform GPUs on such workloads.
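We can ask TF which devices it can place kernels on, and even pin an op to a specific device; on a machine without a GPU or TPU, only the CPU appears:

```python
import tensorflow as tf

# List the devices TF can dispatch kernels to.
# A CPU is always present; GPUs/TPUs appear only if available.
for device in tf.config.list_physical_devices():
    print(device.device_type, device.name)

# Ops can also be pinned to a specific device explicitly.
with tf.device("/CPU:0"):
    c = tf.matmul(tf.constant([[1.0, 2.0]]), tf.constant([[3.0], [4.0]]))
print(c.numpy())  # [[11.]]
```

Without an explicit `tf.device` block, TF automatically places each op on the fastest suitable device.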

Operating systems and web support

The TF framework can run on:

  • Desktop computers with the Windows, Linux, or macOS operating systems.

  • Cloud as a web service.

  • Browsers using TensorFlow.js, a JavaScript implementation of TF.

  • Mobile devices running Android or iOS through TensorFlow Lite.

TF models can be trained on multiple CPUs, GPUs, and TPUs. We can then run these trained models on different machines.
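A minimal sketch of this portability (the `Scaler` module below is a made-up example): we export a model in the SavedModel format with `tf.saved_model.save`, then reload it; the same artifact could also be converted for TensorFlow Lite or served from the cloud:

```python
import tempfile
import tensorflow as tf

# A made-up module: multiplies its input by a trainable factor.
class Scaler(tf.Module):
    def __init__(self, factor):
        self.factor = tf.Variable(factor)

    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def __call__(self, x):
        return x * self.factor

export_dir = tempfile.mkdtemp()
tf.saved_model.save(Scaler(2.0), export_dir)   # portable SavedModel artifact

restored = tf.saved_model.load(export_dir)
print(restored(tf.constant([1.0, 2.0, 3.0])).numpy())  # [2. 4. 6.]
```

The `input_signature` fixes the traced graph's input shape and dtype, so the exported model can run without the original Python class definition.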

Conclusion

In this lesson, we’ve discussed the architecture of the TF framework and learned about its main features.