Keras Backends

In this page of the Deep Learning tutorial, we will learn about Keras backends, switching from one backend to another, and using the abstract Keras backend to write new code.


Keras backends

Keras is a model-level library that provides high-level building blocks for developing deep learning models. Rather than handling low-level operations such as tensor products and convolutions itself, it relies on a backend engine: a specialized, well-optimized tensor manipulation library. Keras does not pick a single tensor library and tie its implementation to that library; instead, it handles the problem in a modular way, so that several different backend engines can be plugged into Keras seamlessly.

The three backend implementations currently offered are as follows:

  • TensorFlow: an open-source symbolic tensor manipulation framework developed by Google.
  • Theano: an open-source symbolic tensor manipulation framework developed by the LISA Lab at Université de Montréal.
  • CNTK: an open-source deep-learning toolkit developed by Microsoft.

Switching from one backend to another

If you have used Keras at least once, you will find the Keras configuration file at:

$HOME/.keras/keras.json

If you have trouble locating it there, you can make one!

Note: Windows users should replace $HOME with %USERPROFILE%.

The default configuration is as follows:

{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}

All you have to do is change the backend field here to "theano", "tensorflow", or "cntk"; Keras will then use the new configuration the next time you execute any Keras code.

Alternatively, you can define the KERAS_BACKEND environment variable, which overrides whatever is defined in your config file:

KERAS_BACKEND=tensorflow python -c "from keras import backend"
Using TensorFlow backend.

Since Keras is able to use external backends, it is possible to use more backends than just "tensorflow", "theano", or "cntk". This is accomplished by changing the "backend" setting in keras.json. Suppose you want to use a Python module called my_module as an external backend; in that scenario, the keras.json file would change as follows:

{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "my_package.my_module"
}

In order to be used as an external backend, the module must be validated and must provide functions such as placeholder, variable, and function.

If the external backend is invalid, an error listing all the missing entries may be generated.
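
The shape such a module might take is sketched below. This is a hypothetical, illustrative skeleton for my_package/my_module.py (only the module name comes from the example above); it shows just the entry points Keras checks for, and a real backend would implement many more operations.

  # Hypothetical skeleton of an external backend module (my_package/my_module.py).
  # Keras expects at least these entry points to exist; the bodies here are stubs.
  def placeholder(shape=None, ndim=None, dtype=None, sparse=False, name=None):
      """Create a symbolic input tensor for this backend."""
      raise NotImplementedError

  def variable(value, dtype=None, name=None, constraint=None):
      """Wrap a value in a backend variable."""
      raise NotImplementedError

  def function(inputs, outputs, updates=None, **kwargs):
      """Compile a callable that maps inputs to outputs."""
      raise NotImplementedError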

keras.json details

The keras.json file's settings are listed as follows:

{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}

You can easily change the settings by changing $HOME/.keras/keras.json.

  • image_data_format: It can be declared as a string with one of two values, "channels_last" or "channels_first", indicating the data format convention Keras follows. (It is returned by backend.image_data_format().)
  • For any two-dimensional data, such as an image, "channels_last" assumes (rows, cols, channels), whereas "channels_first" assumes (channels, rows, cols).
  • For any three-dimensional data, "channels_last" refers to (conv_dim1, conv_dim2, conv_dim3, channels), while "channels_first" refers to (channels, conv_dim1, conv_dim2, conv_dim3).
  • epsilon: It refers to a float, a small numerical fuzz constant used in some operations to avoid division by zero.
  • floatx: It denotes a string, one of "float16", "float32", or "float64", and is the default float precision.
  • backend: It is a string, one of "tensorflow", "theano", or "cntk".
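
As a quick check, the same settings can be read back at runtime through the backend module; the following minimal sketch simply prints the active configuration (the exact values depend on your keras.json).

  from keras import backend as K

  print(K.backend())            # e.g. 'tensorflow'
  print(K.floatx())             # e.g. 'float32'
  print(K.epsilon())            # e.g. 1e-07
  print(K.image_data_format())  # e.g. 'channels_last'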

Usage of abstract Keras backend for writing new code

You can make the Keras modules you write compatible with both TensorFlow (tf) and Theano (th) by using the abstract Keras backend API. Here is an introduction to it.

You may import the backend module by using:

from keras import backend as K

The code below creates an input placeholder; it is equivalent to tf.placeholder() or th.tensor.matrix(), th.tensor.tensor3(), etc.


   inputs = K.placeholder(shape=(2, 4, 5))  
   # also works:  
   inputs = K.placeholder(shape=(None, 4, 5))  
   # also works:  
   inputs = K.placeholder(ndim=3)

The following code will create a variable, which will then be equivalent to either tf.Variable() or th.shared().


  import numpy as np  
  val = np.random.random((3, 4, 5))  
  var = K.variable(value=val)  
    
  # all-zeros variable:  
  var = K.zeros(shape=(3, 4, 5))  
  # all-ones:  
  var = K.ones(shape=(3, 4, 5))  

The majority of tensor operations that you would need will be carried out similarly to how you would in Theano or TensorFlow, including the following:


  # Initializing Tensors with Random Numbers  
  b = K.random_uniform_variable(shape=(3, 4), low=0, high=1) # Uniform distribution  
  c = K.random_normal_variable(shape=(3, 4), mean=0, scale=1) # Gaussian distribution  
  d = K.random_normal_variable(shape=(3, 4), mean=0, scale=1)  
    
  # Tensor Arithmetic  
  a = b + c * K.abs(d)  
  c = K.dot(a, K.transpose(b))  
  a = K.sum(b, axis=1)  
  a = K.softmax(b)  
  a = K.concatenate([b, c], axis=-1)  
  # etc...  
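
Putting these pieces together, the following minimal sketch (assuming the TensorFlow backend) compiles a placeholder and a few backend ops into a callable with K.function and evaluates it on a concrete NumPy array; the variable names and shapes are only illustrative.

  import numpy as np
  from keras import backend as K

  # symbolic graph: a placeholder, a variable, and a softmax over their dot product
  x = K.placeholder(shape=(None, 3))
  w = K.variable(np.ones((3, 2)))
  y = K.softmax(K.dot(x, w))

  # compile the graph into a callable and run it on real data
  f = K.function([x], [y])
  print(f([np.random.random((4, 3))])[0].shape)  # (4, 2)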

Backend functions

backend

keras.backend.backend()

The backend function returns the name of the backend currently in use.

Returns

It gives back a string with the name of the backend that is currently being used.

Example

>>> keras.backend.backend()
'tensorflow'

symbolic

keras.backend.symbolic(func)

It can be characterized as a decorator that TensorFlow 2.0 uses to enter the Keras graph.

Arguments

  • func: It refers to the function to decorate.

Returns

It gives back the decorated function.

eager

keras.backend.eager(func)

It can be described as a decorator that TensorFlow 2.0 uses to leave the Keras graph.

Arguments

  • func: It refers to the function to decorate.

Returns

It gives back the decorated function.

get_uid

keras.backend.get_uid(prefix='')

It provides a unique UID given a string prefix.

Arguments

  • prefix: This alludes to a string.

Returns

The output of this backend function is an integer.

Example

>>> keras.backend.get_uid('dense')
1

>>> keras.backend.get_uid('dense')
2

manual_variable_initialization

keras.backend.manual_variable_initialization(value)

This function sets the manual variable initialization flag. The flag is a Boolean that determines whether variables should be initialized as they are instantiated (the default), or whether the user should handle the initialization.

Arguments

  • value: It refers to the Boolean value in Python.

epsilon

keras.backend.epsilon()

The value of the fuzz factor, which is employed in the numerical expressions, is returned by it.

Returns

It gives back a float.

Example

>>> keras.backend.epsilon()
1e-07

reset_uids

keras.backend.reset_uids()

It is used to reset the graph identifiers.
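
Example

An illustrative sketch continuing the get_uid example above: resetting the UIDs restarts the per-prefix counters.

>>> keras.backend.get_uid('dense')
1
>>> keras.backend.get_uid('dense')
2
>>> keras.backend.reset_uids()
>>> keras.backend.get_uid('dense')
1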

set_epsilon

keras.backend.set_epsilon(e)

The value of the fuzz factor, which is employed in the numerical expressions, is set using it.

Arguments

  • e: It can be described as the new value of the epsilon represented by a float value.

Example

>>> from keras import backend as K
>>> K.epsilon()
1e-07
>>> K.set_epsilon(1e-05)
>>> K.epsilon()
1e-05

floatx

keras.backend.floatx()

It returns the default float type as a string, such as "float16", "float32", or "float64".

Returns

It provides a string of the current standard float type in return.

Example

>>> keras.backend.floatx()
'float32'

set_floatx

keras.backend.set_floatx(floatx)

It is used to set the default float type.

Arguments

  • floatx: A string of the float type, such as "float16," "float32," or "float64," is what is meant by this.

Example


 >>> from keras import backend as K  
 >>> K.floatx()  
 'float32'  
 >>> K.set_floatx('float16')  
 >>> K.floatx()  
 'float16'  

Raises

ValueError: Whenever there is an invalid value, then ValueError will be generated.

cast_to_floatx

keras.backend.cast_to_floatx(x)

It is used to convert Numpy arrays to the default float type in Keras.

Arguments

  • x: The Numpy array is referred to.

Returns

The Numpy array that was converted to the new type is returned.

Example


    >>> from keras import backend as K  
    >>> K.floatx()  
    'float32'  
    >>> import numpy  
    >>> arr = numpy.array([1.0, 2.0], dtype='float64')  
    >>> arr.dtype  
    dtype('float64')  
    >>> new_arr = K.cast_to_floatx(arr)  
    >>> new_arr  
    array([ 1.,  2.], dtype=float32)  
    >>> new_arr.dtype  
    dtype('float32')     

image_data_format

keras.backend.image_data_format()

It is used to return the default image data format convention.

Returns

Either "channels first" or "channels last" is returned as a string.

Example

>>> keras.backend.image_data_format()
'channels_first'

set_image_data_format

keras.backend.set_image_data_format(data_format)

The value of the data format convention is set using this function.

Arguments

  • data_format: It can be specified as a string, either "channels_first" or "channels_last".

Example


 >>> from keras import backend as K  
 >>> K.image_data_format()  
 'channels_first'  
 >>> K.set_image_data_format('channels_last')  
 >>> K.image_data_format()  
 'channels_last'  

Raises

ValueError: Anytime there is an invalid data format value, a ValueError is produced.

learning_phase

keras.backend.learning_phase()

It produces a flag for the learning phase that can be used as an input for any Keras function that employs a different behavior during training and testing (0 = test, 1 = train).

Returns

The learning phase's scalar integer tensor or Python integer is returned.
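
Example

A common pattern, sketched below under the assumption of the TensorFlow backend, is to feed the learning-phase flag into K.function next to the data, so that layers such as Dropout behave differently at train time (1) and test time (0).

  import numpy as np
  from keras import backend as K
  from keras.layers import Input, Dropout

  x = Input(shape=(4,))
  y = Dropout(0.5)(x)
  run = K.function([x, K.learning_phase()], [y])

  data = np.ones((2, 4))
  test_out = run([data, 0])[0]   # learning phase 0: Dropout is disabled
  train_out = run([data, 1])[0]  # learning phase 1: Dropout is active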

set_learning_phase

keras.backend.set_learning_phase(value)

It is used to give the learning phase a fixed value.

Arguments

  • value: It can be described as an integer that indicates whether the learning phase value is 0 or 1.

Raises

ValueError: If the value is neither 0 nor 1, it is raised.

clear_session

keras.backend.clear_session()

It is employed to reset each and every state that Keras generates. Keras manages the global state that is used for the Functional model-building API implementation as well as to unify automatically generated layer names.

If you build several models in a loop, the global state consumes an increasing amount of memory over time, so you will want to clear it.

It is used to erase Keras' existing graph and build a new one. As it eliminates clutter from outdated models and layers, it is highly helpful.

Example 1: calling clear_session() when creating models in a loop


  for _ in range(100):  
    # Without `clear_session()`, each iteration of this loop will  
    # slightly increase the size of the global state managed by Keras  
    model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])  
    
  for _ in range(100):  
    # With `clear_session()` called at the beginning,  
    # Keras starts with a blank state at each iteration  
    # and memory consumption is constant over time.  
    tf.keras.backend.clear_session()  
    model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])     

Example 2: resetting the layer name generation counter


 >>> import tensorflow as tf  
 >>> layers = [tf.keras.layers.Dense(10) for _ in range(10)]  
 >>> new_layer = tf.keras.layers.Dense(10)  
 >>> print(new_layer.name)  
 dense_10  
 >>> tf.keras.backend.set_learning_phase(1)  
 >>> print(tf.keras.backend.learning_phase())  
 1  
 >>> tf.keras.backend.clear_session()  
 >>> new_layer = tf.keras.layers.Dense(10)  
 >>> print(new_layer.name)  
 dense  

is_sparse

keras.backend.is_sparse(tensor)

It is used to determine whether a tensor is sparse.

Arguments

  • tensor: It refers to a tensor instance.

Returns

It gives back a Boolean.

Example


  >>> from keras import backend as K  
  >>> a = K.placeholder((2, 2), sparse=False)  
  >>> print(K.is_sparse(a))  
  False  
  >>> b = K.placeholder((2, 2), sparse=True)  
  >>> print(K.is_sparse(b))  
  True  

to_dense

keras.backend.to_dense(tensor)

It returns the result of converting a sparse tensor to a dense tensor.

Arguments

  • tensor: It speaks of a certain tensor instance (potentially sparse).

Returns

A dense tensor is the result.

Example


  >>> from keras import backend as K  
  >>> b = K.placeholder((2, 2), sparse=True)  
  >>> print(K.is_sparse(b))  
  True  
  >>> c = K.to_dense(b)  
  >>> print(K.is_sparse(c))  
  False     

variable

keras.backend.variable(value, dtype=None, name=None, constraint=None)

It is used to instantiate a variable and return it.

Arguments

  • value: It is a numpy array that symbolizes the starting value of the tensor.
  • dtype: It speaks of a Tensor's type.
  • name: This denotes a string name for a tensor.
  • constraint: It refers to a projection function that may be used on the variable once an optimizer has been updated.

Returns

It gives back a variable instance (with Keras metadata included).

Example


  >>> from keras import backend as K  
  >>> val = np.array([[1, 2], [3, 4]])  
  >>> kvar = K.variable(value=val, dtype='float64', name='example_var')  
  >>> K.dtype(kvar)  
  'float64'  
  >>> print(kvar)  
  example_var  
  >>> K.eval(kvar)  
  array([[ 1.,  2.],  
        [ 3.,  4.]])    

is_variable

keras.backend.is_variable(x)

It returns whether x is a variable.

constant

keras.backend.constant(value, dtype=None, shape=None, name=None)

It is used to create a constant tensor.

Arguments

  • value: It can be a constant value or a list.
  • dtype: It speaks of a Tensor's type.
  • name: This denotes a string name for a tensor.
  • shape: It can be described as the resulting tensor's optional dimensionality.

Returns

It returns a constant tensor.
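
Example

A minimal sketch: the constant can be materialized with K.eval (the values shown assume the default float32 floatx).

  >>> from keras import backend as K
  >>> c = K.constant([[1, 2], [3, 4]])
  >>> K.eval(c)
  array([[1., 2.],
         [3., 4.]], dtype=float32)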

is_keras_tensor

keras.backend.is_keras_tensor(x)

It indicates whether or not x is a Keras tensor. A tensor that is returned by a Keras layer (Layer class) or by input is referred to as a "Keras tensor".

Arguments

  • x: It speaks of a potential tensor.

Returns

It gives back a Boolean that indicates whether or not the argument is a Keras tensor.

Raises

A ValueError is raised if x is not a symbolic tensor.

Example


  >>> import numpy  
  >>> import tensorflow as tf  
  >>> from keras import backend as K  
  >>> from keras.layers import Input, Dense  
  >>> np_var = numpy.array([1, 2])  
  >>> K.is_keras_tensor(np_var) # A numpy array is not a symbolic tensor.  
  ValueError  
  >>> k_var = tf.placeholder('float32', shape=(1,1))  
  >>> # A variable indirectly created outside of keras is not a Keras tensor.  
  >>> K.is_keras_tensor(k_var)  
  False  
  >>> keras_var = K.variable(np_var)  
  >>> # A variable created with the keras backend is not a Keras tensor.  
  >>> K.is_keras_tensor(keras_var)  
  False  
  >>> keras_placeholder = K.placeholder(shape=(2, 4, 5))  
  >>> # A placeholder is not a Keras tensor.  
  >>> K.is_keras_tensor(keras_placeholder)  
  False  
  >>> keras_input = Input([10])  
  >>> K.is_keras_tensor(keras_input) # An Input is a Keras tensor.  
  True  
  >>> keras_layer_output = Dense(10)(keras_input)  
  >>> # Any Keras layer output is a Keras tensor.  
  >>> K.is_keras_tensor(keras_layer_output)  
  True  

is_tensor

keras.backend.is_tensor(x)

It returns whether x is a tensor.

placeholder

keras.backend.placeholder(shape=None, ndim=None, dtype=None, sparse=False, name=None)

It is used to instantiate a placeholder tensor and return it.

Arguments

  • shape: It can be described as a shape tuple of integers (possibly including None entries), indicating the shape of the placeholder.
  • ndim: It refers to the number of axes of the tensor. At least one of {shape, ndim} must be specified; if both are supplied, shape is used.
  • dtype: It specifies the dtype of the placeholder.
  • sparse: It is a Boolean indicating whether the placeholder should have a sparse type.
  • name: This optional argument defines the placeholder's name as a string.

Returns

It returns a tensor instance (with Keras metadata included).

Example


  >>> from keras import backend as K  
  >>> input_ph = K.placeholder(shape=(2, 4, 5))  
  >>> input_ph._keras_shape  
  (2, 4, 5)  
  >>> input_ph  
  <tf.Tensor 'Placeholder_4:0' shape=(2, 4, 5) dtype=float32> 

is_placeholder

keras.backend.is_placeholder(x)

It indicates whether x is a placeholder or not.

Arguments

  • x: It can be described as a candidate placeholder.

Returns

It gives back a Boolean.
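
Example

A minimal illustrative sketch: a placeholder reports True, while a backend variable reports False.

  >>> from keras import backend as K
  >>> x = K.placeholder(shape=(2, 3))
  >>> K.is_placeholder(x)
  True
  >>> v = K.variable([[1., 2.]])
  >>> K.is_placeholder(v)
  False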

shape

keras.backend.shape(x)

It produces a tensor or variable's symbolic shape.

Arguments

  • x: It alludes to a variable or tensor.

Returns

It returns a symbolic shape tensor.

Examples


  # TensorFlow example  
  >>> from keras import backend as K  
  >>> tf_session = K.get_session()  
  >>> val = np.array([[1, 2], [3, 4]])  
  >>> kvar = K.variable(value=val)  
  >>> inputs = keras.backend.placeholder(shape=(2, 4, 5))  
  >>> K.shape(kvar)  
  <tf.Tensor 'Shape_8:0' shape=(2,) dtype=int32>  
  >>> K.shape(inputs)  
  <tf.Tensor 'Shape_9:0' shape=(3,) dtype=int32>  
  # To get integer shape (Instead, you can use K.int_shape(x))  
  >>> K.shape(kvar).eval(session=tf_session)  
  array([2, 2], dtype=int32)  
  >>> K.shape(inputs).eval(session=tf_session)  
  array([2, 4, 5], dtype=int32)  

int_shape

keras.backend.int_shape(x)

It returns the shape of a tensor or variable as a tuple of int or None entries.

Arguments

  • x: It could be a variable or a tensor.

Returns

It returns a tuple of integers (or None entries).

Example


  >>> from keras import backend as K  
  >>> inputs = K.placeholder(shape=(2, 4, 5))  
  >>> K.int_shape(inputs)  
  (2, 4, 5)  
  >>> val = np.array([[1, 2], [3, 4]])  
  >>> kvar = K.variable(value=val)  
  >>> K.int_shape(kvar)  
  (2, 2)  

Numpy implementation


  def int_shape(x):  
    return x.shape  

ndim

keras.backend.ndim(x)

It returns the number of axes in a tensor, as an integer.

Arguments

  • x: It can be defined as a variable or a tensor.

Returns

The number of axes is output as an integer value.

Example


  >>> from keras import backend as K  
  >>> inputs = K.placeholder(shape=(2, 4, 5))  
  >>> val = np.array([[1, 2], [3, 4]])  
  >>> kvar = K.variable(value=val)  
  >>> K.ndim(inputs)  
  3  
  >>> K.ndim(kvar)  
  2    

Numpy implementation


    def ndim(x):  
      return x.ndim  

size

keras.backend.size(x, name=None)

The size of the tensor is output.

Arguments

  • x: It can be defined as a variable or a tensor.
  • name: The name of the operation is represented by this optional keyword argument.

Returns

The size of the tensor is returned.

Example


  >>> from keras import backend as K  
  >>> val = np.array([[1, 2], [3, 4]])  
  >>> kvar = K.variable(value=val)  
  >>> K.size(kvar)  
  <tf.Tensor: id=9, shape=(), dtype=int32, numpy=4> 

dtype

keras.backend.dtype(x)

It returns the dtype of a Keras tensor or variable, as a string.

Arguments

  • x: It can be defined as a variable or a tensor.

Returns

It returns the dtype of x, as a string.

Example


  >>> from keras import backend as K  
  >>> K.dtype(K.placeholder(shape=(2,4,5)))  
  'float32'  
  >>> K.dtype(K.placeholder(shape=(2,4,5), dtype='float32'))  
  'float32'  
  >>> K.dtype(K.placeholder(shape=(2,4,5), dtype='float64'))  
  'float64'  
  # Keras variable  
  >>> kvar = K.variable(np.array([[1, 2], [3, 4]]))  
  >>> K.dtype(kvar)  
  'float32_ref'  
  >>> kvar = K.variable(np.array([[1, 2], [3, 4]]), dtype='float32')  
  >>> K.dtype(kvar)  
  'float32_ref'  

Numpy implementation


    def dtype(x):  
      return x.dtype.name  

eval

keras.backend.eval(x)

It aids in tensor value evaluation.

Arguments

  • x: It refers to a tensor.

Returns

The result is a Numpy array.

Example


    >>> from keras import backend as K  
    >>> kvar = K.variable(np.array([[1, 2], [3, 4]]), dtype='float32')  
    >>> K.eval(kvar)  
    array([[ 1.,  2.],  
           [ 3.,  4.]], dtype=float32)  

Numpy implementation


    def eval(x):  
        return x  

zeros

keras.backend.zeros(shape, dtype=None, name=None)

It facilitates the construction of all-zero variables and then returns them.

Arguments

  • shape: It is a tuple of numbers that represents the shape of the returned Keras variable.
  • dtype: The term "dtype" refers to a string that represents the data type of the returned Keras variable.
  • name: This is the string that identifies the name of the returned Keras variable.

Returns

It gives back a variable that has the Keras information and has the value 0.0 in it. It should be emphasized that if the shape is symbolic, a variable cannot be provided; instead, a tensor with a dynamic shape will be returned.

Example


    >>> from keras import backend as K  
    >>> kvar = K.zeros((3,4))  
    >>> K.eval(kvar)  
    array([[ 0.,  0.,  0.,  0.],  
       [ 0.,  0.,  0.,  0.],  
       [ 0.,  0.,  0.,  0.]], dtype=float32)  

Numpy implementation


    def zeros(shape, dtype=floatx(), name=None):  
        return np.zeros(shape, dtype=dtype)  

ones

keras.backend.ones(shape, dtype=None, name=None)

It aids in the initialization of an all-ones variable and the subsequent return of that variable.

Arguments

  • shape: It can be thought of as a tuple of numbers that symbolizes the form of the returned Keras variable.
  • dtype: It refers to a string whose data type matches that of the returned Keras variable.
  • name: It refers to the name of the returned Keras variable as represented by the string.

Returns

It produces a Keras variable filled with 1.0. It should be emphasized that if the shape is symbolic, a variable cannot be returned; instead, a tensor with a dynamic shape will be returned.

Example


    >>> from keras import backend as K  
    >>> kvar = K.ones((3,4))  
    >>> K.eval(kvar)  
    array([[ 1.,  1.,  1.,  1.],  
           [ 1.,  1.,  1.,  1.],  
           [ 1.,  1.,  1.,  1.]], dtype=float32) 

Numpy implementation


    def ones(shape, dtype=floatx(), name=None):  
        return np.ones(shape, dtype=dtype)  

eye

keras.backend.eye(size, dtype=None, name=None)

It facilitates the instantiation of an identity matrix and its subsequent return.

Arguments

  • size: Either a tuple indicating the number of rows and columns or an integer denoting the number of rows can be used to define it.
  • dtype: It refers to a string whose data type matches that of the returned Keras variable.
  • name: It refers to the name of the returned Keras variable as represented by the string.

Returns

The output is a Keras variable that is a representation of an identity matrix.

Example


   >>> from keras import backend as K  
   >>> K.eval(K.eye(3))  
    array([[ 1.,  0.,  0.],  
           [ 0.,  1.,  0.],  
           [ 0.,  0.,  1.]], dtype=float32)  
    >>> K.eval(K.eye((2, 3)))  
    array([[1., 0., 0.],  
           [0., 1., 0.]], dtype=float32)    

Numpy implementation


    def eye(size, dtype=None, name=None):  
        if isinstance(size, (list, tuple)):  
            n, m = size  
        else:  
            n, m = size, size  
        return np.eye(n, m, dtype=dtype)   

zeros_like

keras.backend.zeros_like(x, dtype=None, name=None)

It helps instantiate an all-zeros variable with the same shape as another tensor.

Arguments

  • x: It can be described as a Keras tensor or variable.
  • dtype: It refers to a string, the dtype of the returned Keras variable. If None, the dtype of x is used.
  • name: It refers to the name of the returned Keras variable as represented by the string.

Returns

It gives back a Keras variable that is entirely zeroed out and has the shape of x.

Example


    >>> from keras import backend as K  
    >>> kvar = K.variable(np.random.random((2,3)))  
    >>> kvar_zeros = K.zeros_like(kvar)  
    >>> K.eval(kvar_zeros)  
    array([[ 0.,  0.,  0.],  
            [ 0.,  0.,  0.]], dtype=float32)  

Numpy implementation


    def zeros_like(x, dtype=floatx(), name=None):  
        return np.zeros_like(x, dtype=dtype) 

ones_like

keras.backend.ones_like(x, dtype=None, name=None)

It helps instantiate an all-ones variable with the same shape as another tensor.

Arguments

  • x: It can be described as a Keras tensor or variable.
  • dtype: It refers to a string, the dtype of the returned Keras variable. If None, the dtype of x is used.
  • name: It refers to the name of the returned Keras variable as represented by the string.

Returns

It gives back a Keras variable filled with ones, with the shape of x.

Example


    >>> from keras import backend as K  
    >>> kvar = K.variable(np.random.random((2,3)))  
    >>> kvar_ones = K.ones_like(kvar)  
    >>> K.eval(kvar_ones)  
    array([[ 1.,  1.,  1.],  
        [ 1.,  1.,  1.]], dtype=float32)  

Numpy implementation


    def ones_like(x, dtype=floatx(), name=None):  
        return np.ones_like(x, dtype=dtype)  

identity

keras.backend.identity(x, name=None)

It produces a tensor whose content is similar to that of the input tensor.

Arguments

  • x: It refers to the tensor of the input.
  • name: It refers to a string, the name of the variable to be created.

Returns

It returns a tensor with the same content, type, and form.
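
Example

A minimal sketch: the returned tensor evaluates to the same values as the input (the variable name is only illustrative).

  >>> from keras import backend as K
  >>> v = K.variable([[1., 2.], [3., 4.]])
  >>> v_copy = K.identity(v, name='v_copy')
  >>> K.eval(v_copy)
  array([[1., 2.],
         [3., 4.]], dtype=float32)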

random_uniform_variable

keras.backend.random_uniform_variable(shape, low, high, dtype=None, name=None, seed=None)

It instantiates a variable whose values are drawn from a uniform distribution.

Arguments

  • shape: The returned Keras variable's shape is represented by a tuple of integers.
  • low: It designates the lower limit of the output interval as a float value.
  • high: It denotes a float value, which denotes the upper limit of the output interval.
  • dtype: This is a string that denotes the data type of the returned Keras variable.
  • name: It can be interpreted as a string corresponding to the name of the returned Keras variable.
  • seed: It refers to an integer used as a random seed.

Returns

It produces a Keras variable that contains drawn sample data.

Example


    # TensorFlow example  
    >>> kvar = K.random_uniform_variable((2,3), 0, 1)  
    >>> kvar  
    <tensorflow.python.ops.variables.Variable object at 0x10ab40b10>  
    >>> K.eval(kvar)  
    array([[ 0.10940075,  0.10047495,  0.476143  ],  
           [ 0.66137183,  0.00869417,  0.89220798]], dtype=float32)  

Numpy implementation

    
    def random_uniform_variable(shape, low, high, dtype=None, name=None, seed=None):  
        return (high - low) * np.random.random(shape).astype(dtype) + low  
    

random_normal_variable

keras.backend.random_normal_variable(shape, mean, scale, dtype=None, name=None, seed=None)

It instantiates a variable whose values are drawn from a normal distribution.

Arguments

  • shape: It is a tuple of integers that describes the shape of the returned Keras variable.
  • mean: The term "mean" refers to a float that depicts the average value in a normal distribution.
  • scale: It refers to a float that symbolizes the standard deviation of the normal distribution.
  • dtype: It is a string that can be used to define the dtype of a returned Keras variable.
  • name: It makes reference to a String that encapsulates the name of the returned Keras variable.
  • seed: This term refers to the random seed, an integer.

Returns

It produces a Keras variable that contains drawn sample data.

Example


    # TensorFlow example  
    >>> kvar = K.random_normal_variable((2,3), 0, 1)  
    >>> kvar  
    <tensorflow.python.ops.variables.Variable object at 0x10ab12dd0>  
    >>> K.eval(kvar)  
    array([[ 1.19591331,  0.68685907, -0.63814116],  
           [ 0.92629528,  0.28055015,  1.70484698]], dtype=float32)   

Numpy implementation


    def random_normal_variable(shape, mean, scale, dtype=None, name=None, seed=None):  
        return scale * np.random.randn(*shape).astype(dtype) + mean  

count_params

keras.backend.count_params(x)

It returns the static number of elements in a tensor or variable.

Arguments

  • x: It speaks of a tensor or Keras variable.

Returns

It yields an integer representing the total number of elements in x, i.e. the product of the array's static dimensions.

Example


    >>> kvar = K.zeros((2,3))  
    >>> K.count_params(kvar)  
    6  
    >>> K.eval(kvar)  
    array([[ 0.,  0.,  0.],  
        [ 0.,  0.,  0.]], dtype=float32)   

Numpy implementation


  def count_params(x):  
    return x.size  

cast

keras.backend.cast(x, dtype)

It is useful for casting a tensor to a specific dtype and then returning it. A Keras tensor will also be produced if you cast a Keras variable.

Arguments

  • x: It can be described as a variable or Keras tensor.
  • dtype: It alludes to a string that is either "float16," "float32," or "float64".

Returns

It returns a Keras tensor with the specified dtype.

Example


   >>> from keras import backend as K  
   >>> input = K.placeholder((2, 3), dtype='float32')  
   >>> input  
   <tf.Tensor 'Placeholder_2:0' shape=(2, 3) dtype=float32>  
   # It doesn't work in-place as below.  
   >>> K.cast(input, dtype='float16')  
   <tf.Tensor 'Cast_1:0' shape=(2, 3) dtype=float16>  
   >>> input  
   <tf.Tensor 'Placeholder_2:0' shape=(2, 3) dtype=float32>  
   # you need to assign it.  
   >>> input = K.cast(input, dtype='float16')  
   >>> input  
   <tf.Tensor 'Cast_2:0' shape=(2, 3) dtype=float16>  

update

keras.backend.update(x, new_x)

It helps update the value of x to new_x.

Arguments

  • x: A variable is referred to.
  • new_x: It can be described as a tensor whose shape is similar to that of x.

Returns

The x variable is updated as a result.

update_add

keras.backend.update_add(x, increment)

It helps update the value of x by adding increment to it.

Arguments

  • x: A variable is referred to.
  • increment: It can be described as a tensor whose shape is similar to that of x.

Returns

The updated x variable is returned.

update_sub

keras.backend.update_sub(x, decrement)

To update the value of x, it subtracts the decrement.

Arguments

  • x: It is a variable that can be defined.
  • decrement: It refers to a tensor with the same shape as x.

Returns

The updated x variable is returned.
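
Example

A minimal sketch of how the update family is typically used (assuming the graph-mode, TF1-style backend): the returned update ops are passed to K.function via its updates argument, so the assignment runs each time the function is called.

  from keras import backend as K

  counter = K.variable(0.)
  # run the update op as a side effect of calling the compiled function
  step = K.function([], [counter], updates=[K.update_add(counter, 1.)])

  step([])
  step([])
  print(K.eval(counter))  # 2.0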

moving_average_update

keras.backend.moving_average_update(x, value, momentum)

It calculates the moving average for a given variable.

Arguments

  • x: A variable is referred to.
  • value: It can be characterized as a tensor with the same shape as x.
  • momentum: It refers to the momentum of the moving average (a scalar).

Returns

It produces a result that is used to update the variable.

dot

keras.backend.dot(x, y)

It multiplies two tensors (and/or variables) and returns a tensor.

When multiplying an nD tensor by another nD tensor, it reproduces the Theano behavior (e.g. (2, 3) * (4, 3, 5) -> (2, 4, 5)).

Arguments

  • x: It alludes to a variable or tensor.
  • y: It alludes to a variable or tensor.

Returns

After doing a dot product between x and y, it produces a tensor and returns that.

Examples


  # dot product between tensors  
  >>> x = K.placeholder(shape=(2, 3))  
  >>> y = K.placeholder(shape=(3, 4))  
  >>> xy = K.dot(x, y)  
  >>> xy  
  <tf.Tensor 'MatMul_9:0' shape=(2, 4) dtype=float32>  
  # dot product between tensors  
  >>> x = K.placeholder(shape=(32, 28, 3))  
  >>> y = K.placeholder(shape=(3, 4))  
  >>> xy = K.dot(x, y)  
  >>> xy  
  <tf.Tensor 'MatMul_9:0' shape=(32, 28, 4) dtype=float32>
  # Theano-like behavior example  
  >>> x = K.random_uniform_variable(shape=(2, 3), low=0, high=1)  
  >>> y = K.ones((4, 3, 5))  
  >>> xy = K.dot(x, y)  
  >>> K.int_shape(xy)  
  (2, 4, 5)  

Numpy implementation


  def dot(x, y):  
    return np.dot(x, y)  

batch_dot

keras.backend.batch_dot(x, y, axes=None)

batch_dot is used to compute the batchwise dot product between x and y, where x and y are data in batches, i.e. of shape (batch_size, ...). It returns a tensor or variable with fewer dimensions than the input. If the number of dimensions is reduced to 1, expand_dims is used to make sure that ndim is at least 2.

Arguments

  • x: It refers to either a variable with an ndim higher than or equal to 2 or the Keras tensor.
  • y: The Keras tensor or variable with ndim higher than or equal to 2 is referred to.
  • axes: It can be described as an int or tuple(int, int) specifying the target dimensions to be reduced.

Returns

It produces a tensor whose shape equals the concatenation of the shape of x (minus the batch dimension and the summed-over dimensions) and the shape of y (minus the batch dimension and the summed-over dimensions). If the final rank is 1, it is reshaped to (batch_size, 1).

Examples

If x is [[1, 2], [3, 4]] and y is [[5, 6], [7, 8]], then batch_dot(x, y, axes=1) = [[17], [53]], which is the main diagonal of x.dot(y.T), even though we never have to calculate the off-diagonal elements.

Pseudocode:


    inner_products = []  
    for xi, yi in zip(x, y):  
        inner_products.append(xi.dot(yi))  
    result = stack(inner_products)  

Let the shapes of x and y be (100, 20) and (100, 30, 20), respectively. If axes is (1, 2), then, to find the output shape of the resulting tensor, loop through each dimension in x's shape and y's shape:

  • x.shape[0] : 100 : include it in the output shape
  • x.shape[1] : 20 : do not include it in the output shape, dimension 1 of x has been summed over (dot_axes[0] = 1)
  • y.shape[0] : 100 : do not include it in the output shape, always ignore the first dimension of y
  • y.shape[1] : 30 : include it in the output shape
  • y.shape[2] : 20 : do not include it in the output shape, dimension 2 of y has been summed over (dot_axes[1] = 2)

output_shape = (100, 30)

  >>> x_batch = K.ones(shape=(32, 20, 1))  
  >>> y_batch = K.ones(shape=(32, 30, 20))  
  >>> xy_batch_dot = K.batch_dot(x_batch, y_batch, axes=(1, 2))  
  >>> K.int_shape(xy_batch_dot)  
  (32, 1, 30) 

transpose

keras.backend.transpose(x)

It is employed to transpose tensors before returning them.

Arguments

  • x: It could be a variable or a tensor.

Returns

A tensor is returned.

Examples


  >>> var = K.variable([[1, 2, 3], [4, 5, 6]])  
  >>> K.eval(var)  
  array([[ 1.,  2.,  3.],  
        [ 4.,  5.,  6.]], dtype=float32)  
  >>> var_transposed = K.transpose(var)  
  >>> K.eval(var_transposed)  
  array([[ 1.,  4.],  
        [ 2.,  5.],  
        [ 3.,  6.]], dtype=float32)  
  >>> inputs = K.placeholder((2, 3))  
  >>> inputs  
  <tf.Tensor 'Placeholder_11:0' shape=(2, 3) dtype=float32>  
  >>> input_transposed = K.transpose(inputs)  
  >>> input_transposed  
  <tf.Tensor 'transpose_4:0' shape=(3, 2) dtype=float32>  

Numpy implementation


  def transpose(x):  
    return np.transpose(x)     

gather

keras.backend.gather(reference, indices)

It retrieves the elements of the tensor reference at the given indices.

Arguments

  • reference: It alludes to a tensor.
  • indices: It is defined as an integer representing the tensor of indices.

Returns

It returns a tensor of the same type as the reference.
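
Example

A minimal illustrative sketch:

  >>> from keras import backend as K
  >>> import numpy as np
  >>> ref = K.variable(np.array([[1., 2.], [3., 4.], [5., 6.]]))
  >>> K.eval(K.gather(ref, [0, 2]))
  array([[1., 2.],
         [5., 6.]], dtype=float32)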

Numpy implementation


  def gather(reference, indices):  
    return reference[indices]  

max

keras.backend.max(x, axis=None, keepdims=False)

It determines the highest value of the tensor.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: The term "axis" refers to an integer or integer list that is part of the [-rank(x), rank(x)] axis, which is used to calculate maximum values. It determines the largest possible overall dimensions if it is left at None (the default setting).
  • keepdim: A Boolean determines whether or not to keep the dimensions. When keepdims is set to False, the rank of the tensor is decreased by 1. Otherwise, the decreased dimension will be retained with length 1 if keepdims is set to True.

Returns

The maximum values of x are represented by the tensor that is returned.

Numpy implementation


  def max(x, axis=None, keepdims=False):  
    if isinstance(axis, list):  
        axis = tuple(axis)  
    return np.max(x, axis=axis, keepdims=keepdims)  

min

keras.backend.min(x, axis=None, keepdims=False)

It determines the tensor's lowest value.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: It alludes to an integer or integer list that is present inside [-rank(x), rank(x)], the axis used to determine minimum values. It determines the smallest overall dimensions if it is left at None (the default setting).
  • keepdims: The Boolean determines whether or not to keep the dimensions. When keepdims is set to False, the rank of the tensor is decreased by 1. Otherwise, the decreased dimension will be retained with length 1 if keepdims is set to True.

Returns

The minimum values of x are represented by a tensor that is returned.

Numpy implementation


  def min(x, axis=None, keepdims=False):  
    if isinstance(axis, list):  
      axis = tuple(axis)  
    return np.min(x, axis=axis, keepdims=keepdims)   

sum

keras.backend.sum(x, axis=None, keepdims=False)

It computes the sum of the values in a tensor along the specified axis.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: This phrase alludes to an integer or integer list that is present inside [-rank(x), rank(x)], the axis on which the sum is calculated. It determines the overall dimensions if it is left at None (the default setting).
  • keepdims: The Boolean determines whether or not to keep the dimensions. When keepdims is set to False, the rank of the tensor is decreased by 1. Otherwise, the decreased dimension will be retained with length 1 if keepdims is set to True.

Returns

It gives back a tensor that contains the sum of x.
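
Example

A minimal illustrative sketch showing the effect of axis and keepdims:

  >>> from keras import backend as K
  >>> import numpy as np
  >>> x = K.variable(np.array([[1., 2.], [3., 4.]]))
  >>> K.eval(K.sum(x, axis=1))
  array([3., 7.], dtype=float32)
  >>> K.eval(K.sum(x, axis=1, keepdims=True))
  array([[3.],
         [7.]], dtype=float32)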

Numpy implementation


  def sum(x, axis=None, keepdims=False):  
    if isinstance(axis, list):  
        axis = tuple(axis)  
    return np.sum(x, axis=axis, keepdims=keepdims)  

prod

keras.backend.prod(x, axis=None, keepdims=False)

It computes the product of the values in a tensor along the specified axis.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: It alludes to an integer or list of numbers that are present inside [-rank(x), rank(x)], the axis that is utilized to calculate the product. It determines the total product dimensions if it is left at None (the default setting).
  • keepdims: The Boolean determines whether or not to keep the dimensions. When keepdims is set to False, the rank of the tensor is decreased by 1. Otherwise, the decreased dimension will be retained with length 1 if keepdims is set to True.

Returns

It gives back a tensor that contains the product of the x's elements.

Numpy implementation


  def prod(x, axis=None, keepdims=False):  
    if isinstance(axis, list):  
        axis = tuple(axis)  
    return np.prod(x, axis=axis, keepdims=keepdims)  

cumsum

keras.backend.cumsum(x, axis=0)

It computes the cumulative sum of the values in a tensor along the given axis.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: It makes reference to an integer, which serves as the axis on which the total is calculated.

Returns

It gives back a tensor containing the cumulative sum of the values of x along the axis.

Numpy implementation


  def cumsum(x, axis=0):  
    return np.cumsum(x, axis=axis)     

cumprod

keras.backend.cumprod(x, axis=0)

It computes the cumulative product of the values in a tensor along the given axis.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: It makes reference to an integer, which serves as the axis on which the product is calculated.

Returns

It gives back a tensor containing the cumulative product of the values of x along the axis.
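
Example

A minimal illustrative sketch contrasting cumsum and cumprod on the same tensor:

  >>> from keras import backend as K
  >>> import numpy as np
  >>> x = K.variable(np.array([1., 2., 3., 4.]))
  >>> K.eval(K.cumsum(x))
  array([ 1.,  3.,  6., 10.], dtype=float32)
  >>> K.eval(K.cumprod(x))
  array([ 1.,  2.,  6., 24.], dtype=float32)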

Numpy implementation


  def cumprod(x, axis=0):  
    return np.cumprod(x, axis=axis)    

var

keras.backend.var(x, axis=None, keepdims=False)

It computes the variance of the tensor along the specified axis.

Arguments

  • x: It can be referred to as a variable or tensor.
  • axis: This phrase refers to an integer or list of integers that are present inside [-rank(x), rank(x)], the axis that is used to calculate variance. It determines the overall variance dimension if it is left at None (the default setting).
  • keepdims: The Boolean determines whether or not to keep the dimensions. When keepdims is set to False, the rank of the tensor is decreased by 1. Otherwise, the decreased dimension will be retained with length 1 if keepdims is set to True.

Returns

It provides the variance of the elements contained in the tensor x.

Numpy implementation


  def var(x, axis=None, keepdims=False):  
    if isinstance(axis, list):  
      axis = tuple(axis)  
    return np.var(x, axis=axis, keepdims=keepdims) 

rnn


 tf.keras.backend.rnn(  
    step_function,  
    inputs,  
    initial_states,  
    go_backwards=False,  
    mask=None,  
    constants=None,  
    unroll=False,  
    input_length=None,  
    time_major=False,  
    zero_output_for_mask=False,  
 )  

It is used to iterate over the time dimension of a tensor.

Arguments

  • step_function: It refers to the RNN step function. Its arguments are:
    • inputs: a tensor with the shape (samples, ...), without a time dimension, representing the input for a batch of samples at a certain time step.
    • states: a list of tensors.
    Its return values are:
    • outputs: a tensor with the shape (samples, output_dim), without a time dimension.
    • new_states: a list of tensors with the same length and shapes as states, where the first state in the list must be the output tensor at the previous timestep.

  • inputs: It refers to a tensor of temporal data of shape (samples, time, ...), with at least three dimensions, or a nested tensor in which each component has the shape (samples, time, ...).
  • initial_states: It can be characterized as a tensor with the form (samples, state size) that contains the state's starting values to be used in the step function. The initial states will come after the nested structure if the state size has a nested shape.
  • go_backwards: It can be thought of as a Boolean, and if it is set to True, an interaction above the time dimension will be carried out in reverse order before returning a reversed sequence.
  • mask: It refers to a binary tensor with the shape (samples, time, 1), containing a zero for every element that is masked.
  • constants: It can be described as a list of constant values that are dispersed at each and every step.
  • unroll: It indicates whether to unroll the RNN or to use a symbolic while-loop.
  • input_length: Depending on whether the time dimension has a fixed length or not, it can be described as an integer or one-dimensional tensor. It will be utilized for masking if it is set to variable-length input when there is no defined mask.
  • time_major: It has a Boolean definition. If it is set to true, the shape of the input and output will be (timesteps, batch,...), and if it is set to false, it will be (batch, timesteps,...). Using time_major = True is a very effective strategy because transposition is avoided both at the start and the end of RNN computation. However, as the majority of TensorFlow data is batch-major, this function by default accepts input and produces batch-major output.
  • zero_output_for_mask: It refers to a Boolean value that, if set to true, causes the previous step output to be returned and the masked timestep output to be zero.

Returns

It returns a tuple (last_output, outputs, new_states), where last_output is the latest output of the rnn, of shape (samples, ...); outputs is a tensor of shape (samples, time, ...), where each entry outputs[s, t] is the output of the step function at time t for sample s; and new_states is a list of tensors, the latest states returned by the step function, of shape (samples, ...).

Raises

  • ValueError: A value error is produced if the input dimension is less than three.
  • ValueError: It may also be raised if unroll is set to True and the input time step is not a fixed number.
  • ValueError: When the state is absent (i.e., len(states) == 0) and the mask is present but not set to None, it is also generated.
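
Example

A minimal sketch (assuming TensorFlow 2.x) of a step function that accumulates a running sum over the time dimension; the shapes follow the conventions above, with inputs of shape (samples, time, features) and a single state of shape (samples, state_size).

  import numpy as np
  import tensorflow as tf

  def step_function(inputs, states):
      # inputs: (samples, features) for the current timestep; states: list of tensors
      new_sum = states[0] + inputs
      return new_sum, [new_sum]          # (output, new_states)

  inputs = tf.keras.backend.constant(np.ones((2, 3, 4)))          # (samples, time, features)
  initial_states = [tf.keras.backend.constant(np.zeros((2, 4)))]  # (samples, state_size)

  last_output, outputs, new_states = tf.keras.backend.rnn(
      step_function, inputs, initial_states)
  print(tf.keras.backend.eval(last_output))  # every entry is 3.0 after 3 timesteps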