Element-wise operations

An element-wise operation operates on corresponding elements of two tensors; two elements correspond when they occupy the same position within their respective tensors.

Two tensors must have the same shape in order to perform element-wise operations on them.

Suppose we have the following two tensors (both are rank-2 tensors with a shape of 2 × 2):

    import torch

    t1 = torch.tensor([
        [1, 2],
        [3, 4]
    ], dtype=torch.float32)

    t2 = torch.tensor([
        [9, 8],
        [7, 6]
    ], dtype=torch.float32)
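
As a quick sanity check, we can confirm the shapes match before operating on the tensors:

    > t1.shape
    torch.Size([2, 2])
    > t2.shape
    torch.Size([2, 2])
    > t1.shape == t2.shape
    True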

The elements of the first axis are arrays and the elements of the second axis are numbers.

    # Example of the first axis
    > print(t1[0])
    tensor([1., 2.])

    # Example of the second axis
    > print(t1[0][0])
    tensor(1.)
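
As a side note, the chained indexing t1[0][0] can also be written with a single comma-separated index, which is the more idiomatic PyTorch form:

    > print(t1[0, 0])
    tensor(1.)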

Addition is an element-wise operation.

    > t1 + t2
    tensor([[10., 10.],
            [10., 10.]])

In fact, all of the arithmetic operations (add, subtract, multiply, and divide) are element-wise operations, and they also work between a tensor and a single scalar value. There are two ways to perform them:

1. Using these symbolic operators:

    > t1 + 2
    tensor([[3., 4.],
            [5., 6.]])

    > t1 - 2
    tensor([[-1., 0.],
            [1., 2.]])

    > t1 * 2
    tensor([[2., 4.],
            [6., 8.]])

    > t1 / 2
    tensor([[0.5000, 1.0000],
            [1.5000, 2.0000]])
2. Or equivalently, these built-in tensor methods:

    > t1.add(2)
    tensor([[3., 4.],
            [5., 6.]])

    > t1.sub(2)
    tensor([[-1., 0.],
            [1., 2.]])

    > t1.mul(2)
    tensor([[2., 4.],
            [6., 8.]])

    > t1.div(2)
    tensor([[0.5000, 1.0000],
            [1.5000, 2.0000]])
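
These operators and methods work tensor-to-tensor as well, not just tensor-to-scalar; a quick sketch using t1 and t2 from above:

    > t1 - t2
    tensor([[-8., -6.],
            [-4., -2.]])
    > t1.mul(t2)
    tensor([[ 9., 16.],
            [21., 24.]])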

Broadcasting tensors

Broadcasting is the mechanism that makes element-wise operations possible between tensors of different shapes; it is what allows us to add a scalar to a higher-dimensional tensor in the first place.

We can see what the broadcasted scalar value looks like using the NumPy broadcast_to() function:

    import numpy as np

    > np.broadcast_to(2, t1.shape)
    array([[2, 2],
           [2, 2]])

This means the scalar value is transformed into a rank-2 tensor just like t1; with the shapes now matching, the element-wise rule of having the same shape is back in play.
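
Newer versions of PyTorch (1.8 and later) also provide torch.broadcast_to() directly, so the NumPy detour is optional there; a minimal sketch:

    > torch.broadcast_to(torch.tensor(2), t1.shape)
    tensor([[2, 2],
            [2, 2]])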

Trickier example of broadcasting

    t1 = torch.tensor([
        [1, 1],
        [1, 1]
    ], dtype=torch.float32)

    t2 = torch.tensor([2, 4], dtype=torch.float32)

Even though these two tensors have differing shapes, the element-wise operation is still possible, and broadcasting is what makes it possible: the lower-rank tensor t2 is broadcast to match the shape of t1.

    > np.broadcast_to(t2.numpy(), t1.shape)
    array([[2., 4.],
           [2., 4.]], dtype=float32)

    > t1 + t2
    tensor([[3., 5.],
            [3., 5.]])
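
The general rule: shapes are compared dimension by dimension, starting from the trailing dimension, and each pair must either be equal or contain a 1 (a missing leading dimension is treated as 1). Here, shape (2,) broadcasts against (2, 2) because the trailing dimensions are both 2. A shape that violates the rule raises an error along these lines:

    > t3 = torch.tensor([1., 2., 3.])  # shape (3,) cannot align with (2, 2)
    > t1 + t3
    RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1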

When do we actually use broadcasting? We often rely on it when preprocessing data, and especially during normalization routines.
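
A minimal normalization sketch (the data values are made up for illustration): the per-column mean and standard deviation have shape (2,), and broadcasting stretches them across every row of the (3, 2) batch:

    > data = torch.tensor([
        [1., 2.],
        [3., 4.],
        [5., 6.]
    ])
    > mean = data.mean(dim=0)  # tensor([3., 4.]), shape (2,)
    > std = data.std(dim=0)    # tensor([2., 2.]), shape (2,)
    > (data - mean) / std      # (3, 2) against (2,) broadcasts row-wise
    tensor([[-1., -1.],
            [ 0.,  0.],
            [ 1.,  1.]])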


Comparison operations

Comparison operations are element-wise. For a given comparison operation between two tensors, a new tensor of the same shape is returned, with each element containing either a 0 or a 1.

    > t = torch.tensor([
        [0, 5, 0],
        [6, 0, 7],
        [0, 8, 0]
    ], dtype=torch.float32)

Let's check out some of the comparison operations.

    > t.eq(0)
    tensor([[1, 0, 1],
            [0, 1, 0],
            [1, 0, 1]], dtype=torch.uint8)

    > t.ge(0)
    tensor([[1, 1, 1],
            [1, 1, 1],
            [1, 1, 1]], dtype=torch.uint8)

    > t.gt(0)
    tensor([[0, 1, 0],
            [1, 0, 1],
            [0, 1, 0]], dtype=torch.uint8)

    > t.lt(0)
    tensor([[0, 0, 0],
            [0, 0, 0],
            [0, 0, 0]], dtype=torch.uint8)

    > t.le(7)
    tensor([[1, 1, 1],
            [1, 1, 1],
            [1, 0, 1]], dtype=torch.uint8)

Note that recent versions of PyTorch return torch.bool tensors of True/False values from comparison operations rather than the torch.uint8 tensors of 0s and 1s shown here; the shapes and semantics are unchanged.
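
One common use of these results is as a mask for selecting elements; a minimal sketch that pulls out the nonzero elements of t:

    > t[t.gt(0)]
    tensor([5., 6., 7., 8.])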

Element-wise operations using functions

Here are some examples:

    > t.abs()
    tensor([[0., 5., 0.],
            [6., 0., 7.],
            [0., 8., 0.]])

    > t.sqrt()
    tensor([[0.0000, 2.2361, 0.0000],
            [2.4495, 0.0000, 2.6458],
            [0.0000, 2.8284, 0.0000]])

    > t.neg()
    tensor([[-0., -5., -0.],
            [-6., -0., -7.],
            [-0., -8., -0.]])

    > t.neg().abs()
    tensor([[0., 5., 0.],
            [6., 0., 7.],
            [0., 8., 0.]])
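
Each of these methods returns a new tensor and leaves t itself unchanged. PyTorch also provides in-place variants whose names end with an underscore; a minimal sketch:

    > t.neg_()  # in-place: modifies t directly and returns it
    tensor([[-0., -5., -0.],
            [-6., -0., -7.],
            [-0., -8., -0.]])
    > t.abs_()  # restores the original values, again in place
    tensor([[0., 5., 0.],
            [6., 0., 7.],
            [0., 8., 0.]])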
