Demos on the Jetson TX2

1. FFT ocean simulation sample (oceanFFT)

The CUDA samples directory is copied to the home directory on the device by JetPack. The built binaries are in the following directory:

/home/ubuntu/NVIDIA_CUDA-<version>_Samples/bin/armv7l/linux/release/gnueabihf/

The <version> here depends on which CUDA version you have installed.

Run the samples at the command line or by double-clicking on them in the file browser. For example, when you run the oceanFFT sample, the following screen is displayed.
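The oceanFFT demo animates its height field with cuFFT. Purely as an illustration of the kind of FFT call behind it (this is not code from the sample), here is a minimal sketch that runs one in-place 1D complex-to-complex transform on the GPU; the size N and the zeroed input are placeholders.

// fft_sketch.cu -- build with: nvcc fft_sketch.cu -lcufft -o fft_sketch
#include <cufft.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int N = 1024;                                // placeholder transform size
    cufftComplex* d_data;
    cudaMalloc(&d_data, N * sizeof(cufftComplex));
    cudaMemset(d_data, 0, N * sizeof(cufftComplex));   // dummy input data

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);               // one 1D complex-to-complex FFT
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD); // in-place forward transform
    cudaDeviceSynchronize();

    printf("FFT done\n");
    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}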

2. Vehicle detection with bounding boxes sample (backend)

nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/backend$ ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
    --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
    --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
    --trt-forcefp32 0 --trt-proc-interval 1 -fps 10
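Internally, the backend sample uses TensorRT to parse the Caffe deploy/model files given above and build an inference engine, which it then runs on the decoded frames to produce the bounding boxes. The following is only a rough sketch of that engine-building step, assuming the TensorRT 2/3-era C++ Caffe-parser API that shipped with JetPack for the TX2 (newer releases change some signatures); the output blob name "coverage" is a placeholder, not a value taken from the sample.

// trt_engine_sketch.cpp -- rough sketch; link against nvinfer and the Caffe
// parser library (the parser library name varies by TensorRT version).
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <iostream>

class Logger : public nvinfer1::ILogger {
    // Signature as in the TensorRT 2/3-era API; newer releases add noexcept.
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    using namespace nvinfer1;
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
    const nvcaffeparser1::IBlobNameToTensor* blobs = parser->parse(
        "GoogleNet_modified_oneClass_halfHD.prototxt",
        "GoogleNet_modified_oneClass_halfHD.caffemodel",
        *network, DataType::kHALF);                // FP16, as with --trt-forcefp32 0

    ITensor* out = blobs->find("coverage");        // placeholder blob name -- check the prototxt
    if (out) network->markOutput(*out);

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    std::cout << (engine ? "engine built" : "engine build failed") << std::endl;

    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}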

3. GEMM (general matrix multiplication) test

nvidia@tegra-ubuntu:/usr/local/cuda/samples/7_CUDALibraries/batchCUBLAS$ ./batchCUBLAS -m1024 -n1024 -k1024

batchCUBLAS Starting...

GPU Device 0: "NVIDIA Tegra X2" with compute capability 6.2

==== Running single kernels ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbf800000, -1) beta= (0x40000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.00372291 sec  GFLOPS=576.83
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0x0000000000000000, 0) beta= (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.10940003 sec  GFLOPS=19.6296
@@@@ dgemm test OK

==== Running N=10 without streams ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbf800000, -1) beta= (0x00000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03462315 sec  GFLOPS=620.245
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbff0000000000000, -1) beta= (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09212208 sec  GFLOPS=19.6634
@@@@ dgemm test OK

==== Running N=10 with streams ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0x40000000, 2) beta= (0x40000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03504515 sec  GFLOPS=612.776
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbff0000000000000, -1) beta= (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09177494 sec  GFLOPS=19.6697
@@@@ dgemm test OK

==== Running N=10 batched ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0x3f800000, 1) beta= (0xbf800000, -1)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03766394 sec  GFLOPS=570.17
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024  alpha = (0xbff0000000000000, -1) beta= (0x4000000000000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09389901 sec  GFLOPS=19.6315
@@@@ dgemm test OK

Test Summary
0 error(s)
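Each GFLOPS figure above follows from the standard GEMM operation count of 2·m·n·k floating-point operations: for the first single-kernel SGEMM, 2 × 1024³ ≈ 2.15 GFLOP divided by 0.00372 s gives roughly 577 GFLOPS, matching the printed value. As a minimal sketch of the kind of cuBLAS call being timed (not the batchCUBLAS source itself), the following runs one FP32 GEMM of the same size on zero-filled matrices:

// sgemm_sketch.cu -- build with: nvcc sgemm_sketch.cu -lcublas -o sgemm_sketch
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int m = 1024, n = 1024, k = 1024;
    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(float) * m * k);
    cudaMalloc(&dB, sizeof(float) * k * n);
    cudaMalloc(&dC, sizeof(float) * m * n);
    cudaMemset(dA, 0, sizeof(float) * m * k);   // dummy data, only timing matters here
    cudaMemset(dB, 0, sizeof(float) * k * n);
    cudaMemset(dC, 0, sizeof(float) * m * n);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    // C = alpha * A * B + beta * C, column-major, no transpose
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, dA, m, dB, k, &beta, dC, m);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gflops = 2.0 * m * n * k / (ms * 1e-3) / 1e9;
    printf("elapsed = %f ms, ~%.1f GFLOPS\n", ms, gflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}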

4. Memory bandwidth test

nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/bandwidthTest$ ./bandwidthTest

[CUDA Bandwidth Test] - Starting...

Running on...

Device 0: NVIDIA Tegra X2

Quick Mode

Host to Device Bandwidth, 1 Device(s)

PINNED Memory Transfers

Transfer Size (Bytes)    Bandwidth(MB/s)

33554432            20215.8

Device to Host Bandwidth, 1 Device(s)

PINNED Memory Transfers

Transfer Size (Bytes)    Bandwidth(MB/s)

33554432            20182.2

Device to Device Bandwidth, 1 Device(s)

PINNED Memory Transfers

Transfer Size (Bytes)    Bandwidth(MB/s)

33554432            35742.8

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
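The pinned-memory figures above come from timing cudaMemcpy transfers between page-locked host buffers and device memory. A minimal sketch of the same host-to-device measurement (not the bandwidthTest source; the 32 MB transfer size simply mirrors the sample's default) could look like this:

// bw_sketch.cu -- build with: nvcc bw_sketch.cu -o bw_sketch
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 32u << 20;                      // 33554432 bytes, as in the sample
    void *hbuf, *dbuf;
    cudaHostAlloc(&hbuf, bytes, cudaHostAllocDefault);   // pinned (page-locked) host memory
    cudaMalloc(&dbuf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int reps = 10;                                 // average over several copies
    cudaEventRecord(start);
    for (int i = 0; i < reps; ++i)
        cudaMemcpy(dbuf, hbuf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double mbps = (double)bytes * reps / (ms * 1e-3) / 1e6;
    printf("Host to Device: %.1f MB/s\n", mbps);

    cudaFreeHost(hbuf);
    cudaFree(dbuf);
    return 0;
}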

5. Device query

nvidia@tegra-ubuntu:~/work/TensorRT/tmp/usr/src/tensorrt$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery

nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ls

deviceQuery  deviceQuery.cpp  deviceQuery.o  Makefile  NsightEclipse.xml  readme.txt

nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery

./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X2"

CUDA Driver Version / Runtime Version          8.0 / 8.0

CUDA Capability Major/Minor version number:    6.2

Total amount of global memory:                 7851 MBytes (8232062976 bytes)

( 2) Multiprocessors, (128) CUDA Cores/MP:     256 CUDA Cores

GPU Max Clock rate:                            1301 MHz (1.30 GHz)

Memory Clock rate:                             1600 Mhz

Memory Bus Width:                              128-bit

L2 Cache Size:                                 524288 bytes

Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)

Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers

Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers

Total amount of constant memory:               65536 bytes

Total amount of shared memory per block:       49152 bytes

Total number of registers available per block: 32768

Warp size:                                     32

Maximum number of threads per multiprocessor:  2048

Maximum number of threads per block:           1024

Max dimension size of a thread block (x,y,z): (1024, 1024, 64)

Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)

Maximum memory pitch:                          2147483647 bytes

Texture alignment:                             512 bytes

Concurrent copy and kernel execution:          Yes with 1 copy engine(s)

Run time limit on kernels:                     No

Integrated GPU sharing Host Memory:            Yes

Support host page-locked memory mapping:       Yes

Alignment requirement for Surfaces:            Yes

Device has ECC support:                        Disabled

Device supports Unified Addressing (UVA):      Yes

Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0

Compute Mode:

< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = NVIDIA Tegra X2
Result = PASS
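Most of the fields in this report map directly onto members of cudaDeviceProp. A minimal sketch that queries a few of them through the runtime API (not the full deviceQuery sample):

// devquery_sketch.cu -- build with: nvcc devquery_sketch.cu -o devquery_sketch
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Detected %d CUDA capable device(s)\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: \"%s\"\n", dev, prop.name);
        printf("  Compute capability:        %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:           %d\n", prop.multiProcessorCount);
        printf("  Global memory:             %.0f MBytes\n", prop.totalGlobalMem / 1048576.0);
        printf("  Max threads per block:     %d\n", prop.maxThreadsPerBlock);
        printf("  Integrated (shares DRAM):  %s\n", prop.integrated ? "Yes" : "No");
    }
    return 0;
}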

6. Testing larger projects

For details, see https://developer.nvidia.com/embedded/jetpack

Several additional projects are available there.
