Neural Networks API

Note: The Neural Networks API is available in Android 8.1 and higher system images. The header file is available in the latest version of the NDK. We encourage you to send us your feedback via the Android 8.1 Preview issue tracker.

The Android Neural Networks API (NNAPI) is an Android C API designed for running computationally intensive operations for machine learning on mobile devices. NNAPI is designed to provide a base layer of functionality for higher-level machine learning frameworks (such as TensorFlow Lite, Caffe2, or others) that build and train neural networks. The API is available on all devices running Android 8.1 (API level 27) or higher.
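
To make the shape of the C API concrete, the sketch below builds, compiles, and runs a one-operation model that adds two float32 tensors. It is a minimal illustration, not the official sample: it assumes the NeuralNetworks.h header from the NDK on a device running API level 27 or higher, the function name RunAddExample and the tensor shape are arbitrary, and error handling is reduced to a single status check.

// Minimal NNAPI sketch: build, compile, and run a model that adds two
// 1x4 float32 tensors. Requires the NDK header and libneuralnetworks
// (Android 8.1, API level 27, or higher).
#include <android/NeuralNetworks.h>
#include <cstdio>

bool RunAddExample() {
    // 1. Describe the model: output = fused_activation(a + b).
    ANeuralNetworksModel* model = nullptr;
    ANeuralNetworksModel_create(&model);

    uint32_t dims[] = {1, 4};
    // Fields: type, dimensionCount, dimensions, scale, zeroPoint.
    ANeuralNetworksOperandType tensorType = {
        ANEURALNETWORKS_TENSOR_FLOAT32, 2, dims, 0.0f, 0};
    ANeuralNetworksOperandType scalarInt32 = {
        ANEURALNETWORKS_INT32, 0, nullptr, 0.0f, 0};

    // Operand indices follow the order in which operands are added.
    ANeuralNetworksModel_addOperand(model, &tensorType);   // 0: input a
    ANeuralNetworksModel_addOperand(model, &tensorType);   // 1: input b
    ANeuralNetworksModel_addOperand(model, &scalarInt32);  // 2: fuse code
    ANeuralNetworksModel_addOperand(model, &tensorType);   // 3: output

    // The ADD operation takes a fused-activation code as its third input.
    int32_t fuseCode = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &fuseCode, sizeof(fuseCode));

    uint32_t addInputs[] = {0, 1, 2};
    uint32_t addOutputs[] = {3};
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD,
                                      3, addInputs, 1, addOutputs);

    uint32_t modelInputs[] = {0, 1};
    uint32_t modelOutputs[] = {3};
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, modelInputs,
                                                  1, modelOutputs);
    ANeuralNetworksModel_finish(model);

    // 2. Compile the model for whatever processors this device offers.
    ANeuralNetworksCompilation* compilation = nullptr;
    ANeuralNetworksCompilation_create(model, &compilation);
    ANeuralNetworksCompilation_finish(compilation);

    // 3. Run one asynchronous inference and wait for the result.
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float out[4] = {};

    ANeuralNetworksExecution* execution = nullptr;
    ANeuralNetworksExecution_create(compilation, &execution);
    ANeuralNetworksExecution_setInput(execution, 0, nullptr, a, sizeof(a));
    ANeuralNetworksExecution_setInput(execution, 1, nullptr, b, sizeof(b));
    ANeuralNetworksExecution_setOutput(execution, 0, nullptr, out, sizeof(out));

    ANeuralNetworksEvent* event = nullptr;
    int status = ANeuralNetworksExecution_startCompute(execution, &event);
    if (status == ANEURALNETWORKS_NO_ERROR) {
        ANeuralNetworksEvent_wait(event);          // block until finished
        std::printf("out[0] = %f\n", out[0]);      // expected 11.0
    }

    ANeuralNetworksEvent_free(event);
    ANeuralNetworksExecution_free(execution);
    ANeuralNetworksCompilation_free(compilation);
    ANeuralNetworksModel_free(model);
    return status == ANEURALNETWORKS_NO_ERROR;
}

In a real app the graph and its weights would come from a model exported by a training framework rather than be written by hand; the pattern of model, compilation, and execution objects stays the same.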

NNAPI supports inferencing by applying data from Android devices to previously trained, developer-defined models. Examples of inferencing include classifying images, predicting user behavior, and selecting appropriate responses to a search query.

On-device inferencing has many benefits:

  • Latency: You don’t need to send a request over a network connection and wait for a response. This can be critical for video applications that process successive frames coming from a camera.
  • Availability: The application runs even when outside of network coverage.
  • Speed: New hardware designed specifically for neural network processing can provide significantly faster computation than a general-purpose CPU alone.
  • Privacy: The data does not leave the device.
  • Cost: No server farm is needed when all the computations are performed on the device.

There are also trade-offs that a developer should keep in mind:

  • System utilization: Evaluating neural networks involves a lot of computation, which can increase battery power usage. If this is a concern for your app, especially for long-running computations, consider monitoring battery health.
  • Application size: Pay attention to the size of your models. Models may take up multiple megabytes of space. If bundling large models in your APK would unduly impact your users, consider downloading the models after app installation (see the sketch after this list), using smaller models, or running your computations in the cloud. NNAPI does not provide functionality for running models in the cloud.
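
One way to keep the APK small, as suggested above, is to download the weights after installation and hand them to NNAPI through a memory-mapped file. The sketch below is a hedged illustration of that idea: the helper name BindWeightsFromFile, the operand index, and the byte count are hypothetical; only the two memory-related NNAPI calls are the point.

// Sketch: bind trained weights to a constant NNAPI operand from a file that
// was downloaded after install instead of being bundled in the APK.
// BindWeightsFromFile, the operand index, and the sizes are placeholders.
#include <android/NeuralNetworks.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct MappedWeights {
    int fd = -1;
    ANeuralNetworksMemory* memory = nullptr;
};

// Keep the returned fd and memory object alive until every compilation and
// execution that uses `model` has finished; then free the memory and close
// the fd.
MappedWeights BindWeightsFromFile(ANeuralNetworksModel* model,
                                  const char* path,
                                  int32_t weightsOperandIndex,
                                  size_t weightsBytes) {
    MappedWeights w;
    w.fd = open(path, O_RDONLY);
    if (w.fd < 0) return w;

    // Wrap the file so the runtime (or a vendor driver) can map the weights
    // directly instead of copying them through an in-memory buffer.
    if (ANeuralNetworksMemory_createFromFd(weightsBytes, PROT_READ, w.fd,
                                           /*offset=*/0, &w.memory)
            != ANEURALNETWORKS_NO_ERROR) {
        close(w.fd);
        w.fd = -1;
        return w;
    }

    // Associate the mapped region with the model's constant weight operand.
    ANeuralNetworksModel_setOperandValueFromMemory(
        model, weightsOperandIndex, w.memory, /*offset=*/0, weightsBytes);
    return w;
}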

Understanding the Neural Networks API runtime


NNAPI is meant to be called by machine learning libraries, frameworks, and tools that let developers train their models off-device and deploy them on Android devices. Apps typically would not use NNAPI directly; instead, they would use higher-level machine learning frameworks. Those frameworks, in turn, can use NNAPI to perform hardware-accelerated inference operations on supported devices.

Based on the app’s requirements and the hardware capabilities on a device, Android’s neural networks runtime can efficiently distribute the computation workload across available on-device processors, including dedicated neural network hardware, graphics processing units (GPUs), and digital signal processors (DSPs).
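
An app cannot pick the processor directly, but it can tell the runtime what to optimize for through a compilation preference. The sketch below, which assumes an already finished model, asks for low power consumption; ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER and ANEURALNETWORKS_PREFER_SUSTAINED_SPEED trade the other way.

// Sketch: hint to the NNAPI runtime how a workload should be scheduled.
// `model` is assumed to be a finished ANeuralNetworksModel.
#include <android/NeuralNetworks.h>

ANeuralNetworksCompilation* CompileForLowPower(ANeuralNetworksModel* model) {
    ANeuralNetworksCompilation* compilation = nullptr;
    if (ANeuralNetworksCompilation_create(model, &compilation)
            != ANEURALNETWORKS_NO_ERROR) {
        return nullptr;
    }

    // Favor battery life over latency; the runtime may then prefer a
    // low-power accelerator such as a DSP when one is available.
    ANeuralNetworksCompilation_setPreference(compilation,
                                             ANEURALNETWORKS_PREFER_LOW_POWER);

    if (ANeuralNetworksCompilation_finish(compilation)
            != ANEURALNETWORKS_NO_ERROR) {
        ANeuralNetworksCompilation_free(compilation);
        return nullptr;
    }
    return compilation;
}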

For devices that lack a specialized vendor driver, the NNAPI runtime relies on optimized code to execute requests on the CPU.

The diagram below shows a high-level system architecture for NNAPI.

Figure 1. System architecture for Android Neural Networks API

Reference: https://developer.android.com/ndk/guides/neuralnetworks/index.html
