TensorFlow Lite demo — it exists precisely for mobile and embedded devices, and under the hood it calls the NDK Neural Networks API. Note that the TensorFlow model it uses must first be converted to the TFLite format. Both Java and C++ APIs are provided. If you cannot use tflite, see the later section of this post.
Introduction to TensorFlow Lite
TensorFlow Lite is TensorFlow’s lightweight solution for mobile and embedded devices. It enables on-device machine learning inference with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API.
TensorFlow Lite uses many techniques for achieving low latency such as optimizing the kernels for mobile apps, pre-fused activations, and quantized kernels that allow smaller and faster (fixed-point math) models.
Most of our TensorFlow Lite documentation is on Github for the time being.
What does TensorFlow Lite contain?
TensorFlow Lite supports a set of core operators, both quantized and float, which have been tuned for mobile platforms. They incorporate pre-fused activations and biases to further enhance performance and quantized accuracy. Additionally, TensorFlow Lite also supports using custom operations in models.
TensorFlow Lite defines a new model file format, based on FlatBuffers. FlatBuffers is an open-source, efficient cross-platform serialization library. It is similar to protocol buffers, but the primary difference is that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, a step that is often coupled with per-object memory allocation. Also, the code footprint of FlatBuffers is an order of magnitude smaller than protocol buffers.
TensorFlow Lite has a new mobile-optimized interpreter, which has the key goals of keeping apps lean and fast. The interpreter uses a static graph ordering and a custom (less-dynamic) memory allocator to ensure minimal load, initialization, and execution latency.
TensorFlow Lite provides an interface to leverage hardware acceleration, if available on the device. It does so via the Android Neural Networks library, released as part of Android O-MR1.
Why do we need a new mobile-specific library?
Machine Learning is changing the computing paradigm, and we see an emerging trend of new use cases on mobile and embedded devices. Consumer expectations are also trending toward natural, human-like interactions with their devices, driven by the camera and voice interaction models.
There are several factors which are fueling interest in this domain:
Innovation at the silicon layer is enabling new possibilities for hardware acceleration, and frameworks such as the Android Neural Networks API make it easy to leverage these.
Recent advances in real-time computer-vision and spoken language understanding have led to mobile-optimized benchmark models being open sourced (e.g. MobileNets, SqueezeNet).
Widely-available smart appliances create new possibilities for on-device intelligence.
Interest in stronger user data privacy paradigms where user data does not need to leave the mobile device.
Ability to serve ‘offline’ use cases, where the device does not need to be connected to a network.
We believe the next wave of machine learning applications will have significant processing on mobile and embedded devices.
TensorFlow Lite developer preview highlights
TensorFlow Lite is available as a developer preview and includes the following:
A set of core operators, both quantized and float, many of which have been tuned for mobile platforms. These can be used to create and run custom models. Developers can also write their own custom operators and use them in models.
A new FlatBuffers-based model file format.
On-device interpreter with kernels optimized for faster execution on mobile.
TensorFlow converter to convert TensorFlow-trained models to the TensorFlow Lite format.
Smaller in size: TensorFlow Lite is smaller than 300KB when all supported operators are linked, and less than 200KB when using only the operators needed to support Inception V3 and MobileNet.
Pre-tested models:
All of the following models are guaranteed to work out of the box:
Inception V3, a popular model for detecting the dominant objects present in an image.
MobileNets, a family of mobile-first computer vision models designed to effectively maximize accuracy while being mindful of the restricted resources for an on-device or embedded application. They are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation. MobileNet models are smaller but lower in accuracy than Inception V3.
On Device Smart Reply, an on-device model which provides one-touch replies for an incoming text message by suggesting contextually relevant messages. The model was built specifically for memory constrained devices such as watches & phones and it has been successfully used to surface Smart Replies on Android Wear to all first-party and third-party apps.
Quantized versions of the MobileNet model, which run faster than the non-quantized (float) version on CPU.
New Android demo app to illustrate the use of TensorFlow Lite with a quantized MobileNet model for object classification.
Java and C++ API support
Note: This is a developer release, and it’s likely that there will be changes in the API in upcoming versions. We do not guarantee backward or forward compatibility with this release.
Getting Started
We recommend you try out TensorFlow Lite with the pre-tested models indicated above. If you have an existing model, you will need to test whether it is compatible with both the converter and the supported operator set. To test your model, see the documentation on GitHub.
Retrain Inception-V3 or MobileNet for a custom data set
The pre-trained models mentioned above have been trained on the ImageNet data set, which consists of 1000 predefined classes. If those classes are not relevant or useful for your use case, you will need to retrain those models. This technique is called transfer learning: it starts with a model that has already been trained on one problem and retrains it on a similar one. Training a deep network from scratch can take days, but transfer learning can be done fairly quickly. In order to do this, you'll need to generate a custom data set labeled with the relevant classes.
The TensorFlow for Poets codelab walks through this process step-by-step. The retraining code supports retraining for both floating point and quantized inference.
TensorFlow Lite Architecture
The following diagram shows the architectural design of TensorFlow Lite:
Starting with a trained TensorFlow model on disk, you'll convert that model to the TensorFlow Lite file format (.tflite) using the TensorFlow Lite Converter. Then you can use that converted file in your mobile application.
Deploying the TensorFlow Lite model file uses:
Java API: A convenience wrapper around the C++ API on Android; a minimal usage sketch follows this list.
C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS.
Interpreter: Executes the model using a set of kernels. The interpreter supports selective kernel loading; without kernels it is only 100KB, and 300KB with all the kernels loaded. This is a significant reduction from the 1.5MB required by TensorFlow Mobile.
On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, falling back to CPU execution if it is unavailable.
You can also implement custom kernels using the C++ API that can be used by the Interpreter.
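As a concrete illustration, here is a minimal sketch of driving a converted model through the Java API. The model file, buffer size, and label count below are assumptions for a 224x224 quantized image classifier like the demo's MobileNet, not requirements of the API:

import java.io.File;
import java.nio.ByteBuffer;
import org.tensorflow.lite.Interpreter;

public class LiteClassifier {
    // Assumed shapes: a quantized 224x224 RGB classifier with 1001 labels.
    static byte[][] classify(File modelFile, ByteBuffer inputPixels) {
        Interpreter interpreter = new Interpreter(modelFile); // loads the .tflite FlatBuffer
        byte[][] scores = new byte[1][1001];  // one quantized score per label
        interpreter.run(inputPixels, scores); // executes the graph with the bundled kernels
        interpreter.close();                  // releases the interpreter's native resources
        return scores;
    }
}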
Future Work
In future releases, TensorFlow Lite will support more models and built-in operators, contain performance improvements for both fixed point and floating point models, improvements to the tools to enable easier developer workflows and support for other smaller devices and more. As we continue development, we hope that TensorFlow Lite will greatly simplify the developer experience of targeting a model for small devices.
Future plans include using specialized machine learning hardware to get the best possible performance for a particular model on a particular device.
Next Steps
For the developer preview, most of our documentation is on GitHub. Please take a look at the TensorFlow Lite repository on GitHub for more information and for code samples, demo applications, and more.
Source: https://www.tensorflow.org/mobile/tflite/
The official demo is as follows:
TensorFlow Lite Demo for Android
The TensorFlow Lite demo is a camera app that continuously classifies whatever it sees from your device's back camera, using a quantized MobileNet model.
You'll need an Android device running Android 5.0 or higher to run the demo.
To get you started working with TensorFlow Lite on Android, we'll walk you through building and deploying our TensorFlow demo app in Android Studio.
It's also possible to build the demo app with Bazel, but we only recommend this for advanced users who are very familiar with the Bazel build environment. For more information on that, see our page on Github.
Build and deploy with Android Studio
Clone the TensorFlow repository from GitHub if you haven't already:
git clone https://github.com/tensorflow/tensorflow
Install the latest version of Android Studio from here.
From the Welcome to Android Studio screen, use the Import Project (Gradle, Eclipse ADT, etc.) option to import the tensorflow/contrib/lite/java/demo directory as an existing Android Studio project. Android Studio may prompt you to install Gradle upgrades and other tool versions; you should accept these upgrades.
Download the TensorFlow Lite MobileNet model from here.
Unzip this and copy the mobilenet_quant_v1_224.tflite file to the assets directory: tensorflow/contrib/lite/java/demo/app/src/main/assets/
Build and run the app in Android Studio.
You'll have to grant permissions for the app to use the device's camera. Point the camera at various objects and enjoy seeing how the model classifies things!
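Under the hood, each camera frame is scaled to the model's input size and packed into a byte buffer before being handed to the interpreter. The following is a hedged sketch of that preprocessing step for the 224x224 quantized model; the class and method names are hypothetical, not taken from the demo's source:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;

// Hypothetical helper: packs a 224x224 ARGB bitmap into the
// 1 x 224 x 224 x 3 byte buffer the quantized MobileNet expects.
public class DemoInput {
    static final int SIZE = 224;   // input height/width of mobilenet_quant_v1_224
    static final int CHANNELS = 3; // RGB, one byte per channel for a quantized model

    static ByteBuffer bitmapToBuffer(Bitmap bitmap) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(SIZE * SIZE * CHANNELS);
        buffer.order(ByteOrder.nativeOrder());
        int[] pixels = new int[SIZE * SIZE];
        bitmap.getPixels(pixels, 0, SIZE, 0, 0, SIZE, SIZE);
        for (int pixel : pixels) {
            buffer.put((byte) ((pixel >> 16) & 0xFF)); // red
            buffer.put((byte) ((pixel >> 8) & 0xFF));  // green
            buffer.put((byte) (pixel & 0xFF));         // blue
        }
        return buffer;
    }
}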
What if you cannot use TensorFlow Lite? See: https://www.tensorflow.org/mobile/mobile_intro
Introduction to TensorFlow Mobile
TensorFlow was designed from the ground up to be a good deep learning solution for mobile platforms like Android and iOS. This mobile guide should help you understand how machine learning can work on mobile platforms and how to integrate TensorFlow into your mobile apps effectively and efficiently.
About this Guide
This guide is aimed at developers who have a TensorFlow model that’s successfully working in a desktop environment, who want to integrate it into a mobile application, and cannot use TensorFlow Lite. Here are the main challenges you’ll face during that process:
- Understanding how to use TensorFlow for mobile.
- Building TensorFlow for your platform.
- Integrating the TensorFlow library into your application.
- Preparing your model file for mobile deployment.
- Optimizing for latency, RAM usage, model file size, and binary size.
Looking at the examples, for me it's easier to just use TensorFlow Lite directly instead of going through all this trouble, because this route requires the NDK, cross-compilation, and the like.
Building TensorFlow on Android
To get you started working with TensorFlow on Android, we'll walk through two ways to build our TensorFlow mobile demos and deploy them on an Android device. The first is Android Studio, which lets you build and deploy in an IDE. The second is building with Bazel and deploying with ADB on the command line.
Why choose one or the other of these methods?
The simplest way to use TensorFlow on Android is to use Android Studio. If you aren't planning to customize your TensorFlow build at all, or if you want to use Android Studio's editor and other features to build an app and just want to add TensorFlow to it, we recommend using Android Studio.
If you are using custom ops, or have some other reason to build TensorFlow from scratch, scroll down and see our instructions for building the demo with Bazel.
Build the demo using Android Studio
Prerequisites
If you haven't already, do the following two things:
Install Android Studio, following the instructions on their website.
Clone the TensorFlow repository from Github:
git clone https://github.com/tensorflow/tensorflow
Building
Open Android Studio, and from the Welcome screen, select Open an existing Android Studio project.
From the Open File or Project window that appears, navigate to and select the tensorflow/examples/android directory from wherever you cloned the TensorFlow GitHub repo. Click OK. If it asks you to do a Gradle Sync, click OK.
You may also need to install various platforms and tools, if you get errors like "Failed to find target with hash string 'android-23'" and similar.
Open the build.gradle file (you can go to 1:Project in the side panel and find it under the Gradle Scripts zippy under Android). Look for the nativeBuildSystem variable and set it to none if it isn't already:
// set to 'bazel', 'cmake', 'makefile', 'none'
def nativeBuildSystem = 'none'
Click the Run button (the green arrow) or use Run -> Run 'android' from the top menu.
If it asks you to use Instant Run, click Proceed Without Instant Run.
Also, you need to have an Android device plugged in with developer options enabled at this point. See here for more details on setting up developer devices.
This installs three apps on your phone that are all part of the TensorFlow Demo. See Android Sample Apps for more information about them.
Adding TensorFlow to your apps using Android Studio
To add TensorFlow to your own apps on Android, the simplest way is to add the following lines to your Gradle build file:
allprojects {
repositories {
jcenter()
}
}
dependencies {
compile 'org.tensorflow:tensorflow-android:+'
}
This automatically downloads the latest stable version of TensorFlow as an AAR and installs it in your project.
Build the demo using Bazel
Another way to use TensorFlow on Android is to build an APK using Bazel and load it onto your device using ADB. This requires some knowledge of build systems and Android developer tools, but we'll guide you through the basics here.
First, follow our instructions for installing from sources. This will also guide you through installing Bazel and cloning the TensorFlow code.
Download the Android SDK and NDK if you do not already have them. You need at least version 12b of the NDK, and 23 of the SDK.
In your copy of the TensorFlow source, update the WORKSPACE file with the location of your SDK and NDK, where it says <PATH_TO_NDK> and <PATH_TO_SDK>.
Run Bazel to build the demo APK:
bazel build -c opt //tensorflow/examples/android:tensorflow_demo
Use ADB to install the APK onto your device:
adb install -r bazel-bin/tensorflow/examples/android/tensorflow_demo.apk
Note: In general, when compiling for Android with Bazel you need --config=android on the Bazel command line, but this particular example is Android-only, so you don't need it here.
This installs three apps on your phone that are all part of the TensorFlow Demo. See Android Sample Apps for more information about them.
Android Sample Apps
The Android example code is a single project that builds and installs three sample apps which all use the same underlying code. The sample apps all take video input from a phone's camera:
TF Classify uses the Inception v3 model to label the objects it’s pointed at with classes from Imagenet. There are only 1,000 categories in Imagenet, which misses most everyday objects and includes many things you’re unlikely to encounter often in real life, so the results can often be quite amusing. For example there’s no ‘person’ category, so instead it will often guess things it does know that are often associated with pictures of people, like a seat belt or an oxygen mask. If you do want to customize this example to recognize objects you care about, you can use the TensorFlow for Poets codelab as an example for how to train a model based on your own data.
TF Detect uses a multibox model to try to draw bounding boxes around the locations of people in the camera. These boxes are annotated with the confidence for each detection result. Results will not be perfect, as this kind of object detection is still an active research topic. The demo also includes optical tracking for when objects move between frames, which runs more frequently than the TensorFlow inference. This improves the user experience since the apparent frame rate is faster, but it also gives the ability to estimate which boxes refer to the same object between frames, which is important for counting objects over time.
TF Stylize implements a real-time style transfer algorithm on the camera feed. You can select which styles to use and mix between them using the palette at the bottom of the screen, and also switch the processing resolution higher or lower.
When you build and install the demo, you'll see three app icons on your phone, one for each of the demos. Tapping on them should open up the app and let you explore what they do. You can enable profiling statistics on-screen by tapping the volume up button while they’re running.
Android Inference Library
Because Android apps need to be written in Java, and core TensorFlow is in C++, TensorFlow has a JNI library to interface between the two. Its interface is aimed only at inference, so it provides the ability to load a graph, set up inputs, and run the model to calculate particular outputs. You can see the full documentation for the minimal set of methods in TensorFlowInferenceInterface.java.
The demo applications use this interface, so they're a good place to look for example usage. You can download prebuilt binary jars at ci.tensorflow.org.
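As a hedged sketch of typical usage (the model path and tensor names below are placeholders for whatever your own frozen graph defines):

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class MobileClassifier {
    // Placeholder names: substitute the file and tensor names from your own graph.
    static final String MODEL_FILE = "file:///android_asset/my_frozen_graph.pb";
    static final String INPUT_NODE = "input";
    static final String OUTPUT_NODE = "output";

    private final TensorFlowInferenceInterface inference;

    public MobileClassifier(AssetManager assets) {
        // Loads the frozen GraphDef shipped in the APK's assets.
        inference = new TensorFlowInferenceInterface(assets, MODEL_FILE);
    }

    public float[] run(float[] pixels, int inputSize, int numClasses) {
        float[] outputs = new float[numClasses];
        inference.feed(INPUT_NODE, pixels, 1, inputSize, inputSize, 3); // copy input in
        inference.run(new String[] {OUTPUT_NODE});                      // run the graph
        inference.fetch(OUTPUT_NODE, outputs);                          // copy result out
        return outputs;
    }
}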