Flags:
--port=8500 int32 Port to listen on for gRPC API
--grpc_socket_path="" string If non-empty, listen on a UNIX socket for gRPC API at the given path. Can be either a relative or an absolute path.
--rest_api_port=0 int32 Port to listen on for HTTP/REST API. If set to zero, the HTTP/REST API will not be exported. This port must be different from the one specified in --port.
--rest_api_num_threads=16 int32 Number of threads for HTTP/REST API processing. If not set, it will be configured automatically based on the number of CPUs.
--rest_api_timeout_in_ms=30000 int32 Timeout for HTTP/REST API calls.
--enable_batching=false bool Enable batching.
--batching_parameters_file="" string If non-empty, read an ascii BatchingParameters protobuf from the supplied file name and use the contained values instead of the defaults.
--model_config_file="" string If non-empty, read an ascii ModelServerConfig protobuf from the supplied file name, and serve the models in that file. This config file can be used to specify multiple models to serve and other advanced parameters, including a non-default version policy. (If used, --model_name and --model_base_path are ignored.)
--model_name="default" string Name of model (ignored if --model_config_file flag is set)
--model_base_path="" string Path to export (ignored if --model_config_file flag is set, otherwise required)
--max_num_load_retries=5 int32 Maximum number of times loading a model is retried after the first failure, before giving up. If set to 0, a load is attempted only once. Default: 5
--load_retry_interval_micros=60000000 int64 The interval, in microseconds, between each servable load retry. If set to a negative value, it does not wait. Default: 1 minute
--file_system_poll_wait_seconds=1 int32 Interval in seconds between each poll of the filesystem for new model versions. If set to zero, the poll is done exactly once and not periodically. Setting this to a negative value disables polling entirely, causing ModelServer to wait indefinitely for a new model at startup. Negative values are reserved for testing purposes only.
--flush_filesystem_caches=true bool If true (the default), filesystem caches will be flushed after the initial load of all servables, and after each subsequent individual servable reload (if the number of load threads is 1). This reduces memory consumption of the model server, at the potential cost of cache misses if model files are accessed after servables are loaded.
--tensorflow_session_parallelism=0 int64 Number of threads to use for running a TensorFlow session. Auto-configured by default. Note that this option is ignored if --platform_config_file is non-empty.
--tensorflow_intra_op_parallelism=0 int64 Number of threads to use to parallelize the execution of an individual op. Auto-configured by default. Note that this option is ignored if --platform_config_file is non-empty.
--tensorflow_inter_op_parallelism=0 int64 Controls the number of operators that can be executed simultaneously. Auto-configured by default. Note that this option is ignored if --platform_config_file is non-empty.
--ssl_config_file="" string If non-empty, read an ascii SSLConfig protobuf from the supplied file name and set up a secure gRPC channel
--platform_config_file="" string If non-empty, read an ascii PlatformConfigMap protobuf from the supplied file name, and use that platform config instead of the TensorFlow platform. (If used, --enable_batching is ignored.)
--per_process_gpu_memory_fraction=0.000000 float Fraction of the GPU memory space that each process occupies; the value is between 0.0 and 1.0 (with 0.0 as the default). If 1.0, the server allocates all the memory when it starts; if 0.0, TensorFlow automatically selects a value.
--saved_model_tags="serve" string Comma-separated set of tags corresponding to the meta graph def to load from SavedModel.
--grpc_channel_arguments="" string A comma-separated list of arguments to be passed to the gRPC server (e.g. grpc.max_connection_age_ms=2000).
--enable_model_warmup=true bool Enables model warmup, which triggers lazy initializations (such as TF optimizations) at load time, to reduce first request latency.
--version=false bool Display version
--monitoring_config_file="" string If non-empty, read an ascii MonitoringConfig protobuf from the supplied file name
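
Below is a minimal usage sketch tying several of these flags together: writing an ascii ModelServerConfig for --model_config_file, launching the server with both gRPC and REST endpoints, and probing the REST API. The model name my_model, the base path /models/my_model, and the REST port 8501 are illustrative assumptions, not values mandated by the flag reference above.

# A minimal sketch; assumes a SavedModel exported under /models/my_model/1/.
# The model name, paths, and port choices below are hypothetical.

# Write an ascii ModelServerConfig protobuf, as consumed by --model_config_file.
cat > /tmp/models.config <<'EOF'
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
}
EOF

# Launch the server: gRPC on 8500 (the --port default), REST on 8501.
# --model_name and --model_base_path are ignored because --model_config_file is set.
tensorflow_model_server \
  --port=8500 \
  --rest_api_port=8501 \
  --model_config_file=/tmp/models.config \
  --file_system_poll_wait_seconds=1 \
  --enable_model_warmup=true &

# Probe the REST endpoint exposed by --rest_api_port; the shape of "instances"
# depends on the model's serving signature.
curl -s -X POST http://localhost:8501/v1/models/my_model:predict \
  -d '{"instances": [[1.0, 2.0, 3.0]]}'

Since --rest_api_port must differ from --port, the common convention is 8500 for gRPC and 8501 for REST. With --file_system_poll_wait_seconds=1 (the default), dropping a new version directory such as /models/my_model/2/ is picked up without restarting the server.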

  
